* [PATCH v2 01/14] hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
@ 2025-09-25 16:26 ` Tao Tang
2025-09-25 16:26 ` [PATCH v2 02/14] hw/arm/smmuv3: Correct SMMUEN field name in CR0 Tao Tang
` (14 subsequent siblings)
15 siblings, 0 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-25 16:26 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa, Tao Tang
The current definition of the SMMU_CR0_RESERVED mask is incorrect.
It mistakenly treats bit 10 (DPT_WALK_EN) as a reserved bit while
treating bit 9 (RES0) as an implemented bit.
According to the SMMU architecture specification, the layout for CR0 is:
| 31:11 | RES0        |
| 10    | DPT_WALK_EN |
| 9     | RES0        |
| 8:6   | VMW         |
| 5     | RES0        |
| 4     | ATSCHK      |
| 3     | CMDQEN      |
| 2     | EVENTQEN    |
| 1     | PRIQEN      |
| 0     | SMMUEN      |
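As a quick cross-check (illustration only, not part of the patch), the
corrected mask is simply the OR of the RES0 fields above:
/* Names here are illustrative, not from the QEMU tree. */
#define CR0_RES0_31_11 0xFFFFF800u /* bits [31:11] */
#define CR0_RES0_9     (1u << 9)   /* 0x200 */
#define CR0_RES0_5     (1u << 5)   /* 0x20 */
/* 0xFFFFF800 | 0x200 | 0x20 == 0xFFFFFA20 == SMMU_CR0_RESERVED */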
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lists.gnu.org/archive/html/qemu-arm/2025-06/msg00088.html
---
hw/arm/smmuv3-internal.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index b6b7399347..516f2ffa75 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -120,7 +120,8 @@ REG32(CR0, 0x20)
FIELD(CR0, EVENTQEN, 2, 1)
FIELD(CR0, CMDQEN, 3, 1)
-#define SMMU_CR0_RESERVED 0xFFFFFC20
+#define SMMU_CR0_RESERVED 0xFFFFFA20
+
REG32(CR0ACK, 0x24)
REG32(CR1, 0x28)
--
2.34.1
* [PATCH v2 02/14] hw/arm/smmuv3: Correct SMMUEN field name in CR0
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
2025-09-25 16:26 ` [PATCH v2 01/14] hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register Tao Tang
@ 2025-09-25 16:26 ` Tao Tang
2025-09-26 12:27 ` Eric Auger
2025-09-25 16:26 ` [PATCH v2 03/14] hw/arm/smmuv3: Introduce secure registers and commands Tao Tang
` (13 subsequent siblings)
15 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-25 16:26 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa, Tao Tang
The FIELD macro for the SMMU enable bit in the CR0 register was
incorrectly named SMMU_ENABLE.
The ARM SMMUv3 Architecture Specification (both older IHI 0070.E.a and
newer IHI 0070.G.b) consistently refers to the SMMU enable bit as SMMUEN.
This change makes our implementation consistent with the manual.
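For context (not part of this patch): the FIELD() macro from QEMU's
hw/registerfields.h derives every generated identifier from the field
name, so this rename only changes those identifiers, not the register
layout. Roughly:
/* FIELD(CR0, SMMUEN, 0, 1) expands to constants along these lines: */
enum {
    R_CR0_SMMUEN_SHIFT  = 0,
    R_CR0_SMMUEN_LENGTH = 1,
    R_CR0_SMMUEN_MASK   = 0x1, /* MAKE_64BIT_MASK(0, 1) */
};
/* which FIELD_EX32(s->cr[0], CR0, SMMUEN) uses to extract the bit */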
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3-internal.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index 516f2ffa75..71a3c0c02c 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -116,7 +116,7 @@ REG32(IDR5, 0x14)
REG32(IIDR, 0x18)
REG32(AIDR, 0x1c)
REG32(CR0, 0x20)
- FIELD(CR0, SMMU_ENABLE, 0, 1)
+ FIELD(CR0, SMMUEN, 0, 1)
FIELD(CR0, EVENTQEN, 2, 1)
FIELD(CR0, CMDQEN, 3, 1)
@@ -182,7 +182,7 @@ REG32(EVENTQ_IRQ_CFG2, 0xbc)
static inline int smmu_enabled(SMMUv3State *s)
{
- return FIELD_EX32(s->cr[0], CR0, SMMU_ENABLE);
+ return FIELD_EX32(s->cr[0], CR0, SMMUEN);
}
/* Command Queue Entry */
--
2.34.1
* Re: [PATCH v2 02/14] hw/arm/smmuv3: Correct SMMUEN field name in CR0
2025-09-25 16:26 ` [PATCH v2 02/14] hw/arm/smmuv3: Correct SMMUEN field name in CR0 Tao Tang
@ 2025-09-26 12:27 ` Eric Auger
0 siblings, 0 replies; 48+ messages in thread
From: Eric Auger @ 2025-09-26 12:27 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
Hi Tao,
On 9/25/25 6:26 PM, Tao Tang wrote:
> The FIELD macro for the SMMU enable bit in the CR0 register was
> incorrectly named SMMU_ENABLE.
>
> The ARM SMMUv3 Architecture Specification (both older IHI 0070.E.a and
> newer IHI 0070.G.b) consistently refers to the SMMU enable bit as SMMUEN.
>
> This change makes our implementation consistent with the manual.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Eric
> ---
> hw/arm/smmuv3-internal.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
> index 516f2ffa75..71a3c0c02c 100644
> --- a/hw/arm/smmuv3-internal.h
> +++ b/hw/arm/smmuv3-internal.h
> @@ -116,7 +116,7 @@ REG32(IDR5, 0x14)
> REG32(IIDR, 0x18)
> REG32(AIDR, 0x1c)
> REG32(CR0, 0x20)
> - FIELD(CR0, SMMU_ENABLE, 0, 1)
> + FIELD(CR0, SMMUEN, 0, 1)
> FIELD(CR0, EVENTQEN, 2, 1)
> FIELD(CR0, CMDQEN, 3, 1)
>
> @@ -182,7 +182,7 @@ REG32(EVENTQ_IRQ_CFG2, 0xbc)
>
> static inline int smmu_enabled(SMMUv3State *s)
> {
> - return FIELD_EX32(s->cr[0], CR0, SMMU_ENABLE);
> + return FIELD_EX32(s->cr[0], CR0, SMMUEN);
> }
>
> /* Command Queue Entry */
* [PATCH v2 03/14] hw/arm/smmuv3: Introduce secure registers and commands
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
2025-09-25 16:26 ` [PATCH v2 01/14] hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register Tao Tang
2025-09-25 16:26 ` [PATCH v2 02/14] hw/arm/smmuv3: Correct SMMUEN field name in CR0 Tao Tang
@ 2025-09-25 16:26 ` Tao Tang
2025-09-27 10:29 ` Eric Auger
2025-09-25 16:26 ` [PATCH v2 04/14] refactor: Move ARMSecuritySpace to a common header Tao Tang
` (12 subsequent siblings)
15 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-25 16:26 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa, Tao Tang
The Arm SMMUv3 architecture defines a set of registers and commands for
managing secure transactions and context.
This patch introduces the definitions for these secure registers and
commands within the SMMUv3 device model internal header.
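The invariant behind these definitions (an illustrative check, not part
of the patch): each secure register sits at its Non-secure counterpart's
offset plus SMMU_SECURE_BASE_OFFSET (0x8000). For example:
/* Sketch using QEMU's stock QEMU_BUILD_BUG_ON compile-time assert */
QEMU_BUILD_BUG_ON(A_S_CR0 != A_CR0 + SMMU_SECURE_BASE_OFFSET);
QEMU_BUILD_BUG_ON(A_S_CMDQ_BASE != A_CMDQ_BASE + SMMU_SECURE_BASE_OFFSET);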
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3-internal.h | 72 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 71 insertions(+), 1 deletion(-)
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index 71a3c0c02c..3820157eaa 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -38,7 +38,7 @@ typedef enum SMMUTranslationClass {
SMMU_CLASS_IN,
} SMMUTranslationClass;
-/* MMIO Registers */
+/* MMIO Registers. The offsets are shared by Non-secure/Realm/Root states. */
REG32(IDR0, 0x0)
FIELD(IDR0, S2P, 0 , 1)
@@ -121,6 +121,7 @@ REG32(CR0, 0x20)
FIELD(CR0, CMDQEN, 3, 1)
#define SMMU_CR0_RESERVED 0xFFFFFA20
+#define SMMU_S_CR0_RESERVED 0xFFFFFC12
REG32(CR0ACK, 0x24)
@@ -180,6 +181,75 @@ REG32(EVENTQ_IRQ_CFG2, 0xbc)
#define A_IDREGS 0xfd0
+/* Secure registers. The offsets are begin with SMMU_SECURE_BASE_OFFSET */
+#define SMMU_SECURE_BASE_OFFSET 0x8000
+
+REG32(S_IDR0, 0x8000)
+REG32(S_IDR1, 0x8004)
+ FIELD(S_IDR1, S_SIDSIZE, 0 , 6)
+ FIELD(S_IDR1, SEL2, 29, 1)
+ FIELD(S_IDR1, SECURE_IMPL, 31, 1)
+
+REG32(S_IDR2, 0x8008)
+REG32(S_IDR3, 0x800c)
+REG32(S_IDR4, 0x8010)
+
+REG32(S_CR0, 0x8020)
+ FIELD(S_CR0, SMMUEN, 0, 1)
+ FIELD(S_CR0, EVENTQEN, 2, 1)
+ FIELD(S_CR0, CMDQEN, 3, 1)
+
+REG32(S_CR0ACK, 0x8024)
+REG32(S_CR1, 0x8028)
+REG32(S_CR2, 0x802c)
+
+REG32(S_INIT, 0x803c)
+ FIELD(S_INIT, INV_ALL, 0, 1)
+/* Alias for the S_INIT offset to match in the dispatcher switch */
+#define A_S_INIT_ALIAS 0x3c
+
+REG32(S_GBPA, 0x8044)
+ FIELD(S_GBPA, ABORT, 20, 1)
+ FIELD(S_GBPA, UPDATE, 31, 1)
+
+REG32(S_IRQ_CTRL, 0x8050)
+ FIELD(S_IRQ_CTRL, GERROR_IRQEN, 0, 1)
+ FIELD(S_IRQ_CTRL, EVENTQ_IRQEN, 2, 1)
+
+REG32(S_IRQ_CTRLACK, 0x8054)
+
+REG32(S_GERROR, 0x8060)
+ FIELD(S_GERROR, CMDQ_ERR, 0, 1)
+
+#define SMMU_GERROR_IRQ_CFG0_RESERVED 0x00FFFFFFFFFFFFFC
+#define SMMU_GERROR_IRQ_CFG2_RESERVED 0x000000000000003F
+
+#define SMMU_STRTAB_BASE_RESERVED 0x40FFFFFFFFFFFFC0
+#define SMMU_QUEUE_BASE_RESERVED 0x40FFFFFFFFFFFFFF
+#define SMMU_EVENTQ_IRQ_CFG0_RESERVED 0x00FFFFFFFFFFFFFC
+
+REG32(S_GERRORN, 0x8064)
+REG64(S_GERROR_IRQ_CFG0, 0x8068)
+REG32(S_GERROR_IRQ_CFG1, 0x8070)
+REG32(S_GERROR_IRQ_CFG2, 0x8074)
+REG64(S_STRTAB_BASE, 0x8080)
+REG32(S_STRTAB_BASE_CFG, 0x8088)
+ FIELD(S_STRTAB_BASE_CFG, LOG2SIZE, 0, 6)
+ FIELD(S_STRTAB_BASE_CFG, SPLIT, 6, 5)
+ FIELD(S_STRTAB_BASE_CFG, FMT, 16, 2)
+
+REG64(S_CMDQ_BASE, 0x8090)
+REG32(S_CMDQ_PROD, 0x8098)
+REG32(S_CMDQ_CONS, 0x809c)
+ FIELD(S_CMDQ_CONS, ERR, 24, 7)
+
+REG64(S_EVENTQ_BASE, 0x80a0)
+REG32(S_EVENTQ_PROD, 0x80a8)
+REG32(S_EVENTQ_CONS, 0x80ac)
+REG64(S_EVENTQ_IRQ_CFG0, 0x80b0)
+REG32(S_EVENTQ_IRQ_CFG1, 0x80b8)
+REG32(S_EVENTQ_IRQ_CFG2, 0x80bc)
+
static inline int smmu_enabled(SMMUv3State *s)
{
return FIELD_EX32(s->cr[0], CR0, SMMUEN);
--
2.34.1
* Re: [PATCH v2 03/14] hw/arm/smmuv3: Introduce secure registers and commands
2025-09-25 16:26 ` [PATCH v2 03/14] hw/arm/smmuv3: Introduce secure registers and commands Tao Tang
@ 2025-09-27 10:29 ` Eric Auger
2025-09-28 4:46 ` Tao Tang
0 siblings, 1 reply; 48+ messages in thread
From: Eric Auger @ 2025-09-27 10:29 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
Hi Tao,
On 9/25/25 6:26 PM, Tao Tang wrote:
> The Arm SMMUv3 architecture defines a set of registers and commands for
> managing secure transactions and context.
>
> This patch introduces the definitions for these secure registers and
> commands within the SMMUv3 device model internal header.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3-internal.h | 72 +++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 71 insertions(+), 1 deletion(-)
>
> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
> index 71a3c0c02c..3820157eaa 100644
> --- a/hw/arm/smmuv3-internal.h
> +++ b/hw/arm/smmuv3-internal.h
> @@ -38,7 +38,7 @@ typedef enum SMMUTranslationClass {
> SMMU_CLASS_IN,
> } SMMUTranslationClass;
>
> -/* MMIO Registers */
> +/* MMIO Registers. The offsets are shared by Non-secure/Realm/Root states. */
s/The offsets are shared/Shared ?
>
> REG32(IDR0, 0x0)
> FIELD(IDR0, S2P, 0 , 1)
> @@ -121,6 +121,7 @@ REG32(CR0, 0x20)
> FIELD(CR0, CMDQEN, 3, 1)
>
> #define SMMU_CR0_RESERVED 0xFFFFFA20
> +#define SMMU_S_CR0_RESERVED 0xFFFFFC12
>
>
> REG32(CR0ACK, 0x24)
> @@ -180,6 +181,75 @@ REG32(EVENTQ_IRQ_CFG2, 0xbc)
>
> #define A_IDREGS 0xfd0
>
> +/* Secure registers. The offsets are begin with SMMU_SECURE_BASE_OFFSET */
Start of secure-only registers? At least it deserves some reworking.
> +#define SMMU_SECURE_BASE_OFFSET 0x8000
> +
> +REG32(S_IDR0, 0x8000)
> +REG32(S_IDR1, 0x8004)
> + FIELD(S_IDR1, S_SIDSIZE, 0 , 6)
> + FIELD(S_IDR1, SEL2, 29, 1)
> + FIELD(S_IDR1, SECURE_IMPL, 31, 1)
> +
> +REG32(S_IDR2, 0x8008)
> +REG32(S_IDR3, 0x800c)
> +REG32(S_IDR4, 0x8010)
> +
> +REG32(S_CR0, 0x8020)
> + FIELD(S_CR0, SMMUEN, 0, 1)
> + FIELD(S_CR0, EVENTQEN, 2, 1)
> + FIELD(S_CR0, CMDQEN, 3, 1)
> +
> +REG32(S_CR0ACK, 0x8024)
> +REG32(S_CR1, 0x8028)
> +REG32(S_CR2, 0x802c)
> +
> +REG32(S_INIT, 0x803c)
> + FIELD(S_INIT, INV_ALL, 0, 1)
> +/* Alias for the S_INIT offset to match in the dispatcher switch */
what is the S_INIT_ALIAS purpose? At this stage of the reading I don't
understand the above comment. Since it does not match any actual reg, I would
move this definition into the patch that actually uses it.
> +#define A_S_INIT_ALIAS 0x3c
> +
> +REG32(S_GBPA, 0x8044)
> + FIELD(S_GBPA, ABORT, 20, 1)
> + FIELD(S_GBPA, UPDATE, 31, 1)
> +
> +REG32(S_IRQ_CTRL, 0x8050)
> + FIELD(S_IRQ_CTRL, GERROR_IRQEN, 0, 1)
> + FIELD(S_IRQ_CTRL, EVENTQ_IRQEN, 2, 1)
> +
> +REG32(S_IRQ_CTRLACK, 0x8054)
> +
> +REG32(S_GERROR, 0x8060)
> + FIELD(S_GERROR, CMDQ_ERR, 0, 1)
> +
> +#define SMMU_GERROR_IRQ_CFG0_RESERVED 0x00FFFFFFFFFFFFFC
> +#define SMMU_GERROR_IRQ_CFG2_RESERVED 0x000000000000003F
> +
> +#define SMMU_STRTAB_BASE_RESERVED 0x40FFFFFFFFFFFFC0
> +#define SMMU_QUEUE_BASE_RESERVED 0x40FFFFFFFFFFFFFF
> +#define SMMU_EVENTQ_IRQ_CFG0_RESERVED 0x00FFFFFFFFFFFFFC
> +
> +REG32(S_GERRORN, 0x8064)
> +REG64(S_GERROR_IRQ_CFG0, 0x8068)
> +REG32(S_GERROR_IRQ_CFG1, 0x8070)
> +REG32(S_GERROR_IRQ_CFG2, 0x8074)
> +REG64(S_STRTAB_BASE, 0x8080)
> +REG32(S_STRTAB_BASE_CFG, 0x8088)
> + FIELD(S_STRTAB_BASE_CFG, LOG2SIZE, 0, 6)
> + FIELD(S_STRTAB_BASE_CFG, SPLIT, 6, 5)
> + FIELD(S_STRTAB_BASE_CFG, FMT, 16, 2)
> +
> +REG64(S_CMDQ_BASE, 0x8090)
> +REG32(S_CMDQ_PROD, 0x8098)
> +REG32(S_CMDQ_CONS, 0x809c)
> + FIELD(S_CMDQ_CONS, ERR, 24, 7)
> +
> +REG64(S_EVENTQ_BASE, 0x80a0)
> +REG32(S_EVENTQ_PROD, 0x80a8)
> +REG32(S_EVENTQ_CONS, 0x80ac)
> +REG64(S_EVENTQ_IRQ_CFG0, 0x80b0)
> +REG32(S_EVENTQ_IRQ_CFG1, 0x80b8)
> +REG32(S_EVENTQ_IRQ_CFG2, 0x80bc)
> +
> static inline int smmu_enabled(SMMUv3State *s)
> {
> return FIELD_EX32(s->cr[0], CR0, SMMUEN);
Besides other definitions look good to me
Thanks
Eric
2025-09-27 10:29 ` Eric Auger
@ 2025-09-28 4:46 ` Tao Tang
0 siblings, 0 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-28 4:46 UTC (permalink / raw)
To: Peter Maydell, Eric Auger
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Jean-Philippe Brucker, Mostafa Saleh, Philippe Mathieu-Daudé
Hi Eric,
On 2025/9/27 18:29, Eric Auger wrote:
> Hi Tao,
>
> On 9/25/25 6:26 PM, Tao Tang wrote:
>> The Arm SMMUv3 architecture defines a set of registers and commands for
>> managing secure transactions and context.
>>
>> This patch introduces the definitions for these secure registers and
>> commands within the SMMUv3 device model internal header.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3-internal.h | 72 +++++++++++++++++++++++++++++++++++++++-
>> 1 file changed, 71 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
>> index 71a3c0c02c..3820157eaa 100644
>> --- a/hw/arm/smmuv3-internal.h
>> +++ b/hw/arm/smmuv3-internal.h
>> @@ -38,7 +38,7 @@ typedef enum SMMUTranslationClass {
>> SMMU_CLASS_IN,
>> } SMMUTranslationClass;
>>
>> -/* MMIO Registers */
>> +/* MMIO Registers. The offsets are shared by Non-secure/Realm/Root states. */
> s/The offsets are shared/Shared ?
Thanks for your review. I'll modify it in the next version:
-/* MMIO Registers */
+/* MMIO Registers. Shared by Non-secure/Realm/Root states. */
>>
>> REG32(IDR0, 0x0)
>> FIELD(IDR0, S2P, 0 , 1)
>> @@ -121,6 +121,7 @@ REG32(CR0, 0x20)
>> FIELD(CR0, CMDQEN, 3, 1)
>>
>> #define SMMU_CR0_RESERVED 0xFFFFFA20
>> +#define SMMU_S_CR0_RESERVED 0xFFFFFC12
>>
>>
>> REG32(CR0ACK, 0x24)
>> @@ -180,6 +181,75 @@ REG32(EVENTQ_IRQ_CFG2, 0xbc)
>>
>> #define A_IDREGS 0xfd0
>>
>> +/* Secure registers. The offsets are begin with SMMU_SECURE_BASE_OFFSET */
> Start of secure-only registers? At least it deserves some reworking.
Agreed. "Start of secure-only registers" is much clearer, and renaming the
macro to SMMU_SECURE_REG_START may be better as well. I'll update both in v3:
/* Start of secure-only registers */
#define SMMU_SECURE_REG_START 0x8000
>> +#define SMMU_SECURE_BASE_OFFSET 0x8000
>> +
>> +REG32(S_IDR0, 0x8000)
>> +REG32(S_IDR1, 0x8004)
>> + FIELD(S_IDR1, S_SIDSIZE, 0 , 6)
>> + FIELD(S_IDR1, SEL2, 29, 1)
>> + FIELD(S_IDR1, SECURE_IMPL, 31, 1)
>> +
>> +REG32(S_IDR2, 0x8008)
>> +REG32(S_IDR3, 0x800c)
>> +REG32(S_IDR4, 0x8010)
>> +
>> +REG32(S_CR0, 0x8020)
>> + FIELD(S_CR0, SMMUEN, 0, 1)
>> + FIELD(S_CR0, EVENTQEN, 2, 1)
>> + FIELD(S_CR0, CMDQEN, 3, 1)
>> +
>> +REG32(S_CR0ACK, 0x8024)
>> +REG32(S_CR1, 0x8028)
>> +REG32(S_CR2, 0x802c)
>> +
>> +REG32(S_INIT, 0x803c)
>> + FIELD(S_INIT, INV_ALL, 0, 1)
>> +/* Alias for the S_INIT offset to match in the dispatcher switch */
> what is the S_INIT_ALIAS purpose? At this stage of the reading I don't
> understand the above comment. Since it does not match any actual reg, I would
> move this definition into the patch that actually uses it.
My initial idea was to use this alias to handle banked registers. Since
secure/realm/root registers share the same lower address offset
with non-secure, this alias allowed me to use a single case for all in
the MMIO dispatcher.
However, I agree with you that this approach is not clear and could be
confusing. In the v3 patch, I will remove the alias in patch #03 and use
case (A_S_INIT & 0xfff) directly in patch #09, which is much more
straightforward.
uint32_t reg_offset = offset & 0xfff;
switch (reg_offset) {
case (A_S_INIT & 0xfff):
......
}
Thanks,
Tao
>> +#define A_S_INIT_ALIAS 0x3c
>> +
>> +REG32(S_GBPA, 0x8044)
>> + FIELD(S_GBPA, ABORT, 20, 1)
>> + FIELD(S_GBPA, UPDATE, 31, 1)
>> +
>> +REG32(S_IRQ_CTRL, 0x8050)
>> + FIELD(S_IRQ_CTRL, GERROR_IRQEN, 0, 1)
>> + FIELD(S_IRQ_CTRL, EVENTQ_IRQEN, 2, 1)
>> +
>> +REG32(S_IRQ_CTRLACK, 0x8054)
>> +
>> +REG32(S_GERROR, 0x8060)
>> + FIELD(S_GERROR, CMDQ_ERR, 0, 1)
>> +
>> +#define SMMU_GERROR_IRQ_CFG0_RESERVED 0x00FFFFFFFFFFFFFC
>> +#define SMMU_GERROR_IRQ_CFG2_RESERVED 0x000000000000003F
>> +
>> +#define SMMU_STRTAB_BASE_RESERVED 0x40FFFFFFFFFFFFC0
>> +#define SMMU_QUEUE_BASE_RESERVED 0x40FFFFFFFFFFFFFF
>> +#define SMMU_EVENTQ_IRQ_CFG0_RESERVED 0x00FFFFFFFFFFFFFC
>> +
>> +REG32(S_GERRORN, 0x8064)
>> +REG64(S_GERROR_IRQ_CFG0, 0x8068)
>> +REG32(S_GERROR_IRQ_CFG1, 0x8070)
>> +REG32(S_GERROR_IRQ_CFG2, 0x8074)
>> +REG64(S_STRTAB_BASE, 0x8080)
>> +REG32(S_STRTAB_BASE_CFG, 0x8088)
>> + FIELD(S_STRTAB_BASE_CFG, LOG2SIZE, 0, 6)
>> + FIELD(S_STRTAB_BASE_CFG, SPLIT, 6, 5)
>> + FIELD(S_STRTAB_BASE_CFG, FMT, 16, 2)
>> +
>> +REG64(S_CMDQ_BASE, 0x8090)
>> +REG32(S_CMDQ_PROD, 0x8098)
>> +REG32(S_CMDQ_CONS, 0x809c)
>> + FIELD(S_CMDQ_CONS, ERR, 24, 7)
>> +
>> +REG64(S_EVENTQ_BASE, 0x80a0)
>> +REG32(S_EVENTQ_PROD, 0x80a8)
>> +REG32(S_EVENTQ_CONS, 0x80ac)
>> +REG64(S_EVENTQ_IRQ_CFG0, 0x80b0)
>> +REG32(S_EVENTQ_IRQ_CFG1, 0x80b8)
>> +REG32(S_EVENTQ_IRQ_CFG2, 0x80bc)
>> +
>> static inline int smmu_enabled(SMMUv3State *s)
>> {
>> return FIELD_EX32(s->cr[0], CR0, SMMUEN);
> Besides other definitions look good to me
>
> Thanks
>
> Eric
* [PATCH v2 04/14] refactor: Move ARMSecuritySpace to a common header
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (2 preceding siblings ...)
2025-09-25 16:26 ` [PATCH v2 03/14] hw/arm/smmuv3: Introduce secure registers and commands Tao Tang
@ 2025-09-25 16:26 ` Tao Tang
2025-09-28 13:19 ` Eric Auger
2025-09-25 16:26 ` [PATCH v2 05/14] hw/arm/smmuv3: Introduce banked registers for SMMUv3 state Tao Tang
` (11 subsequent siblings)
15 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-25 16:26 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa, Tao Tang
The ARMSecuritySpace enum and its related helpers were defined in the
target-specific header target/arm/cpu.h. This prevented common,
target-agnostic code like the SMMU model from using these definitions
without triggering "cpu.h included from common code" errors.
To resolve this, this commit introduces a new, lightweight header,
include/hw/arm/arm-security.h, which is safe for inclusion by common
code.
The following changes were made:
- The ARMSecuritySpace enum and the arm_space_is_secure() and
arm_secure_to_space() helpers have been moved from target/arm/cpu.h
to the new hw/arm/arm-security.h header.
- Headers for common devices like the SMMU (smmu-common.h) have been
updated to include the new lightweight header instead of cpu.h.
This refactoring decouples the security state definitions from the core
CPU implementation, allowing common hardware models to correctly handle
security states without pulling in heavyweight, target-specific headers.
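As a rough usage sketch (assumed, not code from this series), common
device code can now do:
#include "hw/arm/arm-security.h"
#include "exec/memattrs.h"
/* Hypothetical helper mapping transaction attributes to a security space */
static inline ARMSecuritySpace smmu_attrs_to_space(MemTxAttrs attrs)
{
    return arm_secure_to_space(attrs.secure);
}
without pulling in target/arm/cpu.h.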
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
include/hw/arm/arm-security.h | 54 +++++++++++++++++++++++++++++++++++
target/arm/cpu.h | 25 +---------------
2 files changed, 55 insertions(+), 24 deletions(-)
create mode 100644 include/hw/arm/arm-security.h
diff --git a/include/hw/arm/arm-security.h b/include/hw/arm/arm-security.h
new file mode 100644
index 0000000000..9664c0f95e
--- /dev/null
+++ b/include/hw/arm/arm-security.h
@@ -0,0 +1,54 @@
+/*
+ * ARM security space helpers
+ *
+ * Provide ARMSecuritySpace and helpers for code that is not tied to CPU.
+ *
+ * Copyright (c) 2003 Fabrice Bellard
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef HW_ARM_ARM_SECURITY_H
+#define HW_ARM_ARM_SECURITY_H
+
+#include <stdbool.h>
+
+/*
+ * ARM v9 security states.
+ * The ordering of the enumeration corresponds to the low 2 bits
+ * of the GPI value, and (except for Root) the concat of NSE:NS.
+ */
+
+ typedef enum ARMSecuritySpace {
+ ARMSS_Secure = 0,
+ ARMSS_NonSecure = 1,
+ ARMSS_Root = 2,
+ ARMSS_Realm = 3,
+} ARMSecuritySpace;
+
+/* Return true if @space is secure, in the pre-v9 sense. */
+static inline bool arm_space_is_secure(ARMSecuritySpace space)
+{
+ return space == ARMSS_Secure || space == ARMSS_Root;
+}
+
+/* Return the ARMSecuritySpace for @secure, assuming !RME or EL[0-2]. */
+static inline ARMSecuritySpace arm_secure_to_space(bool secure)
+{
+ return secure ? ARMSS_Secure : ARMSS_NonSecure;
+}
+
+#endif /* HW_ARM_ARM_SECURITY_H */
+
+
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 1c0deb723d..2ff9343d0b 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -31,6 +31,7 @@
#include "exec/page-protection.h"
#include "qapi/qapi-types-common.h"
#include "target/arm/multiprocessing.h"
+#include "hw/arm/arm-security.h"
#include "target/arm/gtimer.h"
#include "target/arm/cpu-sysregs.h"
@@ -2477,30 +2478,6 @@ static inline int arm_feature(CPUARMState *env, int feature)
void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp);
-/*
- * ARM v9 security states.
- * The ordering of the enumeration corresponds to the low 2 bits
- * of the GPI value, and (except for Root) the concat of NSE:NS.
- */
-
-typedef enum ARMSecuritySpace {
- ARMSS_Secure = 0,
- ARMSS_NonSecure = 1,
- ARMSS_Root = 2,
- ARMSS_Realm = 3,
-} ARMSecuritySpace;
-
-/* Return true if @space is secure, in the pre-v9 sense. */
-static inline bool arm_space_is_secure(ARMSecuritySpace space)
-{
- return space == ARMSS_Secure || space == ARMSS_Root;
-}
-
-/* Return the ARMSecuritySpace for @secure, assuming !RME or EL[0-2]. */
-static inline ARMSecuritySpace arm_secure_to_space(bool secure)
-{
- return secure ? ARMSS_Secure : ARMSS_NonSecure;
-}
#if !defined(CONFIG_USER_ONLY)
/**
--
2.34.1
* Re: [PATCH v2 04/14] refactor: Move ARMSecuritySpace to a common header
2025-09-25 16:26 ` [PATCH v2 04/14] refactor: Move ARMSecuritySpace to a common header Tao Tang
@ 2025-09-28 13:19 ` Eric Auger
0 siblings, 0 replies; 48+ messages in thread
From: Eric Auger @ 2025-09-28 13:19 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
Hi Tao,
On 9/25/25 6:26 PM, Tao Tang wrote:
> The ARMSecuritySpace enum and its related helpers were defined in the
> target-specific header target/arm/cpu.h. This prevented common,
> target-agnostic code like the SMMU model from using these definitions
> without triggering "cpu.h included from common code" errors.
>
> To resolve this, this commit introduces a new, lightweight header,
> include/hw/arm/arm-security.h, which is safe for inclusion by common
> code.
>
> The following changes were made:
>
> - The ARMSecuritySpace enum and the arm_space_is_secure() and
> arm_secure_to_space() helpers have been moved from target/arm/cpu.h
> to the new hw/arm/arm-security.h header.
>
> - Headers for common devices like the SMMU (smmu-common.h) have been
> updated to include the new lightweight header instead of cpu.h.
the above is not done in this patch.
>
> This refactoring decouples the security state definitions from the core
> CPU implementation, allowing common hardware models to correctly handle
> security states without pulling in heavyweight, target-specific headers.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
Besides
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Thanks
Eric
> ---
> include/hw/arm/arm-security.h | 54 +++++++++++++++++++++++++++++++++++
> target/arm/cpu.h | 25 +---------------
> 2 files changed, 55 insertions(+), 24 deletions(-)
> create mode 100644 include/hw/arm/arm-security.h
>
> diff --git a/include/hw/arm/arm-security.h b/include/hw/arm/arm-security.h
> new file mode 100644
> index 0000000000..9664c0f95e
> --- /dev/null
> +++ b/include/hw/arm/arm-security.h
> @@ -0,0 +1,54 @@
> +/*
> + * ARM security space helpers
> + *
> + * Provide ARMSecuritySpace and helpers for code that is not tied to CPU.
> + *
> + * Copyright (c) 2003 Fabrice Bellard
> + *
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; if not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef HW_ARM_ARM_SECURITY_H
> +#define HW_ARM_ARM_SECURITY_H
> +
> +#include <stdbool.h>
> +
> +/*
> + * ARM v9 security states.
> + * The ordering of the enumeration corresponds to the low 2 bits
> + * of the GPI value, and (except for Root) the concat of NSE:NS.
> + */
> +
> + typedef enum ARMSecuritySpace {
> + ARMSS_Secure = 0,
> + ARMSS_NonSecure = 1,
> + ARMSS_Root = 2,
> + ARMSS_Realm = 3,
> +} ARMSecuritySpace;
> +
> +/* Return true if @space is secure, in the pre-v9 sense. */
> +static inline bool arm_space_is_secure(ARMSecuritySpace space)
> +{
> + return space == ARMSS_Secure || space == ARMSS_Root;
> +}
> +
> +/* Return the ARMSecuritySpace for @secure, assuming !RME or EL[0-2]. */
> +static inline ARMSecuritySpace arm_secure_to_space(bool secure)
> +{
> + return secure ? ARMSS_Secure : ARMSS_NonSecure;
> +}
> +
> +#endif /* HW_ARM_ARM_SECURITY_H */
> +
> +
> diff --git a/target/arm/cpu.h b/target/arm/cpu.h
> index 1c0deb723d..2ff9343d0b 100644
> --- a/target/arm/cpu.h
> +++ b/target/arm/cpu.h
> @@ -31,6 +31,7 @@
> #include "exec/page-protection.h"
> #include "qapi/qapi-types-common.h"
> #include "target/arm/multiprocessing.h"
> +#include "hw/arm/arm-security.h"
> #include "target/arm/gtimer.h"
> #include "target/arm/cpu-sysregs.h"
>
> @@ -2477,30 +2478,6 @@ static inline int arm_feature(CPUARMState *env, int feature)
>
> void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp);
>
> -/*
> - * ARM v9 security states.
> - * The ordering of the enumeration corresponds to the low 2 bits
> - * of the GPI value, and (except for Root) the concat of NSE:NS.
> - */
> -
> -typedef enum ARMSecuritySpace {
> - ARMSS_Secure = 0,
> - ARMSS_NonSecure = 1,
> - ARMSS_Root = 2,
> - ARMSS_Realm = 3,
> -} ARMSecuritySpace;
> -
> -/* Return true if @space is secure, in the pre-v9 sense. */
> -static inline bool arm_space_is_secure(ARMSecuritySpace space)
> -{
> - return space == ARMSS_Secure || space == ARMSS_Root;
> -}
> -
> -/* Return the ARMSecuritySpace for @secure, assuming !RME or EL[0-2]. */
> -static inline ARMSecuritySpace arm_secure_to_space(bool secure)
> -{
> - return secure ? ARMSS_Secure : ARMSS_NonSecure;
> -}
>
> #if !defined(CONFIG_USER_ONLY)
> /**
* [PATCH v2 05/14] hw/arm/smmuv3: Introduce banked registers for SMMUv3 state
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (3 preceding siblings ...)
2025-09-25 16:26 ` [PATCH v2 04/14] refactor: Move ARMSecuritySpace to a common header Tao Tang
@ 2025-09-25 16:26 ` Tao Tang
2025-09-28 14:26 ` Eric Auger
2025-09-25 16:26 ` [PATCH v2 06/14] hw/arm/smmuv3: Add separate address space for secure SMMU accesses Tao Tang
` (10 subsequent siblings)
15 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-25 16:26 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa, Tao Tang
Refactor the SMMUv3 state management by introducing a banked register
structure. This change is foundational for supporting multiple security
states (Non-secure, Secure, etc.) in a clean and scalable way.
A new structure, SMMUv3RegBank, is defined to hold the state for a
single security context. The main SMMUv3State now contains an array of
these structures. This avoids having separate fields for secure and
non-secure registers (e.g., s->cr and s->secure_cr).
The primary benefits of this refactoring are:
- Significant reduction in code duplication for MMIO handlers.
- Improved code readability and long-term maintainability.
Additionally, a new enum SMMUSecurityIndex is introduced to represent
the security state of a stream. This enum will be used as the index for
the register banks in subsequent patches.
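In abridged form the new layout looks roughly like this (the exact fields
are in the hunks below; SMMU_SEC_IDX_S is an assumption based on the later
patches in this series):
typedef enum SMMUSecurityIndex {
    SMMU_SEC_IDX_NS = 0,
    SMMU_SEC_IDX_S,   /* assumed: wired up by later patches */
    SMMU_SEC_IDX_NUM,
} SMMUSecurityIndex;
typedef struct SMMUv3RegBank {
    uint32_t idr[6];
    uint32_t cr[3];
    uint32_t cr0ack;
    uint32_t irq_ctrl;
    uint32_t gerror, gerrorn;
    uint32_t gbpa;
    uint64_t strtab_base;
    uint32_t strtab_base_cfg;
    uint8_t  sid_split;
    uint32_t features;
    SMMUQueue cmdq, eventq;
    /* per-bank IRQ configuration registers elided */
} SMMUv3RegBank;
Accesses then become s->bank[sec_idx].cr[0] instead of s->cr[0].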
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3-internal.h | 33 ++-
hw/arm/smmuv3.c | 484 ++++++++++++++++++++---------------
hw/arm/trace-events | 6 +-
include/hw/arm/smmu-common.h | 14 +
include/hw/arm/smmuv3.h | 34 ++-
5 files changed, 336 insertions(+), 235 deletions(-)
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index 3820157eaa..cf17c405de 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -250,9 +250,9 @@ REG64(S_EVENTQ_IRQ_CFG0, 0x80b0)
REG32(S_EVENTQ_IRQ_CFG1, 0x80b8)
REG32(S_EVENTQ_IRQ_CFG2, 0x80bc)
-static inline int smmu_enabled(SMMUv3State *s)
+static inline int smmu_enabled(SMMUv3State *s, SMMUSecurityIndex sec_idx)
{
- return FIELD_EX32(s->cr[0], CR0, SMMUEN);
+ return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, SMMUEN);
}
/* Command Queue Entry */
@@ -278,14 +278,16 @@ static inline uint32_t smmuv3_idreg(int regoffset)
return smmuv3_ids[regoffset / 4];
}
-static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s)
+static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
{
- return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
+ return FIELD_EX32(s->bank[sec_idx].irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
}
-static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s)
+static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
{
- return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
+ return FIELD_EX32(s->bank[sec_idx].irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
}
/* Queue Handling */
@@ -328,19 +330,23 @@ static inline void queue_cons_incr(SMMUQueue *q)
q->cons = deposit32(q->cons, 0, q->log2size + 1, q->cons + 1);
}
-static inline bool smmuv3_cmdq_enabled(SMMUv3State *s)
+static inline bool smmuv3_cmdq_enabled(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
{
- return FIELD_EX32(s->cr[0], CR0, CMDQEN);
+ return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, CMDQEN);
}
-static inline bool smmuv3_eventq_enabled(SMMUv3State *s)
+static inline bool smmuv3_eventq_enabled(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
{
- return FIELD_EX32(s->cr[0], CR0, EVENTQEN);
+ return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, EVENTQEN);
}
-static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type)
+static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type,
+ SMMUSecurityIndex sec_idx)
{
- s->cmdq.cons = FIELD_DP32(s->cmdq.cons, CMDQ_CONS, ERR, err_type);
+ s->bank[sec_idx].cmdq.cons = FIELD_DP32(s->bank[sec_idx].cmdq.cons,
+ CMDQ_CONS, ERR, err_type);
}
/* Commands */
@@ -511,6 +517,7 @@ typedef struct SMMUEventInfo {
uint32_t sid;
bool recorded;
bool inval_ste_allowed;
+ SMMUSecurityIndex sec_idx;
union {
struct {
uint32_t ssid;
@@ -594,7 +601,7 @@ typedef struct SMMUEventInfo {
(x)->word[6] = (uint32_t)(addr & 0xffffffff); \
} while (0)
-void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *event);
+void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info);
/* Configuration Data */
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index bcf8af8dc7..2efa39b78c 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -48,14 +48,14 @@
* @gerror_mask: mask of gerrors to toggle (relevant if @irq is GERROR)
*/
static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
- uint32_t gerror_mask)
+ uint32_t gerror_mask, SMMUSecurityIndex sec_idx)
{
bool pulse = false;
switch (irq) {
case SMMU_IRQ_EVTQ:
- pulse = smmuv3_eventq_irq_enabled(s);
+ pulse = smmuv3_eventq_irq_enabled(s, sec_idx);
break;
case SMMU_IRQ_PRIQ:
qemu_log_mask(LOG_UNIMP, "PRI not yet supported\n");
@@ -65,17 +65,17 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
break;
case SMMU_IRQ_GERROR:
{
- uint32_t pending = s->gerror ^ s->gerrorn;
+ uint32_t pending = s->bank[sec_idx].gerror ^ s->bank[sec_idx].gerrorn;
uint32_t new_gerrors = ~pending & gerror_mask;
if (!new_gerrors) {
/* only toggle non pending errors */
return;
}
- s->gerror ^= new_gerrors;
- trace_smmuv3_write_gerror(new_gerrors, s->gerror);
+ s->bank[sec_idx].gerror ^= new_gerrors;
+ trace_smmuv3_write_gerror(new_gerrors, s->bank[sec_idx].gerror);
- pulse = smmuv3_gerror_irq_enabled(s);
+ pulse = smmuv3_gerror_irq_enabled(s, sec_idx);
break;
}
}
@@ -85,24 +85,25 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
}
}
-static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
+static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn,
+ SMMUSecurityIndex sec_idx)
{
- uint32_t pending = s->gerror ^ s->gerrorn;
- uint32_t toggled = s->gerrorn ^ new_gerrorn;
+ uint32_t pending = s->bank[sec_idx].gerror ^ s->bank[sec_idx].gerrorn;
+ uint32_t toggled = s->bank[sec_idx].gerrorn ^ new_gerrorn;
if (toggled & ~pending) {
qemu_log_mask(LOG_GUEST_ERROR,
- "guest toggles non pending errors = 0x%x\n",
- toggled & ~pending);
+ "guest toggles non pending errors = 0x%x sec_idx=%d\n",
+ toggled & ~pending, sec_idx);
}
/*
* We do not raise any error in case guest toggles bits corresponding
* to not active IRQs (CONSTRAINED UNPREDICTABLE)
*/
- s->gerrorn = new_gerrorn;
+ s->bank[sec_idx].gerrorn = new_gerrorn;
- trace_smmuv3_write_gerrorn(toggled & pending, s->gerrorn);
+ trace_smmuv3_write_gerrorn(toggled & pending, s->bank[sec_idx].gerrorn);
}
static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
@@ -142,12 +143,13 @@ static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
return MEMTX_OK;
}
-static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
+static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt,
+ SMMUSecurityIndex sec_idx)
{
- SMMUQueue *q = &s->eventq;
+ SMMUQueue *q = &s->bank[sec_idx].eventq;
MemTxResult r;
- if (!smmuv3_eventq_enabled(s)) {
+ if (!smmuv3_eventq_enabled(s, sec_idx)) {
return MEMTX_ERROR;
}
@@ -161,7 +163,7 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
}
if (!smmuv3_q_empty(q)) {
- smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0);
+ smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0, sec_idx);
}
return MEMTX_OK;
}
@@ -171,7 +173,7 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
Evt evt = {};
MemTxResult r;
- if (!smmuv3_eventq_enabled(s)) {
+ if (!smmuv3_eventq_enabled(s, info->sec_idx)) {
return;
}
@@ -249,74 +251,104 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
g_assert_not_reached();
}
- trace_smmuv3_record_event(smmu_event_string(info->type), info->sid);
- r = smmuv3_write_eventq(s, &evt);
+ trace_smmuv3_record_event(smmu_event_string(info->type),
+ info->sid, info->sec_idx);
+ r = smmuv3_write_eventq(s, &evt, info->sec_idx);
if (r != MEMTX_OK) {
- smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK);
+ smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK,
+ info->sec_idx);
}
info->recorded = true;
}
static void smmuv3_init_regs(SMMUv3State *s)
{
+ /* Initialize Non-secure bank (SMMU_SEC_IDX_NS) */
/* Based on sys property, the stages supported in smmu will be advertised.*/
if (s->stage && !strcmp("2", s->stage)) {
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S2P, 1);
+ s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S2P, 1);
} else if (s->stage && !strcmp("nested", s->stage)) {
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1);
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S2P, 1);
+ s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S1P, 1);
+ s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S2P, 1);
} else {
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1);
- }
-
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTF, 2); /* AArch64 PTW only */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, COHACC, 1); /* IO coherent */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, ASID16, 1); /* 16-bit ASID */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, VMID16, 1); /* 16-bit VMID */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTENDIAN, 2); /* little endian */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, STALL_MODEL, 1); /* No stall */
+ s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S1P, 1);
+ }
+
+ s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, TTF, 2); /* AArch64 PTW only */
+ s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, COHACC, 1); /* IO coherent */
+ s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, ASID16, 1); /* 16-bit ASID */
+ s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, VMID16, 1); /* 16-bit VMID */
+ s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, TTENDIAN, 2); /* little endian */
+ s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, STALL_MODEL, 1); /* No stall */
/* terminated transaction will always be aborted/error returned */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TERM_MODEL, 1);
+ s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, TERM_MODEL, 1);
/* 2-level stream table supported */
- s->idr[0] = FIELD_DP32(s->idr[0], IDR0, STLEVEL, 1);
-
- s->idr[1] = FIELD_DP32(s->idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
- s->idr[1] = FIELD_DP32(s->idr[1], IDR1, EVENTQS, SMMU_EVENTQS);
- s->idr[1] = FIELD_DP32(s->idr[1], IDR1, CMDQS, SMMU_CMDQS);
-
- s->idr[3] = FIELD_DP32(s->idr[3], IDR3, HAD, 1);
- if (FIELD_EX32(s->idr[0], IDR0, S2P)) {
+ s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, STLEVEL, 1);
+
+ s->bank[SMMU_SEC_IDX_NS].idr[1] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
+ s->bank[SMMU_SEC_IDX_NS].idr[1] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, EVENTQS, SMMU_EVENTQS);
+ s->bank[SMMU_SEC_IDX_NS].idr[1] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, CMDQS, SMMU_CMDQS);
+
+ s->bank[SMMU_SEC_IDX_NS].idr[3] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[3], IDR3, HAD, 1);
+ if (FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S2P)) {
/* XNX is a stage-2-specific feature */
- s->idr[3] = FIELD_DP32(s->idr[3], IDR3, XNX, 1);
+ s->bank[SMMU_SEC_IDX_NS].idr[3] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[3], IDR3, XNX, 1);
}
- s->idr[3] = FIELD_DP32(s->idr[3], IDR3, RIL, 1);
- s->idr[3] = FIELD_DP32(s->idr[3], IDR3, BBML, 2);
+ s->bank[SMMU_SEC_IDX_NS].idr[3] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[3], IDR3, RIL, 1);
+ s->bank[SMMU_SEC_IDX_NS].idr[3] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[3], IDR3, BBML, 2);
- s->idr[5] = FIELD_DP32(s->idr[5], IDR5, OAS, SMMU_IDR5_OAS); /* 44 bits */
+ /* 44 bits */
+ s->bank[SMMU_SEC_IDX_NS].idr[5] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, OAS, SMMU_IDR5_OAS);
/* 4K, 16K and 64K granule support */
- s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN4K, 1);
- s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN16K, 1);
- s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN64K, 1);
-
- s->cmdq.base = deposit64(s->cmdq.base, 0, 5, SMMU_CMDQS);
- s->cmdq.prod = 0;
- s->cmdq.cons = 0;
- s->cmdq.entry_size = sizeof(struct Cmd);
- s->eventq.base = deposit64(s->eventq.base, 0, 5, SMMU_EVENTQS);
- s->eventq.prod = 0;
- s->eventq.cons = 0;
- s->eventq.entry_size = sizeof(struct Evt);
-
- s->features = 0;
- s->sid_split = 0;
+ s->bank[SMMU_SEC_IDX_NS].idr[5] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, GRAN4K, 1);
+ s->bank[SMMU_SEC_IDX_NS].idr[5] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, GRAN16K, 1);
+ s->bank[SMMU_SEC_IDX_NS].idr[5] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, GRAN64K, 1);
+
+ /* Initialize Non-secure command and event queues */
+ s->bank[SMMU_SEC_IDX_NS].cmdq.base =
+ deposit64(s->bank[SMMU_SEC_IDX_NS].cmdq.base, 0, 5, SMMU_CMDQS);
+ s->bank[SMMU_SEC_IDX_NS].cmdq.prod = 0;
+ s->bank[SMMU_SEC_IDX_NS].cmdq.cons = 0;
+ s->bank[SMMU_SEC_IDX_NS].cmdq.entry_size = sizeof(struct Cmd);
+ s->bank[SMMU_SEC_IDX_NS].eventq.base =
+ deposit64(s->bank[SMMU_SEC_IDX_NS].eventq.base, 0, 5, SMMU_EVENTQS);
+ s->bank[SMMU_SEC_IDX_NS].eventq.prod = 0;
+ s->bank[SMMU_SEC_IDX_NS].eventq.cons = 0;
+ s->bank[SMMU_SEC_IDX_NS].eventq.entry_size = sizeof(struct Evt);
+ s->bank[SMMU_SEC_IDX_NS].features = 0;
+ s->bank[SMMU_SEC_IDX_NS].sid_split = 0;
s->aidr = 0x1;
- s->cr[0] = 0;
- s->cr0ack = 0;
- s->irq_ctrl = 0;
- s->gerror = 0;
- s->gerrorn = 0;
+ s->bank[SMMU_SEC_IDX_NS].cr[0] = 0;
+ s->bank[SMMU_SEC_IDX_NS].cr0ack = 0;
+ s->bank[SMMU_SEC_IDX_NS].irq_ctrl = 0;
+ s->bank[SMMU_SEC_IDX_NS].gerror = 0;
+ s->bank[SMMU_SEC_IDX_NS].gerrorn = 0;
s->statusr = 0;
- s->gbpa = SMMU_GBPA_RESET_VAL;
+ s->bank[SMMU_SEC_IDX_NS].gbpa = SMMU_GBPA_RESET_VAL;
+
}
static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
@@ -430,7 +462,7 @@ static bool s2_pgtable_config_valid(uint8_t sl0, uint8_t t0sz, uint8_t gran)
static int decode_ste_s2_cfg(SMMUv3State *s, SMMUTransCfg *cfg,
STE *ste)
{
- uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
+ uint8_t oas = FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, OAS);
if (STE_S2AA64(ste) == 0x0) {
qemu_log_mask(LOG_UNIMP,
@@ -548,7 +580,7 @@ static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
STE *ste, SMMUEventInfo *event)
{
uint32_t config;
- uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
+ uint8_t oas = FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, OAS);
int ret;
if (!STE_VALID(ste)) {
@@ -625,20 +657,25 @@ bad_ste:
* @sid: stream ID
* @ste: returned stream table entry
* @event: handle to an event info
+ * @cfg: translation configuration
*
* Supports linear and 2-level stream table
* Return 0 on success, -EINVAL otherwise
*/
static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
- SMMUEventInfo *event)
+ SMMUEventInfo *event, SMMUTransCfg *cfg)
{
- dma_addr_t addr, strtab_base;
+ dma_addr_t addr;
uint32_t log2size;
int strtab_size_shift;
int ret;
+ uint32_t features = s->bank[cfg->sec_idx].features;
+ dma_addr_t strtab_base = s->bank[cfg->sec_idx].strtab_base;
+ uint8_t sid_split = s->bank[cfg->sec_idx].sid_split;
- trace_smmuv3_find_ste(sid, s->features, s->sid_split);
- log2size = FIELD_EX32(s->strtab_base_cfg, STRTAB_BASE_CFG, LOG2SIZE);
+ trace_smmuv3_find_ste(sid, features, sid_split, cfg->sec_idx);
+ log2size = FIELD_EX32(s->bank[cfg->sec_idx].strtab_base_cfg,
+ STRTAB_BASE_CFG, LOG2SIZE);
/*
* Check SID range against both guest-configured and implementation limits
*/
@@ -646,7 +683,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
event->type = SMMU_EVT_C_BAD_STREAMID;
return -EINVAL;
}
- if (s->features & SMMU_FEATURE_2LVL_STE) {
+ if (features & SMMU_FEATURE_2LVL_STE) {
int l1_ste_offset, l2_ste_offset, max_l2_ste, span, i;
dma_addr_t l1ptr, l2ptr;
STEDesc l1std;
@@ -655,11 +692,11 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
* Align strtab base address to table size. For this purpose, assume it
* is not bounded by SMMU_IDR1_SIDSIZE.
*/
- strtab_size_shift = MAX(5, (int)log2size - s->sid_split - 1 + 3);
- strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK &
+ strtab_size_shift = MAX(5, (int)log2size - sid_split - 1 + 3);
+ strtab_base = strtab_base & SMMU_BASE_ADDR_MASK &
~MAKE_64BIT_MASK(0, strtab_size_shift);
- l1_ste_offset = sid >> s->sid_split;
- l2_ste_offset = sid & ((1 << s->sid_split) - 1);
+ l1_ste_offset = sid >> sid_split;
+ l2_ste_offset = sid & ((1 << sid_split) - 1);
l1ptr = (dma_addr_t)(strtab_base + l1_ste_offset * sizeof(l1std));
/* TODO: guarantee 64-bit single-copy atomicity */
ret = dma_memory_read(&address_space_memory, l1ptr, &l1std,
@@ -688,7 +725,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
}
max_l2_ste = (1 << span) - 1;
l2ptr = l1std_l2ptr(&l1std);
- trace_smmuv3_find_ste_2lvl(s->strtab_base, l1ptr, l1_ste_offset,
+ trace_smmuv3_find_ste_2lvl(strtab_base, l1ptr, l1_ste_offset,
l2ptr, l2_ste_offset, max_l2_ste);
if (l2_ste_offset > max_l2_ste) {
qemu_log_mask(LOG_GUEST_ERROR,
@@ -700,7 +737,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
addr = l2ptr + l2_ste_offset * sizeof(*ste);
} else {
strtab_size_shift = log2size + 5;
- strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK &
+ strtab_base = strtab_base & SMMU_BASE_ADDR_MASK &
~MAKE_64BIT_MASK(0, strtab_size_shift);
addr = strtab_base + sid * sizeof(*ste);
}
@@ -719,7 +756,7 @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
int i;
SMMUTranslationStatus status;
SMMUTLBEntry *entry;
- uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
+ uint8_t oas = FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, OAS);
if (!CD_VALID(cd) || !CD_AARCH64(cd)) {
goto bad_cd;
@@ -834,7 +871,7 @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
/* ASID defaults to -1 (if s1 is not supported). */
cfg->asid = -1;
- ret = smmu_find_ste(s, sid, &ste, event);
+ ret = smmu_find_ste(s, sid, &ste, event, cfg);
if (ret) {
return ret;
}
@@ -964,6 +1001,7 @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
* - s2 translation => CLASS_IN (input to function)
*/
class = ptw_info.is_ipa_descriptor ? SMMU_CLASS_TT : class;
+ event->sec_idx = cfg->sec_idx;
switch (ptw_info.type) {
case SMMU_PTW_ERR_WALK_EABT:
event->type = SMMU_EVT_F_WALK_EABT;
@@ -1046,6 +1084,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
.inval_ste_allowed = false};
SMMUTranslationStatus status;
SMMUTransCfg *cfg = NULL;
+ SMMUSecurityIndex sec_idx = SMMU_SEC_IDX_NS;
IOMMUTLBEntry entry = {
.target_as = &address_space_memory,
.iova = addr,
@@ -1057,12 +1096,9 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
qemu_mutex_lock(&s->mutex);
- if (!smmu_enabled(s)) {
- if (FIELD_EX32(s->gbpa, GBPA, ABORT)) {
- status = SMMU_TRANS_ABORT;
- } else {
- status = SMMU_TRANS_DISABLE;
- }
+ if (!smmu_enabled(s, sec_idx)) {
+ bool abort_flag = FIELD_EX32(s->bank[sec_idx].gbpa, GBPA, ABORT);
+ status = abort_flag ? SMMU_TRANS_ABORT : SMMU_TRANS_DISABLE;
goto epilogue;
}
@@ -1278,14 +1314,14 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
}
}
-static int smmuv3_cmdq_consume(SMMUv3State *s)
+static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
{
SMMUState *bs = ARM_SMMU(s);
SMMUCmdError cmd_error = SMMU_CERROR_NONE;
- SMMUQueue *q = &s->cmdq;
+ SMMUQueue *q = &s->bank[sec_idx].cmdq;
SMMUCommandType type = 0;
- if (!smmuv3_cmdq_enabled(s)) {
+ if (!smmuv3_cmdq_enabled(s, sec_idx)) {
return 0;
}
/*
@@ -1296,11 +1332,12 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
*/
while (!smmuv3_q_empty(q)) {
- uint32_t pending = s->gerror ^ s->gerrorn;
+ uint32_t pending = s->bank[sec_idx].gerror ^ s->bank[sec_idx].gerrorn;
Cmd cmd;
trace_smmuv3_cmdq_consume(Q_PROD(q), Q_CONS(q),
- Q_PROD_WRAP(q), Q_CONS_WRAP(q));
+ Q_PROD_WRAP(q), Q_CONS_WRAP(q),
+ sec_idx);
if (FIELD_EX32(pending, GERROR, CMDQ_ERR)) {
break;
@@ -1319,7 +1356,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
switch (type) {
case SMMU_CMD_SYNC:
if (CMD_SYNC_CS(&cmd) & CMD_SYNC_SIG_IRQ) {
- smmuv3_trigger_irq(s, SMMU_IRQ_CMD_SYNC, 0);
+ smmuv3_trigger_irq(s, SMMU_IRQ_CMD_SYNC, 0, sec_idx);
}
break;
case SMMU_CMD_PREFETCH_CONFIG:
@@ -1498,8 +1535,9 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
if (cmd_error) {
trace_smmuv3_cmdq_consume_error(smmu_cmd_string(type), cmd_error);
- smmu_write_cmdq_err(s, cmd_error);
- smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_CMDQ_ERR_MASK);
+ smmu_write_cmdq_err(s, cmd_error, sec_idx);
+ smmuv3_trigger_irq(s, SMMU_IRQ_GERROR,
+ R_GERROR_CMDQ_ERR_MASK, sec_idx);
}
trace_smmuv3_cmdq_consume_out(Q_PROD(q), Q_CONS(q),
@@ -1509,31 +1547,33 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
}
static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
- uint64_t data, MemTxAttrs attrs)
+ uint64_t data, MemTxAttrs attrs,
+ SMMUSecurityIndex reg_sec_idx)
{
- switch (offset) {
- case A_GERROR_IRQ_CFG0:
- s->gerror_irq_cfg0 = data;
- return MEMTX_OK;
+ uint32_t reg_offset = offset & 0xfff;
+ switch (reg_offset) {
case A_STRTAB_BASE:
- s->strtab_base = data;
+ /* Clear reserved bits according to spec */
+ s->bank[reg_sec_idx].strtab_base = data & SMMU_STRTAB_BASE_RESERVED;
return MEMTX_OK;
case A_CMDQ_BASE:
- s->cmdq.base = data;
- s->cmdq.log2size = extract64(s->cmdq.base, 0, 5);
- if (s->cmdq.log2size > SMMU_CMDQS) {
- s->cmdq.log2size = SMMU_CMDQS;
+ s->bank[reg_sec_idx].cmdq.base = data;
+ s->bank[reg_sec_idx].cmdq.log2size = extract64(
+ s->bank[reg_sec_idx].cmdq.base, 0, 5);
+ if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
+ s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
}
return MEMTX_OK;
case A_EVENTQ_BASE:
- s->eventq.base = data;
- s->eventq.log2size = extract64(s->eventq.base, 0, 5);
- if (s->eventq.log2size > SMMU_EVENTQS) {
- s->eventq.log2size = SMMU_EVENTQS;
+ s->bank[reg_sec_idx].eventq.base = data;
+ s->bank[reg_sec_idx].eventq.log2size = extract64(
+ s->bank[reg_sec_idx].eventq.base, 0, 5);
+ if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
+ s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
}
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG0:
- s->eventq_irq_cfg0 = data;
+ s->bank[reg_sec_idx].eventq_irq_cfg0 = data;
return MEMTX_OK;
default:
qemu_log_mask(LOG_UNIMP,
@@ -1544,43 +1584,47 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
}
static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
- uint64_t data, MemTxAttrs attrs)
+ uint64_t data, MemTxAttrs attrs,
+ SMMUSecurityIndex reg_sec_idx)
{
- switch (offset) {
+ uint32_t reg_offset = offset & 0xfff;
+ switch (reg_offset) {
case A_CR0:
- s->cr[0] = data;
- s->cr0ack = data & ~SMMU_CR0_RESERVED;
+ s->bank[reg_sec_idx].cr[0] = data;
+ s->bank[reg_sec_idx].cr0ack = data;
/* in case the command queue has been enabled */
- smmuv3_cmdq_consume(s);
+ smmuv3_cmdq_consume(s, reg_sec_idx);
return MEMTX_OK;
case A_CR1:
- s->cr[1] = data;
+ s->bank[reg_sec_idx].cr[1] = data;
return MEMTX_OK;
case A_CR2:
- s->cr[2] = data;
+ s->bank[reg_sec_idx].cr[2] = data;
return MEMTX_OK;
case A_IRQ_CTRL:
- s->irq_ctrl = data;
+ s->bank[reg_sec_idx].irq_ctrl = data;
return MEMTX_OK;
case A_GERRORN:
- smmuv3_write_gerrorn(s, data);
+ smmuv3_write_gerrorn(s, data, reg_sec_idx);
/*
* By acknowledging the CMDQ_ERR, SW may notify cmds can
* be processed again
*/
- smmuv3_cmdq_consume(s);
+ smmuv3_cmdq_consume(s, reg_sec_idx);
return MEMTX_OK;
case A_GERROR_IRQ_CFG0: /* 64b */
- s->gerror_irq_cfg0 = deposit64(s->gerror_irq_cfg0, 0, 32, data);
+ s->bank[reg_sec_idx].gerror_irq_cfg0 =
+ deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32, data);
return MEMTX_OK;
case A_GERROR_IRQ_CFG0 + 4:
- s->gerror_irq_cfg0 = deposit64(s->gerror_irq_cfg0, 32, 32, data);
+ s->bank[reg_sec_idx].gerror_irq_cfg0 =
+ deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32, data);
return MEMTX_OK;
case A_GERROR_IRQ_CFG1:
- s->gerror_irq_cfg1 = data;
+ s->bank[reg_sec_idx].gerror_irq_cfg1 = data;
return MEMTX_OK;
case A_GERROR_IRQ_CFG2:
- s->gerror_irq_cfg2 = data;
+ s->bank[reg_sec_idx].gerror_irq_cfg2 = data;
return MEMTX_OK;
case A_GBPA:
/*
@@ -1589,66 +1633,81 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
*/
if (data & R_GBPA_UPDATE_MASK) {
/* Ignore update bit as write is synchronous. */
- s->gbpa = data & ~R_GBPA_UPDATE_MASK;
+ s->bank[reg_sec_idx].gbpa = data & ~R_GBPA_UPDATE_MASK;
}
return MEMTX_OK;
case A_STRTAB_BASE: /* 64b */
- s->strtab_base = deposit64(s->strtab_base, 0, 32, data);
+ s->bank[reg_sec_idx].strtab_base =
+ deposit64(s->bank[reg_sec_idx].strtab_base, 0, 32, data);
return MEMTX_OK;
case A_STRTAB_BASE + 4:
- s->strtab_base = deposit64(s->strtab_base, 32, 32, data);
+ s->bank[reg_sec_idx].strtab_base =
+ deposit64(s->bank[reg_sec_idx].strtab_base, 32, 32, data);
return MEMTX_OK;
case A_STRTAB_BASE_CFG:
- s->strtab_base_cfg = data;
+ s->bank[reg_sec_idx].strtab_base_cfg = data;
if (FIELD_EX32(data, STRTAB_BASE_CFG, FMT) == 1) {
- s->sid_split = FIELD_EX32(data, STRTAB_BASE_CFG, SPLIT);
- s->features |= SMMU_FEATURE_2LVL_STE;
+ s->bank[reg_sec_idx].sid_split =
+ FIELD_EX32(data, STRTAB_BASE_CFG, SPLIT);
+ s->bank[reg_sec_idx].features |= SMMU_FEATURE_2LVL_STE;
}
return MEMTX_OK;
case A_CMDQ_BASE: /* 64b */
- s->cmdq.base = deposit64(s->cmdq.base, 0, 32, data);
- s->cmdq.log2size = extract64(s->cmdq.base, 0, 5);
- if (s->cmdq.log2size > SMMU_CMDQS) {
- s->cmdq.log2size = SMMU_CMDQS;
+ s->bank[reg_sec_idx].cmdq.base =
+ deposit64(s->bank[reg_sec_idx].cmdq.base, 0, 32, data);
+ s->bank[reg_sec_idx].cmdq.log2size =
+ extract64(s->bank[reg_sec_idx].cmdq.base, 0, 5);
+ if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
+ s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
}
return MEMTX_OK;
case A_CMDQ_BASE + 4: /* 64b */
- s->cmdq.base = deposit64(s->cmdq.base, 32, 32, data);
+ s->bank[reg_sec_idx].cmdq.base =
+ deposit64(s->bank[reg_sec_idx].cmdq.base, 32, 32, data);
return MEMTX_OK;
case A_CMDQ_PROD:
- s->cmdq.prod = data;
- smmuv3_cmdq_consume(s);
+ s->bank[reg_sec_idx].cmdq.prod = data;
+ smmuv3_cmdq_consume(s, reg_sec_idx);
return MEMTX_OK;
case A_CMDQ_CONS:
- s->cmdq.cons = data;
+ s->bank[reg_sec_idx].cmdq.cons = data;
return MEMTX_OK;
case A_EVENTQ_BASE: /* 64b */
- s->eventq.base = deposit64(s->eventq.base, 0, 32, data);
- s->eventq.log2size = extract64(s->eventq.base, 0, 5);
- if (s->eventq.log2size > SMMU_EVENTQS) {
- s->eventq.log2size = SMMU_EVENTQS;
+ s->bank[reg_sec_idx].eventq.base =
+ deposit64(s->bank[reg_sec_idx].eventq.base, 0, 32, data);
+ s->bank[reg_sec_idx].eventq.log2size =
+ extract64(s->bank[reg_sec_idx].eventq.base, 0, 5);
+ if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
+ s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
}
return MEMTX_OK;
case A_EVENTQ_BASE + 4:
- s->eventq.base = deposit64(s->eventq.base, 32, 32, data);
+ s->bank[reg_sec_idx].eventq.base =
+ deposit64(s->bank[reg_sec_idx].eventq.base, 32, 32, data);
return MEMTX_OK;
case A_EVENTQ_PROD:
- s->eventq.prod = data;
+ s->bank[reg_sec_idx].eventq.prod = data;
return MEMTX_OK;
case A_EVENTQ_CONS:
- s->eventq.cons = data;
+ s->bank[reg_sec_idx].eventq.cons = data;
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG0: /* 64b */
- s->eventq_irq_cfg0 = deposit64(s->eventq_irq_cfg0, 0, 32, data);
+ s->bank[reg_sec_idx].eventq_irq_cfg0 =
+ deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32, data);
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG0 + 4:
- s->eventq_irq_cfg0 = deposit64(s->eventq_irq_cfg0, 32, 32, data);
+ s->bank[reg_sec_idx].eventq_irq_cfg0 =
+ deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32, data);
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG1:
- s->eventq_irq_cfg1 = data;
+ s->bank[reg_sec_idx].eventq_irq_cfg1 = data;
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG2:
- s->eventq_irq_cfg2 = data;
+ s->bank[reg_sec_idx].eventq_irq_cfg2 = data;
return MEMTX_OK;
default:
qemu_log_mask(LOG_UNIMP,
@@ -1667,13 +1726,14 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
/* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
offset &= ~0x10000;
+ SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
switch (size) {
case 8:
- r = smmu_writell(s, offset, data, attrs);
+ r = smmu_writell(s, offset, data, attrs, reg_sec_idx);
break;
case 4:
- r = smmu_writel(s, offset, data, attrs);
+ r = smmu_writel(s, offset, data, attrs, reg_sec_idx);
break;
default:
r = MEMTX_ERROR;
@@ -1685,20 +1745,24 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
}
static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
- uint64_t *data, MemTxAttrs attrs)
+ uint64_t *data, MemTxAttrs attrs,
+ SMMUSecurityIndex reg_sec_idx)
{
- switch (offset) {
+ uint32_t reg_offset = offset & 0xfff;
+ switch (reg_offset) {
case A_GERROR_IRQ_CFG0:
- *data = s->gerror_irq_cfg0;
+ *data = s->bank[reg_sec_idx].gerror_irq_cfg0;
return MEMTX_OK;
case A_STRTAB_BASE:
- *data = s->strtab_base;
+ *data = s->bank[reg_sec_idx].strtab_base;
return MEMTX_OK;
case A_CMDQ_BASE:
- *data = s->cmdq.base;
+ *data = s->bank[reg_sec_idx].cmdq.base;
return MEMTX_OK;
case A_EVENTQ_BASE:
- *data = s->eventq.base;
+ *data = s->bank[reg_sec_idx].eventq.base;
+ return MEMTX_OK;
+ *data = s->bank[reg_sec_idx].eventq_irq_cfg0;
return MEMTX_OK;
default:
*data = 0;
@@ -1710,14 +1774,16 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
}
static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
- uint64_t *data, MemTxAttrs attrs)
+ uint64_t *data, MemTxAttrs attrs,
+ SMMUSecurityIndex reg_sec_idx)
{
- switch (offset) {
+ uint32_t reg_offset = offset & 0xfff;
+ switch (reg_offset) {
case A_IDREGS ... A_IDREGS + 0x2f:
*data = smmuv3_idreg(offset - A_IDREGS);
return MEMTX_OK;
case A_IDR0 ... A_IDR5:
- *data = s->idr[(offset - A_IDR0) / 4];
+ *data = s->bank[reg_sec_idx].idr[(reg_offset - A_IDR0) / 4];
return MEMTX_OK;
case A_IIDR:
*data = s->iidr;
@@ -1726,77 +1792,79 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
*data = s->aidr;
return MEMTX_OK;
case A_CR0:
- *data = s->cr[0];
+ *data = s->bank[reg_sec_idx].cr[0];
return MEMTX_OK;
case A_CR0ACK:
- *data = s->cr0ack;
+ *data = s->bank[reg_sec_idx].cr0ack;
return MEMTX_OK;
case A_CR1:
- *data = s->cr[1];
+ *data = s->bank[reg_sec_idx].cr[1];
return MEMTX_OK;
case A_CR2:
- *data = s->cr[2];
+ *data = s->bank[reg_sec_idx].cr[2];
return MEMTX_OK;
case A_STATUSR:
*data = s->statusr;
return MEMTX_OK;
case A_GBPA:
- *data = s->gbpa;
+ *data = s->bank[reg_sec_idx].gbpa;
return MEMTX_OK;
case A_IRQ_CTRL:
case A_IRQ_CTRL_ACK:
- *data = s->irq_ctrl;
+ *data = s->bank[reg_sec_idx].irq_ctrl;
return MEMTX_OK;
case A_GERROR:
- *data = s->gerror;
+ *data = s->bank[reg_sec_idx].gerror;
return MEMTX_OK;
case A_GERRORN:
- *data = s->gerrorn;
+ *data = s->bank[reg_sec_idx].gerrorn;
return MEMTX_OK;
case A_GERROR_IRQ_CFG0: /* 64b */
- *data = extract64(s->gerror_irq_cfg0, 0, 32);
+ *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32);
return MEMTX_OK;
case A_GERROR_IRQ_CFG0 + 4:
- *data = extract64(s->gerror_irq_cfg0, 32, 32);
+ *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32);
+ return MEMTX_OK;
return MEMTX_OK;
case A_GERROR_IRQ_CFG1:
- *data = s->gerror_irq_cfg1;
+ *data = s->bank[reg_sec_idx].gerror_irq_cfg1;
return MEMTX_OK;
case A_GERROR_IRQ_CFG2:
- *data = s->gerror_irq_cfg2;
+ *data = s->bank[reg_sec_idx].gerror_irq_cfg2;
return MEMTX_OK;
case A_STRTAB_BASE: /* 64b */
- *data = extract64(s->strtab_base, 0, 32);
+ *data = extract64(s->bank[reg_sec_idx].strtab_base, 0, 32);
return MEMTX_OK;
case A_STRTAB_BASE + 4: /* 64b */
- *data = extract64(s->strtab_base, 32, 32);
+ *data = extract64(s->bank[reg_sec_idx].strtab_base, 32, 32);
return MEMTX_OK;
case A_STRTAB_BASE_CFG:
- *data = s->strtab_base_cfg;
+ *data = s->bank[reg_sec_idx].strtab_base_cfg;
return MEMTX_OK;
case A_CMDQ_BASE: /* 64b */
- *data = extract64(s->cmdq.base, 0, 32);
+ *data = extract64(s->bank[reg_sec_idx].cmdq.base, 0, 32);
return MEMTX_OK;
case A_CMDQ_BASE + 4:
- *data = extract64(s->cmdq.base, 32, 32);
+ *data = extract64(s->bank[reg_sec_idx].cmdq.base, 32, 32);
return MEMTX_OK;
case A_CMDQ_PROD:
- *data = s->cmdq.prod;
+ *data = s->bank[reg_sec_idx].cmdq.prod;
return MEMTX_OK;
case A_CMDQ_CONS:
- *data = s->cmdq.cons;
+ *data = s->bank[reg_sec_idx].cmdq.cons;
return MEMTX_OK;
case A_EVENTQ_BASE: /* 64b */
- *data = extract64(s->eventq.base, 0, 32);
+ *data = extract64(s->bank[reg_sec_idx].eventq.base, 0, 32);
return MEMTX_OK;
case A_EVENTQ_BASE + 4: /* 64b */
- *data = extract64(s->eventq.base, 32, 32);
+ *data = extract64(s->bank[reg_sec_idx].eventq.base, 32, 32);
return MEMTX_OK;
case A_EVENTQ_PROD:
- *data = s->eventq.prod;
+ *data = s->bank[reg_sec_idx].eventq.prod;
return MEMTX_OK;
case A_EVENTQ_CONS:
- *data = s->eventq.cons;
+ *data = s->bank[reg_sec_idx].eventq.cons;
+ return MEMTX_OK;
return MEMTX_OK;
default:
*data = 0;
@@ -1816,13 +1884,14 @@ static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
/* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
offset &= ~0x10000;
+ SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
switch (size) {
case 8:
- r = smmu_readll(s, offset, data, attrs);
+ r = smmu_readll(s, offset, data, attrs, reg_sec_idx);
break;
case 4:
- r = smmu_readl(s, offset, data, attrs);
+ r = smmu_readl(s, offset, data, attrs, reg_sec_idx);
break;
default:
r = MEMTX_ERROR;
@@ -1918,7 +1987,7 @@ static bool smmuv3_gbpa_needed(void *opaque)
SMMUv3State *s = opaque;
/* Only migrate GBPA if it has different reset value. */
- return s->gbpa != SMMU_GBPA_RESET_VAL;
+ return s->bank[SMMU_SEC_IDX_NS].gbpa != SMMU_GBPA_RESET_VAL;
}
static const VMStateDescription vmstate_gbpa = {
@@ -1927,7 +1996,7 @@ static const VMStateDescription vmstate_gbpa = {
.minimum_version_id = 1,
.needed = smmuv3_gbpa_needed,
.fields = (const VMStateField[]) {
- VMSTATE_UINT32(gbpa, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gbpa, SMMUv3State),
VMSTATE_END_OF_LIST()
}
};
@@ -1938,28 +2007,29 @@ static const VMStateDescription vmstate_smmuv3 = {
.minimum_version_id = 1,
.priority = MIG_PRI_IOMMU,
.fields = (const VMStateField[]) {
- VMSTATE_UINT32(features, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].features, SMMUv3State),
VMSTATE_UINT8(sid_size, SMMUv3State),
- VMSTATE_UINT8(sid_split, SMMUv3State),
+ VMSTATE_UINT8(bank[SMMU_SEC_IDX_NS].sid_split, SMMUv3State),
- VMSTATE_UINT32_ARRAY(cr, SMMUv3State, 3),
- VMSTATE_UINT32(cr0ack, SMMUv3State),
+ VMSTATE_UINT32_ARRAY(bank[SMMU_SEC_IDX_NS].cr, SMMUv3State, 3),
+ VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].cr0ack, SMMUv3State),
VMSTATE_UINT32(statusr, SMMUv3State),
- VMSTATE_UINT32(irq_ctrl, SMMUv3State),
- VMSTATE_UINT32(gerror, SMMUv3State),
- VMSTATE_UINT32(gerrorn, SMMUv3State),
- VMSTATE_UINT64(gerror_irq_cfg0, SMMUv3State),
- VMSTATE_UINT32(gerror_irq_cfg1, SMMUv3State),
- VMSTATE_UINT32(gerror_irq_cfg2, SMMUv3State),
- VMSTATE_UINT64(strtab_base, SMMUv3State),
- VMSTATE_UINT32(strtab_base_cfg, SMMUv3State),
- VMSTATE_UINT64(eventq_irq_cfg0, SMMUv3State),
- VMSTATE_UINT32(eventq_irq_cfg1, SMMUv3State),
- VMSTATE_UINT32(eventq_irq_cfg2, SMMUv3State),
-
- VMSTATE_STRUCT(cmdq, SMMUv3State, 0, vmstate_smmuv3_queue, SMMUQueue),
- VMSTATE_STRUCT(eventq, SMMUv3State, 0, vmstate_smmuv3_queue, SMMUQueue),
-
+ VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].irq_ctrl, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gerror, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gerrorn, SMMUv3State),
+ VMSTATE_UINT64(bank[SMMU_SEC_IDX_NS].gerror_irq_cfg0, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gerror_irq_cfg1, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gerror_irq_cfg2, SMMUv3State),
+ VMSTATE_UINT64(bank[SMMU_SEC_IDX_NS].strtab_base, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].strtab_base_cfg, SMMUv3State),
+ VMSTATE_UINT64(bank[SMMU_SEC_IDX_NS].eventq_irq_cfg0, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].eventq_irq_cfg1, SMMUv3State),
+ VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].eventq_irq_cfg2, SMMUv3State),
+
+ VMSTATE_STRUCT(bank[SMMU_SEC_IDX_NS].cmdq, SMMUv3State, 0,
+ vmstate_smmuv3_queue, SMMUQueue),
+ VMSTATE_STRUCT(bank[SMMU_SEC_IDX_NS].eventq, SMMUv3State, 0,
+ vmstate_smmuv3_queue, SMMUQueue),
VMSTATE_END_OF_LIST(),
},
.subsections = (const VMStateDescription * const []) {
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index f3386bd7ae..80cb4d6b04 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -35,13 +35,13 @@ smmuv3_trigger_irq(int irq) "irq=%d"
smmuv3_write_gerror(uint32_t toggled, uint32_t gerror) "toggled=0x%x, new GERROR=0x%x"
smmuv3_write_gerrorn(uint32_t acked, uint32_t gerrorn) "acked=0x%x, new GERRORN=0x%x"
smmuv3_unhandled_cmd(uint32_t type) "Unhandled command type=%d"
-smmuv3_cmdq_consume(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod=%d cons=%d prod.wrap=%d cons.wrap=%d"
+smmuv3_cmdq_consume(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap, int sec_idx) "prod=%d cons=%d prod.wrap=%d cons.wrap=%d sec_idx=%d"
smmuv3_cmdq_opcode(const char *opcode) "<--- %s"
smmuv3_cmdq_consume_out(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod:%d, cons:%d, prod_wrap:%d, cons_wrap:%d "
smmuv3_cmdq_consume_error(const char *cmd_name, uint8_t cmd_error) "Error on %s command execution: %d"
smmuv3_write_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
-smmuv3_record_event(const char *type, uint32_t sid) "%s sid=0x%x"
-smmuv3_find_ste(uint16_t sid, uint32_t features, uint16_t sid_split) "sid=0x%x features:0x%x, sid_split:0x%x"
+smmuv3_record_event(const char *type, uint32_t sid, int sec_idx) "%s sid=0x%x sec_idx=%d"
+smmuv3_find_ste(uint16_t sid, uint32_t features, uint16_t sid_split, int sec_idx) "sid=0x%x features:0x%x, sid_split:0x%x sec_idx=%d"
smmuv3_find_ste_2lvl(uint64_t strtab_base, uint64_t l1ptr, int l1_ste_offset, uint64_t l2ptr, int l2_ste_offset, int max_l2_ste) "strtab_base:0x%"PRIx64" l1ptr:0x%"PRIx64" l1_off:0x%x, l2ptr:0x%"PRIx64" l2_off:0x%x max_l2_ste:%d"
smmuv3_get_ste(uint64_t addr) "STE addr: 0x%"PRIx64
smmuv3_translate_disable(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=0x%x bypass (smmu disabled) iova:0x%"PRIx64" is_write=%d"
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index 80d0fecfde..3df82b83eb 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -40,6 +40,19 @@
#define CACHED_ENTRY_TO_ADDR(ent, addr) ((ent)->entry.translated_addr + \
((addr) & (ent)->entry.addr_mask))
+/*
+ * SMMU Security state index
+ *
+ * The values of this enumeration are identical to the SEC_SID signal
+ * encoding defined in the ARM SMMUv3 Architecture Specification. It is used
+ * to select the appropriate programming interface for a given transaction.
+ */
+typedef enum SMMUSecurityIndex {
+ SMMU_SEC_IDX_NS = 0,
+ SMMU_SEC_IDX_S = 1,
+ SMMU_SEC_IDX_NUM,
+} SMMUSecurityIndex;
+
/*
* Page table walk error types
*/
@@ -116,6 +129,7 @@ typedef struct SMMUTransCfg {
SMMUTransTableInfo tt[2];
/* Used by stage-2 only. */
struct SMMUS2Cfg s2cfg;
+ SMMUSecurityIndex sec_idx; /* cached security index */
} SMMUTransCfg;
typedef struct SMMUDevice {
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
index d183a62766..572f15251e 100644
--- a/include/hw/arm/smmuv3.h
+++ b/include/hw/arm/smmuv3.h
@@ -32,19 +32,11 @@ typedef struct SMMUQueue {
uint8_t log2size;
} SMMUQueue;
-struct SMMUv3State {
- SMMUState smmu_state;
-
- uint32_t features;
- uint8_t sid_size;
- uint8_t sid_split;
-
+/* Structure for register bank */
+typedef struct SMMUv3RegBank {
uint32_t idr[6];
- uint32_t iidr;
- uint32_t aidr;
uint32_t cr[3];
uint32_t cr0ack;
- uint32_t statusr;
uint32_t gbpa;
uint32_t irq_ctrl;
uint32_t gerror;
@@ -57,12 +49,28 @@ struct SMMUv3State {
uint64_t eventq_irq_cfg0;
uint32_t eventq_irq_cfg1;
uint32_t eventq_irq_cfg2;
+ uint32_t features;
+ uint8_t sid_split;
SMMUQueue eventq, cmdq;
+} SMMUv3RegBank;
+
+struct SMMUv3State {
+ SMMUState smmu_state;
+
+ /* Shared (non-banked) registers and state */
+ uint8_t sid_size;
+ uint32_t iidr;
+ uint32_t aidr;
+ uint32_t statusr;
+
+ /* Banked registers for all access */
+ SMMUv3RegBank bank[SMMU_SEC_IDX_NUM];
qemu_irq irq[4];
QemuMutex mutex;
char *stage;
+ bool secure_impl;
};
typedef enum {
@@ -84,7 +92,9 @@ struct SMMUv3Class {
#define TYPE_ARM_SMMUV3 "arm-smmuv3"
OBJECT_DECLARE_TYPE(SMMUv3State, SMMUv3Class, ARM_SMMUV3)
-#define STAGE1_SUPPORTED(s) FIELD_EX32(s->idr[0], IDR0, S1P)
-#define STAGE2_SUPPORTED(s) FIELD_EX32(s->idr[0], IDR0, S2P)
+#define STAGE1_SUPPORTED(s) \
+ FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S1P)
+#define STAGE2_SUPPORTED(s) \
+ FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S2P)
#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread

* Re: [PATCH v2 05/14] hw/arm/smmuv3: Introduce banked registers for SMMUv3 state
2025-09-25 16:26 ` [PATCH v2 05/14] hw/arm/smmuv3: Introduce banked registers for SMMUv3 state Tao Tang
@ 2025-09-28 14:26 ` Eric Auger
2025-09-29 7:22 ` Tao Tang
0 siblings, 1 reply; 48+ messages in thread
From: Eric Auger @ 2025-09-28 14:26 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
Hi Tao,
On 9/25/25 6:26 PM, Tao Tang wrote:
> Refactor the SMMUv3 state management by introducing a banked register
> structure. This change is foundational for supporting multiple security
> states (Non-secure, Secure, etc.) in a clean and scalable way.
>
> A new structure, SMMUv3RegBank, is defined to hold the state for a
> single security context. The main SMMUv3State now contains an array of
> these structures. This avoids having separate fields for secure and
> non-secure registers (e.g., s->cr and s->secure_cr).
>
> The primary benefits of this refactoring are:
> - Significant reduction in code duplication for MMIO handlers.
> - Improved code readability and long-term maintainability.
>
> Additionally, a new enum SMMUSecurityIndex is introduced to represent
> the security state of a stream. This enum will be used as the index for
> the register banks in subsequent patches.
I guess you chose an enum to prepare for Realm support? Will Realm
require a new bank to be introduced? Otherwise we could have used
a bool NS?
The patch also adds sec_sid info in the event and in the cfg. This is
not described in the commit msg, nor is the motivation.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3-internal.h | 33 ++-
> hw/arm/smmuv3.c | 484 ++++++++++++++++++++---------------
> hw/arm/trace-events | 6 +-
> include/hw/arm/smmu-common.h | 14 +
> include/hw/arm/smmuv3.h | 34 ++-
> 5 files changed, 336 insertions(+), 235 deletions(-)
>
> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
> index 3820157eaa..cf17c405de 100644
> --- a/hw/arm/smmuv3-internal.h
> +++ b/hw/arm/smmuv3-internal.h
> @@ -250,9 +250,9 @@ REG64(S_EVENTQ_IRQ_CFG0, 0x80b0)
> REG32(S_EVENTQ_IRQ_CFG1, 0x80b8)
> REG32(S_EVENTQ_IRQ_CFG2, 0x80bc)
>
> -static inline int smmu_enabled(SMMUv3State *s)
> +static inline int smmu_enabled(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> {
> - return FIELD_EX32(s->cr[0], CR0, SMMUEN);
> + return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, SMMUEN);
> }
>
> /* Command Queue Entry */
> @@ -278,14 +278,16 @@ static inline uint32_t smmuv3_idreg(int regoffset)
> return smmuv3_ids[regoffset / 4];
> }
>
> -static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s)
> +static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> {
> - return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
> + return FIELD_EX32(s->bank[sec_idx].irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
> }
>
> -static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s)
> +static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> {
> - return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
> + return FIELD_EX32(s->bank[sec_idx].irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
> }
>
> /* Queue Handling */
> @@ -328,19 +330,23 @@ static inline void queue_cons_incr(SMMUQueue *q)
> q->cons = deposit32(q->cons, 0, q->log2size + 1, q->cons + 1);
> }
>
> -static inline bool smmuv3_cmdq_enabled(SMMUv3State *s)
> +static inline bool smmuv3_cmdq_enabled(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> {
> - return FIELD_EX32(s->cr[0], CR0, CMDQEN);
> + return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, CMDQEN);
> }
>
> -static inline bool smmuv3_eventq_enabled(SMMUv3State *s)
> +static inline bool smmuv3_eventq_enabled(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> {
> - return FIELD_EX32(s->cr[0], CR0, EVENTQEN);
> + return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, EVENTQEN);
> }
>
> -static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type)
> +static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type,
> + SMMUSecurityIndex sec_idx)
> {
> - s->cmdq.cons = FIELD_DP32(s->cmdq.cons, CMDQ_CONS, ERR, err_type);
> + s->bank[sec_idx].cmdq.cons = FIELD_DP32(s->bank[sec_idx].cmdq.cons,
> + CMDQ_CONS, ERR, err_type);
> }
>
> /* Commands */
> @@ -511,6 +517,7 @@ typedef struct SMMUEventInfo {
> uint32_t sid;
> bool recorded;
> bool inval_ste_allowed;
> + SMMUSecurityIndex sec_idx;
> union {
> struct {
> uint32_t ssid;
> @@ -594,7 +601,7 @@ typedef struct SMMUEventInfo {
> (x)->word[6] = (uint32_t)(addr & 0xffffffff); \
> } while (0)
>
> -void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *event);
> +void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info);
this rename is not needed
>
> /* Configuration Data */
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index bcf8af8dc7..2efa39b78c 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -48,14 +48,14 @@
> * @gerror_mask: mask of gerrors to toggle (relevant if @irq is GERROR)
> */
> static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
> - uint32_t gerror_mask)
> + uint32_t gerror_mask, SMMUSecurityIndex sec_idx)
> {
>
> bool pulse = false;
>
> switch (irq) {
> case SMMU_IRQ_EVTQ:
> - pulse = smmuv3_eventq_irq_enabled(s);
> + pulse = smmuv3_eventq_irq_enabled(s, sec_idx);
> break;
> case SMMU_IRQ_PRIQ:
> qemu_log_mask(LOG_UNIMP, "PRI not yet supported\n");
> @@ -65,17 +65,17 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
> break;
> case SMMU_IRQ_GERROR:
> {
> - uint32_t pending = s->gerror ^ s->gerrorn;
> + uint32_t pending = s->bank[sec_idx].gerror ^ s->bank[sec_idx].gerrorn;
> uint32_t new_gerrors = ~pending & gerror_mask;
>
> if (!new_gerrors) {
> /* only toggle non pending errors */
> return;
> }
> - s->gerror ^= new_gerrors;
> - trace_smmuv3_write_gerror(new_gerrors, s->gerror);
> + s->bank[sec_idx].gerror ^= new_gerrors;
> + trace_smmuv3_write_gerror(new_gerrors, s->bank[sec_idx].gerror);
>
> - pulse = smmuv3_gerror_irq_enabled(s);
> + pulse = smmuv3_gerror_irq_enabled(s, sec_idx);
> break;
> }
> }
> @@ -85,24 +85,25 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
> }
> }
>
> -static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
> +static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn,
> + SMMUSecurityIndex sec_idx)
> {
> - uint32_t pending = s->gerror ^ s->gerrorn;
> - uint32_t toggled = s->gerrorn ^ new_gerrorn;
> + uint32_t pending = s->bank[sec_idx].gerror ^ s->bank[sec_idx].gerrorn;
> + uint32_t toggled = s->bank[sec_idx].gerrorn ^ new_gerrorn;
>
> if (toggled & ~pending) {
> qemu_log_mask(LOG_GUEST_ERROR,
> - "guest toggles non pending errors = 0x%x\n",
> - toggled & ~pending);
> + "guest toggles non pending errors = 0x%x sec_idx=%d\n",
> + toggled & ~pending, sec_idx);
> }
>
> /*
> * We do not raise any error in case guest toggles bits corresponding
> * to not active IRQs (CONSTRAINED UNPREDICTABLE)
> */
> - s->gerrorn = new_gerrorn;
> + s->bank[sec_idx].gerrorn = new_gerrorn;
>
> - trace_smmuv3_write_gerrorn(toggled & pending, s->gerrorn);
> + trace_smmuv3_write_gerrorn(toggled & pending, s->bank[sec_idx].gerrorn);
> }
>
> static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
> @@ -142,12 +143,13 @@ static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
> return MEMTX_OK;
> }
>
> -static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
> +static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt,
> + SMMUSecurityIndex sec_idx)
> {
> - SMMUQueue *q = &s->eventq;
> + SMMUQueue *q = &s->bank[sec_idx].eventq;
> MemTxResult r;
>
> - if (!smmuv3_eventq_enabled(s)) {
> + if (!smmuv3_eventq_enabled(s, sec_idx)) {
> return MEMTX_ERROR;
> }
>
> @@ -161,7 +163,7 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
> }
>
> if (!smmuv3_q_empty(q)) {
> - smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0);
> + smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0, sec_idx);
> }
> return MEMTX_OK;
> }
> @@ -171,7 +173,7 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
> Evt evt = {};
> MemTxResult r;
>
> - if (!smmuv3_eventq_enabled(s)) {
> + if (!smmuv3_eventq_enabled(s, info->sec_idx)) {
> return;
> }
>
> @@ -249,74 +251,104 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
> g_assert_not_reached();
> }
>
> - trace_smmuv3_record_event(smmu_event_string(info->type), info->sid);
> - r = smmuv3_write_eventq(s, &evt);
> + trace_smmuv3_record_event(smmu_event_string(info->type),
> + info->sid, info->sec_idx);
> + r = smmuv3_write_eventq(s, &evt, info->sec_idx);
> if (r != MEMTX_OK) {
> - smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK);
> + smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK,
> + info->sec_idx);
> }
> info->recorded = true;
> }
>
> static void smmuv3_init_regs(SMMUv3State *s)
> {
> + /* Initialize Non-secure bank (SMMU_SEC_IDX_NS) */
> /* Based on sys property, the stages supported in smmu will be advertised.*/
> if (s->stage && !strcmp("2", s->stage)) {
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S2P, 1);
> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
Use a pointer to &s->bank[SMMU_SEC_IDX_NS] and rewrite this as
bank->idr[0] here and below.
This will be more readable.
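For instance, an untested sketch of what I have in mind ("bank" is just
a local name I am suggesting, not something already in the tree):

    SMMUv3RegBank *bank = &s->bank[SMMU_SEC_IDX_NS];

    bank->idr[0] = FIELD_DP32(bank->idr[0], IDR0, TTF, 2);
    bank->idr[0] = FIELD_DP32(bank->idr[0], IDR0, COHACC, 1);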
> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S2P, 1);
> } else if (s->stage && !strcmp("nested", s->stage)) {
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1);
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S2P, 1);
> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S1P, 1);
> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S2P, 1);
> } else {
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1);
> - }
> -
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTF, 2); /* AArch64 PTW only */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, COHACC, 1); /* IO coherent */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, ASID16, 1); /* 16-bit ASID */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, VMID16, 1); /* 16-bit VMID */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTENDIAN, 2); /* little endian */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, STALL_MODEL, 1); /* No stall */
> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S1P, 1);
> + }
> +
> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, TTF, 2); /* AArch64 PTW only */
> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, COHACC, 1); /* IO coherent */
> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, ASID16, 1); /* 16-bit ASID */
> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, VMID16, 1); /* 16-bit VMID */
> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, TTENDIAN, 2); /* little endian */
> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, STALL_MODEL, 1); /* No stall */
> /* terminated transaction will always be aborted/error returned */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TERM_MODEL, 1);
> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, TERM_MODEL, 1);
> /* 2-level stream table supported */
> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, STLEVEL, 1);
> -
> - s->idr[1] = FIELD_DP32(s->idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
> - s->idr[1] = FIELD_DP32(s->idr[1], IDR1, EVENTQS, SMMU_EVENTQS);
> - s->idr[1] = FIELD_DP32(s->idr[1], IDR1, CMDQS, SMMU_CMDQS);
> -
> - s->idr[3] = FIELD_DP32(s->idr[3], IDR3, HAD, 1);
> - if (FIELD_EX32(s->idr[0], IDR0, S2P)) {
> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, STLEVEL, 1);
> +
> + s->bank[SMMU_SEC_IDX_NS].idr[1] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
> + s->bank[SMMU_SEC_IDX_NS].idr[1] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, EVENTQS, SMMU_EVENTQS);
> + s->bank[SMMU_SEC_IDX_NS].idr[1] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, CMDQS, SMMU_CMDQS);
> +
> + s->bank[SMMU_SEC_IDX_NS].idr[3] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[3], IDR3, HAD, 1);
> + if (FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S2P)) {
> /* XNX is a stage-2-specific feature */
> - s->idr[3] = FIELD_DP32(s->idr[3], IDR3, XNX, 1);
> + s->bank[SMMU_SEC_IDX_NS].idr[3] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[3], IDR3, XNX, 1);
> }
> - s->idr[3] = FIELD_DP32(s->idr[3], IDR3, RIL, 1);
> - s->idr[3] = FIELD_DP32(s->idr[3], IDR3, BBML, 2);
> + s->bank[SMMU_SEC_IDX_NS].idr[3] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[3], IDR3, RIL, 1);
> + s->bank[SMMU_SEC_IDX_NS].idr[3] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[3], IDR3, BBML, 2);
>
> - s->idr[5] = FIELD_DP32(s->idr[5], IDR5, OAS, SMMU_IDR5_OAS); /* 44 bits */
> + /* 44 bits */
> + s->bank[SMMU_SEC_IDX_NS].idr[5] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, OAS, SMMU_IDR5_OAS);
> /* 4K, 16K and 64K granule support */
> - s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN4K, 1);
> - s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN16K, 1);
> - s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN64K, 1);
> -
> - s->cmdq.base = deposit64(s->cmdq.base, 0, 5, SMMU_CMDQS);
> - s->cmdq.prod = 0;
> - s->cmdq.cons = 0;
> - s->cmdq.entry_size = sizeof(struct Cmd);
> - s->eventq.base = deposit64(s->eventq.base, 0, 5, SMMU_EVENTQS);
> - s->eventq.prod = 0;
> - s->eventq.cons = 0;
> - s->eventq.entry_size = sizeof(struct Evt);
> -
> - s->features = 0;
> - s->sid_split = 0;
> + s->bank[SMMU_SEC_IDX_NS].idr[5] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, GRAN4K, 1);
> + s->bank[SMMU_SEC_IDX_NS].idr[5] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, GRAN16K, 1);
> + s->bank[SMMU_SEC_IDX_NS].idr[5] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, GRAN64K, 1);
> +
> + /* Initialize Non-secure command and event queues */
> + s->bank[SMMU_SEC_IDX_NS].cmdq.base =
> + deposit64(s->bank[SMMU_SEC_IDX_NS].cmdq.base, 0, 5, SMMU_CMDQS);
> + s->bank[SMMU_SEC_IDX_NS].cmdq.prod = 0;
> + s->bank[SMMU_SEC_IDX_NS].cmdq.cons = 0;
> + s->bank[SMMU_SEC_IDX_NS].cmdq.entry_size = sizeof(struct Cmd);
> + s->bank[SMMU_SEC_IDX_NS].eventq.base =
> + deposit64(s->bank[SMMU_SEC_IDX_NS].eventq.base, 0, 5, SMMU_EVENTQS);
> + s->bank[SMMU_SEC_IDX_NS].eventq.prod = 0;
> + s->bank[SMMU_SEC_IDX_NS].eventq.cons = 0;
> + s->bank[SMMU_SEC_IDX_NS].eventq.entry_size = sizeof(struct Evt);
> + s->bank[SMMU_SEC_IDX_NS].features = 0;
> + s->bank[SMMU_SEC_IDX_NS].sid_split = 0;
> s->aidr = 0x1;
> - s->cr[0] = 0;
> - s->cr0ack = 0;
> - s->irq_ctrl = 0;
> - s->gerror = 0;
> - s->gerrorn = 0;
> + s->bank[SMMU_SEC_IDX_NS].cr[0] = 0;
> + s->bank[SMMU_SEC_IDX_NS].cr0ack = 0;
> + s->bank[SMMU_SEC_IDX_NS].irq_ctrl = 0;
> + s->bank[SMMU_SEC_IDX_NS].gerror = 0;
> + s->bank[SMMU_SEC_IDX_NS].gerrorn = 0;
> s->statusr = 0;
> - s->gbpa = SMMU_GBPA_RESET_VAL;
> + s->bank[SMMU_SEC_IDX_NS].gbpa = SMMU_GBPA_RESET_VAL;
> +
> }
>
> static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
> @@ -430,7 +462,7 @@ static bool s2_pgtable_config_valid(uint8_t sl0, uint8_t t0sz, uint8_t gran)
> static int decode_ste_s2_cfg(SMMUv3State *s, SMMUTransCfg *cfg,
> STE *ste)
> {
> - uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
> + uint8_t oas = FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, OAS);
>
> if (STE_S2AA64(ste) == 0x0) {
> qemu_log_mask(LOG_UNIMP,
> @@ -548,7 +580,7 @@ static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
> STE *ste, SMMUEventInfo *event)
> {
> uint32_t config;
> - uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
> + uint8_t oas = FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, OAS);
> int ret;
>
> if (!STE_VALID(ste)) {
> @@ -625,20 +657,25 @@ bad_ste:
> * @sid: stream ID
> * @ste: returned stream table entry
> * @event: handle to an event info
> + * @cfg: translation configuration
> *
> * Supports linear and 2-level stream table
> * Return 0 on success, -EINVAL otherwise
> */
> static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> - SMMUEventInfo *event)
> + SMMUEventInfo *event, SMMUTransCfg *cfg)
> {
> - dma_addr_t addr, strtab_base;
> + dma_addr_t addr;
> uint32_t log2size;
> int strtab_size_shift;
> int ret;
> + uint32_t features = s->bank[cfg->sec_idx].features;
also here you can use a pointer to bank[cfg->sec_idx]
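e.g. an untested sketch along the same lines:

    SMMUv3RegBank *bank = &s->bank[cfg->sec_idx];
    uint32_t features = bank->features;
    dma_addr_t strtab_base = bank->strtab_base;
    uint8_t sid_split = bank->sid_split;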
> + dma_addr_t strtab_base = s->bank[cfg->sec_idx].strtab_base;
> + uint8_t sid_split = s->bank[cfg->sec_idx].sid_split;
>
> - trace_smmuv3_find_ste(sid, s->features, s->sid_split);
> - log2size = FIELD_EX32(s->strtab_base_cfg, STRTAB_BASE_CFG, LOG2SIZE);
> + trace_smmuv3_find_ste(sid, features, sid_split, cfg->sec_idx);
> + log2size = FIELD_EX32(s->bank[cfg->sec_idx].strtab_base_cfg,
> + STRTAB_BASE_CFG, LOG2SIZE);
> /*
> * Check SID range against both guest-configured and implementation limits
> */
> @@ -646,7 +683,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> event->type = SMMU_EVT_C_BAD_STREAMID;
> return -EINVAL;
> }
> - if (s->features & SMMU_FEATURE_2LVL_STE) {
> + if (features & SMMU_FEATURE_2LVL_STE) {
> int l1_ste_offset, l2_ste_offset, max_l2_ste, span, i;
> dma_addr_t l1ptr, l2ptr;
> STEDesc l1std;
> @@ -655,11 +692,11 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> * Align strtab base address to table size. For this purpose, assume it
> * is not bounded by SMMU_IDR1_SIDSIZE.
> */
> - strtab_size_shift = MAX(5, (int)log2size - s->sid_split - 1 + 3);
> - strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK &
> + strtab_size_shift = MAX(5, (int)log2size - sid_split - 1 + 3);
> + strtab_base = strtab_base & SMMU_BASE_ADDR_MASK &
> ~MAKE_64BIT_MASK(0, strtab_size_shift);
> - l1_ste_offset = sid >> s->sid_split;
> - l2_ste_offset = sid & ((1 << s->sid_split) - 1);
> + l1_ste_offset = sid >> sid_split;
> + l2_ste_offset = sid & ((1 << sid_split) - 1);
> l1ptr = (dma_addr_t)(strtab_base + l1_ste_offset * sizeof(l1std));
> /* TODO: guarantee 64-bit single-copy atomicity */
> ret = dma_memory_read(&address_space_memory, l1ptr, &l1std,
> @@ -688,7 +725,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> }
> max_l2_ste = (1 << span) - 1;
> l2ptr = l1std_l2ptr(&l1std);
> - trace_smmuv3_find_ste_2lvl(s->strtab_base, l1ptr, l1_ste_offset,
> + trace_smmuv3_find_ste_2lvl(strtab_base, l1ptr, l1_ste_offset,
> l2ptr, l2_ste_offset, max_l2_ste);
> if (l2_ste_offset > max_l2_ste) {
> qemu_log_mask(LOG_GUEST_ERROR,
> @@ -700,7 +737,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> addr = l2ptr + l2_ste_offset * sizeof(*ste);
> } else {
> strtab_size_shift = log2size + 5;
> - strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK &
> + strtab_base = strtab_base & SMMU_BASE_ADDR_MASK &
> ~MAKE_64BIT_MASK(0, strtab_size_shift);
> addr = strtab_base + sid * sizeof(*ste);
> }
> @@ -719,7 +756,7 @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
> int i;
> SMMUTranslationStatus status;
> SMMUTLBEntry *entry;
> - uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
> + uint8_t oas = FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, OAS);
>
> if (!CD_VALID(cd) || !CD_AARCH64(cd)) {
> goto bad_cd;
> @@ -834,7 +871,7 @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
> /* ASID defaults to -1 (if s1 is not supported). */
> cfg->asid = -1;
>
> - ret = smmu_find_ste(s, sid, &ste, event);
> + ret = smmu_find_ste(s, sid, &ste, event, cfg);
> if (ret) {
> return ret;
> }
> @@ -964,6 +1001,7 @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
> * - s2 translation => CLASS_IN (input to function)
> */
> class = ptw_info.is_ipa_descriptor ? SMMU_CLASS_TT : class;
> + event->sec_idx = cfg->sec_idx;
> switch (ptw_info.type) {
> case SMMU_PTW_ERR_WALK_EABT:
> event->type = SMMU_EVT_F_WALK_EABT;
> @@ -1046,6 +1084,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
> .inval_ste_allowed = false};
> SMMUTranslationStatus status;
> SMMUTransCfg *cfg = NULL;
> + SMMUSecurityIndex sec_idx = SMMU_SEC_IDX_NS;
> IOMMUTLBEntry entry = {
> .target_as = &address_space_memory,
> .iova = addr,
> @@ -1057,12 +1096,9 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>
> qemu_mutex_lock(&s->mutex);
>
> - if (!smmu_enabled(s)) {
> - if (FIELD_EX32(s->gbpa, GBPA, ABORT)) {
> - status = SMMU_TRANS_ABORT;
> - } else {
> - status = SMMU_TRANS_DISABLE;
> - }
> + if (!smmu_enabled(s, sec_idx)) {
> + bool abort_flag = FIELD_EX32(s->bank[sec_idx].gbpa, GBPA, ABORT);
> + status = abort_flag ? SMMU_TRANS_ABORT : SMMU_TRANS_DISABLE;
> goto epilogue;
> }
>
> @@ -1278,14 +1314,14 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
> }
> }
>
> -static int smmuv3_cmdq_consume(SMMUv3State *s)
> +static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> {
> SMMUState *bs = ARM_SMMU(s);
> SMMUCmdError cmd_error = SMMU_CERROR_NONE;
> - SMMUQueue *q = &s->cmdq;
> + SMMUQueue *q = &s->bank[sec_idx].cmdq;
> SMMUCommandType type = 0;
>
> - if (!smmuv3_cmdq_enabled(s)) {
> + if (!smmuv3_cmdq_enabled(s, sec_idx)) {
> return 0;
> }
> /*
> @@ -1296,11 +1332,12 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> */
>
> while (!smmuv3_q_empty(q)) {
> - uint32_t pending = s->gerror ^ s->gerrorn;
> + uint32_t pending = s->bank[sec_idx].gerror ^ s->bank[sec_idx].gerrorn;
> Cmd cmd;
>
> trace_smmuv3_cmdq_consume(Q_PROD(q), Q_CONS(q),
> - Q_PROD_WRAP(q), Q_CONS_WRAP(q));
> + Q_PROD_WRAP(q), Q_CONS_WRAP(q),
> + sec_idx);
>
> if (FIELD_EX32(pending, GERROR, CMDQ_ERR)) {
> break;
> @@ -1319,7 +1356,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> switch (type) {
> case SMMU_CMD_SYNC:
> if (CMD_SYNC_CS(&cmd) & CMD_SYNC_SIG_IRQ) {
> - smmuv3_trigger_irq(s, SMMU_IRQ_CMD_SYNC, 0);
> + smmuv3_trigger_irq(s, SMMU_IRQ_CMD_SYNC, 0, sec_idx);
> }
> break;
> case SMMU_CMD_PREFETCH_CONFIG:
> @@ -1498,8 +1535,9 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>
> if (cmd_error) {
> trace_smmuv3_cmdq_consume_error(smmu_cmd_string(type), cmd_error);
> - smmu_write_cmdq_err(s, cmd_error);
> - smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_CMDQ_ERR_MASK);
> + smmu_write_cmdq_err(s, cmd_error, sec_idx);
> + smmuv3_trigger_irq(s, SMMU_IRQ_GERROR,
> + R_GERROR_CMDQ_ERR_MASK, sec_idx);
> }
>
> trace_smmuv3_cmdq_consume_out(Q_PROD(q), Q_CONS(q),
> @@ -1509,31 +1547,33 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
> }
>
> static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> - uint64_t data, MemTxAttrs attrs)
> + uint64_t data, MemTxAttrs attrs,
> + SMMUSecurityIndex reg_sec_idx)
> {
> - switch (offset) {
> - case A_GERROR_IRQ_CFG0:
> - s->gerror_irq_cfg0 = data;
> - return MEMTX_OK;
> + uint32_t reg_offset = offset & 0xfff;
> + switch (reg_offset) {
> case A_STRTAB_BASE:
> - s->strtab_base = data;
> + /* Clear reserved bits according to spec */
> + s->bank[reg_sec_idx].strtab_base = data & SMMU_STRTAB_BASE_RESERVED;
> return MEMTX_OK;
> case A_CMDQ_BASE:
> - s->cmdq.base = data;
> - s->cmdq.log2size = extract64(s->cmdq.base, 0, 5);
> - if (s->cmdq.log2size > SMMU_CMDQS) {
> - s->cmdq.log2size = SMMU_CMDQS;
> + s->bank[reg_sec_idx].cmdq.base = data;
> + s->bank[reg_sec_idx].cmdq.log2size = extract64(
> + s->bank[reg_sec_idx].cmdq.base, 0, 5);
Here also use a local variable pointing to bank[reg_sec_idx].
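For example (untested sketch, same logic as the patch):

    SMMUQueue *q = &s->bank[reg_sec_idx].cmdq;

    q->base = data;
    q->log2size = extract64(q->base, 0, 5);
    if (q->log2size > SMMU_CMDQS) {
        q->log2size = SMMU_CMDQS;
    }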
> + if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
> + s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
> }
> return MEMTX_OK;
> case A_EVENTQ_BASE:
> - s->eventq.base = data;
> - s->eventq.log2size = extract64(s->eventq.base, 0, 5);
> - if (s->eventq.log2size > SMMU_EVENTQS) {
> - s->eventq.log2size = SMMU_EVENTQS;
> + s->bank[reg_sec_idx].eventq.base = data;
> + s->bank[reg_sec_idx].eventq.log2size = extract64(
> + s->bank[reg_sec_idx].eventq.base, 0, 5);
> + if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
> + s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
> }
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0:
> - s->eventq_irq_cfg0 = data;
> + s->bank[reg_sec_idx].eventq_irq_cfg0 = data;
> return MEMTX_OK;
> default:
> qemu_log_mask(LOG_UNIMP,
> @@ -1544,43 +1584,47 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> }
>
> static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> - uint64_t data, MemTxAttrs attrs)
> + uint64_t data, MemTxAttrs attrs,
> + SMMUSecurityIndex reg_sec_idx)
> {
> - switch (offset) {
> + uint32_t reg_offset = offset & 0xfff;
> + switch (reg_offset) {
> case A_CR0:
> - s->cr[0] = data;
> - s->cr0ack = data & ~SMMU_CR0_RESERVED;
> + s->bank[reg_sec_idx].cr[0] = data;
> + s->bank[reg_sec_idx].cr0ack = data;
> /* in case the command queue has been enabled */
> - smmuv3_cmdq_consume(s);
> + smmuv3_cmdq_consume(s, reg_sec_idx);
> return MEMTX_OK;
> case A_CR1:
> - s->cr[1] = data;
> + s->bank[reg_sec_idx].cr[1] = data;
> return MEMTX_OK;
> case A_CR2:
> - s->cr[2] = data;
> + s->bank[reg_sec_idx].cr[2] = data;
> return MEMTX_OK;
> case A_IRQ_CTRL:
> - s->irq_ctrl = data;
> + s->bank[reg_sec_idx].irq_ctrl = data;
> return MEMTX_OK;
> case A_GERRORN:
> - smmuv3_write_gerrorn(s, data);
> + smmuv3_write_gerrorn(s, data, reg_sec_idx);
> /*
> * By acknowledging the CMDQ_ERR, SW may notify cmds can
> * be processed again
> */
> - smmuv3_cmdq_consume(s);
> + smmuv3_cmdq_consume(s, reg_sec_idx);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0: /* 64b */
> - s->gerror_irq_cfg0 = deposit64(s->gerror_irq_cfg0, 0, 32, data);
> + s->bank[reg_sec_idx].gerror_irq_cfg0 =
> + deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32, data);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0 + 4:
> - s->gerror_irq_cfg0 = deposit64(s->gerror_irq_cfg0, 32, 32, data);
> + s->bank[reg_sec_idx].gerror_irq_cfg0 =
> + deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32, data);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG1:
> - s->gerror_irq_cfg1 = data;
> + s->bank[reg_sec_idx].gerror_irq_cfg1 = data;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG2:
> - s->gerror_irq_cfg2 = data;
> + s->bank[reg_sec_idx].gerror_irq_cfg2 = data;
> return MEMTX_OK;
> case A_GBPA:
> /*
> @@ -1589,66 +1633,81 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> */
> if (data & R_GBPA_UPDATE_MASK) {
> /* Ignore update bit as write is synchronous. */
> - s->gbpa = data & ~R_GBPA_UPDATE_MASK;
> + s->bank[reg_sec_idx].gbpa = data & ~R_GBPA_UPDATE_MASK;
> }
> return MEMTX_OK;
> case A_STRTAB_BASE: /* 64b */
> - s->strtab_base = deposit64(s->strtab_base, 0, 32, data);
> + s->bank[reg_sec_idx].strtab_base =
> + deposit64(s->bank[reg_sec_idx].strtab_base, 0, 32, data);
> return MEMTX_OK;
> case A_STRTAB_BASE + 4:
> - s->strtab_base = deposit64(s->strtab_base, 32, 32, data);
> + s->bank[reg_sec_idx].strtab_base =
> + deposit64(s->bank[reg_sec_idx].strtab_base, 32, 32, data);
> return MEMTX_OK;
> case A_STRTAB_BASE_CFG:
> - s->strtab_base_cfg = data;
> + s->bank[reg_sec_idx].strtab_base_cfg = data;
> if (FIELD_EX32(data, STRTAB_BASE_CFG, FMT) == 1) {
> - s->sid_split = FIELD_EX32(data, STRTAB_BASE_CFG, SPLIT);
> - s->features |= SMMU_FEATURE_2LVL_STE;
> + s->bank[reg_sec_idx].sid_split =
> + FIELD_EX32(data, STRTAB_BASE_CFG, SPLIT);
> + s->bank[reg_sec_idx].features |= SMMU_FEATURE_2LVL_STE;
> }
> return MEMTX_OK;
> case A_CMDQ_BASE: /* 64b */
> - s->cmdq.base = deposit64(s->cmdq.base, 0, 32, data);
> - s->cmdq.log2size = extract64(s->cmdq.base, 0, 5);
> - if (s->cmdq.log2size > SMMU_CMDQS) {
> - s->cmdq.log2size = SMMU_CMDQS;
> + s->bank[reg_sec_idx].cmdq.base =
> + deposit64(s->bank[reg_sec_idx].cmdq.base, 0, 32, data);
> + s->bank[reg_sec_idx].cmdq.log2size =
> + extract64(s->bank[reg_sec_idx].cmdq.base, 0, 5);
> + if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
> + s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
> }
> return MEMTX_OK;
> case A_CMDQ_BASE + 4: /* 64b */
> - s->cmdq.base = deposit64(s->cmdq.base, 32, 32, data);
> + s->bank[reg_sec_idx].cmdq.base =
> + deposit64(s->bank[reg_sec_idx].cmdq.base, 32, 32, data);
> + return MEMTX_OK;
> return MEMTX_OK;
> case A_CMDQ_PROD:
> - s->cmdq.prod = data;
> - smmuv3_cmdq_consume(s);
> + s->bank[reg_sec_idx].cmdq.prod = data;
> + smmuv3_cmdq_consume(s, reg_sec_idx);
> return MEMTX_OK;
> case A_CMDQ_CONS:
> - s->cmdq.cons = data;
> + s->bank[reg_sec_idx].cmdq.cons = data;
> return MEMTX_OK;
> case A_EVENTQ_BASE: /* 64b */
> - s->eventq.base = deposit64(s->eventq.base, 0, 32, data);
> - s->eventq.log2size = extract64(s->eventq.base, 0, 5);
> - if (s->eventq.log2size > SMMU_EVENTQS) {
> - s->eventq.log2size = SMMU_EVENTQS;
> + s->bank[reg_sec_idx].eventq.base =
> + deposit64(s->bank[reg_sec_idx].eventq.base, 0, 32, data);
> + s->bank[reg_sec_idx].eventq.log2size =
> + extract64(s->bank[reg_sec_idx].eventq.base, 0, 5);
> + if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
> + s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
> }
> + s->bank[reg_sec_idx].eventq.cons = data;
why is this added?
> return MEMTX_OK;
> case A_EVENTQ_BASE + 4:
> - s->eventq.base = deposit64(s->eventq.base, 32, 32, data);
> + s->bank[reg_sec_idx].eventq.base =
> + deposit64(s->bank[reg_sec_idx].eventq.base, 32, 32, data);
> + return MEMTX_OK;
> return MEMTX_OK;
> case A_EVENTQ_PROD:
> - s->eventq.prod = data;
> + s->bank[reg_sec_idx].eventq.prod = data;
> return MEMTX_OK;
> case A_EVENTQ_CONS:
> - s->eventq.cons = data;
> + s->bank[reg_sec_idx].eventq.cons = data;
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0: /* 64b */
> - s->eventq_irq_cfg0 = deposit64(s->eventq_irq_cfg0, 0, 32, data);
> + s->bank[reg_sec_idx].eventq_irq_cfg0 =
> + deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32, data);
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0 + 4:
> - s->eventq_irq_cfg0 = deposit64(s->eventq_irq_cfg0, 32, 32, data);
> + s->bank[reg_sec_idx].eventq_irq_cfg0 =
> + deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32, data);
> + return MEMTX_OK;
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG1:
> - s->eventq_irq_cfg1 = data;
> + s->bank[reg_sec_idx].eventq_irq_cfg1 = data;
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG2:
> - s->eventq_irq_cfg2 = data;
> + s->bank[reg_sec_idx].eventq_irq_cfg2 = data;
> return MEMTX_OK;
> default:
> qemu_log_mask(LOG_UNIMP,
> @@ -1667,13 +1726,14 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
>
> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
> offset &= ~0x10000;
> + SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
>
> switch (size) {
> case 8:
> - r = smmu_writell(s, offset, data, attrs);
> + r = smmu_writell(s, offset, data, attrs, reg_sec_idx);
> break;
> case 4:
> - r = smmu_writel(s, offset, data, attrs);
> + r = smmu_writel(s, offset, data, attrs, reg_sec_idx);
> break;
> default:
> r = MEMTX_ERROR;
> @@ -1685,20 +1745,24 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
> }
>
> static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
> - uint64_t *data, MemTxAttrs attrs)
> + uint64_t *data, MemTxAttrs attrs,
> + SMMUSecurityIndex reg_sec_idx)
> {
> - switch (offset) {
> + uint32_t reg_offset = offset & 0xfff;
> + switch (reg_offset) {
> case A_GERROR_IRQ_CFG0:
> - *data = s->gerror_irq_cfg0;
> + *data = s->bank[reg_sec_idx].gerror_irq_cfg0;
> return MEMTX_OK;
> case A_STRTAB_BASE:
> - *data = s->strtab_base;
> + *data = s->bank[reg_sec_idx].strtab_base;
> return MEMTX_OK;
> case A_CMDQ_BASE:
> - *data = s->cmdq.base;
> + *data = s->bank[reg_sec_idx].cmdq.base;
> return MEMTX_OK;
> case A_EVENTQ_BASE:
> - *data = s->eventq.base;
> + *data = s->bank[reg_sec_idx].eventq.base;
> + return MEMTX_OK;
> + *data = s->bank[reg_sec_idx].eventq_irq_cfg0;
> return MEMTX_OK;
> default:
> *data = 0;
> @@ -1710,14 +1774,16 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
> }
>
> static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
> - uint64_t *data, MemTxAttrs attrs)
> + uint64_t *data, MemTxAttrs attrs,
> + SMMUSecurityIndex reg_sec_idx)
> {
> - switch (offset) {
> + uint32_t reg_offset = offset & 0xfff;
> + switch (reg_offset) {
> case A_IDREGS ... A_IDREGS + 0x2f:
> *data = smmuv3_idreg(offset - A_IDREGS);
> return MEMTX_OK;
> case A_IDR0 ... A_IDR5:
> - *data = s->idr[(offset - A_IDR0) / 4];
> + *data = s->bank[reg_sec_idx].idr[(reg_offset - A_IDR0) / 4];
> return MEMTX_OK;
> case A_IIDR:
> *data = s->iidr;
> @@ -1726,77 +1792,79 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
> *data = s->aidr;
> return MEMTX_OK;
> case A_CR0:
> - *data = s->cr[0];
> + *data = s->bank[reg_sec_idx].cr[0];
> return MEMTX_OK;
> case A_CR0ACK:
> - *data = s->cr0ack;
> + *data = s->bank[reg_sec_idx].cr0ack;
> return MEMTX_OK;
> case A_CR1:
> - *data = s->cr[1];
> + *data = s->bank[reg_sec_idx].cr[1];
> return MEMTX_OK;
> case A_CR2:
> - *data = s->cr[2];
> + *data = s->bank[reg_sec_idx].cr[2];
> return MEMTX_OK;
> case A_STATUSR:
> *data = s->statusr;
> return MEMTX_OK;
> case A_GBPA:
> - *data = s->gbpa;
> + *data = s->bank[reg_sec_idx].gbpa;
> return MEMTX_OK;
> case A_IRQ_CTRL:
> case A_IRQ_CTRL_ACK:
> - *data = s->irq_ctrl;
> + *data = s->bank[reg_sec_idx].irq_ctrl;
> return MEMTX_OK;
> case A_GERROR:
> - *data = s->gerror;
> + *data = s->bank[reg_sec_idx].gerror;
> return MEMTX_OK;
> case A_GERRORN:
> - *data = s->gerrorn;
> + *data = s->bank[reg_sec_idx].gerrorn;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0: /* 64b */
> - *data = extract64(s->gerror_irq_cfg0, 0, 32);
> + *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0 + 4:
> - *data = extract64(s->gerror_irq_cfg0, 32, 32);
> + *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32);
> + return MEMTX_OK;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG1:
> - *data = s->gerror_irq_cfg1;
> + *data = s->bank[reg_sec_idx].gerror_irq_cfg1;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG2:
> - *data = s->gerror_irq_cfg2;
> + *data = s->bank[reg_sec_idx].gerror_irq_cfg2;
> return MEMTX_OK;
> case A_STRTAB_BASE: /* 64b */
> - *data = extract64(s->strtab_base, 0, 32);
> + *data = extract64(s->bank[reg_sec_idx].strtab_base, 0, 32);
> return MEMTX_OK;
> case A_STRTAB_BASE + 4: /* 64b */
> - *data = extract64(s->strtab_base, 32, 32);
> + *data = extract64(s->bank[reg_sec_idx].strtab_base, 32, 32);
> return MEMTX_OK;
> case A_STRTAB_BASE_CFG:
> - *data = s->strtab_base_cfg;
> + *data = s->bank[reg_sec_idx].strtab_base_cfg;
> return MEMTX_OK;
> case A_CMDQ_BASE: /* 64b */
> - *data = extract64(s->cmdq.base, 0, 32);
> + *data = extract64(s->bank[reg_sec_idx].cmdq.base, 0, 32);
> return MEMTX_OK;
> case A_CMDQ_BASE + 4:
> - *data = extract64(s->cmdq.base, 32, 32);
> + *data = extract64(s->bank[reg_sec_idx].cmdq.base, 32, 32);
> return MEMTX_OK;
> case A_CMDQ_PROD:
> - *data = s->cmdq.prod;
> + *data = s->bank[reg_sec_idx].cmdq.prod;
> return MEMTX_OK;
> case A_CMDQ_CONS:
> - *data = s->cmdq.cons;
> + *data = s->bank[reg_sec_idx].cmdq.cons;
> return MEMTX_OK;
> case A_EVENTQ_BASE: /* 64b */
> - *data = extract64(s->eventq.base, 0, 32);
> + *data = extract64(s->bank[reg_sec_idx].eventq.base, 0, 32);
> return MEMTX_OK;
> case A_EVENTQ_BASE + 4: /* 64b */
> - *data = extract64(s->eventq.base, 32, 32);
> + *data = extract64(s->bank[reg_sec_idx].eventq.base, 32, 32);
> return MEMTX_OK;
> case A_EVENTQ_PROD:
> - *data = s->eventq.prod;
> + *data = s->bank[reg_sec_idx].eventq.prod;
> return MEMTX_OK;
> case A_EVENTQ_CONS:
> - *data = s->eventq.cons;
> + *data = s->bank[reg_sec_idx].eventq.cons;
> + return MEMTX_OK;
> return MEMTX_OK;
> default:
> *data = 0;
> @@ -1816,13 +1884,14 @@ static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
>
> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
> offset &= ~0x10000;
> + SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
>
> switch (size) {
> case 8:
> - r = smmu_readll(s, offset, data, attrs);
> + r = smmu_readll(s, offset, data, attrs, reg_sec_idx);
> break;
> case 4:
> - r = smmu_readl(s, offset, data, attrs);
> + r = smmu_readl(s, offset, data, attrs, reg_sec_idx);
> break;
> default:
> r = MEMTX_ERROR;
> @@ -1918,7 +1987,7 @@ static bool smmuv3_gbpa_needed(void *opaque)
> SMMUv3State *s = opaque;
>
> /* Only migrate GBPA if it has different reset value. */
> - return s->gbpa != SMMU_GBPA_RESET_VAL;
> + return s->bank[SMMU_SEC_IDX_NS].gbpa != SMMU_GBPA_RESET_VAL;
> }
>
> static const VMStateDescription vmstate_gbpa = {
> @@ -1927,7 +1996,7 @@ static const VMStateDescription vmstate_gbpa = {
> .minimum_version_id = 1,
> .needed = smmuv3_gbpa_needed,
> .fields = (const VMStateField[]) {
> - VMSTATE_UINT32(gbpa, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gbpa, SMMUv3State),
> VMSTATE_END_OF_LIST()
> }
> };
> @@ -1938,28 +2007,29 @@ static const VMStateDescription vmstate_smmuv3 = {
> .minimum_version_id = 1,
> .priority = MIG_PRI_IOMMU,
> .fields = (const VMStateField[]) {
> - VMSTATE_UINT32(features, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].features, SMMUv3State),
> VMSTATE_UINT8(sid_size, SMMUv3State),
> - VMSTATE_UINT8(sid_split, SMMUv3State),
> + VMSTATE_UINT8(bank[SMMU_SEC_IDX_NS].sid_split, SMMUv3State),
>
> - VMSTATE_UINT32_ARRAY(cr, SMMUv3State, 3),
> - VMSTATE_UINT32(cr0ack, SMMUv3State),
> + VMSTATE_UINT32_ARRAY(bank[SMMU_SEC_IDX_NS].cr, SMMUv3State, 3),
> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].cr0ack, SMMUv3State),
> VMSTATE_UINT32(statusr, SMMUv3State),
> - VMSTATE_UINT32(irq_ctrl, SMMUv3State),
> - VMSTATE_UINT32(gerror, SMMUv3State),
> - VMSTATE_UINT32(gerrorn, SMMUv3State),
> - VMSTATE_UINT64(gerror_irq_cfg0, SMMUv3State),
> - VMSTATE_UINT32(gerror_irq_cfg1, SMMUv3State),
> - VMSTATE_UINT32(gerror_irq_cfg2, SMMUv3State),
> - VMSTATE_UINT64(strtab_base, SMMUv3State),
> - VMSTATE_UINT32(strtab_base_cfg, SMMUv3State),
> - VMSTATE_UINT64(eventq_irq_cfg0, SMMUv3State),
> - VMSTATE_UINT32(eventq_irq_cfg1, SMMUv3State),
> - VMSTATE_UINT32(eventq_irq_cfg2, SMMUv3State),
> -
> - VMSTATE_STRUCT(cmdq, SMMUv3State, 0, vmstate_smmuv3_queue, SMMUQueue),
> - VMSTATE_STRUCT(eventq, SMMUv3State, 0, vmstate_smmuv3_queue, SMMUQueue),
> -
> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].irq_ctrl, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gerror, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gerrorn, SMMUv3State),
> + VMSTATE_UINT64(bank[SMMU_SEC_IDX_NS].gerror_irq_cfg0, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gerror_irq_cfg1, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gerror_irq_cfg2, SMMUv3State),
> + VMSTATE_UINT64(bank[SMMU_SEC_IDX_NS].strtab_base, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].strtab_base_cfg, SMMUv3State),
> + VMSTATE_UINT64(bank[SMMU_SEC_IDX_NS].eventq_irq_cfg0, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].eventq_irq_cfg1, SMMUv3State),
> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].eventq_irq_cfg2, SMMUv3State),
> +
> + VMSTATE_STRUCT(bank[SMMU_SEC_IDX_NS].cmdq, SMMUv3State, 0,
> + vmstate_smmuv3_queue, SMMUQueue),
> + VMSTATE_STRUCT(bank[SMMU_SEC_IDX_NS].eventq, SMMUv3State, 0,
> + vmstate_smmuv3_queue, SMMUQueue),
> VMSTATE_END_OF_LIST(),
> },
> .subsections = (const VMStateDescription * const []) {
> diff --git a/hw/arm/trace-events b/hw/arm/trace-events
> index f3386bd7ae..80cb4d6b04 100644
> --- a/hw/arm/trace-events
> +++ b/hw/arm/trace-events
> @@ -35,13 +35,13 @@ smmuv3_trigger_irq(int irq) "irq=%d"
> smmuv3_write_gerror(uint32_t toggled, uint32_t gerror) "toggled=0x%x, new GERROR=0x%x"
> smmuv3_write_gerrorn(uint32_t acked, uint32_t gerrorn) "acked=0x%x, new GERRORN=0x%x"
> smmuv3_unhandled_cmd(uint32_t type) "Unhandled command type=%d"
> -smmuv3_cmdq_consume(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod=%d cons=%d prod.wrap=%d cons.wrap=%d"
> +smmuv3_cmdq_consume(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap, int sec_idx) "prod=%d cons=%d prod.wrap=%d cons.wrap=%d sec_idx=%d"
I would put the sec_idx at the head of the argument list instead of at
the tail. Here and below.
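i.e. something like (sketch, only the argument order changes):

smmuv3_cmdq_consume(int sec_idx, uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "sec_idx=%d prod=%d cons=%d prod.wrap=%d cons.wrap=%d"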
> smmuv3_cmdq_opcode(const char *opcode) "<--- %s"
> smmuv3_cmdq_consume_out(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod:%d, cons:%d, prod_wrap:%d, cons_wrap:%d "
> smmuv3_cmdq_consume_error(const char *cmd_name, uint8_t cmd_error) "Error on %s command execution: %d"
> smmuv3_write_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
> -smmuv3_record_event(const char *type, uint32_t sid) "%s sid=0x%x"
> -smmuv3_find_ste(uint16_t sid, uint32_t features, uint16_t sid_split) "sid=0x%x features:0x%x, sid_split:0x%x"
> +smmuv3_record_event(const char *type, uint32_t sid, int sec_idx) "%s sid=0x%x sec_idx=%d"
> +smmuv3_find_ste(uint16_t sid, uint32_t features, uint16_t sid_split, int sec_idx) "sid=0x%x features:0x%x, sid_split:0x%x sec_idx=%d"
> smmuv3_find_ste_2lvl(uint64_t strtab_base, uint64_t l1ptr, int l1_ste_offset, uint64_t l2ptr, int l2_ste_offset, int max_l2_ste) "strtab_base:0x%"PRIx64" l1ptr:0x%"PRIx64" l1_off:0x%x, l2ptr:0x%"PRIx64" l2_off:0x%x max_l2_ste:%d"
> smmuv3_get_ste(uint64_t addr) "STE addr: 0x%"PRIx64
> smmuv3_translate_disable(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=0x%x bypass (smmu disabled) iova:0x%"PRIx64" is_write=%d"
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index 80d0fecfde..3df82b83eb 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -40,6 +40,19 @@
> #define CACHED_ENTRY_TO_ADDR(ent, addr) ((ent)->entry.translated_addr + \
> ((addr) & (ent)->entry.addr_mask))
>
> +/*
> + * SMMU Security state index
> + *
> + * The values of this enumeration are identical to the SEC_SID signal
> + * encoding defined in the ARM SMMUv3 Architecture Specification. It is used
> + * to select the appropriate programming interface for a given transaction.
It would have been simpler to me to stick to the spec terminology,
i.e. SEC_SID, instead of renaming this into SEC_IDX.
> + */
> +typedef enum SMMUSecurityIndex {
> + SMMU_SEC_IDX_NS = 0,
> + SMMU_SEC_IDX_S = 1,
> + SMMU_SEC_IDX_NUM,
> +} SMMUSecurityIndex;
> +
> /*
> * Page table walk error types
> */
> @@ -116,6 +129,7 @@ typedef struct SMMUTransCfg {
> SMMUTransTableInfo tt[2];
> /* Used by stage-2 only. */
> struct SMMUS2Cfg s2cfg;
> + SMMUSecurityIndex sec_idx; /* cached security index */
> } SMMUTransCfg;
>
> typedef struct SMMUDevice {
> diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
> index d183a62766..572f15251e 100644
> --- a/include/hw/arm/smmuv3.h
> +++ b/include/hw/arm/smmuv3.h
> @@ -32,19 +32,11 @@ typedef struct SMMUQueue {
> uint8_t log2size;
> } SMMUQueue;
>
> -struct SMMUv3State {
> - SMMUState smmu_state;
> -
> - uint32_t features;
> - uint8_t sid_size;
> - uint8_t sid_split;
> -
> +/* Structure for register bank */
> +typedef struct SMMUv3RegBank {
> uint32_t idr[6];
> - uint32_t iidr;
> - uint32_t aidr;
> uint32_t cr[3];
> uint32_t cr0ack;
> - uint32_t statusr;
> uint32_t gbpa;
> uint32_t irq_ctrl;
> uint32_t gerror;
> @@ -57,12 +49,28 @@ struct SMMUv3State {
> uint64_t eventq_irq_cfg0;
> uint32_t eventq_irq_cfg1;
> uint32_t eventq_irq_cfg2;
> + uint32_t features;
> + uint8_t sid_split;
>
> SMMUQueue eventq, cmdq;
> +} SMMUv3RegBank;
> +
> +struct SMMUv3State {
> + SMMUState smmu_state;
> +
> + /* Shared (non-banked) registers and state */
> + uint8_t sid_size;
> + uint32_t iidr;
> + uint32_t aidr;
> + uint32_t statusr;
> +
> + /* Banked registers for all access */
/* Banked registers depending on SEC_SID */?
> + SMMUv3RegBank bank[SMMU_SEC_IDX_NUM];
>
> qemu_irq irq[4];
> QemuMutex mutex;
> char *stage;
> + bool secure_impl;
> };
>
> typedef enum {
> @@ -84,7 +92,9 @@ struct SMMUv3Class {
> #define TYPE_ARM_SMMUV3 "arm-smmuv3"
> OBJECT_DECLARE_TYPE(SMMUv3State, SMMUv3Class, ARM_SMMUV3)
>
> -#define STAGE1_SUPPORTED(s) FIELD_EX32(s->idr[0], IDR0, S1P)
> -#define STAGE2_SUPPORTED(s) FIELD_EX32(s->idr[0], IDR0, S2P)
> +#define STAGE1_SUPPORTED(s) \
> + FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S1P)
> +#define STAGE2_SUPPORTED(s) \
> + FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S2P)
>
> #endif
The patch is difficult to review. I would prefer you split this into a
first patch that introduces the bank struct and converts the existing
code to use
SMMUv3RegBank *bank = &s->bank[SMMU_SEC_IDX_NS];
and then converts all the register accesses to bank->regn.

This is a purely mechanical change that is not supposed to make any
functional change.

Then in a different patch you can start adapting the prototypes of the
functions that take a sec_sid parameter, with commit titles like:
"prepare function A to work with a different sec_sid than NS", ...
To me this way of structuring the patches will be less error prone.

Then you can introduce sec_sid in the event and in the cfg. This
incremental approach will help readers understand the rationale of the
different changes, although I acknowledge it complicates your task; it
is for the sake of the review process.
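For the first, purely mechanical patch, I mean something along these
lines (a sketch only, reusing the names from this patch and taking
smmuv3_init_regs as an example; not compile-tested):

static void smmuv3_init_regs(SMMUv3State *s)
{
    /* Alias the NS bank once; every access below goes through it. */
    SMMUv3RegBank *bank = &s->bank[SMMU_SEC_IDX_NS];

    bank->idr[0] = FIELD_DP32(bank->idr[0], IDR0, TTF, 2); /* AArch64 PTW only */
    bank->idr[0] = FIELD_DP32(bank->idr[0], IDR0, COHACC, 1); /* IO coherent */
    bank->idr[1] = FIELD_DP32(bank->idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
    bank->gbpa = SMMU_GBPA_RESET_VAL;
    /* ... and so on for the remaining registers, otherwise unchanged. */
}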
Thanks
Eric
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v2 05/14] hw/arm/smmuv3: Introduce banked registers for SMMUv3 state
2025-09-28 14:26 ` Eric Auger
@ 2025-09-29 7:22 ` Tao Tang
0 siblings, 0 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-29 7:22 UTC (permalink / raw)
To: Peter Maydell, Eric Auger
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/9/28 22:26, Eric Auger wrote:
> Hi Tao,
>
> On 9/25/25 6:26 PM, Tao Tang wrote:
>> Refactor the SMMUv3 state management by introducing a banked register
>> structure. This change is foundational for supporting multiple security
>> states (Non-secure, Secure, etc.) in a clean and scalable way.
>>
>> A new structure, SMMUv3RegBank, is defined to hold the state for a
>> single security context. The main SMMUv3State now contains an array of
>> these structures. This avoids having separate fields for secure and
>> non-secure registers (e.g., s->cr and s->secure_cr).
>>
>> The primary benefits of this refactoring are:
>> - Significant reduction in code duplication for MMIO handlers.
>> - Improved code readability and long-term maintainability.
>>
>> Additionally, a new enum SMMUSecurityIndex is introduced to represent
>> the security state of a stream. This enum will be used as the index for
>> the register banks in subsequent patches.
> I guess you chose an enum to prepare for Realm support? Will Realm
> require a new bank to be introduced? Because otherwise we could have
> used a bool NS?
>
> The patch also adds sec_sid info to the event and to the cfg. Neither
> this nor its motivation is described in the commit msg.
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3-internal.h | 33 ++-
>> hw/arm/smmuv3.c | 484 ++++++++++++++++++++---------------
>> hw/arm/trace-events | 6 +-
>> include/hw/arm/smmu-common.h | 14 +
>> include/hw/arm/smmuv3.h | 34 ++-
>> 5 files changed, 336 insertions(+), 235 deletions(-)
>>
>> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
>> index 3820157eaa..cf17c405de 100644
>> --- a/hw/arm/smmuv3-internal.h
>> +++ b/hw/arm/smmuv3-internal.h
>> @@ -250,9 +250,9 @@ REG64(S_EVENTQ_IRQ_CFG0, 0x80b0)
>> REG32(S_EVENTQ_IRQ_CFG1, 0x80b8)
>> REG32(S_EVENTQ_IRQ_CFG2, 0x80bc)
>>
>> -static inline int smmu_enabled(SMMUv3State *s)
>> +static inline int smmu_enabled(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> {
>> - return FIELD_EX32(s->cr[0], CR0, SMMUEN);
>> + return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, SMMUEN);
>> }
>>
>> /* Command Queue Entry */
>> @@ -278,14 +278,16 @@ static inline uint32_t smmuv3_idreg(int regoffset)
>> return smmuv3_ids[regoffset / 4];
>> }
>>
>> -static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s)
>> +static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s,
>> + SMMUSecurityIndex sec_idx)
>> {
>> - return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
>> + return FIELD_EX32(s->bank[sec_idx].irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
>> }
>>
>> -static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s)
>> +static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s,
>> + SMMUSecurityIndex sec_idx)
>> {
>> - return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
>> + return FIELD_EX32(s->bank[sec_idx].irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
>> }
>>
>> /* Queue Handling */
>> @@ -328,19 +330,23 @@ static inline void queue_cons_incr(SMMUQueue *q)
>> q->cons = deposit32(q->cons, 0, q->log2size + 1, q->cons + 1);
>> }
>>
>> -static inline bool smmuv3_cmdq_enabled(SMMUv3State *s)
>> +static inline bool smmuv3_cmdq_enabled(SMMUv3State *s,
>> + SMMUSecurityIndex sec_idx)
>> {
>> - return FIELD_EX32(s->cr[0], CR0, CMDQEN);
>> + return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, CMDQEN);
>> }
>>
>> -static inline bool smmuv3_eventq_enabled(SMMUv3State *s)
>> +static inline bool smmuv3_eventq_enabled(SMMUv3State *s,
>> + SMMUSecurityIndex sec_idx)
>> {
>> - return FIELD_EX32(s->cr[0], CR0, EVENTQEN);
>> + return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, EVENTQEN);
>> }
>>
>> -static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type)
>> +static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type,
>> + SMMUSecurityIndex sec_idx)
>> {
>> - s->cmdq.cons = FIELD_DP32(s->cmdq.cons, CMDQ_CONS, ERR, err_type);
>> + s->bank[sec_idx].cmdq.cons = FIELD_DP32(s->bank[sec_idx].cmdq.cons,
>> + CMDQ_CONS, ERR, err_type);
>> }
>>
>> /* Commands */
>> @@ -511,6 +517,7 @@ typedef struct SMMUEventInfo {
>> uint32_t sid;
>> bool recorded;
>> bool inval_ste_allowed;
>> + SMMUSecurityIndex sec_idx;
>> union {
>> struct {
>> uint32_t ssid;
>> @@ -594,7 +601,7 @@ typedef struct SMMUEventInfo {
>> (x)->word[6] = (uint32_t)(addr & 0xffffffff); \
>> } while (0)
>>
>> -void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *event);
>> +void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info);
> not needed
>>
>> /* Configuration Data */
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index bcf8af8dc7..2efa39b78c 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -48,14 +48,14 @@
>> * @gerror_mask: mask of gerrors to toggle (relevant if @irq is GERROR)
>> */
>> static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
>> - uint32_t gerror_mask)
>> + uint32_t gerror_mask, SMMUSecurityIndex sec_idx)
>> {
>>
>> bool pulse = false;
>>
>> switch (irq) {
>> case SMMU_IRQ_EVTQ:
>> - pulse = smmuv3_eventq_irq_enabled(s);
>> + pulse = smmuv3_eventq_irq_enabled(s, sec_idx);
>> break;
>> case SMMU_IRQ_PRIQ:
>> qemu_log_mask(LOG_UNIMP, "PRI not yet supported\n");
>> @@ -65,17 +65,17 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
>> break;
>> case SMMU_IRQ_GERROR:
>> {
>> - uint32_t pending = s->gerror ^ s->gerrorn;
>> + uint32_t pending = s->bank[sec_idx].gerror ^ s->bank[sec_idx].gerrorn;
>> uint32_t new_gerrors = ~pending & gerror_mask;
>>
>> if (!new_gerrors) {
>> /* only toggle non pending errors */
>> return;
>> }
>> - s->gerror ^= new_gerrors;
>> - trace_smmuv3_write_gerror(new_gerrors, s->gerror);
>> + s->bank[sec_idx].gerror ^= new_gerrors;
>> + trace_smmuv3_write_gerror(new_gerrors, s->bank[sec_idx].gerror);
>>
>> - pulse = smmuv3_gerror_irq_enabled(s);
>> + pulse = smmuv3_gerror_irq_enabled(s, sec_idx);
>> break;
>> }
>> }
>> @@ -85,24 +85,25 @@ static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
>> }
>> }
>>
>> -static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
>> +static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn,
>> + SMMUSecurityIndex sec_idx)
>> {
>> - uint32_t pending = s->gerror ^ s->gerrorn;
>> - uint32_t toggled = s->gerrorn ^ new_gerrorn;
>> + uint32_t pending = s->bank[sec_idx].gerror ^ s->bank[sec_idx].gerrorn;
>> + uint32_t toggled = s->bank[sec_idx].gerrorn ^ new_gerrorn;
>>
>> if (toggled & ~pending) {
>> qemu_log_mask(LOG_GUEST_ERROR,
>> - "guest toggles non pending errors = 0x%x\n",
>> - toggled & ~pending);
>> + "guest toggles non pending errors = 0x%x sec_idx=%d\n",
>> + toggled & ~pending, sec_idx);
>> }
>>
>> /*
>> * We do not raise any error in case guest toggles bits corresponding
>> * to not active IRQs (CONSTRAINED UNPREDICTABLE)
>> */
>> - s->gerrorn = new_gerrorn;
>> + s->bank[sec_idx].gerrorn = new_gerrorn;
>>
>> - trace_smmuv3_write_gerrorn(toggled & pending, s->gerrorn);
>> + trace_smmuv3_write_gerrorn(toggled & pending, s->bank[sec_idx].gerrorn);
>> }
>>
>> static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
>> @@ -142,12 +143,13 @@ static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
>> return MEMTX_OK;
>> }
>>
>> -static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
>> +static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt,
>> + SMMUSecurityIndex sec_idx)
>> {
>> - SMMUQueue *q = &s->eventq;
>> + SMMUQueue *q = &s->bank[sec_idx].eventq;
>> MemTxResult r;
>>
>> - if (!smmuv3_eventq_enabled(s)) {
>> + if (!smmuv3_eventq_enabled(s, sec_idx)) {
>> return MEMTX_ERROR;
>> }
>>
>> @@ -161,7 +163,7 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
>> }
>>
>> if (!smmuv3_q_empty(q)) {
>> - smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0);
>> + smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0, sec_idx);
>> }
>> return MEMTX_OK;
>> }
>> @@ -171,7 +173,7 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
>> Evt evt = {};
>> MemTxResult r;
>>
>> - if (!smmuv3_eventq_enabled(s)) {
>> + if (!smmuv3_eventq_enabled(s, info->sec_idx)) {
>> return;
>> }
>>
>> @@ -249,74 +251,104 @@ void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
>> g_assert_not_reached();
>> }
>>
>> - trace_smmuv3_record_event(smmu_event_string(info->type), info->sid);
>> - r = smmuv3_write_eventq(s, &evt);
>> + trace_smmuv3_record_event(smmu_event_string(info->type),
>> + info->sid, info->sec_idx);
>> + r = smmuv3_write_eventq(s, &evt, info->sec_idx);
>> if (r != MEMTX_OK) {
>> - smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK);
>> + smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK,
>> + info->sec_idx);
>> }
>> info->recorded = true;
>> }
>>
>> static void smmuv3_init_regs(SMMUv3State *s)
>> {
>> + /* Initialize Non-secure bank (SMMU_SEC_IDX_NS) */
>> /* Based on sys property, the stages supported in smmu will be advertised.*/
>> if (s->stage && !strcmp("2", s->stage)) {
>> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S2P, 1);
>> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
> use a pointer to &s->bank[SMMU_SEC_IDX_NS] and rewrite this as
> bank->idr[0] here and below.
> This will be more readable.
>> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S2P, 1);
>> } else if (s->stage && !strcmp("nested", s->stage)) {
>> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1);
>> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S2P, 1);
>> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S1P, 1);
>> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S2P, 1);
>> } else {
>> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1);
>> - }
>> -
>> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTF, 2); /* AArch64 PTW only */
>> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, COHACC, 1); /* IO coherent */
>> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, ASID16, 1); /* 16-bit ASID */
>> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, VMID16, 1); /* 16-bit VMID */
>> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTENDIAN, 2); /* little endian */
>> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, STALL_MODEL, 1); /* No stall */
>> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S1P, 1);
>> + }
>> +
>> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, TTF, 2); /* AArch64 PTW only */
>> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, COHACC, 1); /* IO coherent */
>> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, ASID16, 1); /* 16-bit ASID */
>> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, VMID16, 1); /* 16-bit VMID */
>> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, TTENDIAN, 2); /* little endian */
>> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, STALL_MODEL, 1); /* No stall */
>> /* terminated transaction will always be aborted/error returned */
>> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TERM_MODEL, 1);
>> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, TERM_MODEL, 1);
>> /* 2-level stream table supported */
>> - s->idr[0] = FIELD_DP32(s->idr[0], IDR0, STLEVEL, 1);
>> -
>> - s->idr[1] = FIELD_DP32(s->idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
>> - s->idr[1] = FIELD_DP32(s->idr[1], IDR1, EVENTQS, SMMU_EVENTQS);
>> - s->idr[1] = FIELD_DP32(s->idr[1], IDR1, CMDQS, SMMU_CMDQS);
>> -
>> - s->idr[3] = FIELD_DP32(s->idr[3], IDR3, HAD, 1);
>> - if (FIELD_EX32(s->idr[0], IDR0, S2P)) {
>> + s->bank[SMMU_SEC_IDX_NS].idr[0] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, STLEVEL, 1);
>> +
>> + s->bank[SMMU_SEC_IDX_NS].idr[1] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
>> + s->bank[SMMU_SEC_IDX_NS].idr[1] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, EVENTQS, SMMU_EVENTQS);
>> + s->bank[SMMU_SEC_IDX_NS].idr[1] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, CMDQS, SMMU_CMDQS);
>> +
>> + s->bank[SMMU_SEC_IDX_NS].idr[3] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[3], IDR3, HAD, 1);
>> + if (FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S2P)) {
>> /* XNX is a stage-2-specific feature */
>> - s->idr[3] = FIELD_DP32(s->idr[3], IDR3, XNX, 1);
>> + s->bank[SMMU_SEC_IDX_NS].idr[3] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[3], IDR3, XNX, 1);
>> }
>> - s->idr[3] = FIELD_DP32(s->idr[3], IDR3, RIL, 1);
>> - s->idr[3] = FIELD_DP32(s->idr[3], IDR3, BBML, 2);
>> + s->bank[SMMU_SEC_IDX_NS].idr[3] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[3], IDR3, RIL, 1);
>> + s->bank[SMMU_SEC_IDX_NS].idr[3] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[3], IDR3, BBML, 2);
>>
>> - s->idr[5] = FIELD_DP32(s->idr[5], IDR5, OAS, SMMU_IDR5_OAS); /* 44 bits */
>> + /* 44 bits */
>> + s->bank[SMMU_SEC_IDX_NS].idr[5] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, OAS, SMMU_IDR5_OAS);
>> /* 4K, 16K and 64K granule support */
>> - s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN4K, 1);
>> - s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN16K, 1);
>> - s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN64K, 1);
>> -
>> - s->cmdq.base = deposit64(s->cmdq.base, 0, 5, SMMU_CMDQS);
>> - s->cmdq.prod = 0;
>> - s->cmdq.cons = 0;
>> - s->cmdq.entry_size = sizeof(struct Cmd);
>> - s->eventq.base = deposit64(s->eventq.base, 0, 5, SMMU_EVENTQS);
>> - s->eventq.prod = 0;
>> - s->eventq.cons = 0;
>> - s->eventq.entry_size = sizeof(struct Evt);
>> -
>> - s->features = 0;
>> - s->sid_split = 0;
>> + s->bank[SMMU_SEC_IDX_NS].idr[5] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, GRAN4K, 1);
>> + s->bank[SMMU_SEC_IDX_NS].idr[5] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, GRAN16K, 1);
>> + s->bank[SMMU_SEC_IDX_NS].idr[5] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, GRAN64K, 1);
>> +
>> + /* Initialize Non-secure command and event queues */
>> + s->bank[SMMU_SEC_IDX_NS].cmdq.base =
>> + deposit64(s->bank[SMMU_SEC_IDX_NS].cmdq.base, 0, 5, SMMU_CMDQS);
>> + s->bank[SMMU_SEC_IDX_NS].cmdq.prod = 0;
>> + s->bank[SMMU_SEC_IDX_NS].cmdq.cons = 0;
>> + s->bank[SMMU_SEC_IDX_NS].cmdq.entry_size = sizeof(struct Cmd);
>> + s->bank[SMMU_SEC_IDX_NS].eventq.base =
>> + deposit64(s->bank[SMMU_SEC_IDX_NS].eventq.base, 0, 5, SMMU_EVENTQS);
>> + s->bank[SMMU_SEC_IDX_NS].eventq.prod = 0;
>> + s->bank[SMMU_SEC_IDX_NS].eventq.cons = 0;
>> + s->bank[SMMU_SEC_IDX_NS].eventq.entry_size = sizeof(struct Evt);
>> + s->bank[SMMU_SEC_IDX_NS].features = 0;
>> + s->bank[SMMU_SEC_IDX_NS].sid_split = 0;
>> s->aidr = 0x1;
>> - s->cr[0] = 0;
>> - s->cr0ack = 0;
>> - s->irq_ctrl = 0;
>> - s->gerror = 0;
>> - s->gerrorn = 0;
>> + s->bank[SMMU_SEC_IDX_NS].cr[0] = 0;
>> + s->bank[SMMU_SEC_IDX_NS].cr0ack = 0;
>> + s->bank[SMMU_SEC_IDX_NS].irq_ctrl = 0;
>> + s->bank[SMMU_SEC_IDX_NS].gerror = 0;
>> + s->bank[SMMU_SEC_IDX_NS].gerrorn = 0;
>> s->statusr = 0;
>> - s->gbpa = SMMU_GBPA_RESET_VAL;
>> + s->bank[SMMU_SEC_IDX_NS].gbpa = SMMU_GBPA_RESET_VAL;
>> +
>> }
>>
>> static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
>> @@ -430,7 +462,7 @@ static bool s2_pgtable_config_valid(uint8_t sl0, uint8_t t0sz, uint8_t gran)
>> static int decode_ste_s2_cfg(SMMUv3State *s, SMMUTransCfg *cfg,
>> STE *ste)
>> {
>> - uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
>> + uint8_t oas = FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, OAS);
>>
>> if (STE_S2AA64(ste) == 0x0) {
>> qemu_log_mask(LOG_UNIMP,
>> @@ -548,7 +580,7 @@ static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
>> STE *ste, SMMUEventInfo *event)
>> {
>> uint32_t config;
>> - uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
>> + uint8_t oas = FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, OAS);
>> int ret;
>>
>> if (!STE_VALID(ste)) {
>> @@ -625,20 +657,25 @@ bad_ste:
>> * @sid: stream ID
>> * @ste: returned stream table entry
>> * @event: handle to an event info
>> + * @cfg: translation configuration
>> *
>> * Supports linear and 2-level stream table
>> * Return 0 on success, -EINVAL otherwise
>> */
>> static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
>> - SMMUEventInfo *event)
>> + SMMUEventInfo *event, SMMUTransCfg *cfg)
>> {
>> - dma_addr_t addr, strtab_base;
>> + dma_addr_t addr;
>> uint32_t log2size;
>> int strtab_size_shift;
>> int ret;
>> + uint32_t features = s->bank[cfg->sec_idx].features;
> also here you can use a pointer to bank[cfg->sec_idx]
>> + dma_addr_t strtab_base = s->bank[cfg->sec_idx].strtab_base;
>> + uint8_t sid_split = s->bank[cfg->sec_idx].sid_split;
>>
>> - trace_smmuv3_find_ste(sid, s->features, s->sid_split);
>> - log2size = FIELD_EX32(s->strtab_base_cfg, STRTAB_BASE_CFG, LOG2SIZE);
>> + trace_smmuv3_find_ste(sid, features, sid_split, cfg->sec_idx);
>> + log2size = FIELD_EX32(s->bank[cfg->sec_idx].strtab_base_cfg,
>> + STRTAB_BASE_CFG, LOG2SIZE);
>> /*
>> * Check SID range against both guest-configured and implementation limits
>> */
>> @@ -646,7 +683,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
>> event->type = SMMU_EVT_C_BAD_STREAMID;
>> return -EINVAL;
>> }
>> - if (s->features & SMMU_FEATURE_2LVL_STE) {
>> + if (features & SMMU_FEATURE_2LVL_STE) {
>> int l1_ste_offset, l2_ste_offset, max_l2_ste, span, i;
>> dma_addr_t l1ptr, l2ptr;
>> STEDesc l1std;
>> @@ -655,11 +692,11 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
>> * Align strtab base address to table size. For this purpose, assume it
>> * is not bounded by SMMU_IDR1_SIDSIZE.
>> */
>> - strtab_size_shift = MAX(5, (int)log2size - s->sid_split - 1 + 3);
>> - strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK &
>> + strtab_size_shift = MAX(5, (int)log2size - sid_split - 1 + 3);
>> + strtab_base = strtab_base & SMMU_BASE_ADDR_MASK &
>> ~MAKE_64BIT_MASK(0, strtab_size_shift);
>> - l1_ste_offset = sid >> s->sid_split;
>> - l2_ste_offset = sid & ((1 << s->sid_split) - 1);
>> + l1_ste_offset = sid >> sid_split;
>> + l2_ste_offset = sid & ((1 << sid_split) - 1);
>> l1ptr = (dma_addr_t)(strtab_base + l1_ste_offset * sizeof(l1std));
>> /* TODO: guarantee 64-bit single-copy atomicity */
>> ret = dma_memory_read(&address_space_memory, l1ptr, &l1std,
>> @@ -688,7 +725,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
>> }
>> max_l2_ste = (1 << span) - 1;
>> l2ptr = l1std_l2ptr(&l1std);
>> - trace_smmuv3_find_ste_2lvl(s->strtab_base, l1ptr, l1_ste_offset,
>> + trace_smmuv3_find_ste_2lvl(strtab_base, l1ptr, l1_ste_offset,
>> l2ptr, l2_ste_offset, max_l2_ste);
>> if (l2_ste_offset > max_l2_ste) {
>> qemu_log_mask(LOG_GUEST_ERROR,
>> @@ -700,7 +737,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
>> addr = l2ptr + l2_ste_offset * sizeof(*ste);
>> } else {
>> strtab_size_shift = log2size + 5;
>> - strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK &
>> + strtab_base = strtab_base & SMMU_BASE_ADDR_MASK &
>> ~MAKE_64BIT_MASK(0, strtab_size_shift);
>> addr = strtab_base + sid * sizeof(*ste);
>> }
>> @@ -719,7 +756,7 @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
>> int i;
>> SMMUTranslationStatus status;
>> SMMUTLBEntry *entry;
>> - uint8_t oas = FIELD_EX32(s->idr[5], IDR5, OAS);
>> + uint8_t oas = FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[5], IDR5, OAS);
>>
>> if (!CD_VALID(cd) || !CD_AARCH64(cd)) {
>> goto bad_cd;
>> @@ -834,7 +871,7 @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
>> /* ASID defaults to -1 (if s1 is not supported). */
>> cfg->asid = -1;
>>
>> - ret = smmu_find_ste(s, sid, &ste, event);
>> + ret = smmu_find_ste(s, sid, &ste, event, cfg);
>> if (ret) {
>> return ret;
>> }
>> @@ -964,6 +1001,7 @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
>> * - s2 translation => CLASS_IN (input to function)
>> */
>> class = ptw_info.is_ipa_descriptor ? SMMU_CLASS_TT : class;
>> + event->sec_idx = cfg->sec_idx;
>> switch (ptw_info.type) {
>> case SMMU_PTW_ERR_WALK_EABT:
>> event->type = SMMU_EVT_F_WALK_EABT;
>> @@ -1046,6 +1084,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>> .inval_ste_allowed = false};
>> SMMUTranslationStatus status;
>> SMMUTransCfg *cfg = NULL;
>> + SMMUSecurityIndex sec_idx = SMMU_SEC_IDX_NS;
>> IOMMUTLBEntry entry = {
>> .target_as = &address_space_memory,
>> .iova = addr,
>> @@ -1057,12 +1096,9 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>>
>> qemu_mutex_lock(&s->mutex);
>>
>> - if (!smmu_enabled(s)) {
>> - if (FIELD_EX32(s->gbpa, GBPA, ABORT)) {
>> - status = SMMU_TRANS_ABORT;
>> - } else {
>> - status = SMMU_TRANS_DISABLE;
>> - }
>> + if (!smmu_enabled(s, sec_idx)) {
>> + bool abort_flag = FIELD_EX32(s->bank[sec_idx].gbpa, GBPA, ABORT);
>> + status = abort_flag ? SMMU_TRANS_ABORT : SMMU_TRANS_DISABLE;
>> goto epilogue;
>> }
>>
>> @@ -1278,14 +1314,14 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
>> }
>> }
>>
>> -static int smmuv3_cmdq_consume(SMMUv3State *s)
>> +static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> {
>> SMMUState *bs = ARM_SMMU(s);
>> SMMUCmdError cmd_error = SMMU_CERROR_NONE;
>> - SMMUQueue *q = &s->cmdq;
>> + SMMUQueue *q = &s->bank[sec_idx].cmdq;
>> SMMUCommandType type = 0;
>>
>> - if (!smmuv3_cmdq_enabled(s)) {
>> + if (!smmuv3_cmdq_enabled(s, sec_idx)) {
>> return 0;
>> }
>> /*
>> @@ -1296,11 +1332,12 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>> */
>>
>> while (!smmuv3_q_empty(q)) {
>> - uint32_t pending = s->gerror ^ s->gerrorn;
>> + uint32_t pending = s->bank[sec_idx].gerror ^ s->bank[sec_idx].gerrorn;
>> Cmd cmd;
>>
>> trace_smmuv3_cmdq_consume(Q_PROD(q), Q_CONS(q),
>> - Q_PROD_WRAP(q), Q_CONS_WRAP(q));
>> + Q_PROD_WRAP(q), Q_CONS_WRAP(q),
>> + sec_idx);
>>
>> if (FIELD_EX32(pending, GERROR, CMDQ_ERR)) {
>> break;
>> @@ -1319,7 +1356,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>> switch (type) {
>> case SMMU_CMD_SYNC:
>> if (CMD_SYNC_CS(&cmd) & CMD_SYNC_SIG_IRQ) {
>> - smmuv3_trigger_irq(s, SMMU_IRQ_CMD_SYNC, 0);
>> + smmuv3_trigger_irq(s, SMMU_IRQ_CMD_SYNC, 0, sec_idx);
>> }
>> break;
>> case SMMU_CMD_PREFETCH_CONFIG:
>> @@ -1498,8 +1535,9 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>>
>> if (cmd_error) {
>> trace_smmuv3_cmdq_consume_error(smmu_cmd_string(type), cmd_error);
>> - smmu_write_cmdq_err(s, cmd_error);
>> - smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_CMDQ_ERR_MASK);
>> + smmu_write_cmdq_err(s, cmd_error, sec_idx);
>> + smmuv3_trigger_irq(s, SMMU_IRQ_GERROR,
>> + R_GERROR_CMDQ_ERR_MASK, sec_idx);
>> }
>>
>> trace_smmuv3_cmdq_consume_out(Q_PROD(q), Q_CONS(q),
>> @@ -1509,31 +1547,33 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
>> }
>>
>> static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
>> - uint64_t data, MemTxAttrs attrs)
>> + uint64_t data, MemTxAttrs attrs,
>> + SMMUSecurityIndex reg_sec_idx)
>> {
>> - switch (offset) {
>> - case A_GERROR_IRQ_CFG0:
>> - s->gerror_irq_cfg0 = data;
>> - return MEMTX_OK;
>> + uint32_t reg_offset = offset & 0xfff;
>> + switch (reg_offset) {
>> case A_STRTAB_BASE:
>> - s->strtab_base = data;
>> + /* Clear reserved bits according to spec */
>> + s->bank[reg_sec_idx].strtab_base = data & SMMU_STRTAB_BASE_RESERVED;
>> return MEMTX_OK;
>> case A_CMDQ_BASE:
>> - s->cmdq.base = data;
>> - s->cmdq.log2size = extract64(s->cmdq.base, 0, 5);
>> - if (s->cmdq.log2size > SMMU_CMDQS) {
>> - s->cmdq.log2size = SMMU_CMDQS;
>> + s->bank[reg_sec_idx].cmdq.base = data;
>> + s->bank[reg_sec_idx].cmdq.log2size = extract64(
>> + s->bank[reg_sec_idx].cmdq.base, 0, 5);
> here also use a local variable pointing to bank[reg_sec_idx]
>> + if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
>> + s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
>> }
>> return MEMTX_OK;
>> case A_EVENTQ_BASE:
>> - s->eventq.base = data;
>> - s->eventq.log2size = extract64(s->eventq.base, 0, 5);
>> - if (s->eventq.log2size > SMMU_EVENTQS) {
>> - s->eventq.log2size = SMMU_EVENTQS;
>> + s->bank[reg_sec_idx].eventq.base = data;
>> + s->bank[reg_sec_idx].eventq.log2size = extract64(
>> + s->bank[reg_sec_idx].eventq.base, 0, 5);
>> + if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
>> + s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
>> }
>> return MEMTX_OK;
>> case A_EVENTQ_IRQ_CFG0:
>> - s->eventq_irq_cfg0 = data;
>> + s->bank[reg_sec_idx].eventq_irq_cfg0 = data;
>> return MEMTX_OK;
>> default:
>> qemu_log_mask(LOG_UNIMP,
>> @@ -1544,43 +1584,47 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
>> }
>>
>> static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
>> - uint64_t data, MemTxAttrs attrs)
>> + uint64_t data, MemTxAttrs attrs,
>> + SMMUSecurityIndex reg_sec_idx)
>> {
>> - switch (offset) {
>> + uint32_t reg_offset = offset & 0xfff;
>> + switch (reg_offset) {
>> case A_CR0:
>> - s->cr[0] = data;
>> - s->cr0ack = data & ~SMMU_CR0_RESERVED;
>> + s->bank[reg_sec_idx].cr[0] = data;
>> + s->bank[reg_sec_idx].cr0ack = data;
>> /* in case the command queue has been enabled */
>> - smmuv3_cmdq_consume(s);
>> + smmuv3_cmdq_consume(s, reg_sec_idx);
>> return MEMTX_OK;
>> case A_CR1:
>> - s->cr[1] = data;
>> + s->bank[reg_sec_idx].cr[1] = data;
>> return MEMTX_OK;
>> case A_CR2:
>> - s->cr[2] = data;
>> + s->bank[reg_sec_idx].cr[2] = data;
>> return MEMTX_OK;
>> case A_IRQ_CTRL:
>> - s->irq_ctrl = data;
>> + s->bank[reg_sec_idx].irq_ctrl = data;
>> return MEMTX_OK;
>> case A_GERRORN:
>> - smmuv3_write_gerrorn(s, data);
>> + smmuv3_write_gerrorn(s, data, reg_sec_idx);
>> /*
>> * By acknowledging the CMDQ_ERR, SW may notify cmds can
>> * be processed again
>> */
>> - smmuv3_cmdq_consume(s);
>> + smmuv3_cmdq_consume(s, reg_sec_idx);
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG0: /* 64b */
>> - s->gerror_irq_cfg0 = deposit64(s->gerror_irq_cfg0, 0, 32, data);
>> + s->bank[reg_sec_idx].gerror_irq_cfg0 =
>> + deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32, data);
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG0 + 4:
>> - s->gerror_irq_cfg0 = deposit64(s->gerror_irq_cfg0, 32, 32, data);
>> + s->bank[reg_sec_idx].gerror_irq_cfg0 =
>> + deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32, data);
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG1:
>> - s->gerror_irq_cfg1 = data;
>> + s->bank[reg_sec_idx].gerror_irq_cfg1 = data;
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG2:
>> - s->gerror_irq_cfg2 = data;
>> + s->bank[reg_sec_idx].gerror_irq_cfg2 = data;
>> return MEMTX_OK;
>> case A_GBPA:
>> /*
>> @@ -1589,66 +1633,81 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
>> */
>> if (data & R_GBPA_UPDATE_MASK) {
>> /* Ignore update bit as write is synchronous. */
>> - s->gbpa = data & ~R_GBPA_UPDATE_MASK;
>> + s->bank[reg_sec_idx].gbpa = data & ~R_GBPA_UPDATE_MASK;
>> }
>> return MEMTX_OK;
>> case A_STRTAB_BASE: /* 64b */
>> - s->strtab_base = deposit64(s->strtab_base, 0, 32, data);
>> + s->bank[reg_sec_idx].strtab_base =
>> + deposit64(s->bank[reg_sec_idx].strtab_base, 0, 32, data);
>> return MEMTX_OK;
>> case A_STRTAB_BASE + 4:
>> - s->strtab_base = deposit64(s->strtab_base, 32, 32, data);
>> + s->bank[reg_sec_idx].strtab_base =
>> + deposit64(s->bank[reg_sec_idx].strtab_base, 32, 32, data);
>> return MEMTX_OK;
>> case A_STRTAB_BASE_CFG:
>> - s->strtab_base_cfg = data;
>> + s->bank[reg_sec_idx].strtab_base_cfg = data;
>> if (FIELD_EX32(data, STRTAB_BASE_CFG, FMT) == 1) {
>> - s->sid_split = FIELD_EX32(data, STRTAB_BASE_CFG, SPLIT);
>> - s->features |= SMMU_FEATURE_2LVL_STE;
>> + s->bank[reg_sec_idx].sid_split =
>> + FIELD_EX32(data, STRTAB_BASE_CFG, SPLIT);
>> + s->bank[reg_sec_idx].features |= SMMU_FEATURE_2LVL_STE;
>> }
>> return MEMTX_OK;
>> case A_CMDQ_BASE: /* 64b */
>> - s->cmdq.base = deposit64(s->cmdq.base, 0, 32, data);
>> - s->cmdq.log2size = extract64(s->cmdq.base, 0, 5);
>> - if (s->cmdq.log2size > SMMU_CMDQS) {
>> - s->cmdq.log2size = SMMU_CMDQS;
>> + s->bank[reg_sec_idx].cmdq.base =
>> + deposit64(s->bank[reg_sec_idx].cmdq.base, 0, 32, data);
>> + s->bank[reg_sec_idx].cmdq.log2size =
>> + extract64(s->bank[reg_sec_idx].cmdq.base, 0, 5);
>> + if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
>> + s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
>> }
>> return MEMTX_OK;
>> case A_CMDQ_BASE + 4: /* 64b */
>> - s->cmdq.base = deposit64(s->cmdq.base, 32, 32, data);
>> + s->bank[reg_sec_idx].cmdq.base =
>> + deposit64(s->bank[reg_sec_idx].cmdq.base, 32, 32, data);
>> + return MEMTX_OK;
>> return MEMTX_OK;
>> case A_CMDQ_PROD:
>> - s->cmdq.prod = data;
>> - smmuv3_cmdq_consume(s);
>> + s->bank[reg_sec_idx].cmdq.prod = data;
>> + smmuv3_cmdq_consume(s, reg_sec_idx);
>> return MEMTX_OK;
>> case A_CMDQ_CONS:
>> - s->cmdq.cons = data;
>> + s->bank[reg_sec_idx].cmdq.cons = data;
>> return MEMTX_OK;
>> case A_EVENTQ_BASE: /* 64b */
>> - s->eventq.base = deposit64(s->eventq.base, 0, 32, data);
>> - s->eventq.log2size = extract64(s->eventq.base, 0, 5);
>> - if (s->eventq.log2size > SMMU_EVENTQS) {
>> - s->eventq.log2size = SMMU_EVENTQS;
>> + s->bank[reg_sec_idx].eventq.base =
>> + deposit64(s->bank[reg_sec_idx].eventq.base, 0, 32, data);
>> + s->bank[reg_sec_idx].eventq.log2size =
>> + extract64(s->bank[reg_sec_idx].eventq.base, 0, 5);
>> + if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
>> + s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
>> }
>> + s->bank[reg_sec_idx].eventq.cons = data;
> why is this added?
>> return MEMTX_OK;
>> case A_EVENTQ_BASE + 4:
>> - s->eventq.base = deposit64(s->eventq.base, 32, 32, data);
>> + s->bank[reg_sec_idx].eventq.base =
>> + deposit64(s->bank[reg_sec_idx].eventq.base, 32, 32, data);
>> + return MEMTX_OK;
>> return MEMTX_OK;
>> case A_EVENTQ_PROD:
>> - s->eventq.prod = data;
>> + s->bank[reg_sec_idx].eventq.prod = data;
>> return MEMTX_OK;
>> case A_EVENTQ_CONS:
>> - s->eventq.cons = data;
>> + s->bank[reg_sec_idx].eventq.cons = data;
>> return MEMTX_OK;
>> case A_EVENTQ_IRQ_CFG0: /* 64b */
>> - s->eventq_irq_cfg0 = deposit64(s->eventq_irq_cfg0, 0, 32, data);
>> + s->bank[reg_sec_idx].eventq_irq_cfg0 =
>> + deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32, data);
>> return MEMTX_OK;
>> case A_EVENTQ_IRQ_CFG0 + 4:
>> - s->eventq_irq_cfg0 = deposit64(s->eventq_irq_cfg0, 32, 32, data);
>> + s->bank[reg_sec_idx].eventq_irq_cfg0 =
>> + deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32, data);
>> + return MEMTX_OK;
>> return MEMTX_OK;
>> case A_EVENTQ_IRQ_CFG1:
>> - s->eventq_irq_cfg1 = data;
>> + s->bank[reg_sec_idx].eventq_irq_cfg1 = data;
>> return MEMTX_OK;
>> case A_EVENTQ_IRQ_CFG2:
>> - s->eventq_irq_cfg2 = data;
>> + s->bank[reg_sec_idx].eventq_irq_cfg2 = data;
>> return MEMTX_OK;
>> default:
>> qemu_log_mask(LOG_UNIMP,
>> @@ -1667,13 +1726,14 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
>>
>> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
>> offset &= ~0x10000;
>> + SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
>>
>> switch (size) {
>> case 8:
>> - r = smmu_writell(s, offset, data, attrs);
>> + r = smmu_writell(s, offset, data, attrs, reg_sec_idx);
>> break;
>> case 4:
>> - r = smmu_writel(s, offset, data, attrs);
>> + r = smmu_writel(s, offset, data, attrs, reg_sec_idx);
>> break;
>> default:
>> r = MEMTX_ERROR;
>> @@ -1685,20 +1745,24 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
>> }
>>
>> static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
>> - uint64_t *data, MemTxAttrs attrs)
>> + uint64_t *data, MemTxAttrs attrs,
>> + SMMUSecurityIndex reg_sec_idx)
>> {
>> - switch (offset) {
>> + uint32_t reg_offset = offset & 0xfff;
>> + switch (reg_offset) {
>> case A_GERROR_IRQ_CFG0:
>> - *data = s->gerror_irq_cfg0;
>> + *data = s->bank[reg_sec_idx].gerror_irq_cfg0;
>> return MEMTX_OK;
>> case A_STRTAB_BASE:
>> - *data = s->strtab_base;
>> + *data = s->bank[reg_sec_idx].strtab_base;
>> return MEMTX_OK;
>> case A_CMDQ_BASE:
>> - *data = s->cmdq.base;
>> + *data = s->bank[reg_sec_idx].cmdq.base;
>> return MEMTX_OK;
>> case A_EVENTQ_BASE:
>> - *data = s->eventq.base;
>> + *data = s->bank[reg_sec_idx].eventq.base;
>> + return MEMTX_OK;
>> + *data = s->bank[reg_sec_idx].eventq_irq_cfg0;
>> return MEMTX_OK;
>> default:
>> *data = 0;
>> @@ -1710,14 +1774,16 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
>> }
>>
>> static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
>> - uint64_t *data, MemTxAttrs attrs)
>> + uint64_t *data, MemTxAttrs attrs,
>> + SMMUSecurityIndex reg_sec_idx)
>> {
>> - switch (offset) {
>> + uint32_t reg_offset = offset & 0xfff;
>> + switch (reg_offset) {
>> case A_IDREGS ... A_IDREGS + 0x2f:
>> *data = smmuv3_idreg(offset - A_IDREGS);
>> return MEMTX_OK;
>> case A_IDR0 ... A_IDR5:
>> - *data = s->idr[(offset - A_IDR0) / 4];
>> + *data = s->bank[reg_sec_idx].idr[(reg_offset - A_IDR0) / 4];
>> return MEMTX_OK;
>> case A_IIDR:
>> *data = s->iidr;
>> @@ -1726,77 +1792,79 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
>> *data = s->aidr;
>> return MEMTX_OK;
>> case A_CR0:
>> - *data = s->cr[0];
>> + *data = s->bank[reg_sec_idx].cr[0];
>> return MEMTX_OK;
>> case A_CR0ACK:
>> - *data = s->cr0ack;
>> + *data = s->bank[reg_sec_idx].cr0ack;
>> return MEMTX_OK;
>> case A_CR1:
>> - *data = s->cr[1];
>> + *data = s->bank[reg_sec_idx].cr[1];
>> return MEMTX_OK;
>> case A_CR2:
>> - *data = s->cr[2];
>> + *data = s->bank[reg_sec_idx].cr[2];
>> return MEMTX_OK;
>> case A_STATUSR:
>> *data = s->statusr;
>> return MEMTX_OK;
>> case A_GBPA:
>> - *data = s->gbpa;
>> + *data = s->bank[reg_sec_idx].gbpa;
>> return MEMTX_OK;
>> case A_IRQ_CTRL:
>> case A_IRQ_CTRL_ACK:
>> - *data = s->irq_ctrl;
>> + *data = s->bank[reg_sec_idx].irq_ctrl;
>> return MEMTX_OK;
>> case A_GERROR:
>> - *data = s->gerror;
>> + *data = s->bank[reg_sec_idx].gerror;
>> return MEMTX_OK;
>> case A_GERRORN:
>> - *data = s->gerrorn;
>> + *data = s->bank[reg_sec_idx].gerrorn;
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG0: /* 64b */
>> - *data = extract64(s->gerror_irq_cfg0, 0, 32);
>> + *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32);
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG0 + 4:
>> - *data = extract64(s->gerror_irq_cfg0, 32, 32);
>> + *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32);
>> + return MEMTX_OK;
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG1:
>> - *data = s->gerror_irq_cfg1;
>> + *data = s->bank[reg_sec_idx].gerror_irq_cfg1;
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG2:
>> - *data = s->gerror_irq_cfg2;
>> + *data = s->bank[reg_sec_idx].gerror_irq_cfg2;
>> return MEMTX_OK;
>> case A_STRTAB_BASE: /* 64b */
>> - *data = extract64(s->strtab_base, 0, 32);
>> + *data = extract64(s->bank[reg_sec_idx].strtab_base, 0, 32);
>> return MEMTX_OK;
>> case A_STRTAB_BASE + 4: /* 64b */
>> - *data = extract64(s->strtab_base, 32, 32);
>> + *data = extract64(s->bank[reg_sec_idx].strtab_base, 32, 32);
>> return MEMTX_OK;
>> case A_STRTAB_BASE_CFG:
>> - *data = s->strtab_base_cfg;
>> + *data = s->bank[reg_sec_idx].strtab_base_cfg;
>> return MEMTX_OK;
>> case A_CMDQ_BASE: /* 64b */
>> - *data = extract64(s->cmdq.base, 0, 32);
>> + *data = extract64(s->bank[reg_sec_idx].cmdq.base, 0, 32);
>> return MEMTX_OK;
>> case A_CMDQ_BASE + 4:
>> - *data = extract64(s->cmdq.base, 32, 32);
>> + *data = extract64(s->bank[reg_sec_idx].cmdq.base, 32, 32);
>> return MEMTX_OK;
>> case A_CMDQ_PROD:
>> - *data = s->cmdq.prod;
>> + *data = s->bank[reg_sec_idx].cmdq.prod;
>> return MEMTX_OK;
>> case A_CMDQ_CONS:
>> - *data = s->cmdq.cons;
>> + *data = s->bank[reg_sec_idx].cmdq.cons;
>> return MEMTX_OK;
>> case A_EVENTQ_BASE: /* 64b */
>> - *data = extract64(s->eventq.base, 0, 32);
>> + *data = extract64(s->bank[reg_sec_idx].eventq.base, 0, 32);
>> return MEMTX_OK;
>> case A_EVENTQ_BASE + 4: /* 64b */
>> - *data = extract64(s->eventq.base, 32, 32);
>> + *data = extract64(s->bank[reg_sec_idx].eventq.base, 32, 32);
>> return MEMTX_OK;
>> case A_EVENTQ_PROD:
>> - *data = s->eventq.prod;
>> + *data = s->bank[reg_sec_idx].eventq.prod;
>> return MEMTX_OK;
>> case A_EVENTQ_CONS:
>> - *data = s->eventq.cons;
>> + *data = s->bank[reg_sec_idx].eventq.cons;
>> + return MEMTX_OK;
>> return MEMTX_OK;
>> default:
>> *data = 0;
>> @@ -1816,13 +1884,14 @@ static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
>>
>> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
>> offset &= ~0x10000;
>> + SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
>>
>> switch (size) {
>> case 8:
>> - r = smmu_readll(s, offset, data, attrs);
>> + r = smmu_readll(s, offset, data, attrs, reg_sec_idx);
>> break;
>> case 4:
>> - r = smmu_readl(s, offset, data, attrs);
>> + r = smmu_readl(s, offset, data, attrs, reg_sec_idx);
>> break;
>> default:
>> r = MEMTX_ERROR;
>> @@ -1918,7 +1987,7 @@ static bool smmuv3_gbpa_needed(void *opaque)
>> SMMUv3State *s = opaque;
>>
>> /* Only migrate GBPA if it has different reset value. */
>> - return s->gbpa != SMMU_GBPA_RESET_VAL;
>> + return s->bank[SMMU_SEC_IDX_NS].gbpa != SMMU_GBPA_RESET_VAL;
>> }
>>
>> static const VMStateDescription vmstate_gbpa = {
>> @@ -1927,7 +1996,7 @@ static const VMStateDescription vmstate_gbpa = {
>> .minimum_version_id = 1,
>> .needed = smmuv3_gbpa_needed,
>> .fields = (const VMStateField[]) {
>> - VMSTATE_UINT32(gbpa, SMMUv3State),
>> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gbpa, SMMUv3State),
>> VMSTATE_END_OF_LIST()
>> }
>> };
>> @@ -1938,28 +2007,29 @@ static const VMStateDescription vmstate_smmuv3 = {
>> .minimum_version_id = 1,
>> .priority = MIG_PRI_IOMMU,
>> .fields = (const VMStateField[]) {
>> - VMSTATE_UINT32(features, SMMUv3State),
>> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].features, SMMUv3State),
>> VMSTATE_UINT8(sid_size, SMMUv3State),
>> - VMSTATE_UINT8(sid_split, SMMUv3State),
>> + VMSTATE_UINT8(bank[SMMU_SEC_IDX_NS].sid_split, SMMUv3State),
>>
>> - VMSTATE_UINT32_ARRAY(cr, SMMUv3State, 3),
>> - VMSTATE_UINT32(cr0ack, SMMUv3State),
>> + VMSTATE_UINT32_ARRAY(bank[SMMU_SEC_IDX_NS].cr, SMMUv3State, 3),
>> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].cr0ack, SMMUv3State),
>> VMSTATE_UINT32(statusr, SMMUv3State),
>> - VMSTATE_UINT32(irq_ctrl, SMMUv3State),
>> - VMSTATE_UINT32(gerror, SMMUv3State),
>> - VMSTATE_UINT32(gerrorn, SMMUv3State),
>> - VMSTATE_UINT64(gerror_irq_cfg0, SMMUv3State),
>> - VMSTATE_UINT32(gerror_irq_cfg1, SMMUv3State),
>> - VMSTATE_UINT32(gerror_irq_cfg2, SMMUv3State),
>> - VMSTATE_UINT64(strtab_base, SMMUv3State),
>> - VMSTATE_UINT32(strtab_base_cfg, SMMUv3State),
>> - VMSTATE_UINT64(eventq_irq_cfg0, SMMUv3State),
>> - VMSTATE_UINT32(eventq_irq_cfg1, SMMUv3State),
>> - VMSTATE_UINT32(eventq_irq_cfg2, SMMUv3State),
>> -
>> - VMSTATE_STRUCT(cmdq, SMMUv3State, 0, vmstate_smmuv3_queue, SMMUQueue),
>> - VMSTATE_STRUCT(eventq, SMMUv3State, 0, vmstate_smmuv3_queue, SMMUQueue),
>> -
>> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].irq_ctrl, SMMUv3State),
>> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gerror, SMMUv3State),
>> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gerrorn, SMMUv3State),
>> + VMSTATE_UINT64(bank[SMMU_SEC_IDX_NS].gerror_irq_cfg0, SMMUv3State),
>> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gerror_irq_cfg1, SMMUv3State),
>> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].gerror_irq_cfg2, SMMUv3State),
>> + VMSTATE_UINT64(bank[SMMU_SEC_IDX_NS].strtab_base, SMMUv3State),
>> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].strtab_base_cfg, SMMUv3State),
>> + VMSTATE_UINT64(bank[SMMU_SEC_IDX_NS].eventq_irq_cfg0, SMMUv3State),
>> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].eventq_irq_cfg1, SMMUv3State),
>> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_NS].eventq_irq_cfg2, SMMUv3State),
>> +
>> + VMSTATE_STRUCT(bank[SMMU_SEC_IDX_NS].cmdq, SMMUv3State, 0,
>> + vmstate_smmuv3_queue, SMMUQueue),
>> + VMSTATE_STRUCT(bank[SMMU_SEC_IDX_NS].eventq, SMMUv3State, 0,
>> + vmstate_smmuv3_queue, SMMUQueue),
>> VMSTATE_END_OF_LIST(),
>> },
>> .subsections = (const VMStateDescription * const []) {
>> diff --git a/hw/arm/trace-events b/hw/arm/trace-events
>> index f3386bd7ae..80cb4d6b04 100644
>> --- a/hw/arm/trace-events
>> +++ b/hw/arm/trace-events
>> @@ -35,13 +35,13 @@ smmuv3_trigger_irq(int irq) "irq=%d"
>> smmuv3_write_gerror(uint32_t toggled, uint32_t gerror) "toggled=0x%x, new GERROR=0x%x"
>> smmuv3_write_gerrorn(uint32_t acked, uint32_t gerrorn) "acked=0x%x, new GERRORN=0x%x"
>> smmuv3_unhandled_cmd(uint32_t type) "Unhandled command type=%d"
>> -smmuv3_cmdq_consume(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod=%d cons=%d prod.wrap=%d cons.wrap=%d"
>> +smmuv3_cmdq_consume(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap, int sec_idx) "prod=%d cons=%d prod.wrap=%d cons.wrap=%d sec_idx=%d"
> I would put the sec_idx at the beginning of the trace line instead of
> at the tail. Here and below.
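e.g. (sketch of the reordered trace line, not taken from the patch):

smmuv3_cmdq_consume(int sec_idx, uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "sec_idx=%d prod=%d cons=%d prod.wrap=%d cons.wrap=%d"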
>> smmuv3_cmdq_opcode(const char *opcode) "<--- %s"
>> smmuv3_cmdq_consume_out(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod:%d, cons:%d, prod_wrap:%d, cons_wrap:%d "
>> smmuv3_cmdq_consume_error(const char *cmd_name, uint8_t cmd_error) "Error on %s command execution: %d"
>> smmuv3_write_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
>> -smmuv3_record_event(const char *type, uint32_t sid) "%s sid=0x%x"
>> -smmuv3_find_ste(uint16_t sid, uint32_t features, uint16_t sid_split) "sid=0x%x features:0x%x, sid_split:0x%x"
>> +smmuv3_record_event(const char *type, uint32_t sid, int sec_idx) "%s sid=0x%x sec_idx=%d"
>> +smmuv3_find_ste(uint16_t sid, uint32_t features, uint16_t sid_split, int sec_idx) "sid=0x%x features:0x%x, sid_split:0x%x sec_idx=%d"
>> smmuv3_find_ste_2lvl(uint64_t strtab_base, uint64_t l1ptr, int l1_ste_offset, uint64_t l2ptr, int l2_ste_offset, int max_l2_ste) "strtab_base:0x%"PRIx64" l1ptr:0x%"PRIx64" l1_off:0x%x, l2ptr:0x%"PRIx64" l2_off:0x%x max_l2_ste:%d"
>> smmuv3_get_ste(uint64_t addr) "STE addr: 0x%"PRIx64
>> smmuv3_translate_disable(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=0x%x bypass (smmu disabled) iova:0x%"PRIx64" is_write=%d"
>> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
>> index 80d0fecfde..3df82b83eb 100644
>> --- a/include/hw/arm/smmu-common.h
>> +++ b/include/hw/arm/smmu-common.h
>> @@ -40,6 +40,19 @@
>> #define CACHED_ENTRY_TO_ADDR(ent, addr) ((ent)->entry.translated_addr + \
>> ((addr) & (ent)->entry.addr_mask))
>>
>> +/*
>> + * SMMU Security state index
>> + *
>> + * The values of this enumeration are identical to the SEC_SID signal
>> + * encoding defined in the ARM SMMUv3 Architecture Specification. It is used
>> + * to select the appropriate programming interface for a given transaction.
> Would have been simpler to me to stick to the spec terminology, ie.
> SEC_SID instead of renaming this into SEC_IDX.
>> + */
>> +typedef enum SMMUSecurityIndex {
>> + SMMU_SEC_IDX_NS = 0,
>> + SMMU_SEC_IDX_S = 1,
>> + SMMU_SEC_IDX_NUM,
>> +} SMMUSecurityIndex;
>> +
>> /*
>> * Page table walk error types
>> */
>> @@ -116,6 +129,7 @@ typedef struct SMMUTransCfg {
>> SMMUTransTableInfo tt[2];
>> /* Used by stage-2 only. */
>> struct SMMUS2Cfg s2cfg;
>> + SMMUSecurityIndex sec_idx; /* cached security index */
>> } SMMUTransCfg;
>>
>> typedef struct SMMUDevice {
>> diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
>> index d183a62766..572f15251e 100644
>> --- a/include/hw/arm/smmuv3.h
>> +++ b/include/hw/arm/smmuv3.h
>> @@ -32,19 +32,11 @@ typedef struct SMMUQueue {
>> uint8_t log2size;
>> } SMMUQueue;
>>
>> -struct SMMUv3State {
>> - SMMUState smmu_state;
>> -
>> - uint32_t features;
>> - uint8_t sid_size;
>> - uint8_t sid_split;
>> -
>> +/* Structure for register bank */
>> +typedef struct SMMUv3RegBank {
>> uint32_t idr[6];
>> - uint32_t iidr;
>> - uint32_t aidr;
>> uint32_t cr[3];
>> uint32_t cr0ack;
>> - uint32_t statusr;
>> uint32_t gbpa;
>> uint32_t irq_ctrl;
>> uint32_t gerror;
>> @@ -57,12 +49,28 @@ struct SMMUv3State {
>> uint64_t eventq_irq_cfg0;
>> uint32_t eventq_irq_cfg1;
>> uint32_t eventq_irq_cfg2;
>> + uint32_t features;
>> + uint8_t sid_split;
>>
>> SMMUQueue eventq, cmdq;
>> +} SMMUv3RegBank;
>> +
>> +struct SMMUv3State {
>> + SMMUState smmu_state;
>> +
>> + /* Shared (non-banked) registers and state */
>> + uint8_t sid_size;
>> + uint32_t iidr;
>> + uint32_t aidr;
>> + uint32_t statusr;
>> +
>> + /* Banked registers for all access */
> /* Banked registers depending on SEC_SID */?
>> + SMMUv3RegBank bank[SMMU_SEC_IDX_NUM];
>>
>> qemu_irq irq[4];
>> QemuMutex mutex;
>> char *stage;
>> + bool secure_impl;
>> };
>>
>> typedef enum {
>> @@ -84,7 +92,9 @@ struct SMMUv3Class {
>> #define TYPE_ARM_SMMUV3 "arm-smmuv3"
>> OBJECT_DECLARE_TYPE(SMMUv3State, SMMUv3Class, ARM_SMMUV3)
>>
>> -#define STAGE1_SUPPORTED(s) FIELD_EX32(s->idr[0], IDR0, S1P)
>> -#define STAGE2_SUPPORTED(s) FIELD_EX32(s->idr[0], IDR0, S2P)
>> +#define STAGE1_SUPPORTED(s) \
>> + FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S1P)
>> +#define STAGE2_SUPPORTED(s) \
>> + FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[0], IDR0, S2P)
>>
>> #endif
> The patch is difficult to review. I would prefer you split this into a
> first patch that introduces the bank struct and converts the existing
> code to use
> SMMUv3RegBank *bank = &s->bank[SMMU_SEC_IDX_NS];
> and then converts all the register accesses to bank->regn.
>
> This is a purely mechanical change that is not supposed to make any
> functional change.
>
> Then in a different patch you can start adapting the prototypes of the
> functions that take a sec_sid parameter, with commit titles like:
> "prepare function A to work with a different sec_sid than NS", ...
> To me this way of structuring the patches will be less error prone.
>
> Then you can introduce sec_sid in the event and in the cfg. This
> incremental approach will help readers understand the rationale of the
> different changes, although I acknowledge it complicates your task; it
> is for the sake of the review process.
>
> Thanks
>
> Eric
Thanks a lot for the thorough and very helpful review.
You've raised a crucial point about the patch being too difficult to
review, and I completely agree. I will restructure the work into a
series of smaller patches as you suggested.
SMMUSecurityIndex is indeed designed to prepare for Realm state and will
be used in banked register structure as suggested in v1 series.
The new v3 series will start with the purely mechanical refactoring to
introduce the banked register structure, followed by separate patches to
adapt the function prototypes and finally add the secure state logic.
I'll also make sure to improve the commit messages and comments to be
much clearer and will incorporate all of your other code-level
suggestions in the v3 series.
Thanks again for your guidance.
Best,
Tao
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v2 06/14] hw/arm/smmuv3: Add separate address space for secure SMMU accesses
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (4 preceding siblings ...)
2025-09-25 16:26 ` [PATCH v2 05/14] hw/arm/smmuv3: Introduce banked registers for SMMUv3 state Tao Tang
@ 2025-09-25 16:26 ` Tao Tang
2025-09-29 7:44 ` Eric Auger
2025-09-25 16:26 ` [PATCH v2 07/14] hw/arm/smmuv3: Make Configuration Cache security-state aware Tao Tang
` (9 subsequent siblings)
15 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-25 16:26 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa, Tao Tang
According to the Arm architecture, SMMU-originated memory accesses,
such as fetching commands or writing events for a secure stream, must
target the Secure Physical Address (PA) space. The existing model sends
all DMA to the global address_space_memory.
This patch introduces the infrastructure to differentiate between secure
and non-secure memory accesses. A weak global symbol,
arm_secure_address_space, is added, which can be provided by the
machine model to represent the Secure PA space.
A new helper, smmu_get_address_space(), selects the target address
space based on the is_secure context. All internal DMA calls
(dma_memory_read/write) are updated to use this helper. Additionally,
the attrs.secure bit is set on transactions targeting the secure
address space.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmu-common.c | 8 ++++++++
hw/arm/virt.c | 5 +++++
include/hw/arm/smmu-common.h | 20 ++++++++++++++++++++
3 files changed, 33 insertions(+)
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index 62a7612184..24db448683 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -30,6 +30,14 @@
#include "hw/arm/smmu-common.h"
#include "smmu-internal.h"
+/* Global state for secure address space availability */
+bool arm_secure_as_available;
+
+void smmu_enable_secure_address_space(void)
+{
+ arm_secure_as_available = true;
+}
+
/* IOTLB Management */
static guint smmu_iotlb_key_hash(gconstpointer v)
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 02209fadcf..805d9aadb7 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -92,6 +92,8 @@
#include "hw/cxl/cxl_host.h"
#include "qemu/guest-random.h"
+AddressSpace arm_secure_address_space;
+
static GlobalProperty arm_virt_compat[] = {
{ TYPE_VIRTIO_IOMMU_PCI, "aw-bits", "48" },
};
@@ -2243,6 +2245,9 @@ static void machvirt_init(MachineState *machine)
memory_region_init(secure_sysmem, OBJECT(machine), "secure-memory",
UINT64_MAX);
memory_region_add_subregion_overlap(secure_sysmem, 0, sysmem, -1);
+ address_space_init(&arm_secure_address_space, secure_sysmem,
+ "secure-memory-space");
+ smmu_enable_secure_address_space();
}
firmware_loaded = virt_firmware_init(vms, sysmem,
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index 3df82b83eb..cd61c5e126 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -53,6 +53,26 @@ typedef enum SMMUSecurityIndex {
SMMU_SEC_IDX_NUM,
} SMMUSecurityIndex;
+extern AddressSpace __attribute__((weak)) arm_secure_address_space;
+extern bool arm_secure_as_available;
+void smmu_enable_secure_address_space(void);
+
+static inline AddressSpace *smmu_get_address_space(SMMUSecurityIndex sec_sid)
+{
+ switch (sec_sid) {
+ case SMMU_SEC_IDX_S:
+ {
+ if (arm_secure_as_available) {
+ return &arm_secure_address_space;
+ }
+ }
+ QEMU_FALLTHROUGH;
+ case SMMU_SEC_IDX_NS:
+ default:
+ return &address_space_memory;
+ }
+}
+
/*
* Page table walk error types
*/
--
2.34.1
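For illustration, once later patches in the series convert the internal
DMA helpers, a caller would look roughly like this (a sketch, not code
from this patch; the attrs.secure handling is only added in patch 07):

static MemTxResult queue_read(SMMUQueue *q, Cmd *cmd,
                              SMMUSecurityIndex sec_idx)
{
    dma_addr_t addr = Q_CONS_ENTRY(q);
    /* Route the fetch through the PA space matching the stream's bank. */
    AddressSpace *as = smmu_get_address_space(sec_idx);

    return dma_memory_read(as, addr, cmd, sizeof(Cmd),
                           MEMTXATTRS_UNSPECIFIED);
}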
^ permalink raw reply related [flat|nested] 48+ messages in thread
* Re: [PATCH v2 06/14] hw/arm/smmuv3: Add separate address space for secure SMMU accesses
2025-09-25 16:26 ` [PATCH v2 06/14] hw/arm/smmuv3: Add separate address space for secure SMMU accesses Tao Tang
@ 2025-09-29 7:44 ` Eric Auger
2025-09-29 8:33 ` Tao Tang
0 siblings, 1 reply; 48+ messages in thread
From: Eric Auger @ 2025-09-29 7:44 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
Hi Tao,
On 9/25/25 6:26 PM, Tao Tang wrote:
> According to the Arm architecture, SMMU-originated memory accesses,
> such as fetching commands or writing events for a secure stream, must
> target the Secure Physical Address (PA) space. The existing model sends
> all DMA to the global address_space_memory.
>
> This patch introduces the infrastructure to differentiate between secure
> and non-secure memory accesses. A weak global symbol,
> arm_secure_address_space, is added, which can be provided by the
> machine model to represent the Secure PA space.
>
> A new helper, smmu_get_address_space(), selects the target address
> space based on the is_secure context. All internal DMA calls
> (dma_memory_read/write) are updated to use this helper. Additionally,
> the attrs.secure bit is set on transactions targeting the secure
> address space.
The last sentence does not seem to be implemented in that patch?
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmu-common.c | 8 ++++++++
> hw/arm/virt.c | 5 +++++
> include/hw/arm/smmu-common.h | 20 ++++++++++++++++++++
> 3 files changed, 33 insertions(+)
>
> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
> index 62a7612184..24db448683 100644
> --- a/hw/arm/smmu-common.c
> +++ b/hw/arm/smmu-common.c
> @@ -30,6 +30,14 @@
> #include "hw/arm/smmu-common.h"
> #include "smmu-internal.h"
>
> +/* Global state for secure address space availability */
> +bool arm_secure_as_available;
> +
> +void smmu_enable_secure_address_space(void)
> +{
> + arm_secure_as_available = true;
> +}
> +
> /* IOTLB Management */
>
> static guint smmu_iotlb_key_hash(gconstpointer v)
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index 02209fadcf..805d9aadb7 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -92,6 +92,8 @@
> #include "hw/cxl/cxl_host.h"
> #include "qemu/guest-random.h"
>
> +AddressSpace arm_secure_address_space;
> +
> static GlobalProperty arm_virt_compat[] = {
> { TYPE_VIRTIO_IOMMU_PCI, "aw-bits", "48" },
> };
> @@ -2243,6 +2245,9 @@ static void machvirt_init(MachineState *machine)
> memory_region_init(secure_sysmem, OBJECT(machine), "secure-memory",
> UINT64_MAX);
> memory_region_add_subregion_overlap(secure_sysmem, 0, sysmem, -1);
> + address_space_init(&arm_secure_address_space, secure_sysmem,
> + "secure-memory-space");
> + smmu_enable_secure_address_space();
> }
>
> firmware_loaded = virt_firmware_init(vms, sysmem,
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index 3df82b83eb..cd61c5e126 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -53,6 +53,26 @@ typedef enum SMMUSecurityIndex {
> SMMU_SEC_IDX_NUM,
> } SMMUSecurityIndex;
>
> +extern AddressSpace __attribute__((weak)) arm_secure_address_space;
> +extern bool arm_secure_as_available;
> +void smmu_enable_secure_address_space(void);
> +
> +static inline AddressSpace *smmu_get_address_space(SMMUSecurityIndex sec_sid)
> +{
> + switch (sec_sid) {
> + case SMMU_SEC_IDX_S:
> + {
> + if (arm_secure_as_available) {
> + return &arm_secure_address_space;
> + }
don't you want to return NULL or at least emit an error in case
!arm_secure_as_available? When adding Realm support this will avoid
returning the NS AS.
> + }
> + QEMU_FALLTHROUGH;
> + case SMMU_SEC_IDX_NS:
> + default:
Maybe return an error here for any value other than NS
> + return &address_space_memory;
> + }
> +}
> +
> /*
> * Page table walk error types
> */
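i.e. something like this (sketch only, untested; the call sites would
then need to handle a NULL return):

static inline AddressSpace *smmu_get_address_space(SMMUSecurityIndex sec_sid)
{
    switch (sec_sid) {
    case SMMU_SEC_IDX_S:
        if (arm_secure_as_available) {
            return &arm_secure_address_space;
        }
        /* Secure AS requested but not provided by the machine. */
        return NULL;
    case SMMU_SEC_IDX_NS:
        return &address_space_memory;
    default:
        /* e.g. a future Realm index with no AS wired up yet. */
        return NULL;
    }
}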
Thanks
Eric
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v2 06/14] hw/arm/smmuv3: Add separate address space for secure SMMU accesses
2025-09-29 7:44 ` Eric Auger
@ 2025-09-29 8:33 ` Tao Tang
2025-09-29 8:54 ` Eric Auger
0 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-29 8:33 UTC (permalink / raw)
To: Peter Maydell, Eric Auger
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/9/29 15:44, Eric Auger wrote:
> Hi Tao,
>
> On 9/25/25 6:26 PM, Tao Tang wrote:
>> According to the Arm architecture, SMMU-originated memory accesses,
>> such as fetching commands or writing events for a secure stream, must
>> target the Secure Physical Address (PA) space. The existing model sends
>> all DMA to the global address_space_memory.
>>
>> This patch introduces the infrastructure to differentiate between secure
>> and non-secure memory accesses. A weak global symbol,
>> arm_secure_address_space, is added, which can be provided by the
>> machine model to represent the Secure PA space.
>>
>> A new helper, smmu_get_address_space(), selects the target address
>> space based on the is_secure context. All internal DMA calls
>> (dma_memory_read/write) are updated to use this helper. Additionally,
>> the attrs.secure bit is set on transactions targeting the secure
>> address space.
> The last sentence does not seem to be implemented in that patch?
You are right to point this out, and my apologies for the confusion. As
I was preparing the series, the patches were intertwined, and I didn't
manage their boundaries clearly. This led me to mistakenly describe a
feature in this commit message that is only implemented in a subsequent
patch #07.
I'm very sorry for the confusion and the unnecessary time this has cost
you. In all future community interactions, I will pay special attention
to ensuring each patch and its description are atomic and self-contained
to reduce the review burden for everyone. Thank you for your guidance on
this.
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmu-common.c | 8 ++++++++
>> hw/arm/virt.c | 5 +++++
>> include/hw/arm/smmu-common.h | 20 ++++++++++++++++++++
>> 3 files changed, 33 insertions(+)
>>
>> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
>> index 62a7612184..24db448683 100644
>> --- a/hw/arm/smmu-common.c
>> +++ b/hw/arm/smmu-common.c
>> @@ -30,6 +30,14 @@
>> #include "hw/arm/smmu-common.h"
>> #include "smmu-internal.h"
>>
>> +/* Global state for secure address space availability */
>> +bool arm_secure_as_available;
>> +
>> +void smmu_enable_secure_address_space(void)
>> +{
>> + arm_secure_as_available = true;
>> +}
>> +
>> /* IOTLB Management */
>>
>> static guint smmu_iotlb_key_hash(gconstpointer v)
>> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
>> index 02209fadcf..805d9aadb7 100644
>> --- a/hw/arm/virt.c
>> +++ b/hw/arm/virt.c
>> @@ -92,6 +92,8 @@
>> #include "hw/cxl/cxl_host.h"
>> #include "qemu/guest-random.h"
>>
>> +AddressSpace arm_secure_address_space;
>> +
>> static GlobalProperty arm_virt_compat[] = {
>> { TYPE_VIRTIO_IOMMU_PCI, "aw-bits", "48" },
>> };
>> @@ -2243,6 +2245,9 @@ static void machvirt_init(MachineState *machine)
>> memory_region_init(secure_sysmem, OBJECT(machine), "secure-memory",
>> UINT64_MAX);
>> memory_region_add_subregion_overlap(secure_sysmem, 0, sysmem, -1);
>> + address_space_init(&arm_secure_address_space, secure_sysmem,
>> + "secure-memory-space");
>> + smmu_enable_secure_address_space();
>> }
>>
>> firmware_loaded = virt_firmware_init(vms, sysmem,
>> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
>> index 3df82b83eb..cd61c5e126 100644
>> --- a/include/hw/arm/smmu-common.h
>> +++ b/include/hw/arm/smmu-common.h
>> @@ -53,6 +53,26 @@ typedef enum SMMUSecurityIndex {
>> SMMU_SEC_IDX_NUM,
>> } SMMUSecurityIndex;
>>
>> +extern AddressSpace __attribute__((weak)) arm_secure_address_space;
>> +extern bool arm_secure_as_available;
>> +void smmu_enable_secure_address_space(void);
>> +
>> +static inline AddressSpace *smmu_get_address_space(SMMUSecurityIndex sec_sid)
>> +{
>> + switch (sec_sid) {
>> + case SMMU_SEC_IDX_S:
>> + {
>> + if (arm_secure_as_available) {
>> + return &arm_secure_address_space;
>> + }
> Don't you want to return NULL, or at least emit an error, in case of
> !arm_secure_as_available? When adding Realm support this will avoid
> returning the NS AS.
That's a great point. Silently falling back to the non-secure address
space is indeed dangerous. I will update the logic to return NULL and
emit an error if the secure address space is requested but not available.
>> + }
>> + QEMU_FALLTHROUGH;
>> + case SMMU_SEC_IDX_NS:
>> + default:
> Maybe return an error here in case of a value other than NS
Also, I will change the default case to handle unexpected values by
returning NULL, which will make the code safer for future extensions
like Realm. I will then add a check for the NULL return value at the
call sites of smmu_get_address_space to handle the error appropriately
in the v3 series.
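Concretely, I am considering something along these lines for v3 (an
untested sketch; the exact logging helper and wording may change):
static inline AddressSpace *smmu_get_address_space(SMMUSecurityIndex sec_sid)
{
    switch (sec_sid) {
    case SMMU_SEC_IDX_S:
        if (arm_secure_as_available) {
            return &arm_secure_address_space;
        }
        qemu_log_mask(LOG_GUEST_ERROR,
                      "SMMU: secure address space requested but "
                      "unavailable\n");
        return NULL;
    case SMMU_SEC_IDX_NS:
        return &address_space_memory;
    default:
        /* Unknown security index (e.g. future Realm): do not guess */
        return NULL;
    }
}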
Thanks again for your helpful feedback.
Best,
Tao
>> + return &address_space_memory;
>> + }
>> +}
>> +
>> /*
>> * Page table walk error types
>> */
> Thanks
>
> Eric
^ permalink raw reply [flat|nested] 48+ messages in thread* Re: [PATCH v2 06/14] hw/arm/smmuv3: Add separate address space for secure SMMU accesses
2025-09-29 8:33 ` Tao Tang
@ 2025-09-29 8:54 ` Eric Auger
0 siblings, 0 replies; 48+ messages in thread
From: Eric Auger @ 2025-09-29 8:54 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 9/29/25 10:33 AM, Tao Tang wrote:
> Hi Eric,
>
> On 2025/9/29 15:44, Eric Auger wrote:
>> Hi Tao,
>>
>> On 9/25/25 6:26 PM, Tao Tang wrote:
>>> According to the Arm architecture, SMMU-originated memory accesses,
>>> such as fetching commands or writing events for a secure stream, must
>>> target the Secure Physical Address (PA) space. The existing model sends
>>> all DMA to the global address_space_memory.
>>>
>>> This patch introduces the infrastructure to differentiate between
>>> secure
>>> and non-secure memory accesses. A weak global symbol,
>>> arm_secure_address_space, is added, which can be provided by the
>>> machine model to represent the Secure PA space.
>>>
>>> A new helper, smmu_get_address_space(), selects the target address
>>> space based on the is_secure context. All internal DMA calls
>>> (dma_memory_read/write) are updated to use this helper. Additionally,
>>> the attrs.secure bit is set on transactions targeting the secure
>>> address space.
>> The last sentence does not seem to be implemented in that patch?
>
>
> You are right to point this out, and my apologies for the confusion.
> As I was preparing the series, the patches were intertwined, and I
> didn't manage their boundaries clearly. This led me to mistakenly
> describe a feature in this commit message that is only implemented in
> a subsequent patch #07.
>
> I'm very sorry for the confusion and the unnecessary time this has
> cost you. In all future community interactions, I will pay special
> attention to ensuring each patch and its description are atomic and
> self-contained to reduce the review burden for everyone. Thank you for
> your guidance on this.
No problem. Your commit messages are pretty well written and we all make
that kind of oversight - at least I do ;-) -
Eric
>
>>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>>> ---
>>> hw/arm/smmu-common.c | 8 ++++++++
>>> hw/arm/virt.c | 5 +++++
>>> include/hw/arm/smmu-common.h | 20 ++++++++++++++++++++
>>> 3 files changed, 33 insertions(+)
>>>
>>> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
>>> index 62a7612184..24db448683 100644
>>> --- a/hw/arm/smmu-common.c
>>> +++ b/hw/arm/smmu-common.c
>>> @@ -30,6 +30,14 @@
>>> #include "hw/arm/smmu-common.h"
>>> #include "smmu-internal.h"
>>> +/* Global state for secure address space availability */
>>> +bool arm_secure_as_available;
>>> +
>>> +void smmu_enable_secure_address_space(void)
>>> +{
>>> + arm_secure_as_available = true;
>>> +}
>>> +
>>> /* IOTLB Management */
>>> static guint smmu_iotlb_key_hash(gconstpointer v)
>>> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
>>> index 02209fadcf..805d9aadb7 100644
>>> --- a/hw/arm/virt.c
>>> +++ b/hw/arm/virt.c
>>> @@ -92,6 +92,8 @@
>>> #include "hw/cxl/cxl_host.h"
>>> #include "qemu/guest-random.h"
>>> +AddressSpace arm_secure_address_space;
>>> +
>>> static GlobalProperty arm_virt_compat[] = {
>>> { TYPE_VIRTIO_IOMMU_PCI, "aw-bits", "48" },
>>> };
>>> @@ -2243,6 +2245,9 @@ static void machvirt_init(MachineState *machine)
>>> memory_region_init(secure_sysmem, OBJECT(machine),
>>> "secure-memory",
>>> UINT64_MAX);
>>> memory_region_add_subregion_overlap(secure_sysmem, 0,
>>> sysmem, -1);
>>> + address_space_init(&arm_secure_address_space, secure_sysmem,
>>> + "secure-memory-space");
>>> + smmu_enable_secure_address_space();
>>> }
>>> firmware_loaded = virt_firmware_init(vms, sysmem,
>>> diff --git a/include/hw/arm/smmu-common.h
>>> b/include/hw/arm/smmu-common.h
>>> index 3df82b83eb..cd61c5e126 100644
>>> --- a/include/hw/arm/smmu-common.h
>>> +++ b/include/hw/arm/smmu-common.h
>>> @@ -53,6 +53,26 @@ typedef enum SMMUSecurityIndex {
>>> SMMU_SEC_IDX_NUM,
>>> } SMMUSecurityIndex;
>>> +extern AddressSpace __attribute__((weak)) arm_secure_address_space;
>>> +extern bool arm_secure_as_available;
>>> +void smmu_enable_secure_address_space(void);
>>> +
>>> +static inline AddressSpace
>>> *smmu_get_address_space(SMMUSecurityIndex sec_sid)
>>> +{
>>> + switch (sec_sid) {
>>> + case SMMU_SEC_IDX_S:
>>> + {
>>> + if (arm_secure_as_available) {
>>> + return &arm_secure_address_space;
>>> + }
>> Don't you want to return NULL, or at least emit an error, in case of
>> !arm_secure_as_available? When adding Realm support this will avoid
>> returning the NS AS.
>
>
> That's a great point. Silently falling back to the non-secure address
> space is indeed dangerous. I will update the logic to return NULL and
> emit an error if the secure address space is requested but not available.
>
>>> + }
>>> + QEMU_FALLTHROUGH;
>>> + case SMMU_SEC_IDX_NS:
>>> + default:
>> Maybe return an error here in case of a value other than NS
>
> Also, I will change the default case to handle unexpected values by
> returning NULL, which will make the code safer for future extensions
> like Realm. I will then add a check for the NULL return value at the
> call sites of smmu_get_address_space to handle the error appropriately
> in the v3 series.
>
>
> Thanks again for your helpful feedback.
>
>
> Best,
>
> Tao
>
>
>>> + return &address_space_memory;
>>> + }
>>> +}
>>> +
>>> /*
>>> * Page table walk error types
>>> */
>> Thanks
>>
>> Eric
>
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v2 07/14] hw/arm/smmuv3: Make Configuration Cache security-state aware
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (5 preceding siblings ...)
2025-09-25 16:26 ` [PATCH v2 06/14] hw/arm/smmuv3: Add separate address space for secure SMMU accesses Tao Tang
@ 2025-09-25 16:26 ` Tao Tang
2025-09-29 9:55 ` Eric Auger
2025-09-25 16:26 ` [PATCH v2 08/14] hw/arm/smmuv3: Add security-state handling for page table walks Tao Tang
` (8 subsequent siblings)
15 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-25 16:26 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa, Tao Tang
This patch adapts the Configuration Cache (STE/CD caches) to support
multiple security states.
The cache key, previously just the SMMUDevice, is now a composite key
of the SMMUDevice and the security index (sec_idx). This allows for
separate cache entries for the same device when operating in different
security states, preventing aliasing between secure and non-secure
configurations.
The configuration cache invalidation functions have been refactored to
correctly handle multiple security states, ensuring that invalidation
commands operate on the appropriate cache entries.
Finally, checks have been added to the stream table's MMIO read/write
logic to verify if the stream table is writable. This ensures that the
Configuration Cache management is compliant with the architecture.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmu-common.c | 63 ++++++++++++++++++++++-
hw/arm/smmuv3.c | 98 ++++++++++++++++++++++++++++--------
include/hw/arm/smmu-common.h | 14 ++++++
3 files changed, 153 insertions(+), 22 deletions(-)
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index 24db448683..bc13b00f1d 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -30,6 +30,43 @@
#include "hw/arm/smmu-common.h"
#include "smmu-internal.h"
+/* Configuration Cache Management */
+static guint smmu_config_key_hash(gconstpointer key)
+{
+ const SMMUConfigKey *k = key;
+ return g_direct_hash(k->sdev) ^ (guint)k->sec_idx;
+}
+
+static gboolean smmu_config_key_equal(gconstpointer a, gconstpointer b)
+{
+ const SMMUConfigKey *ka = a;
+ const SMMUConfigKey *kb = b;
+ return ka->sdev == kb->sdev && ka->sec_idx == kb->sec_idx;
+}
+
+SMMUConfigKey smmu_get_config_key(SMMUDevice *sdev, SMMUSecurityIndex sec_idx)
+{
+ SMMUConfigKey key = {.sdev = sdev, .sec_idx = sec_idx};
+ return key;
+}
+
+ARMSecuritySpace smmu_get_security_space(SMMUSecurityIndex sec_idx)
+{
+ switch (sec_idx) {
+ case SMMU_SEC_IDX_S:
+ return ARMSS_Secure;
+ case SMMU_SEC_IDX_NS:
+ default:
+ return ARMSS_NonSecure;
+ }
+}
+
+MemTxAttrs smmu_get_txattrs(SMMUSecurityIndex sec_idx)
+{
+ return (MemTxAttrs) { .secure = sec_idx > SMMU_SEC_IDX_NS ? 1 : 0,
+ .space = smmu_get_security_space(sec_idx) };
+}
+
/* Global state for secure address space availability */
bool arm_secure_as_available;
@@ -237,7 +274,8 @@ static gboolean smmu_hash_remove_by_vmid_ipa(gpointer key, gpointer value,
static gboolean
smmu_hash_remove_by_sid_range(gpointer key, gpointer value, gpointer user_data)
{
- SMMUDevice *sdev = (SMMUDevice *)key;
+ SMMUConfigKey *config_key = (SMMUConfigKey *)key;
+ SMMUDevice *sdev = config_key->sdev;
uint32_t sid = smmu_get_sid(sdev);
SMMUSIDRange *sid_range = (SMMUSIDRange *)user_data;
@@ -255,6 +293,25 @@ void smmu_configs_inv_sid_range(SMMUState *s, SMMUSIDRange sid_range)
&sid_range);
}
+/* Remove all cached configs for a specific device, regardless of security. */
+static gboolean smmu_hash_remove_by_sdev(gpointer key, gpointer value,
+ gpointer user_data)
+{
+ SMMUConfigKey *config_key = (SMMUConfigKey *)key;
+ SMMUDevice *target = (SMMUDevice *)user_data;
+
+ if (config_key->sdev != target) {
+ return false;
+ }
+ trace_smmu_config_cache_inv(smmu_get_sid(target));
+ return true;
+}
+
+void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev)
+{
+ g_hash_table_foreach_remove(s->configs, smmu_hash_remove_by_sdev, sdev);
+}
+
void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
uint8_t tg, uint64_t num_pages, uint8_t ttl)
{
@@ -942,7 +999,9 @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
error_propagate(errp, local_err);
return;
}
- s->configs = g_hash_table_new_full(NULL, NULL, NULL, g_free);
+ s->configs = g_hash_table_new_full(smmu_config_key_hash,
+ smmu_config_key_equal,
+ g_free, g_free);
s->iotlb = g_hash_table_new_full(smmu_iotlb_key_hash, smmu_iotlb_key_equal,
g_free, g_free);
s->smmu_pcibus_by_busptr = g_hash_table_new(NULL, NULL);
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 2efa39b78c..eba709ae2b 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -352,14 +352,13 @@ static void smmuv3_init_regs(SMMUv3State *s)
}
static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
- SMMUEventInfo *event)
+ SMMUEventInfo *event, SMMUTransCfg *cfg)
{
int ret, i;
trace_smmuv3_get_ste(addr);
/* TODO: guarantee 64-bit single-copy atomicity */
- ret = dma_memory_read(&address_space_memory, addr, buf, sizeof(*buf),
- MEMTXATTRS_UNSPECIFIED);
+ ret = dma_memory_read(cfg->as, addr, buf, sizeof(*buf), cfg->txattrs);
if (ret != MEMTX_OK) {
qemu_log_mask(LOG_GUEST_ERROR,
"Cannot fetch pte at address=0x%"PRIx64"\n", addr);
@@ -404,8 +403,7 @@ static int smmu_get_cd(SMMUv3State *s, STE *ste, SMMUTransCfg *cfg,
}
/* TODO: guarantee 64-bit single-copy atomicity */
- ret = dma_memory_read(&address_space_memory, addr, buf, sizeof(*buf),
- MEMTXATTRS_UNSPECIFIED);
+ ret = dma_memory_read(cfg->as, addr, buf, sizeof(*buf), cfg->txattrs);
if (ret != MEMTX_OK) {
qemu_log_mask(LOG_GUEST_ERROR,
"Cannot fetch pte at address=0x%"PRIx64"\n", addr);
@@ -699,8 +697,8 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
l2_ste_offset = sid & ((1 << sid_split) - 1);
l1ptr = (dma_addr_t)(strtab_base + l1_ste_offset * sizeof(l1std));
/* TODO: guarantee 64-bit single-copy atomicity */
- ret = dma_memory_read(&address_space_memory, l1ptr, &l1std,
- sizeof(l1std), MEMTXATTRS_UNSPECIFIED);
+ ret = dma_memory_read(cfg->as, l1ptr,
+ &l1std, sizeof(l1std), cfg->txattrs);
if (ret != MEMTX_OK) {
qemu_log_mask(LOG_GUEST_ERROR,
"Could not read L1PTR at 0X%"PRIx64"\n", l1ptr);
@@ -742,7 +740,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
addr = strtab_base + sid * sizeof(*ste);
}
- if (smmu_get_ste(s, addr, ste, event)) {
+ if (smmu_get_ste(s, addr, ste, event, cfg)) {
return -EINVAL;
}
@@ -900,18 +898,21 @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
*
* @sdev: SMMUDevice handle
* @event: output event info
+ * @sec_idx: security index
*
* The configuration cache contains data resulting from both STE and CD
* decoding under the form of an SMMUTransCfg struct. The hash table is indexed
* by the SMMUDevice handle.
*/
-static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event)
+static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
+ SMMUSecurityIndex sec_idx)
{
SMMUv3State *s = sdev->smmu;
SMMUState *bc = &s->smmu_state;
SMMUTransCfg *cfg;
+ SMMUConfigKey lookup_key = smmu_get_config_key(sdev, sec_idx);
- cfg = g_hash_table_lookup(bc->configs, sdev);
+ cfg = g_hash_table_lookup(bc->configs, &lookup_key);
if (cfg) {
sdev->cfg_cache_hits++;
trace_smmuv3_config_cache_hit(smmu_get_sid(sdev),
@@ -925,9 +926,14 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event)
100 * sdev->cfg_cache_hits /
(sdev->cfg_cache_hits + sdev->cfg_cache_misses));
cfg = g_new0(SMMUTransCfg, 1);
+ cfg->sec_idx = sec_idx;
+ cfg->txattrs = smmu_get_txattrs(sec_idx);
+ cfg->as = smmu_get_address_space(sec_idx);
if (!smmuv3_decode_config(&sdev->iommu, cfg, event)) {
- g_hash_table_insert(bc->configs, sdev, cfg);
+ SMMUConfigKey *persistent_key = g_new(SMMUConfigKey, 1);
+ *persistent_key = smmu_get_config_key(sdev, sec_idx);
+ g_hash_table_insert(bc->configs, persistent_key, cfg);
} else {
g_free(cfg);
cfg = NULL;
@@ -941,8 +947,8 @@ static void smmuv3_flush_config(SMMUDevice *sdev)
SMMUv3State *s = sdev->smmu;
SMMUState *bc = &s->smmu_state;
- trace_smmu_config_cache_inv(smmu_get_sid(sdev));
- g_hash_table_remove(bc->configs, sdev);
+ /* Remove all security-indexed configs for this device */
+ smmu_configs_inv_sdev(bc, sdev);
}
/* Do translation with TLB lookup. */
@@ -1102,7 +1108,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
goto epilogue;
}
- cfg = smmuv3_get_config(sdev, &event);
+ cfg = smmuv3_get_config(sdev, &event, sec_idx);
if (!cfg) {
status = SMMU_TRANS_ERROR;
goto epilogue;
@@ -1182,7 +1188,7 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
{
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
SMMUEventInfo eventinfo = {.inval_ste_allowed = true};
- SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo);
+ SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo, SMMU_SEC_IDX_NS);
IOMMUTLBEvent event;
uint8_t granule;
@@ -1314,6 +1320,38 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
}
}
+static inline int smmuv3_get_cr0_smmuen(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
+{
+ return smmu_enabled(s, sec_idx);
+}
+
+static inline int smmuv3_get_cr0ack_smmuen(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
+{
+ return FIELD_EX32(s->bank[sec_idx].cr0ack, CR0, SMMUEN);
+}
+
+static inline bool smmuv3_is_smmu_disabled(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
+{
+ int cr0_smmuen = smmuv3_get_cr0_smmuen(s, sec_idx);
+ int cr0ack_smmuen = smmuv3_get_cr0ack_smmuen(s, sec_idx);
+ return (cr0_smmuen == 0 && cr0ack_smmuen == 0);
+}
+
+/* Check if STRTAB_BASE register is writable */
+static bool smmu_strtab_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
+{
+ /* Check TABLES_PRESET - use NS bank as it's the global setting */
+ if (FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, TABLES_PRESET)) {
+ return false;
+ }
+
+ /* Check SMMUEN conditions for the specific security domain */
+ return smmuv3_is_smmu_disabled(s, sec_idx);
+}
+
static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
{
SMMUState *bs = ARM_SMMU(s);
@@ -1553,6 +1591,11 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
uint32_t reg_offset = offset & 0xfff;
switch (reg_offset) {
case A_STRTAB_BASE:
+ if (!smmu_strtab_base_writable(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "STRTAB_BASE write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
/* Clear reserved bits according to spec */
s->bank[reg_sec_idx].strtab_base = data & SMMU_STRTAB_BASE_RESERVED;
return MEMTX_OK;
@@ -1637,14 +1680,29 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
}
return MEMTX_OK;
case A_STRTAB_BASE: /* 64b */
- s->bank[reg_sec_idx].strtab_base =
- deposit64(s->bank[reg_sec_idx].strtab_base, 0, 32, data);
- return MEMTX_OK;
case A_STRTAB_BASE + 4:
- s->bank[reg_sec_idx].strtab_base =
- deposit64(s->bank[reg_sec_idx].strtab_base, 32, 32, data);
+ if (!smmu_strtab_base_writable(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "STRTAB_BASE write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_STRTAB_BASE_RESERVED;
+ if (reg_offset == A_STRTAB_BASE) {
+ s->bank[reg_sec_idx].strtab_base = deposit64(
+ s->bank[reg_sec_idx].strtab_base, 0, 32, data);
+ } else {
+ s->bank[reg_sec_idx].strtab_base = deposit64(
+ s->bank[reg_sec_idx].strtab_base, 32, 32, data);
+ }
return MEMTX_OK;
case A_STRTAB_BASE_CFG:
+ if (!smmu_strtab_base_writable(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "STRTAB_BASE_CFG write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
s->bank[reg_sec_idx].strtab_base_cfg = data;
if (FIELD_EX32(data, STRTAB_BASE_CFG, FMT) == 1) {
s->bank[reg_sec_idx].sid_split =
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index cd61c5e126..ed21db7728 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -22,6 +22,7 @@
#include "hw/sysbus.h"
#include "hw/pci/pci.h"
#include "qom/object.h"
+#include "hw/arm/arm-security.h"
#define SMMU_PCI_BUS_MAX 256
#define SMMU_PCI_DEVFN_MAX 256
@@ -53,6 +54,9 @@ typedef enum SMMUSecurityIndex {
SMMU_SEC_IDX_NUM,
} SMMUSecurityIndex;
+MemTxAttrs smmu_get_txattrs(SMMUSecurityIndex sec_idx);
+ARMSecuritySpace smmu_get_security_space(SMMUSecurityIndex sec_idx);
+
extern AddressSpace __attribute__((weak)) arm_secure_address_space;
extern bool arm_secure_as_available;
void smmu_enable_secure_address_space(void);
@@ -150,6 +154,8 @@ typedef struct SMMUTransCfg {
/* Used by stage-2 only. */
struct SMMUS2Cfg s2cfg;
SMMUSecurityIndex sec_idx; /* cached security index */
+ MemTxAttrs txattrs; /* cached transaction attributes */
+ AddressSpace *as; /* cached address space */
} SMMUTransCfg;
typedef struct SMMUDevice {
@@ -176,6 +182,11 @@ typedef struct SMMUIOTLBKey {
uint8_t level;
} SMMUIOTLBKey;
+typedef struct SMMUConfigKey {
+ SMMUDevice *sdev;
+ SMMUSecurityIndex sec_idx;
+} SMMUConfigKey;
+
typedef struct SMMUSIDRange {
uint32_t start;
uint32_t end;
@@ -251,6 +262,7 @@ SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *entry);
SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
uint8_t tg, uint8_t level);
+SMMUConfigKey smmu_get_config_key(SMMUDevice *sdev, SMMUSecurityIndex sec_idx);
void smmu_iotlb_inv_all(SMMUState *s);
void smmu_iotlb_inv_asid_vmid(SMMUState *s, int asid, int vmid);
void smmu_iotlb_inv_vmid(SMMUState *s, int vmid);
@@ -260,6 +272,8 @@ void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
uint64_t num_pages, uint8_t ttl);
void smmu_configs_inv_sid_range(SMMUState *s, SMMUSIDRange sid_range);
+/* Invalidate all cached configs for a given device across all security idx */
+void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev);
/* Unmap the range of all the notifiers registered to any IOMMU mr */
void smmu_inv_notifiers_all(SMMUState *s);
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread* Re: [PATCH v2 07/14] hw/arm/smmuv3: Make Configuration Cache security-state aware
2025-09-25 16:26 ` [PATCH v2 07/14] hw/arm/smmuv3: Make Configuration Cache security-state aware Tao Tang
@ 2025-09-29 9:55 ` Eric Auger
2025-09-29 10:38 ` Tao Tang
0 siblings, 1 reply; 48+ messages in thread
From: Eric Auger @ 2025-09-29 9:55 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
On 9/25/25 6:26 PM, Tao Tang wrote:
> This patch adapts the Configuration Cache (STE/CD caches) to support
> multiple security states.
>
> The cache key, previously just the SMMUDevice, is now a composite key
> of the SMMUDevice and the security index (sec_idx). This allows for
> separate cache entries for the same device when operating in different
> security states, preventing aliasing between secure and non-secure
> configurations.
>
> The configuration cache invalidation functions have been refactored to
> correctly handle multiple security states, ensuring that invalidation
> commands operate on the appropriate cache entries.
>
> Finally, checks have been added to the stream table's MMIO read/write
> logic to verify if the stream table is writable. This ensures that the
> Configuration Cache management is compliant with the architecture.
I think this patch shall be split too. I would suggest:
- In a preliminary patch, add the txattrs and as fields as new members
of SMMUTransCfg, plus the related changes (smmu_get_ste, smmu_get_cd,
smmu_find_ste, smmuv3_translate). Add their init in smmuv3_get_config
while leaving the current key as is.
- In a second patch, manage the new config key.
- Then there are a bunch of changes which do not relate to the
configuration cache: smmu_writell, smmu_writel and
smmu_strtab_base_writable. I think these can go in a separate patch.
What do you think?
Eric
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmu-common.c | 63 ++++++++++++++++++++++-
> hw/arm/smmuv3.c | 98 ++++++++++++++++++++++++++++--------
> include/hw/arm/smmu-common.h | 14 ++++++
> 3 files changed, 153 insertions(+), 22 deletions(-)
>
> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
> index 24db448683..bc13b00f1d 100644
> --- a/hw/arm/smmu-common.c
> +++ b/hw/arm/smmu-common.c
> @@ -30,6 +30,43 @@
> #include "hw/arm/smmu-common.h"
> #include "smmu-internal.h"
>
> +/* Configuration Cache Management */
> +static guint smmu_config_key_hash(gconstpointer key)
> +{
> + const SMMUConfigKey *k = key;
> + return g_direct_hash(k->sdev) ^ (guint)k->sec_idx;
> +}
> +
> +static gboolean smmu_config_key_equal(gconstpointer a, gconstpointer b)
> +{
> + const SMMUConfigKey *ka = a;
> + const SMMUConfigKey *kb = b;
> + return ka->sdev == kb->sdev && ka->sec_idx == kb->sec_idx;
> +}
> +
> +SMMUConfigKey smmu_get_config_key(SMMUDevice *sdev, SMMUSecurityIndex sec_idx)
> +{
> + SMMUConfigKey key = {.sdev = sdev, .sec_idx = sec_idx};
> + return key;
> +}
> +
> +ARMSecuritySpace smmu_get_security_space(SMMUSecurityIndex sec_idx)
> +{
> + switch (sec_idx) {
> + case SMMU_SEC_IDX_S:
> + return ARMSS_Secure;
> + case SMMU_SEC_IDX_NS:
> + default:
> + return ARMSS_NonSecure;
> + }
> +}
> +
> +MemTxAttrs smmu_get_txattrs(SMMUSecurityIndex sec_idx)
> +{
> + return (MemTxAttrs) { .secure = sec_idx > SMMU_SEC_IDX_NS ? 1 : 0,
> + .space = smmu_get_security_space(sec_idx) };
> +}
> +
> /* Global state for secure address space availability */
> bool arm_secure_as_available;
>
> @@ -237,7 +274,8 @@ static gboolean smmu_hash_remove_by_vmid_ipa(gpointer key, gpointer value,
> static gboolean
> smmu_hash_remove_by_sid_range(gpointer key, gpointer value, gpointer user_data)
> {
> - SMMUDevice *sdev = (SMMUDevice *)key;
> + SMMUConfigKey *config_key = (SMMUConfigKey *)key;
> + SMMUDevice *sdev = config_key->sdev;
> uint32_t sid = smmu_get_sid(sdev);
> SMMUSIDRange *sid_range = (SMMUSIDRange *)user_data;
>
> @@ -255,6 +293,25 @@ void smmu_configs_inv_sid_range(SMMUState *s, SMMUSIDRange sid_range)
> &sid_range);
> }
>
> +/* Remove all cached configs for a specific device, regardless of security. */
> +static gboolean smmu_hash_remove_by_sdev(gpointer key, gpointer value,
> + gpointer user_data)
> +{
> + SMMUConfigKey *config_key = (SMMUConfigKey *)key;
> + SMMUDevice *target = (SMMUDevice *)user_data;
> +
> + if (config_key->sdev != target) {
> + return false;
> + }
> + trace_smmu_config_cache_inv(smmu_get_sid(target));
> + return true;
> +}
> +
> +void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev)
> +{
> + g_hash_table_foreach_remove(s->configs, smmu_hash_remove_by_sdev, sdev);
> +}
> +
> void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
> uint8_t tg, uint64_t num_pages, uint8_t ttl)
> {
> @@ -942,7 +999,9 @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
> error_propagate(errp, local_err);
> return;
> }
> - s->configs = g_hash_table_new_full(NULL, NULL, NULL, g_free);
> + s->configs = g_hash_table_new_full(smmu_config_key_hash,
> + smmu_config_key_equal,
> + g_free, g_free);
> s->iotlb = g_hash_table_new_full(smmu_iotlb_key_hash, smmu_iotlb_key_equal,
> g_free, g_free);
> s->smmu_pcibus_by_busptr = g_hash_table_new(NULL, NULL);
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 2efa39b78c..eba709ae2b 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -352,14 +352,13 @@ static void smmuv3_init_regs(SMMUv3State *s)
> }
>
> static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
> - SMMUEventInfo *event)
> + SMMUEventInfo *event, SMMUTransCfg *cfg)
> {
> int ret, i;
>
> trace_smmuv3_get_ste(addr);
> /* TODO: guarantee 64-bit single-copy atomicity */
> - ret = dma_memory_read(&address_space_memory, addr, buf, sizeof(*buf),
> - MEMTXATTRS_UNSPECIFIED);
> + ret = dma_memory_read(cfg->as, addr, buf, sizeof(*buf), cfg->txattrs);
> if (ret != MEMTX_OK) {
> qemu_log_mask(LOG_GUEST_ERROR,
> "Cannot fetch pte at address=0x%"PRIx64"\n", addr);
> @@ -404,8 +403,7 @@ static int smmu_get_cd(SMMUv3State *s, STE *ste, SMMUTransCfg *cfg,
> }
>
> /* TODO: guarantee 64-bit single-copy atomicity */
> - ret = dma_memory_read(&address_space_memory, addr, buf, sizeof(*buf),
> - MEMTXATTRS_UNSPECIFIED);
> + ret = dma_memory_read(cfg->as, addr, buf, sizeof(*buf), cfg->txattrs);
> if (ret != MEMTX_OK) {
> qemu_log_mask(LOG_GUEST_ERROR,
> "Cannot fetch pte at address=0x%"PRIx64"\n", addr);
> @@ -699,8 +697,8 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> l2_ste_offset = sid & ((1 << sid_split) - 1);
> l1ptr = (dma_addr_t)(strtab_base + l1_ste_offset * sizeof(l1std));
> /* TODO: guarantee 64-bit single-copy atomicity */
> - ret = dma_memory_read(&address_space_memory, l1ptr, &l1std,
> - sizeof(l1std), MEMTXATTRS_UNSPECIFIED);
> + ret = dma_memory_read(cfg->as, l1ptr,
> + &l1std, sizeof(l1std), cfg->txattrs);
> if (ret != MEMTX_OK) {
> qemu_log_mask(LOG_GUEST_ERROR,
> "Could not read L1PTR at 0X%"PRIx64"\n", l1ptr);
> @@ -742,7 +740,7 @@ static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
> addr = strtab_base + sid * sizeof(*ste);
> }
>
> - if (smmu_get_ste(s, addr, ste, event)) {
> + if (smmu_get_ste(s, addr, ste, event, cfg)) {
> return -EINVAL;
> }
>
> @@ -900,18 +898,21 @@ static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
> *
> * @sdev: SMMUDevice handle
> * @event: output event info
> + * @sec_idx: security index
> *
> * The configuration cache contains data resulting from both STE and CD
> * decoding under the form of an SMMUTransCfg struct. The hash table is indexed
> * by the SMMUDevice handle.
> */
> -static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event)
> +static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
> + SMMUSecurityIndex sec_idx)
> {
> SMMUv3State *s = sdev->smmu;
> SMMUState *bc = &s->smmu_state;
> SMMUTransCfg *cfg;
> + SMMUConfigKey lookup_key = smmu_get_config_key(sdev, sec_idx);
>
> - cfg = g_hash_table_lookup(bc->configs, sdev);
> + cfg = g_hash_table_lookup(bc->configs, &lookup_key);
> if (cfg) {
> sdev->cfg_cache_hits++;
> trace_smmuv3_config_cache_hit(smmu_get_sid(sdev),
> @@ -925,9 +926,14 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event)
> 100 * sdev->cfg_cache_hits /
> (sdev->cfg_cache_hits + sdev->cfg_cache_misses));
> cfg = g_new0(SMMUTransCfg, 1);
> + cfg->sec_idx = sec_idx;
> + cfg->txattrs = smmu_get_txattrs(sec_idx);
> + cfg->as = smmu_get_address_space(sec_idx);
>
> if (!smmuv3_decode_config(&sdev->iommu, cfg, event)) {
> - g_hash_table_insert(bc->configs, sdev, cfg);
> + SMMUConfigKey *persistent_key = g_new(SMMUConfigKey, 1);
> + *persistent_key = smmu_get_config_key(sdev, sec_idx);
> + g_hash_table_insert(bc->configs, persistent_key, cfg);
> } else {
> g_free(cfg);
> cfg = NULL;
> @@ -941,8 +947,8 @@ static void smmuv3_flush_config(SMMUDevice *sdev)
> SMMUv3State *s = sdev->smmu;
> SMMUState *bc = &s->smmu_state;
>
> - trace_smmu_config_cache_inv(smmu_get_sid(sdev));
> - g_hash_table_remove(bc->configs, sdev);
> + /* Remove all security-indexed configs for this device */
> + smmu_configs_inv_sdev(bc, sdev);
> }
>
> /* Do translation with TLB lookup. */
> @@ -1102,7 +1108,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
> goto epilogue;
> }
>
> - cfg = smmuv3_get_config(sdev, &event);
> + cfg = smmuv3_get_config(sdev, &event, sec_idx);
> if (!cfg) {
> status = SMMU_TRANS_ERROR;
> goto epilogue;
> @@ -1182,7 +1188,7 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
> {
> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
> SMMUEventInfo eventinfo = {.inval_ste_allowed = true};
> - SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo);
> + SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo, SMMU_SEC_IDX_NS);
> IOMMUTLBEvent event;
> uint8_t granule;
>
> @@ -1314,6 +1320,38 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
> }
> }
>
> +static inline int smmuv3_get_cr0_smmuen(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + return smmu_enabled(s, sec_idx);
> +}
> +
> +static inline int smmuv3_get_cr0ack_smmuen(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].cr0ack, CR0, SMMUEN);
> +}
> +
> +static inline bool smmuv3_is_smmu_disabled(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + int cr0_smmuen = smmuv3_get_cr0_smmuen(s, sec_idx);
> + int cr0ack_smmuen = smmuv3_get_cr0ack_smmuen(s, sec_idx);
> + return (cr0_smmuen == 0 && cr0ack_smmuen == 0);
> +}
> +
> +/* Check if STRTAB_BASE register is writable */
> +static bool smmu_strtab_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> +{
> + /* Check TABLES_PRESET - use NS bank as it's the global setting */
> + if (FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, TABLES_PRESET)) {
> + return false;
> + }
> +
> + /* Check SMMUEN conditions for the specific security domain */
> + return smmuv3_is_smmu_disabled(s, sec_idx);
> +}
> +
> static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> {
> SMMUState *bs = ARM_SMMU(s);
> @@ -1553,6 +1591,11 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> uint32_t reg_offset = offset & 0xfff;
> switch (reg_offset) {
> case A_STRTAB_BASE:
> + if (!smmu_strtab_base_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "STRTAB_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> /* Clear reserved bits according to spec */
> s->bank[reg_sec_idx].strtab_base = data & SMMU_STRTAB_BASE_RESERVED;
> return MEMTX_OK;
> @@ -1637,14 +1680,29 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> }
> return MEMTX_OK;
> case A_STRTAB_BASE: /* 64b */
> - s->bank[reg_sec_idx].strtab_base =
> - deposit64(s->bank[reg_sec_idx].strtab_base, 0, 32, data);
> - return MEMTX_OK;
> case A_STRTAB_BASE + 4:
> - s->bank[reg_sec_idx].strtab_base =
> - deposit64(s->bank[reg_sec_idx].strtab_base, 32, 32, data);
> + if (!smmu_strtab_base_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "STRTAB_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_STRTAB_BASE_RESERVED;
> + if (reg_offset == A_STRTAB_BASE) {
> + s->bank[reg_sec_idx].strtab_base = deposit64(
> + s->bank[reg_sec_idx].strtab_base, 0, 32, data);
> + } else {
> + s->bank[reg_sec_idx].strtab_base = deposit64(
> + s->bank[reg_sec_idx].strtab_base, 32, 32, data);
> + }
> return MEMTX_OK;
> case A_STRTAB_BASE_CFG:
> + if (!smmu_strtab_base_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "STRTAB_BASE_CFG write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> s->bank[reg_sec_idx].strtab_base_cfg = data;
> if (FIELD_EX32(data, STRTAB_BASE_CFG, FMT) == 1) {
> s->bank[reg_sec_idx].sid_split =
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index cd61c5e126..ed21db7728 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -22,6 +22,7 @@
> #include "hw/sysbus.h"
> #include "hw/pci/pci.h"
> #include "qom/object.h"
> +#include "hw/arm/arm-security.h"
>
> #define SMMU_PCI_BUS_MAX 256
> #define SMMU_PCI_DEVFN_MAX 256
> @@ -53,6 +54,9 @@ typedef enum SMMUSecurityIndex {
> SMMU_SEC_IDX_NUM,
> } SMMUSecurityIndex;
>
> +MemTxAttrs smmu_get_txattrs(SMMUSecurityIndex sec_idx);
> +ARMSecuritySpace smmu_get_security_space(SMMUSecurityIndex sec_idx);
> +
> extern AddressSpace __attribute__((weak)) arm_secure_address_space;
> extern bool arm_secure_as_available;
> void smmu_enable_secure_address_space(void);
> @@ -150,6 +154,8 @@ typedef struct SMMUTransCfg {
> /* Used by stage-2 only. */
> struct SMMUS2Cfg s2cfg;
> SMMUSecurityIndex sec_idx; /* cached security index */
> + MemTxAttrs txattrs; /* cached transaction attributes */
> + AddressSpace *as; /* cached address space */
> } SMMUTransCfg;
>
> typedef struct SMMUDevice {
> @@ -176,6 +182,11 @@ typedef struct SMMUIOTLBKey {
> uint8_t level;
> } SMMUIOTLBKey;
>
> +typedef struct SMMUConfigKey {
> + SMMUDevice *sdev;
> + SMMUSecurityIndex sec_idx;
> +} SMMUConfigKey;
> +
> typedef struct SMMUSIDRange {
> uint32_t start;
> uint32_t end;
> @@ -251,6 +262,7 @@ SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
> void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *entry);
> SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
> uint8_t tg, uint8_t level);
> +SMMUConfigKey smmu_get_config_key(SMMUDevice *sdev, SMMUSecurityIndex sec_idx);
> void smmu_iotlb_inv_all(SMMUState *s);
> void smmu_iotlb_inv_asid_vmid(SMMUState *s, int asid, int vmid);
> void smmu_iotlb_inv_vmid(SMMUState *s, int vmid);
> @@ -260,6 +272,8 @@ void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
> void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
> uint64_t num_pages, uint8_t ttl);
> void smmu_configs_inv_sid_range(SMMUState *s, SMMUSIDRange sid_range);
> +/* Invalidate all cached configs for a given device across all security idx */
> +void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev);
> /* Unmap the range of all the notifiers registered to any IOMMU mr */
> void smmu_inv_notifiers_all(SMMUState *s);
>
^ permalink raw reply [flat|nested] 48+ messages in thread* Re: [PATCH v2 07/14] hw/arm/smmuv3: Make Configuration Cache security-state aware
2025-09-29 9:55 ` Eric Auger
@ 2025-09-29 10:38 ` Tao Tang
0 siblings, 0 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-29 10:38 UTC (permalink / raw)
To: Peter Maydell, Eric Auger
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/9/29 17:55, Eric Auger wrote:
>
> On 9/25/25 6:26 PM, Tao Tang wrote:
>> This patch adapts the Configuration Cache (STE/CD caches) to support
>> multiple security states.
>>
>> The cache key, previously just the SMMUDevice, is now a composite key
>> of the SMMUDevice and the security index (sec_idx). This allows for
>> separate cache entries for the same device when operating in different
>> security states, preventing aliasing between secure and non-secure
>> configurations.
>>
>> The configuration cache invalidation functions have been refactored to
>> correctly handle multiple security states, ensuring that invalidation
>> commands operate on the appropriate cache entries.
>>
>> Finally, checks have been added to the stream table's MMIO read/write
>> logic to verify if the stream table is writable. This ensures that the
>> Configuration Cache management is compliant with the architecture.
> I think this patch shall be split too. I would suggest:
>
> - In a preliminary patch, add the txattrs and as fields as new members
> of SMMUTransCfg, plus the related changes (smmu_get_ste, smmu_get_cd,
> smmu_find_ste, smmuv3_translate). Add their init in smmuv3_get_config
> while leaving the current key as is.
> - In a second patch, manage the new config key.
> - Then there are a bunch of changes which do not relate to the
> configuration cache: smmu_writell, smmu_writel and
> smmu_strtab_base_writable. I think these can go in a separate patch.
>
> What do you think?
>
> Eric
>
Thank you for the detailed feedback and for taking the time to suggest a
clear path forward.
You are absolutely right. I see now that this patch indeed mixes several
distinct logical changes; in particular, mixing the smmu_write/read
handling with the configuration cache changes may be confusing. Your
suggestion to split it is the correct approach. It will certainly make
the changes much easier to review and less error-prone.
I will do this in v3 series.
Thanks,
Tao
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v2 08/14] hw/arm/smmuv3: Add security-state handling for page table walks
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (6 preceding siblings ...)
2025-09-25 16:26 ` [PATCH v2 07/14] hw/arm/smmuv3: Make Configuration Cache security-state aware Tao Tang
@ 2025-09-25 16:26 ` Tao Tang
2025-09-29 14:21 ` Eric Auger
2025-09-25 16:26 ` [PATCH v2 09/14] hw/arm/smmuv3: Add secure TLB entry management Tao Tang
` (7 subsequent siblings)
15 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-25 16:26 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa, Tao Tang
This patch introduces the necessary logic to handle security states
during the page table translation process.
Support for the NS (Non-secure) attribute bit is added to the parsing of
various translation structures, including CD and PTEs. This allows the
SMMU model to correctly determine the security properties of memory
during a translation.
With this change, a new translation stage is added:
- Secure Stage 1 translation
Note that this commit does not include support for Secure Stage 2
translation, which will be addressed in the future.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmu-common.c | 55 ++++++++++++++++++++++++++++++++----
hw/arm/smmu-internal.h | 7 +++++
hw/arm/smmuv3-internal.h | 2 ++
hw/arm/smmuv3.c | 2 ++
hw/arm/trace-events | 2 +-
include/hw/arm/smmu-common.h | 4 +++
6 files changed, 66 insertions(+), 6 deletions(-)
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index bc13b00f1d..f563cba023 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -398,20 +398,25 @@ void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid)
* @base_addr[@index]
*/
static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
- SMMUPTWEventInfo *info)
+ SMMUPTWEventInfo *info, SMMUTransCfg *cfg, int walk_ns)
{
int ret;
dma_addr_t addr = baseaddr + index * sizeof(*pte);
+ /* Only support Secure PA Space as RME isn't implemented yet */
+ MemTxAttrs attrs =
+ smmu_get_txattrs(walk_ns ? SMMU_SEC_IDX_NS : SMMU_SEC_IDX_S);
+ AddressSpace *as =
+ smmu_get_address_space(walk_ns ? SMMU_SEC_IDX_NS : SMMU_SEC_IDX_S);
/* TODO: guarantee 64-bit single-copy atomicity */
- ret = ldq_le_dma(&address_space_memory, addr, pte, MEMTXATTRS_UNSPECIFIED);
+ ret = ldq_le_dma(as, addr, pte, attrs);
if (ret != MEMTX_OK) {
info->type = SMMU_PTW_ERR_WALK_EABT;
info->addr = addr;
return -EINVAL;
}
- trace_smmu_get_pte(baseaddr, index, addr, *pte);
+ trace_smmu_get_pte(baseaddr, index, addr, *pte, walk_ns);
return 0;
}
@@ -542,6 +547,8 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
baseaddr = extract64(tt->ttb, 0, cfg->oas);
baseaddr &= ~indexmask;
+ int nscfg = tt->nscfg;
+ bool forced_ns = false; /* Track if NSTable=1 forced NS mode */
while (level < VMSA_LEVELS) {
uint64_t subpage_size = 1ULL << level_shift(level, granule_sz);
@@ -551,7 +558,9 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
uint8_t ap;
- if (get_pte(baseaddr, offset, &pte, info)) {
+ /* Use NS if forced by previous NSTable=1 or current nscfg */
+ int current_ns = forced_ns || nscfg;
+ if (get_pte(baseaddr, offset, &pte, info, cfg, current_ns)) {
goto error;
}
trace_smmu_ptw_level(stage, level, iova, subpage_size,
@@ -576,6 +585,26 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
goto error;
}
}
+
+ /*
+ * Hierarchical control of Secure/Non-secure accesses:
+ * If NSTable=1 from Secure space, force all subsequent lookups to
+ * Non-secure space and ignore future NSTable according to
+ * (IHI 0070G.b) 13.4.1 Stage 1 page permissions and
+ * (DDI 0487H.a)D8.4.2 Control of Secure or Non-secure memory access
+ */
+ if (!forced_ns) {
+ int new_nstable = PTE_NSTABLE(pte);
+ if (!current_ns && new_nstable) {
+ /* First transition from Secure to Non-secure */
+ forced_ns = true;
+ nscfg = 1;
+ } else if (!forced_ns) {
+ /* Still in original mode, update nscfg normally */
+ nscfg = new_nstable;
+ }
+ /* If forced_ns is already true, ignore NSTable bit */
+ }
level++;
continue;
} else if (is_page_pte(pte, level)) {
@@ -618,6 +647,8 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
goto error;
}
+ tlbe->sec_idx = PTE_NS(pte) ? SMMU_SEC_IDX_NS : SMMU_SEC_IDX_S;
+ tlbe->entry.target_as = smmu_get_address_space(tlbe->sec_idx);
tlbe->entry.translated_addr = gpa;
tlbe->entry.iova = iova & ~mask;
tlbe->entry.addr_mask = mask;
@@ -687,7 +718,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
uint8_t s2ap;
- if (get_pte(baseaddr, offset, &pte, info)) {
+ /* Use NS as Secure Stage 2 is not implemented (SMMU_S_IDR1.SEL2 == 0)*/
+ if (get_pte(baseaddr, offset, &pte, info, cfg, 1)) {
goto error;
}
trace_smmu_ptw_level(stage, level, ipa, subpage_size,
@@ -740,6 +772,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
goto error_ipa;
}
+ tlbe->sec_idx = SMMU_SEC_IDX_NS;
+ tlbe->entry.target_as = &address_space_memory;
tlbe->entry.translated_addr = gpa;
tlbe->entry.iova = ipa & ~mask;
tlbe->entry.addr_mask = mask;
@@ -824,6 +858,17 @@ int smmu_ptw(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t iova,
return ret;
}
+ if (!cfg->sel2 && tlbe->sec_idx > SMMU_SEC_IDX_NS) {
+ /*
+ * Nested translation with Secure IPA output is not supported if
+ * Secure Stage 2 is not implemented.
+ */
+ info->type = SMMU_PTW_ERR_TRANSLATION;
+ info->stage = SMMU_STAGE_1;
+ tlbe->entry.perm = IOMMU_NONE;
+ return -EINVAL;
+ }
+
ipa = CACHED_ENTRY_TO_ADDR(tlbe, iova);
ret = smmu_ptw_64_s2(cfg, ipa, perm, &tlbe_s2, info);
if (ret) {
diff --git a/hw/arm/smmu-internal.h b/hw/arm/smmu-internal.h
index d143d296f3..cb3a6eb8d1 100644
--- a/hw/arm/smmu-internal.h
+++ b/hw/arm/smmu-internal.h
@@ -58,6 +58,10 @@
((level == 3) && \
((pte & ARM_LPAE_PTE_TYPE_MASK) == ARM_LPAE_L3_PTE_TYPE_PAGE))
+/* Non-secure bit */
+#define PTE_NS(pte) \
+ (extract64(pte, 5, 1))
+
/* access permissions */
#define PTE_AP(pte) \
@@ -66,6 +70,9 @@
#define PTE_APTABLE(pte) \
(extract64(pte, 61, 2))
+#define PTE_NSTABLE(pte) \
+ (extract64(pte, 63, 1))
+
#define PTE_AF(pte) \
(extract64(pte, 10, 1))
/*
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index cf17c405de..af2936cf16 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -704,6 +704,8 @@ static inline int oas2bits(int oas_field)
#define CD_R(x) extract32((x)->word[1], 13, 1)
#define CD_A(x) extract32((x)->word[1], 14, 1)
#define CD_AARCH64(x) extract32((x)->word[1], 9 , 1)
+#define CD_NSCFG0(x) extract32((x)->word[2], 0, 1)
+#define CD_NSCFG1(x) extract32((x)->word[4], 0, 1)
/**
* tg2granule - Decodes the CD translation granule size field according
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index eba709ae2b..2f8494c346 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -832,6 +832,7 @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
tt->ttb = CACHED_ENTRY_TO_ADDR(entry, tt->ttb);
}
+ tt->nscfg = i ? CD_NSCFG1(cd) : CD_NSCFG0(cd);
tt->had = CD_HAD(cd, i);
trace_smmuv3_decode_cd_tt(i, tt->tsz, tt->ttb, tt->granule_sz, tt->had);
}
@@ -929,6 +930,7 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
cfg->sec_idx = sec_idx;
cfg->txattrs = smmu_get_txattrs(sec_idx);
cfg->as = smmu_get_address_space(sec_idx);
+ cfg->sel2 = s->bank[SMMU_SEC_IDX_S].idr[1];
if (!smmuv3_decode_config(&sdev->iommu, cfg, event)) {
SMMUConfigKey *persistent_key = g_new(SMMUConfigKey, 1);
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index 80cb4d6b04..f99de78655 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -16,7 +16,7 @@ smmu_ptw_level(int stage, int level, uint64_t iova, size_t subpage_size, uint64_
smmu_ptw_invalid_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint32_t offset, uint64_t pte) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" offset=%d pte=0x%"PRIx64
smmu_ptw_page_pte(int stage, int level, uint64_t iova, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t address) "stage=%d level=%d iova=0x%"PRIx64" base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" page address = 0x%"PRIx64
smmu_ptw_block_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t iova, uint64_t gpa, int bsize_mb) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" iova=0x%"PRIx64" block address = 0x%"PRIx64" block size = %d MiB"
-smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64
+smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte, bool ns_walk) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64" ns_walk=%d"
smmu_iotlb_inv_all(void) "IOTLB invalidate all"
smmu_iotlb_inv_asid_vmid(int asid, int vmid) "IOTLB invalidate asid=%d vmid=%d"
smmu_iotlb_inv_vmid(int vmid) "IOTLB invalidate vmid=%d"
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index ed21db7728..c27aec8bd4 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -109,6 +109,7 @@ typedef struct SMMUTransTableInfo {
uint8_t tsz; /* input range, ie. 2^(64 -tsz)*/
uint8_t granule_sz; /* granule page shift */
bool had; /* hierarchical attribute disable */
+ bool nscfg; /* Non-secure attribute of Starting-level TT */
} SMMUTransTableInfo;
typedef struct SMMUTLBEntry {
@@ -116,6 +117,7 @@ typedef struct SMMUTLBEntry {
uint8_t level;
uint8_t granule;
IOMMUAccessFlags parent_perm;
+ SMMUSecurityIndex sec_idx;
} SMMUTLBEntry;
/* Stage-2 configuration. */
@@ -156,6 +158,8 @@ typedef struct SMMUTransCfg {
SMMUSecurityIndex sec_idx; /* cached security index */
MemTxAttrs txattrs; /* cached transaction attributes */
AddressSpace *as; /* cached address space */
+ bool current_walk_ns; /* cached if the current walk is non-secure */
+ bool sel2;
} SMMUTransCfg;
typedef struct SMMUDevice {
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread* Re: [PATCH v2 08/14] hw/arm/smmuv3: Add security-state handling for page table walks
2025-09-25 16:26 ` [PATCH v2 08/14] hw/arm/smmuv3: Add security-state handling for page table walks Tao Tang
@ 2025-09-29 14:21 ` Eric Auger
2025-09-29 15:22 ` Tao Tang
0 siblings, 1 reply; 48+ messages in thread
From: Eric Auger @ 2025-09-29 14:21 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
Hi Tao,
On 9/25/25 6:26 PM, Tao Tang wrote:
> This patch introduces the necessary logic to handle security states
> during the page table translation process.
>
> Support for the NS (Non-secure) attribute bit is added to the parsing of
> various translation structures, including CD and PTEs. This allows the
> SMMU model to correctly determine the security properties of memory
> during a translation.
>
> With this change, a new translation stage is added:
>
> - Secure Stage 1 translation
>
> Note that this commit does not include support for Secure Stage 2
> translation, which will be addressed in the future.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmu-common.c | 55 ++++++++++++++++++++++++++++++++----
> hw/arm/smmu-internal.h | 7 +++++
> hw/arm/smmuv3-internal.h | 2 ++
> hw/arm/smmuv3.c | 2 ++
> hw/arm/trace-events | 2 +-
> include/hw/arm/smmu-common.h | 4 +++
> 6 files changed, 66 insertions(+), 6 deletions(-)
>
> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
> index bc13b00f1d..f563cba023 100644
> --- a/hw/arm/smmu-common.c
> +++ b/hw/arm/smmu-common.c
> @@ -398,20 +398,25 @@ void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid)
> * @base_addr[@index]
While we add some new params it may be relevant to add some new doc
comments above.
> */
> static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
> - SMMUPTWEventInfo *info)
> + SMMUPTWEventInfo *info, SMMUTransCfg *cfg, int walk_ns)
I see a cfg param is added but not used.
Why is walk_ns an int while it seems to match the SMMUSecurityIndex
type? Why not directly pass the sec_sid, as sketched below?
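Something like this maybe (an untested sketch with an updated doc
comment; the smmu_get_pte trace event argument would need adjusting
accordingly):
/*
 * get_pte - Get the content of a page table entry located at
 * @base_addr[@index], fetched from the address space selected by
 * @sec_idx
 */
static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
                   SMMUPTWEventInfo *info, SMMUSecurityIndex sec_idx)
{
    int ret;
    dma_addr_t addr = baseaddr + index * sizeof(*pte);
    MemTxAttrs attrs = smmu_get_txattrs(sec_idx);
    AddressSpace *as = smmu_get_address_space(sec_idx);
    /* TODO: guarantee 64-bit single-copy atomicity */
    ret = ldq_le_dma(as, addr, pte, attrs);
    if (ret != MEMTX_OK) {
        info->type = SMMU_PTW_ERR_WALK_EABT;
        info->addr = addr;
        return -EINVAL;
    }
    trace_smmu_get_pte(baseaddr, index, addr, *pte, sec_idx);
    return 0;
}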
> {
> int ret;
> dma_addr_t addr = baseaddr + index * sizeof(*pte);
> + /* Only support Secure PA Space as RME isn't implemented yet */
> + MemTxAttrs attrs =
> + smmu_get_txattrs(walk_ns ? SMMU_SEC_IDX_NS : SMMU_SEC_IDX_S);
> + AddressSpace *as =
> + smmu_get_address_space(walk_ns ? SMMU_SEC_IDX_NS : SMMU_SEC_IDX_S);
>
> /* TODO: guarantee 64-bit single-copy atomicity */
> - ret = ldq_le_dma(&address_space_memory, addr, pte, MEMTXATTRS_UNSPECIFIED);
> + ret = ldq_le_dma(as, addr, pte, attrs);
>
> if (ret != MEMTX_OK) {
> info->type = SMMU_PTW_ERR_WALK_EABT;
> info->addr = addr;
> return -EINVAL;
> }
> - trace_smmu_get_pte(baseaddr, index, addr, *pte);
> + trace_smmu_get_pte(baseaddr, index, addr, *pte, walk_ns);
> return 0;
> }
>
> @@ -542,6 +547,8 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
>
> baseaddr = extract64(tt->ttb, 0, cfg->oas);
> baseaddr &= ~indexmask;
> + int nscfg = tt->nscfg;
> + bool forced_ns = false; /* Track if NSTable=1 forced NS mode */
>
> while (level < VMSA_LEVELS) {
> uint64_t subpage_size = 1ULL << level_shift(level, granule_sz);
> @@ -551,7 +558,9 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
> dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
> uint8_t ap;
>
> - if (get_pte(baseaddr, offset, &pte, info)) {
> + /* Use NS if forced by previous NSTable=1 or current nscfg */
> + int current_ns = forced_ns || nscfg;
> + if (get_pte(baseaddr, offset, &pte, info, cfg, current_ns)) {
> goto error;
> }
> trace_smmu_ptw_level(stage, level, iova, subpage_size,
> @@ -576,6 +585,26 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
> goto error;
> }
> }
> +
> + /*
> + * Hierarchical control of Secure/Non-secure accesses:
> + * If NSTable=1 from Secure space, force all subsequent lookups to
> + * Non-secure space and ignore future NSTable according to
> + * (IHI 0070G.b) 13.4.1 Stage 1 page permissions and
> + * (DDI 0487H.a)D8.4.2 Control of Secure or Non-secure memory access
> + */
> + if (!forced_ns) {
> + int new_nstable = PTE_NSTABLE(pte);
> + if (!current_ns && new_nstable) {
> + /* First transition from Secure to Non-secure */
> + forced_ns = true;
> + nscfg = 1;
> + } else if (!forced_ns) {
> + /* Still in original mode, update nscfg normally */
> + nscfg = new_nstable;
> + }
> + /* If forced_ns is already true, ignore NSTable bit */
> + }
> level++;
> continue;
> } else if (is_page_pte(pte, level)) {
> @@ -618,6 +647,8 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
> goto error;
> }
>
> + tlbe->sec_idx = PTE_NS(pte) ? SMMU_SEC_IDX_NS : SMMU_SEC_IDX_S;
> + tlbe->entry.target_as = smmu_get_address_space(tlbe->sec_idx);
> tlbe->entry.translated_addr = gpa;
> tlbe->entry.iova = iova & ~mask;
> tlbe->entry.addr_mask = mask;
> @@ -687,7 +718,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
> dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
> uint8_t s2ap;
>
> - if (get_pte(baseaddr, offset, &pte, info)) {
> + /* Use NS as Secure Stage 2 is not implemented (SMMU_S_IDR1.SEL2 == 0)*/
> + if (get_pte(baseaddr, offset, &pte, info, cfg, 1)) {
> goto error;
> }
> trace_smmu_ptw_level(stage, level, ipa, subpage_size,
> @@ -740,6 +772,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
> goto error_ipa;
> }
>
> + tlbe->sec_idx = SMMU_SEC_IDX_NS;
> + tlbe->entry.target_as = &address_space_memory;
> tlbe->entry.translated_addr = gpa;
> tlbe->entry.iova = ipa & ~mask;
> tlbe->entry.addr_mask = mask;
> @@ -824,6 +858,17 @@ int smmu_ptw(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t iova,
> return ret;
> }
>
> + if (!cfg->sel2 && tlbe->sec_idx > SMMU_SEC_IDX_NS) {
> + /*
> + * Nested translation with Secure IPA output is not supported if
> + * Secure Stage 2 is not implemented.
> + */
> + info->type = SMMU_PTW_ERR_TRANSLATION;
> + info->stage = SMMU_STAGE_1;
> + tlbe->entry.perm = IOMMU_NONE;
> + return -EINVAL;
> + }
> +
> ipa = CACHED_ENTRY_TO_ADDR(tlbe, iova);
> ret = smmu_ptw_64_s2(cfg, ipa, perm, &tlbe_s2, info);
> if (ret) {
> diff --git a/hw/arm/smmu-internal.h b/hw/arm/smmu-internal.h
> index d143d296f3..cb3a6eb8d1 100644
> --- a/hw/arm/smmu-internal.h
> +++ b/hw/arm/smmu-internal.h
> @@ -58,6 +58,10 @@
> ((level == 3) && \
> ((pte & ARM_LPAE_PTE_TYPE_MASK) == ARM_LPAE_L3_PTE_TYPE_PAGE))
>
> +/* Non-secure bit */
> +#define PTE_NS(pte) \
> + (extract64(pte, 5, 1))
> +
I have not read that code for a while. Might be worth creating
differentiated sections for the different kinds of descriptors.
For instance NS belongs to the block & page descriptors while NSTable
belongs to a table descriptor.
> /* access permissions */
>
> #define PTE_AP(pte) \
> @@ -66,6 +70,9 @@
> #define PTE_APTABLE(pte) \
> (extract64(pte, 61, 2))
>
> +#define PTE_NSTABLE(pte) \
> + (extract64(pte, 63, 1))
> +
> #define PTE_AF(pte) \
> (extract64(pte, 10, 1))
> /*
> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
> index cf17c405de..af2936cf16 100644
> --- a/hw/arm/smmuv3-internal.h
> +++ b/hw/arm/smmuv3-internal.h
> @@ -704,6 +704,8 @@ static inline int oas2bits(int oas_field)
> #define CD_R(x) extract32((x)->word[1], 13, 1)
> #define CD_A(x) extract32((x)->word[1], 14, 1)
> #define CD_AARCH64(x) extract32((x)->word[1], 9 , 1)
> +#define CD_NSCFG0(x) extract32((x)->word[2], 0, 1)
> +#define CD_NSCFG1(x) extract32((x)->word[4], 0, 1)
>
> /**
> * tg2granule - Decodes the CD translation granule size field according
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index eba709ae2b..2f8494c346 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -832,6 +832,7 @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
> tt->ttb = CACHED_ENTRY_TO_ADDR(entry, tt->ttb);
> }
>
> + tt->nscfg = i ? CD_NSCFG1(cd) : CD_NSCFG0(cd);
> tt->had = CD_HAD(cd, i);
> trace_smmuv3_decode_cd_tt(i, tt->tsz, tt->ttb, tt->granule_sz, tt->had);
> }
> @@ -929,6 +930,7 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
> cfg->sec_idx = sec_idx;
> cfg->txattrs = smmu_get_txattrs(sec_idx);
> cfg->as = smmu_get_address_space(sec_idx);
> + cfg->sel2 = s->bank[SMMU_SEC_IDX_S].idr[1];
S_IDR1 contains other fields than SEL2, such as S_SIDSIZE?
Can't you split that patch again into 2 patches:
one related to the config data extraction and another one related to
the page table walk according to the config settings?
>
> if (!smmuv3_decode_config(&sdev->iommu, cfg, event)) {
> SMMUConfigKey *persistent_key = g_new(SMMUConfigKey, 1);
> diff --git a/hw/arm/trace-events b/hw/arm/trace-events
> index 80cb4d6b04..f99de78655 100644
> --- a/hw/arm/trace-events
> +++ b/hw/arm/trace-events
> @@ -16,7 +16,7 @@ smmu_ptw_level(int stage, int level, uint64_t iova, size_t subpage_size, uint64_
> smmu_ptw_invalid_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint32_t offset, uint64_t pte) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" offset=%d pte=0x%"PRIx64
> smmu_ptw_page_pte(int stage, int level, uint64_t iova, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t address) "stage=%d level=%d iova=0x%"PRIx64" base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" page address = 0x%"PRIx64
> smmu_ptw_block_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t iova, uint64_t gpa, int bsize_mb) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" iova=0x%"PRIx64" block address = 0x%"PRIx64" block size = %d MiB"
> -smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64
> +smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte, bool ns_walk) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64" ns_walk=%d"
> smmu_iotlb_inv_all(void) "IOTLB invalidate all"
> smmu_iotlb_inv_asid_vmid(int asid, int vmid) "IOTLB invalidate asid=%d vmid=%d"
> smmu_iotlb_inv_vmid(int vmid) "IOTLB invalidate vmid=%d"
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index ed21db7728..c27aec8bd4 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -109,6 +109,7 @@ typedef struct SMMUTransTableInfo {
> uint8_t tsz; /* input range, ie. 2^(64 -tsz)*/
> uint8_t granule_sz; /* granule page shift */
> bool had; /* hierarchical attribute disable */
> + bool nscfg; /* Non-secure attribute of Starting-level TT */
> } SMMUTransTableInfo;
>
> typedef struct SMMUTLBEntry {
> @@ -116,6 +117,7 @@ typedef struct SMMUTLBEntry {
> uint8_t level;
> uint8_t granule;
> IOMMUAccessFlags parent_perm;
> + SMMUSecurityIndex sec_idx;
> } SMMUTLBEntry;
>
> /* Stage-2 configuration. */
> @@ -156,6 +158,8 @@ typedef struct SMMUTransCfg {
> SMMUSecurityIndex sec_idx; /* cached security index */
> MemTxAttrs txattrs; /* cached transaction attributes */
> AddressSpace *as; /* cached address space */
> + bool current_walk_ns; /* cached if the current walk is non-secure */
this does not seem to be used?
> + bool sel2;
would require a comment to remind the reader what sel2 is.
> } SMMUTransCfg;
>
> typedef struct SMMUDevice {
Thanks
Eric
^ permalink raw reply [flat|nested] 48+ messages in thread* Re: [PATCH v2 08/14] hw/arm/smmuv3: Add security-state handling for page table walks
2025-09-29 14:21 ` Eric Auger
@ 2025-09-29 15:22 ` Tao Tang
0 siblings, 0 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-29 15:22 UTC (permalink / raw)
To: Peter Maydell, Eric Auger
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Mostafa Saleh, Jean-Philippe Brucker
Hi Eric,
On 2025/9/29 22:21, Eric Auger wrote:
> Hi Tao,
>
> On 9/25/25 6:26 PM, Tao Tang wrote:
>> This patch introduces the necessary logic to handle security states
>> during the page table translation process.
>>
>> Support for the NS (Non-secure) attribute bit is added to the parsing of
>> various translation structures, including CD and PTEs. This allows the
>> SMMU model to correctly determine the security properties of memory
>> during a translation.
>>
>> With this change, a new translation stage is added:
>>
>> - Secure Stage 1 translation
>>
>> Note that this commit does not include support for Secure Stage 2
>> translation, which will be addressed in the future.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmu-common.c | 55 ++++++++++++++++++++++++++++++++----
>> hw/arm/smmu-internal.h | 7 +++++
>> hw/arm/smmuv3-internal.h | 2 ++
>> hw/arm/smmuv3.c | 2 ++
>> hw/arm/trace-events | 2 +-
>> include/hw/arm/smmu-common.h | 4 +++
>> 6 files changed, 66 insertions(+), 6 deletions(-)
>>
>> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
>> index bc13b00f1d..f563cba023 100644
>> --- a/hw/arm/smmu-common.c
>> +++ b/hw/arm/smmu-common.c
>> @@ -398,20 +398,25 @@ void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid)
>> * @base_addr[@index]
> While we add some new params it may be relevant to add some new doc
> comments above
Thank you for another incredibly thorough and insightful review. I
sincerely appreciate you taking the time to go through the code in such
detail.
You're right. I missed some necessary comments when adding the new
parameters. I will add them in the next series.
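e.g. something like this (a sketch; the existing kernel-doc block would
be extended with the new parameter):

    /**
     * get_pte - Get the content of a page table entry located at
     * @base_addr[@index]
     * @info: handle to an error info struct
     * @sec_sid: security index of the walk; selects the address space
     *           and transaction attributes used to fetch the PTE
     */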
>> */
>> static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
>> - SMMUPTWEventInfo *info)
>> + SMMUPTWEventInfo *info, SMMUTransCfg *cfg, int walk_ns)
> I see a cfg param is added while not used.
My original plan was to cache the NS attr in a cfg->walk_ns field, which
is why I passed cfg into this function. I later realized this caching
wasn't necessary and removed the walk_ns member from the struct, but I
clearly missed removing the now-redundant cfg parameter from the
function signature. Thanks for spotting this oversight; I will remove it
in v3.
> Why is walk_ns an int while it seems to match a SecureIndex type? Why
> not directly pass the sec_sid?
You're right. I will replace 'int walk_ns' with sec_sid in v3.
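Something like this is what I have in mind for v3 (just a sketch; the
cfg parameter is dropped as you noted, and the trace event would be
updated to take the security index as well):

    static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
                       SMMUPTWEventInfo *info, SMMUSecurityIndex sec_sid)
    {
        dma_addr_t addr = baseaddr + index * sizeof(*pte);
        /* Pick the AS and attributes matching the walk's security state */
        AddressSpace *as = smmu_get_address_space(sec_sid);
        MemTxAttrs attrs = smmu_get_txattrs(sec_sid);
        int ret;

        /* TODO: guarantee 64-bit single-copy atomicity */
        ret = ldq_le_dma(as, addr, pte, attrs);
        if (ret != MEMTX_OK) {
            info->type = SMMU_PTW_ERR_WALK_EABT;
            info->addr = addr;
            return -EINVAL;
        }
        trace_smmu_get_pte(baseaddr, index, addr, *pte, sec_sid);
        return 0;
    }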
>>
>> diff --git a/hw/arm/smmu-internal.h b/hw/arm/smmu-internal.h
>> index d143d296f3..cb3a6eb8d1 100644
>> --- a/hw/arm/smmu-internal.h
>> +++ b/hw/arm/smmu-internal.h
>> @@ -58,6 +58,10 @@
>> ((level == 3) && \
>> ((pte & ARM_LPAE_PTE_TYPE_MASK) == ARM_LPAE_L3_PTE_TYPE_PAGE))
>>
>> +/* Non-secure bit */
>> +#define PTE_NS(pte) \
>> + (extract64(pte, 5, 1))
>> +
> I have not read that code for a while. Might be worth creating
> differentiated sections for the different kinds of descriptors.
> For instance NS belongs to the block & page descriptors while NSTable
> belongs to a table descriptor.
The original code didn't have clear comments to differentiate between
the descriptor types. Now that I've introduced the new NS and NSTable
attribute bits, which can be easily confused, it has become necessary to
add these clarifying sections. I will add comments to group the macros
by descriptor type to improve readability in the next version. Thanks
for the suggestion.
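For example (sketch of the grouping I have in mind):

    /*
     * Block and page descriptor fields
     */

    /* Non-secure bit (output address attribute) */
    #define PTE_NS(pte) \
        (extract64(pte, 5, 1))

    /*
     * Table descriptor fields
     */

    #define PTE_NSTABLE(pte) \
        (extract64(pte, 63, 1))

    #define PTE_APTABLE(pte) \
        (extract64(pte, 61, 2))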
>>
>>
>> /**
>> * tg2granule - Decodes the CD translation granule size field according
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index eba709ae2b..2f8494c346 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -832,6 +832,7 @@ static int decode_cd(SMMUv3State *s, SMMUTransCfg *cfg,
>> tt->ttb = CACHED_ENTRY_TO_ADDR(entry, tt->ttb);
>> }
>>
>> + tt->nscfg = i ? CD_NSCFG1(cd) : CD_NSCFG0(cd);
>> tt->had = CD_HAD(cd, i);
>> trace_smmuv3_decode_cd_tt(i, tt->tsz, tt->ttb, tt->granule_sz, tt->had);
>> }
>> @@ -929,6 +930,7 @@ static SMMUTransCfg *smmuv3_get_config(SMMUDevice *sdev, SMMUEventInfo *event,
>> cfg->sec_idx = sec_idx;
>> cfg->txattrs = smmu_get_txattrs(sec_idx);
>> cfg->as = smmu_get_address_space(sec_idx);
>> + cfg->sel2 = s->bank[SMMU_SEC_IDX_S].idr[1];
> S_IDR1 contains other fields than SEL2, such as S_SIDSIZE?
>
> Can't you split that patch again into 2 patches:
> one related to the config data extraction and another one related to
> the page table walk according to the config settings?
Sure. I'll split it into 2 patches in v3 and cfg->sel2 will be corrected.
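For the correction I am thinking of something along these lines
(assuming the S_IDR1.SEL2 field macro this series defines):

    cfg->sel2 = FIELD_EX32(s->bank[SMMU_SEC_IDX_S].idr[1], S_IDR1, SEL2);

so that only the SEL2 bit is cached rather than the whole S_IDR1 value.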
>>
>>
>>
>> typedef struct SMMUTLBEntry {
>> @@ -116,6 +117,7 @@ typedef struct SMMUTLBEntry {
>> uint8_t level;
>> uint8_t granule;
>> IOMMUAccessFlags parent_perm;
>> + SMMUSecurityIndex sec_idx;
>> } SMMUTLBEntry;
>>
>> /* Stage-2 configuration. */
>> @@ -156,6 +158,8 @@ typedef struct SMMUTransCfg {
>> SMMUSecurityIndex sec_idx; /* cached security index */
>> MemTxAttrs txattrs; /* cached transaction attributes */
>> AddressSpace *as; /* cached address space */
>> + bool current_walk_ns; /* cached if the current walk is non-secure */
> this does not seem to be used?
>> + bool sel2;
> would require a comment to remind the reader what sel2 is.
>> } SMMUTransCfg;
>>
>> typedef struct SMMUDevice {
> Thanks
>
> Eric
Yes. current_walk_ns will be removed and a comment will be added in v3.
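Roughly:

    -    bool current_walk_ns; /* cached if the current walk is non-secure */
    -    bool sel2;
    +    bool sel2; /* Secure stage 2 supported (cached S_IDR1.SEL2) */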
Thanks,
Tao
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v2 09/14] hw/arm/smmuv3: Add secure TLB entry management
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (7 preceding siblings ...)
2025-09-25 16:26 ` [PATCH v2 08/14] hw/arm/smmuv3: Add security-state handling for page table walks Tao Tang
@ 2025-09-25 16:26 ` Tao Tang
2025-09-29 14:57 ` Eric Auger
2025-09-25 16:26 ` [PATCH v2 10/14] hw/arm/smmuv3: Add banked support for queues and error handling Tao Tang
` (6 subsequent siblings)
15 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-25 16:26 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa, Tao Tang
To prevent aliasing between secure and non-secure translations for the
same address space, the IOTLB lookup key must incorporate the security
state of the transaction. A secure parameter has been added to
smmu_get_iotlb_key, ensuring that secure and non-secure TLB entries are
treated as distinct entities within the cache.
Building on this, this commit also implements the SMMU_S_INIT register.
This secure-only register provides a mechanism for software to perform a
global invalidation of all cached translations within the IOTLB. This is
a critical feature for secure hypervisors like Hafnium, which use it as
the final step of their SMMU initialization sequence to ensure a clean
TLB state.
Together, these changes provide robust management for secure TLB entries,
preventing TLB pollution between security worlds and allowing for proper
initialization by secure software.
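From secure software's point of view the sequence is simply
(pseudo-code, assuming INV_ALL is bit 0 as in the architecture;
write32/read32 are placeholder accessors):

    /* Request a global invalidation, then poll for completion */
    write32(SMMU_S_INIT, INV_ALL);
    while (read32(SMMU_S_INIT) & INV_ALL) {
        /* wait; this model completes the invalidation synchronously,
           so the first read back already returns 0 */
    }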
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmu-common.c | 25 ++++++++------
hw/arm/smmuv3.c | 67 ++++++++++++++++++++++++++++--------
hw/arm/trace-events | 1 +
include/hw/arm/smmu-common.h | 10 ++++--
4 files changed, 76 insertions(+), 27 deletions(-)
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index f563cba023..fdabcc4542 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -84,7 +84,7 @@ static guint smmu_iotlb_key_hash(gconstpointer v)
/* Jenkins hash */
a = b = c = JHASH_INITVAL + sizeof(*key);
- a += key->asid + key->vmid + key->level + key->tg;
+ a += key->asid + key->vmid + key->level + key->tg + key->sec_idx;
b += extract64(key->iova, 0, 32);
c += extract64(key->iova, 32, 32);
@@ -100,14 +100,15 @@ static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
return (k1->asid == k2->asid) && (k1->iova == k2->iova) &&
(k1->level == k2->level) && (k1->tg == k2->tg) &&
- (k1->vmid == k2->vmid);
+ (k1->vmid == k2->vmid) && (k1->sec_idx == k2->sec_idx);
}
SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
- uint8_t tg, uint8_t level)
+ uint8_t tg, uint8_t level,
+ SMMUSecurityIndex sec_idx)
{
SMMUIOTLBKey key = {.asid = asid, .vmid = vmid, .iova = iova,
- .tg = tg, .level = level};
+ .tg = tg, .level = level, .sec_idx = sec_idx};
return key;
}
@@ -129,7 +130,7 @@ static SMMUTLBEntry *smmu_iotlb_lookup_all_levels(SMMUState *bs,
SMMUIOTLBKey key;
key = smmu_get_iotlb_key(cfg->asid, cfg->s2cfg.vmid,
- iova & ~mask, tg, level);
+ iova & ~mask, tg, level, cfg->sec_idx);
entry = g_hash_table_lookup(bs->iotlb, &key);
if (entry) {
break;
@@ -193,7 +194,7 @@ void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *new)
}
*key = smmu_get_iotlb_key(cfg->asid, cfg->s2cfg.vmid, new->entry.iova,
- tg, new->level);
+ tg, new->level, cfg->sec_idx);
trace_smmu_iotlb_insert(cfg->asid, cfg->s2cfg.vmid, new->entry.iova,
tg, new->level);
g_hash_table_insert(bs->iotlb, key, new);
@@ -313,13 +314,15 @@ void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev)
}
void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
- uint8_t tg, uint64_t num_pages, uint8_t ttl)
+ uint8_t tg, uint64_t num_pages, uint8_t ttl,
+ SMMUSecurityIndex sec_idx)
{
/* if tg is not set we use 4KB range invalidation */
uint8_t granule = tg ? tg * 2 + 10 : 12;
if (ttl && (num_pages == 1) && (asid >= 0)) {
- SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, iova, tg, ttl);
+ SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, iova,
+ tg, ttl, sec_idx);
if (g_hash_table_remove(s->iotlb, &key)) {
return;
@@ -345,13 +348,15 @@ void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
* in Stage-1 invalidation ASID = -1, means don't care.
*/
void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
- uint64_t num_pages, uint8_t ttl)
+ uint64_t num_pages, uint8_t ttl,
+ SMMUSecurityIndex sec_idx)
{
uint8_t granule = tg ? tg * 2 + 10 : 12;
int asid = -1;
if (ttl && (num_pages == 1)) {
- SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, ipa, tg, ttl);
+ SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, ipa,
+ tg, ttl, sec_idx);
if (g_hash_table_remove(s->iotlb, &key)) {
return;
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 2f8494c346..3835b9e79f 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -953,6 +953,21 @@ static void smmuv3_flush_config(SMMUDevice *sdev)
smmu_configs_inv_sdev(bc, sdev);
}
+static void smmuv3_invalidate_all_caches(SMMUv3State *s)
+{
+ trace_smmuv3_invalidate_all_caches();
+ SMMUState *bs = &s->smmu_state;
+
+ /* Clear all cached configs including STE and CD */
+ if (bs->configs) {
+ g_hash_table_remove_all(bs->configs);
+ }
+
+ /* Invalidate all SMMU IOTLB entries */
+ smmu_inv_notifiers_all(&s->smmu_state);
+ smmu_iotlb_inv_all(bs);
+}
+
/* Do translation with TLB lookup. */
static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
SMMUTransCfg *cfg,
@@ -1181,16 +1196,18 @@ epilogue:
* @tg: translation granule (if communicated through range invalidation)
* @num_pages: number of @granule sized pages (if tg != 0), otherwise 1
* @stage: Which stage(1 or 2) is used
+ * @sec_idx: security index
*/
static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
IOMMUNotifier *n,
int asid, int vmid,
dma_addr_t iova, uint8_t tg,
- uint64_t num_pages, int stage)
+ uint64_t num_pages, int stage,
+ SMMUSecurityIndex sec_idx)
{
SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
SMMUEventInfo eventinfo = {.inval_ste_allowed = true};
- SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo, SMMU_SEC_IDX_NS);
+ SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo, sec_idx);
IOMMUTLBEvent event;
uint8_t granule;
@@ -1235,7 +1252,7 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
}
event.type = IOMMU_NOTIFIER_UNMAP;
- event.entry.target_as = &address_space_memory;
+ event.entry.target_as = smmu_get_address_space(sec_idx);
event.entry.iova = iova;
event.entry.addr_mask = num_pages * (1 << granule) - 1;
event.entry.perm = IOMMU_NONE;
@@ -1246,7 +1263,8 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
/* invalidate an asid/vmid/iova range tuple in all mr's */
static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, int vmid,
dma_addr_t iova, uint8_t tg,
- uint64_t num_pages, int stage)
+ uint64_t num_pages, int stage,
+ SMMUSecurityIndex sec_idx)
{
SMMUDevice *sdev;
@@ -1258,12 +1276,14 @@ static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, int vmid,
iova, tg, num_pages, stage);
IOMMU_NOTIFIER_FOREACH(n, mr) {
- smmuv3_notify_iova(mr, n, asid, vmid, iova, tg, num_pages, stage);
+ smmuv3_notify_iova(mr, n, asid, vmid, iova, tg,
+ num_pages, stage, sec_idx);
}
}
}
-static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
+static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage,
+ SMMUSecurityIndex sec_idx)
{
dma_addr_t end, addr = CMD_ADDR(cmd);
uint8_t type = CMD_TYPE(cmd);
@@ -1289,11 +1309,11 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
if (!tg) {
trace_smmuv3_range_inval(vmid, asid, addr, tg, 1, ttl, leaf, stage);
- smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, 1, stage);
+ smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, 1, stage, sec_idx);
if (stage == SMMU_STAGE_1) {
- smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, 1, ttl);
+ smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, 1, ttl, sec_idx);
} else {
- smmu_iotlb_inv_ipa(s, vmid, addr, tg, 1, ttl);
+ smmu_iotlb_inv_ipa(s, vmid, addr, tg, 1, ttl, sec_idx);
}
return;
}
@@ -1312,11 +1332,13 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
num_pages = (mask + 1) >> granule;
trace_smmuv3_range_inval(vmid, asid, addr, tg, num_pages,
ttl, leaf, stage);
- smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, num_pages, stage);
+ smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg,
+ num_pages, stage, sec_idx);
if (stage == SMMU_STAGE_1) {
- smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, num_pages, ttl);
+ smmu_iotlb_inv_iova(s, asid, vmid, addr, tg,
+ num_pages, ttl, sec_idx);
} else {
- smmu_iotlb_inv_ipa(s, vmid, addr, tg, num_pages, ttl);
+ smmu_iotlb_inv_ipa(s, vmid, addr, tg, num_pages, ttl, sec_idx);
}
addr += mask + 1;
}
@@ -1514,7 +1536,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
cmd_error = SMMU_CERROR_ILL;
break;
}
- smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1);
+ smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1, sec_idx);
break;
case SMMU_CMD_TLBI_S12_VMALL:
{
@@ -1539,7 +1561,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
* As currently only either s1 or s2 are supported
* we can reuse same function for s2.
*/
- smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2);
+ smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2, sec_idx);
break;
case SMMU_CMD_TLBI_EL3_ALL:
case SMMU_CMD_TLBI_EL3_VA:
@@ -1769,6 +1791,21 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
case A_EVENTQ_IRQ_CFG2:
s->bank[reg_sec_idx].eventq_irq_cfg2 = data;
return MEMTX_OK;
+ case A_S_INIT_ALIAS:
+ if (data & R_S_INIT_INV_ALL_MASK) {
+ int cr0_smmuen = smmuv3_get_cr0_smmuen(s, SMMU_SEC_IDX_NS);
+ int s_cr0_smmuen = smmuv3_get_cr0_smmuen(s, SMMU_SEC_IDX_S);
+ if (cr0_smmuen || s_cr0_smmuen) {
+ /* CONSTRAINED UNPREDICTABLE behavior: Ignore this write */
+ qemu_log_mask(LOG_GUEST_ERROR, "S_INIT write ignored: "
+ "CR0.SMMUEN=%d or S_CR0.SMMUEN=%d is set\n",
+ cr0_smmuen, s_cr0_smmuen);
+ return MEMTX_OK;
+ }
+ smmuv3_invalidate_all_caches(s);
+ }
+ /* Synchronous emulation: invalidation completed instantly. */
+ return MEMTX_OK;
default:
qemu_log_mask(LOG_UNIMP,
"%s Unexpected 32-bit access to 0x%"PRIx64" (WI)\n",
@@ -1925,6 +1962,8 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
case A_EVENTQ_CONS:
*data = s->bank[reg_sec_idx].eventq.cons;
return MEMTX_OK;
+ case A_S_INIT_ALIAS:
+ *data = 0;
return MEMTX_OK;
default:
*data = 0;
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index f99de78655..64e8659ae9 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -64,6 +64,7 @@ smmuv3_cmdq_tlbi_s12_vmid(int vmid) "vmid=%d"
smmuv3_notify_flag_add(const char *iommu) "ADD SMMUNotifier node for iommu mr=%s"
smmuv3_notify_flag_del(const char *iommu) "DEL SMMUNotifier node for iommu mr=%s"
smmuv3_inv_notifiers_iova(const char *name, int asid, int vmid, uint64_t iova, uint8_t tg, uint64_t num_pages, int stage) "iommu mr=%s asid=%d vmid=%d iova=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" stage=%d"
+smmuv3_invalidate_all_caches(void) "Invalidate all SMMU caches and TLBs"
smmu_reset_exit(void) ""
# strongarm.c
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index c27aec8bd4..b566f11b47 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -184,6 +184,7 @@ typedef struct SMMUIOTLBKey {
int vmid;
uint8_t tg;
uint8_t level;
+ SMMUSecurityIndex sec_idx;
} SMMUIOTLBKey;
typedef struct SMMUConfigKey {
@@ -265,16 +266,19 @@ SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
SMMUTransTableInfo *tt, hwaddr iova);
void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *entry);
SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
- uint8_t tg, uint8_t level);
+ uint8_t tg, uint8_t level,
+ SMMUSecurityIndex sec_idx);
SMMUConfigKey smmu_get_config_key(SMMUDevice *sdev, SMMUSecurityIndex sec_idx);
void smmu_iotlb_inv_all(SMMUState *s);
void smmu_iotlb_inv_asid_vmid(SMMUState *s, int asid, int vmid);
void smmu_iotlb_inv_vmid(SMMUState *s, int vmid);
void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid);
void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
- uint8_t tg, uint64_t num_pages, uint8_t ttl);
+ uint8_t tg, uint64_t num_pages, uint8_t ttl,
+ SMMUSecurityIndex sec_idx);
void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
- uint64_t num_pages, uint8_t ttl);
+ uint64_t num_pages, uint8_t ttl,
+ SMMUSecurityIndex sec_idx);
void smmu_configs_inv_sid_range(SMMUState *s, SMMUSIDRange sid_range);
/* Invalidate all cached configs for a given device across all security idx */
void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev);
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread* Re: [PATCH v2 09/14] hw/arm/smmuv3: Add secure TLB entry management
2025-09-25 16:26 ` [PATCH v2 09/14] hw/arm/smmuv3: Add secure TLB entry management Tao Tang
@ 2025-09-29 14:57 ` Eric Auger
2025-09-29 15:29 ` Tao Tang
0 siblings, 1 reply; 48+ messages in thread
From: Eric Auger @ 2025-09-29 14:57 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
Hi Tao,
On 9/25/25 6:26 PM, Tao Tang wrote:
> To prevent aliasing between secure and non-secure translations for the
> same address space, the IOTLB lookup key must incorporate the security
> state of the transaction. A secure parameter has been added to
> smmu_get_iotlb_key, ensuring that secure and non-secure TLB entries are
> treated as distinct entities within the cache.
In the same spirit as my previous comments I would encourage you to split
this into 2 patches:
1) add the sec_sid in the key. This should cause no regression with the
current NS code base.
2) introduce a separate patch for the new smmuv3_invalidate_all_caches
and A_S_INIT_ALIAS support.
Those can be easily separated and you can refocus the associated commit
messages.
I also see you adapted some range invalidations for STAGE2 with sec_idx.
Since the secure path is not implemented for S2 yet, you may abort if
this is attempted at some point.
Thanks
Eric
>
> Building on this, this commit also implements the SMMU_S_INIT register.
> This secure-only register provides a mechanism for software to perform a
> global invalidation of all cached translations within the IOTLB. This is
> a critical feature for secure hypervisors like Hafnium, which use it as
> the final step of their SMMU initialization sequence to ensure a clean
> TLB state.
>
> Together, these changes provide robust management for secure TLB entries,
> preventing TLB pollution between security worlds and allowing for proper
> initialization by secure software.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmu-common.c | 25 ++++++++------
> hw/arm/smmuv3.c | 67 ++++++++++++++++++++++++++++--------
> hw/arm/trace-events | 1 +
> include/hw/arm/smmu-common.h | 10 ++++--
> 4 files changed, 76 insertions(+), 27 deletions(-)
>
> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
> index f563cba023..fdabcc4542 100644
> --- a/hw/arm/smmu-common.c
> +++ b/hw/arm/smmu-common.c
> @@ -84,7 +84,7 @@ static guint smmu_iotlb_key_hash(gconstpointer v)
>
> /* Jenkins hash */
> a = b = c = JHASH_INITVAL + sizeof(*key);
> - a += key->asid + key->vmid + key->level + key->tg;
> + a += key->asid + key->vmid + key->level + key->tg + key->sec_idx;
> b += extract64(key->iova, 0, 32);
> c += extract64(key->iova, 32, 32);
>
> @@ -100,14 +100,15 @@ static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
>
> return (k1->asid == k2->asid) && (k1->iova == k2->iova) &&
> (k1->level == k2->level) && (k1->tg == k2->tg) &&
> - (k1->vmid == k2->vmid);
> + (k1->vmid == k2->vmid) && (k1->sec_idx == k2->sec_idx);
> }
>
> SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
> - uint8_t tg, uint8_t level)
> + uint8_t tg, uint8_t level,
> + SMMUSecurityIndex sec_idx)
> {
> SMMUIOTLBKey key = {.asid = asid, .vmid = vmid, .iova = iova,
> - .tg = tg, .level = level};
> + .tg = tg, .level = level, .sec_idx = sec_idx};
>
> return key;
> }
> @@ -129,7 +130,7 @@ static SMMUTLBEntry *smmu_iotlb_lookup_all_levels(SMMUState *bs,
> SMMUIOTLBKey key;
>
> key = smmu_get_iotlb_key(cfg->asid, cfg->s2cfg.vmid,
> - iova & ~mask, tg, level);
> + iova & ~mask, tg, level, cfg->sec_idx);
> entry = g_hash_table_lookup(bs->iotlb, &key);
> if (entry) {
> break;
> @@ -193,7 +194,7 @@ void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *new)
> }
>
> *key = smmu_get_iotlb_key(cfg->asid, cfg->s2cfg.vmid, new->entry.iova,
> - tg, new->level);
> + tg, new->level, cfg->sec_idx);
> trace_smmu_iotlb_insert(cfg->asid, cfg->s2cfg.vmid, new->entry.iova,
> tg, new->level);
> g_hash_table_insert(bs->iotlb, key, new);
> @@ -313,13 +314,15 @@ void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev)
> }
>
> void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
> - uint8_t tg, uint64_t num_pages, uint8_t ttl)
> + uint8_t tg, uint64_t num_pages, uint8_t ttl,
> + SMMUSecurityIndex sec_idx)
> {
> /* if tg is not set we use 4KB range invalidation */
> uint8_t granule = tg ? tg * 2 + 10 : 12;
>
> if (ttl && (num_pages == 1) && (asid >= 0)) {
> - SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, iova, tg, ttl);
> + SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, iova,
> + tg, ttl, sec_idx);
>
> if (g_hash_table_remove(s->iotlb, &key)) {
> return;
> @@ -345,13 +348,15 @@ void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
> * in Stage-1 invalidation ASID = -1, means don't care.
> */
> void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
> - uint64_t num_pages, uint8_t ttl)
> + uint64_t num_pages, uint8_t ttl,
> + SMMUSecurityIndex sec_idx)
> {
> uint8_t granule = tg ? tg * 2 + 10 : 12;
> int asid = -1;
>
> if (ttl && (num_pages == 1)) {
> - SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, ipa, tg, ttl);
> + SMMUIOTLBKey key = smmu_get_iotlb_key(asid, vmid, ipa,
> + tg, ttl, sec_idx);
>
> if (g_hash_table_remove(s->iotlb, &key)) {
> return;
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 2f8494c346..3835b9e79f 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -953,6 +953,21 @@ static void smmuv3_flush_config(SMMUDevice *sdev)
> smmu_configs_inv_sdev(bc, sdev);
> }
>
> +static void smmuv3_invalidate_all_caches(SMMUv3State *s)
> +{
> + trace_smmuv3_invalidate_all_caches();
> + SMMUState *bs = &s->smmu_state;
> +
> + /* Clear all cached configs including STE and CD */
> + if (bs->configs) {
> + g_hash_table_remove_all(bs->configs);
> + }
> +
> + /* Invalidate all SMMU IOTLB entries */
> + smmu_inv_notifiers_all(&s->smmu_state);
> + smmu_iotlb_inv_all(bs);
> +}
> +
> /* Do translation with TLB lookup. */
> static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
> SMMUTransCfg *cfg,
> @@ -1181,16 +1196,18 @@ epilogue:
> * @tg: translation granule (if communicated through range invalidation)
> * @num_pages: number of @granule sized pages (if tg != 0), otherwise 1
> * @stage: Which stage(1 or 2) is used
> + * @sec_idx: security index
> */
> static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
> IOMMUNotifier *n,
> int asid, int vmid,
> dma_addr_t iova, uint8_t tg,
> - uint64_t num_pages, int stage)
> + uint64_t num_pages, int stage,
> + SMMUSecurityIndex sec_idx)
> {
> SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
> SMMUEventInfo eventinfo = {.inval_ste_allowed = true};
> - SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo, SMMU_SEC_IDX_NS);
> + SMMUTransCfg *cfg = smmuv3_get_config(sdev, &eventinfo, sec_idx);
> IOMMUTLBEvent event;
> uint8_t granule;
>
> @@ -1235,7 +1252,7 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
> }
>
> event.type = IOMMU_NOTIFIER_UNMAP;
> - event.entry.target_as = &address_space_memory;
> + event.entry.target_as = smmu_get_address_space(sec_idx);
> event.entry.iova = iova;
> event.entry.addr_mask = num_pages * (1 << granule) - 1;
> event.entry.perm = IOMMU_NONE;
> @@ -1246,7 +1263,8 @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
> /* invalidate an asid/vmid/iova range tuple in all mr's */
> static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, int vmid,
> dma_addr_t iova, uint8_t tg,
> - uint64_t num_pages, int stage)
> + uint64_t num_pages, int stage,
> + SMMUSecurityIndex sec_idx)
> {
> SMMUDevice *sdev;
>
> @@ -1258,12 +1276,14 @@ static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, int vmid,
> iova, tg, num_pages, stage);
>
> IOMMU_NOTIFIER_FOREACH(n, mr) {
> - smmuv3_notify_iova(mr, n, asid, vmid, iova, tg, num_pages, stage);
> + smmuv3_notify_iova(mr, n, asid, vmid, iova, tg,
> + num_pages, stage, sec_idx);
> }
> }
> }
>
> -static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
> +static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage,
> + SMMUSecurityIndex sec_idx)
> {
> dma_addr_t end, addr = CMD_ADDR(cmd);
> uint8_t type = CMD_TYPE(cmd);
> @@ -1289,11 +1309,11 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
>
> if (!tg) {
> trace_smmuv3_range_inval(vmid, asid, addr, tg, 1, ttl, leaf, stage);
> - smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, 1, stage);
> + smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, 1, stage, sec_idx);
> if (stage == SMMU_STAGE_1) {
> - smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, 1, ttl);
> + smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, 1, ttl, sec_idx);
> } else {
> - smmu_iotlb_inv_ipa(s, vmid, addr, tg, 1, ttl);
> + smmu_iotlb_inv_ipa(s, vmid, addr, tg, 1, ttl, sec_idx);
> }
> return;
> }
> @@ -1312,11 +1332,13 @@ static void smmuv3_range_inval(SMMUState *s, Cmd *cmd, SMMUStage stage)
> num_pages = (mask + 1) >> granule;
> trace_smmuv3_range_inval(vmid, asid, addr, tg, num_pages,
> ttl, leaf, stage);
> - smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg, num_pages, stage);
> + smmuv3_inv_notifiers_iova(s, asid, vmid, addr, tg,
> + num_pages, stage, sec_idx);
> if (stage == SMMU_STAGE_1) {
> - smmu_iotlb_inv_iova(s, asid, vmid, addr, tg, num_pages, ttl);
> + smmu_iotlb_inv_iova(s, asid, vmid, addr, tg,
> + num_pages, ttl, sec_idx);
> } else {
> - smmu_iotlb_inv_ipa(s, vmid, addr, tg, num_pages, ttl);
> + smmu_iotlb_inv_ipa(s, vmid, addr, tg, num_pages, ttl, sec_idx);
> }
> addr += mask + 1;
> }
> @@ -1514,7 +1536,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> cmd_error = SMMU_CERROR_ILL;
> break;
> }
> - smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1);
> + smmuv3_range_inval(bs, &cmd, SMMU_STAGE_1, sec_idx);
> break;
> case SMMU_CMD_TLBI_S12_VMALL:
> {
> @@ -1539,7 +1561,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> * As currently only either s1 or s2 are supported
> * we can reuse same function for s2.
> */
> - smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2);
> + smmuv3_range_inval(bs, &cmd, SMMU_STAGE_2, sec_idx);
> break;
> case SMMU_CMD_TLBI_EL3_ALL:
> case SMMU_CMD_TLBI_EL3_VA:
> @@ -1769,6 +1791,21 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> case A_EVENTQ_IRQ_CFG2:
> s->bank[reg_sec_idx].eventq_irq_cfg2 = data;
> return MEMTX_OK;
> + case A_S_INIT_ALIAS:
> + if (data & R_S_INIT_INV_ALL_MASK) {
> + int cr0_smmuen = smmuv3_get_cr0_smmuen(s, SMMU_SEC_IDX_NS);
> + int s_cr0_smmuen = smmuv3_get_cr0_smmuen(s, SMMU_SEC_IDX_S);
> + if (cr0_smmuen || s_cr0_smmuen) {
> + /* CONSTRAINED UNPREDICTABLE behavior: Ignore this write */
> + qemu_log_mask(LOG_GUEST_ERROR, "S_INIT write ignored: "
> + "CR0.SMMUEN=%d or S_CR0.SMMUEN=%d is set\n",
> + cr0_smmuen, s_cr0_smmuen);
> + return MEMTX_OK;
> + }
> + smmuv3_invalidate_all_caches(s);
> + }
> + /* Synchronous emulation: invalidation completed instantly. */
> + return MEMTX_OK;
> default:
> qemu_log_mask(LOG_UNIMP,
> "%s Unexpected 32-bit access to 0x%"PRIx64" (WI)\n",
> @@ -1925,6 +1962,8 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
> case A_EVENTQ_CONS:
> *data = s->bank[reg_sec_idx].eventq.cons;
> return MEMTX_OK;
> + case A_S_INIT_ALIAS:
> + *data = 0;
> return MEMTX_OK;
> default:
> *data = 0;
> diff --git a/hw/arm/trace-events b/hw/arm/trace-events
> index f99de78655..64e8659ae9 100644
> --- a/hw/arm/trace-events
> +++ b/hw/arm/trace-events
> @@ -64,6 +64,7 @@ smmuv3_cmdq_tlbi_s12_vmid(int vmid) "vmid=%d"
> smmuv3_notify_flag_add(const char *iommu) "ADD SMMUNotifier node for iommu mr=%s"
> smmuv3_notify_flag_del(const char *iommu) "DEL SMMUNotifier node for iommu mr=%s"
> smmuv3_inv_notifiers_iova(const char *name, int asid, int vmid, uint64_t iova, uint8_t tg, uint64_t num_pages, int stage) "iommu mr=%s asid=%d vmid=%d iova=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" stage=%d"
> +smmuv3_invalidate_all_caches(void) "Invalidate all SMMU caches and TLBs"
> smmu_reset_exit(void) ""
>
> # strongarm.c
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index c27aec8bd4..b566f11b47 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -184,6 +184,7 @@ typedef struct SMMUIOTLBKey {
> int vmid;
> uint8_t tg;
> uint8_t level;
> + SMMUSecurityIndex sec_idx;
> } SMMUIOTLBKey;
>
> typedef struct SMMUConfigKey {
> @@ -265,16 +266,19 @@ SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
> SMMUTransTableInfo *tt, hwaddr iova);
> void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *entry);
> SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
> - uint8_t tg, uint8_t level);
> + uint8_t tg, uint8_t level,
> + SMMUSecurityIndex sec_idx);
> SMMUConfigKey smmu_get_config_key(SMMUDevice *sdev, SMMUSecurityIndex sec_idx);
> void smmu_iotlb_inv_all(SMMUState *s);
> void smmu_iotlb_inv_asid_vmid(SMMUState *s, int asid, int vmid);
> void smmu_iotlb_inv_vmid(SMMUState *s, int vmid);
> void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid);
> void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
> - uint8_t tg, uint64_t num_pages, uint8_t ttl);
> + uint8_t tg, uint64_t num_pages, uint8_t ttl,
> + SMMUSecurityIndex sec_idx);
> void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
> - uint64_t num_pages, uint8_t ttl);
> + uint64_t num_pages, uint8_t ttl,
> + SMMUSecurityIndex sec_idx);
> void smmu_configs_inv_sid_range(SMMUState *s, SMMUSIDRange sid_range);
> /* Invalidate all cached configs for a given device across all security idx */
> void smmu_configs_inv_sdev(SMMUState *s, SMMUDevice *sdev);
^ permalink raw reply [flat|nested] 48+ messages in thread* Re: [PATCH v2 09/14] hw/arm/smmuv3: Add secure TLB entry management
2025-09-29 14:57 ` Eric Auger
@ 2025-09-29 15:29 ` Tao Tang
0 siblings, 0 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-29 15:29 UTC (permalink / raw)
To: Peter Maydell, Eric Auger
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 2025/9/29 22:57, Eric Auger wrote:
> Hi Tao,
>
> On 9/25/25 6:26 PM, Tao Tang wrote:
>> To prevent aliasing between secure and non-secure translations for the
>> same address space, the IOTLB lookup key must incorporate the security
>> state of the transaction. A secure parameter has been added to
>> smmu_get_iotlb_key, ensuring that secure and non-secure TLB entries are
>> treated as distinct entities within the cache.
> In the same spirit as my previous comments I would encourage you to split
> this into 2 patches:
> 1) add the sec_sid in the key. This should cause no regression with the
> current NS code base.
> 2) introduce a separate patch for the new smmuv3_invalidate_all_caches
> and A_S_INIT_ALIAS support.
> Those can be easily separated and you can refocus the associated commit
> messages.
>
> I also see you adapted some range invalidations for STAGE2 with sec_idx.
> Since the secure path is not implemented for S2 yet, you may abort if
> this is attempted at some point.
>
> Thanks
>
> Eric
Hi Eric,
Thank you for the review and for the excellent suggestion on how to
split this patch.
Following the same spirit as your previous reviews, this makes perfect
sense. Separating the IOTLB key modification from the new S_INIT
functionality will indeed make the changes much cleaner and easier to
verify. I will restructure this for the v3 series into the two patches
you proposed.
You also made a very important point about the unimplemented Secure
Stage 2 path. Silently letting it fall through is a risk. Furthermore, I
will take this opportunity to proactively audit the code and check if
there are any other areas where the yet-to-be-implemented Secure Stage 2
logic might be incorrectly invoked, and I will add similar assertions
there as well.
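Concretely, I am considering a guard at the top of the Secure Stage 2
paths along these lines (a sketch; exact placement to be decided):

    /* Secure Stage 2 is not implemented yet (SMMU_S_IDR1.SEL2 == 0) */
    if (cfg->sec_idx != SMMU_SEC_IDX_NS) {
        info->type = SMMU_PTW_ERR_TRANSLATION;
        info->stage = SMMU_STAGE_2;
        tlbe->entry.perm = IOMMU_NONE;
        return -EINVAL;
    }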
Thanks,
Tao
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v2 10/14] hw/arm/smmuv3: Add banked support for queues and error handling
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (8 preceding siblings ...)
2025-09-25 16:26 ` [PATCH v2 09/14] hw/arm/smmuv3: Add secure TLB entry management Tao Tang
@ 2025-09-25 16:26 ` Tao Tang
2025-09-29 15:07 ` Eric Auger
2025-09-29 15:09 ` Eric Auger
2025-09-25 16:26 ` [PATCH v2 11/14] hw/arm/smmuv3: Harden security checks in MMIO handlers Tao Tang
` (5 subsequent siblings)
15 siblings, 2 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-25 16:26 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa, Tao Tang
This commit extends the banked register model to the Command Queue
(CMDQ), Event Queue (EVTQ), and global error handling logic.
By leveraging the banked structure, the SMMUv3 model now supports
separate, parallel queues and error status registers for the Secure and
Non-secure states. This is essential for correctly modeling the
parallel programming interfaces defined by the SMMUv3 architecture.
Additionally, this patch hardens the MMIO implementation by adding
several checks that were incomplete in the previous version.
For instance, writes to the Command Queue registers are now correctly
gated by the IDR1.QUEUES_PRESET bit, ensuring compliance with the
architectural rules for pre-configured queues.
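The writability rule for the queue base registers then boils down to
(pseudo-code summary of the checks added below):

    writable(CMDQ_BASE)   = !IDR1.QUEUES_PRESET &&
                            CR0.CMDQEN == 0 && CR0ACK.CMDQEN == 0
    writable(EVENTQ_BASE) = !IDR1.QUEUES_PRESET &&
                            CR0.EVENTQEN == 0 && CR0ACK.EVENTQEN == 0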
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3-internal.h | 2 +
hw/arm/smmuv3.c | 374 +++++++++++++++++++++++++++++++++------
2 files changed, 323 insertions(+), 53 deletions(-)
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index af2936cf16..6bff504219 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -517,6 +517,8 @@ typedef struct SMMUEventInfo {
uint32_t sid;
bool recorded;
bool inval_ste_allowed;
+ AddressSpace *as;
+ MemTxAttrs txattrs;
SMMUSecurityIndex sec_idx;
union {
struct {
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 3835b9e79f..53c7eff0e3 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -106,14 +106,15 @@ static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn,
trace_smmuv3_write_gerrorn(toggled & pending, s->bank[sec_idx].gerrorn);
}
-static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
+static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd,
+ SMMUSecurityIndex sec_idx)
{
dma_addr_t addr = Q_CONS_ENTRY(q);
MemTxResult ret;
int i;
- ret = dma_memory_read(&address_space_memory, addr, cmd, sizeof(Cmd),
- MEMTXATTRS_UNSPECIFIED);
+ ret = dma_memory_read(smmu_get_address_space(sec_idx), addr, cmd,
+ sizeof(Cmd), smmu_get_txattrs(sec_idx));
if (ret != MEMTX_OK) {
return ret;
}
@@ -123,7 +124,8 @@ static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
return ret;
}
-static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
+static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in,
+ SMMUSecurityIndex sec_idx)
{
dma_addr_t addr = Q_PROD_ENTRY(q);
MemTxResult ret;
@@ -133,8 +135,8 @@ static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
for (i = 0; i < ARRAY_SIZE(evt.word); i++) {
cpu_to_le32s(&evt.word[i]);
}
- ret = dma_memory_write(&address_space_memory, addr, &evt, sizeof(Evt),
- MEMTXATTRS_UNSPECIFIED);
+ ret = dma_memory_write(smmu_get_address_space(sec_idx), addr, &evt,
+ sizeof(Evt), smmu_get_txattrs(sec_idx));
if (ret != MEMTX_OK) {
return ret;
}
@@ -157,7 +159,7 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt,
return MEMTX_ERROR;
}
- r = queue_write(q, evt);
+ r = queue_write(q, evt, sec_idx);
if (r != MEMTX_OK) {
return r;
}
@@ -1025,6 +1027,8 @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
*/
class = ptw_info.is_ipa_descriptor ? SMMU_CLASS_TT : class;
event->sec_idx = cfg->sec_idx;
+ event->txattrs = cfg->txattrs;
+ event->sec_idx = cfg->sec_idx;
switch (ptw_info.type) {
case SMMU_PTW_ERR_WALK_EABT:
event->type = SMMU_EVT_F_WALK_EABT;
@@ -1376,6 +1380,110 @@ static bool smmu_strtab_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
return smmuv3_is_smmu_enabled(s, sec_idx);
}
+static inline int smmuv3_get_cr0_cmdqen(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
+{
+ return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, CMDQEN);
+}
+
+static inline int smmuv3_get_cr0ack_cmdqen(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
+{
+ return FIELD_EX32(s->bank[sec_idx].cr0ack, CR0, CMDQEN);
+}
+
+static inline int smmuv3_get_cr0_eventqen(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
+{
+ return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, EVENTQEN);
+}
+
+static inline int smmuv3_get_cr0ack_eventqen(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
+{
+ return FIELD_EX32(s->bank[sec_idx].cr0ack, CR0, EVENTQEN);
+}
+
+/* Check if MSI is supported */
+static inline bool smmu_msi_supported(SMMUv3State *s, SMMUSecurityIndex sec_idx)
+{
+ return FIELD_EX32(s->bank[sec_idx].idr[0], IDR0, MSI);
+}
+
+/* Check if secure GERROR_IRQ_CFGx registers are writable */
+static inline bool smmu_gerror_irq_cfg_writable(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
+{
+ if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
+ return false;
+ }
+
+ bool ctrl_en = FIELD_EX32(s->bank[sec_idx].irq_ctrl,
+ IRQ_CTRL, GERROR_IRQEN);
+
+ return !ctrl_en;
+}
+
+/* Check if CMDQEN is disabled */
+static bool smmu_cmdqen_disabled(SMMUv3State *s, SMMUSecurityIndex sec_idx)
+{
+ int cr0_cmdqen = smmuv3_get_cr0_cmdqen(s, sec_idx);
+ int cr0ack_cmdqen = smmuv3_get_cr0ack_cmdqen(s, sec_idx);
+ return (cr0_cmdqen == 0 && cr0ack_cmdqen == 0);
+}
+
+/* Check if CMDQ_BASE register is writable */
+static bool smmu_cmdq_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
+{
+ /* Check TABLES_PRESET - use NS bank as it's the global setting */
+ if (FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, QUEUES_PRESET)) {
+ return false;
+ }
+
+ return smmu_cmdqen_disabled(s, sec_idx);
+}
+
+/* Check if EVENTQEN is disabled */
+static bool smmu_eventqen_disabled(SMMUv3State *s, SMMUSecurityIndex sec_idx)
+{
+ int cr0_eventqen = smmuv3_get_cr0_eventqen(s, sec_idx);
+ int cr0ack_eventqen = smmuv3_get_cr0ack_eventqen(s, sec_idx);
+ return (cr0_eventqen == 0 && cr0ack_eventqen == 0);
+}
+
+static bool smmu_idr1_queue_preset(SMMUv3State *s, SMMUSecurityIndex sec_idx)
+{
+ return FIELD_EX32(s->bank[sec_idx].idr[1], IDR1, QUEUES_PRESET);
+}
+
+/* Check if EVENTQ_BASE register is writable */
+static bool smmu_eventq_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
+{
+ /* Check TABLES_PRESET - use NS bank as it's the global setting */
+ if (smmu_idr1_queue_preset(s, SMMU_SEC_IDX_NS)) {
+ return false;
+ }
+
+ return smmu_eventqen_disabled(s, sec_idx);
+}
+
+static bool smmu_irq_ctl_evtq_irqen_disabled(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
+{
+ return FIELD_EX32(s->bank[sec_idx].irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
+}
+
+/* Check if EVENTQ_IRQ_CFGx is writable */
+static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s,
+ SMMUSecurityIndex sec_idx)
+{
+ if (smmu_msi_supported(s, sec_idx)) {
+ return false;
+ }
+
+ return smmu_irq_ctl_evtq_irqen_disabled(s, sec_idx);
+}
+
static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
{
SMMUState *bs = ARM_SMMU(s);
@@ -1405,7 +1513,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
break;
}
- if (queue_read(q, &cmd) != MEMTX_OK) {
+ if (queue_read(q, &cmd, sec_idx) != MEMTX_OK) {
cmd_error = SMMU_CERROR_ABT;
break;
}
@@ -1430,8 +1538,11 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
SMMUDevice *sdev = smmu_find_sdev(bs, sid);
if (CMD_SSEC(&cmd)) {
- cmd_error = SMMU_CERROR_ILL;
- break;
+ if (sec_idx != SMMU_SEC_IDX_S) {
+ /* Secure Stream with Non-Secure command */
+ cmd_error = SMMU_CERROR_ILL;
+ break;
+ }
}
if (!sdev) {
@@ -1450,8 +1561,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
SMMUSIDRange sid_range;
if (CMD_SSEC(&cmd)) {
- cmd_error = SMMU_CERROR_ILL;
- break;
+ if (sec_idx != SMMU_SEC_IDX_S) {
+ cmd_error = SMMU_CERROR_ILL;
+ break;
+ }
}
mask = (1ULL << (range + 1)) - 1;
@@ -1469,8 +1582,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
SMMUDevice *sdev = smmu_find_sdev(bs, sid);
if (CMD_SSEC(&cmd)) {
- cmd_error = SMMU_CERROR_ILL;
- break;
+ if (sec_idx != SMMU_SEC_IDX_S) {
+ cmd_error = SMMU_CERROR_ILL;
+ break;
+ }
}
if (!sdev) {
@@ -1624,6 +1739,13 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
s->bank[reg_sec_idx].strtab_base = data & SMMU_STRTAB_BASE_RESERVED;
return MEMTX_OK;
case A_CMDQ_BASE:
+ if (!smmu_cmdq_base_writable(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "CMDQ_BASE write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_QUEUE_BASE_RESERVED;
s->bank[reg_sec_idx].cmdq.base = data;
s->bank[reg_sec_idx].cmdq.log2size = extract64(
s->bank[reg_sec_idx].cmdq.base, 0, 5);
@@ -1632,6 +1754,13 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
}
return MEMTX_OK;
case A_EVENTQ_BASE:
+ if (!smmu_eventq_base_writable(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_BASE write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_QUEUE_BASE_RESERVED;
s->bank[reg_sec_idx].eventq.base = data;
s->bank[reg_sec_idx].eventq.log2size = extract64(
s->bank[reg_sec_idx].eventq.base, 0, 5);
@@ -1640,8 +1769,25 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
}
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG0:
+ if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_IRQ_CFG0 write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_EVENTQ_IRQ_CFG0_RESERVED;
s->bank[reg_sec_idx].eventq_irq_cfg0 = data;
return MEMTX_OK;
+ case A_GERROR_IRQ_CFG0:
+ if (!smmu_gerror_irq_cfg_writable(s, reg_sec_idx)) {
+ /* SMMU_(*_)_IRQ_CTRL.GERROR_IRQEN == 1: IGNORED this write */
+ qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 write ignored: "
+ "register is RO when IRQ enabled\n");
+ return MEMTX_OK;
+ }
+ s->bank[reg_sec_idx].gerror_irq_cfg0 =
+ data & SMMU_GERROR_IRQ_CFG0_RESERVED;
+ return MEMTX_OK;
default:
qemu_log_mask(LOG_UNIMP,
"%s Unexpected 64-bit access to 0x%"PRIx64" (WI)\n",
@@ -1666,7 +1812,15 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
s->bank[reg_sec_idx].cr[1] = data;
return MEMTX_OK;
case A_CR2:
- s->bank[reg_sec_idx].cr[2] = data;
+ if (smmuv3_is_smmu_enabled(s, reg_sec_idx)) {
+ /* Allow write: SMMUEN is 0 in both CR0 and CR0ACK */
+ s->bank[reg_sec_idx].cr[2] = data;
+ } else {
+ /* CONSTRAINED UNPREDICTABLE behavior: Ignore this write */
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "CR2 write ignored: register is read-only when "
+ "CR0.SMMUEN or CR0ACK.SMMUEN is set\n");
+ }
return MEMTX_OK;
case A_IRQ_CTRL:
s->bank[reg_sec_idx].irq_ctrl = data;
@@ -1680,14 +1834,28 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
smmuv3_cmdq_consume(s, reg_sec_idx);
return MEMTX_OK;
case A_GERROR_IRQ_CFG0: /* 64b */
- s->bank[reg_sec_idx].gerror_irq_cfg0 =
- deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32, data);
- return MEMTX_OK;
case A_GERROR_IRQ_CFG0 + 4:
- s->bank[reg_sec_idx].gerror_irq_cfg0 =
- deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32, data);
+ if (!smmu_gerror_irq_cfg_writable(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 write ignored: "
+ "register is RO when IRQ enabled\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_GERROR_IRQ_CFG0_RESERVED;
+ if (reg_offset == A_GERROR_IRQ_CFG0) {
+ s->bank[reg_sec_idx].gerror_irq_cfg0 = deposit64(
+ s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32, data);
+ } else {
+ s->bank[reg_sec_idx].gerror_irq_cfg0 = deposit64(
+ s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32, data);
+ }
return MEMTX_OK;
case A_GERROR_IRQ_CFG1:
+ if (!smmu_gerror_irq_cfg_writable(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG1 write ignored: "
+ "register is RO when IRQ enabled\n");
+ return MEMTX_OK;
+ }
s->bank[reg_sec_idx].gerror_irq_cfg1 = data;
return MEMTX_OK;
case A_GERROR_IRQ_CFG2:
@@ -1735,60 +1903,106 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
}
return MEMTX_OK;
case A_CMDQ_BASE: /* 64b */
- s->bank[reg_sec_idx].cmdq.base =
- deposit64(s->bank[reg_sec_idx].cmdq.base, 0, 32, data);
- s->bank[reg_sec_idx].cmdq.log2size =
- extract64(s->bank[reg_sec_idx].cmdq.base, 0, 5);
- if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
- s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
- }
- return MEMTX_OK;
case A_CMDQ_BASE + 4: /* 64b */
- s->bank[reg_sec_idx].cmdq.base =
- deposit64(s->bank[reg_sec_idx].cmdq.base, 32, 32, data);
- return MEMTX_OK;
+ if (!smmu_cmdq_base_writable(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "CMDQ_BASE write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_QUEUE_BASE_RESERVED;
+ if (reg_offset == A_CMDQ_BASE) {
+ s->bank[reg_sec_idx].cmdq.base = deposit64(
+ s->bank[reg_sec_idx].cmdq.base, 0, 32, data);
+
+ s->bank[reg_sec_idx].cmdq.log2size = extract64(
+ s->bank[reg_sec_idx].cmdq.base, 0, 5);
+ if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
+ s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
+ }
+ } else {
+ s->bank[reg_sec_idx].cmdq.base = deposit64(
+ s->bank[reg_sec_idx].cmdq.base, 32, 32, data);
+ }
+
return MEMTX_OK;
case A_CMDQ_PROD:
s->bank[reg_sec_idx].cmdq.prod = data;
smmuv3_cmdq_consume(s, reg_sec_idx);
return MEMTX_OK;
case A_CMDQ_CONS:
+ if (!smmu_cmdqen_disabled(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "CMDQ_CONS write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
s->bank[reg_sec_idx].cmdq.cons = data;
return MEMTX_OK;
case A_EVENTQ_BASE: /* 64b */
- s->bank[reg_sec_idx].eventq.base =
- deposit64(s->bank[reg_sec_idx].eventq.base, 0, 32, data);
- s->bank[reg_sec_idx].eventq.log2size =
- extract64(s->bank[reg_sec_idx].eventq.base, 0, 5);
- if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
- s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
- }
- s->bank[reg_sec_idx].eventq.cons = data;
- return MEMTX_OK;
case A_EVENTQ_BASE + 4:
- s->bank[reg_sec_idx].eventq.base =
- deposit64(s->bank[reg_sec_idx].eventq.base, 32, 32, data);
- return MEMTX_OK;
+ if (!smmu_eventq_base_writable(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_BASE write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_QUEUE_BASE_RESERVED;
+ if (reg_offset == A_EVENTQ_BASE) {
+ s->bank[reg_sec_idx].eventq.base = deposit64(
+ s->bank[reg_sec_idx].eventq.base, 0, 32, data);
+
+ s->bank[reg_sec_idx].eventq.log2size = extract64(
+ s->bank[reg_sec_idx].eventq.base, 0, 5);
+ if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
+ s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
+ }
+ } else {
+ s->bank[reg_sec_idx].eventq.base = deposit64(
+ s->bank[reg_sec_idx].eventq.base, 32, 32, data);
+ }
return MEMTX_OK;
case A_EVENTQ_PROD:
+ if (!smmu_eventqen_disabled(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_PROD write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
s->bank[reg_sec_idx].eventq.prod = data;
return MEMTX_OK;
case A_EVENTQ_CONS:
s->bank[reg_sec_idx].eventq.cons = data;
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG0: /* 64b */
- s->bank[reg_sec_idx].eventq_irq_cfg0 =
- deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32, data);
- return MEMTX_OK;
case A_EVENTQ_IRQ_CFG0 + 4:
- s->bank[reg_sec_idx].eventq_irq_cfg0 =
- deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32, data);
- return MEMTX_OK;
+ if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_IRQ_CFG0 write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
+
+ data &= SMMU_EVENTQ_IRQ_CFG0_RESERVED;
+ if (reg_offset == A_EVENTQ_IRQ_CFG0) {
+ s->bank[reg_sec_idx].eventq_irq_cfg0 = deposit64(
+ s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32, data);
+ } else {
+ s->bank[reg_sec_idx].eventq_irq_cfg0 = deposit64(
+ s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32, data);
+ }
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG1:
+ if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_IRQ_CFG1 write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
s->bank[reg_sec_idx].eventq_irq_cfg1 = data;
return MEMTX_OK;
case A_EVENTQ_IRQ_CFG2:
+ if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "EVENTQ_IRQ_CFG2 write ignored: register is RO\n");
+ return MEMTX_OK;
+ }
s->bank[reg_sec_idx].eventq_irq_cfg2 = data;
return MEMTX_OK;
case A_S_INIT_ALIAS:
@@ -1848,6 +2062,11 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
uint32_t reg_offset = offset & 0xfff;
switch (reg_offset) {
case A_GERROR_IRQ_CFG0:
+ /* SMMU_GERROR_IRQ_CFG0 and SMMU_S_GERROR_IRQ_CFG0 both check SMMU_IDR0.MSI */
+ if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
+ *data = 0; /* RES0 */
+ return MEMTX_OK;
+ }
*data = s->bank[reg_sec_idx].gerror_irq_cfg0;
return MEMTX_OK;
case A_STRTAB_BASE:
@@ -1859,6 +2078,13 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
case A_EVENTQ_BASE:
*data = s->bank[reg_sec_idx].eventq.base;
return MEMTX_OK;
+ case A_EVENTQ_IRQ_CFG0:
+ /* MSI support depends on the register's security domain */
+ if (!smmu_msi_supported(s, reg_sec_idx)) {
+ *data = 0;
+ return MEMTX_OK;
+ }
+
*data = s->bank[reg_sec_idx].eventq_irq_cfg0;
return MEMTX_OK;
default:
@@ -1917,16 +2143,31 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
*data = s->bank[reg_sec_idx].gerrorn;
return MEMTX_OK;
case A_GERROR_IRQ_CFG0: /* 64b */
- *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32);
- return MEMTX_OK;
case A_GERROR_IRQ_CFG0 + 4:
- *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32);
- return MEMTX_OK;
+ /* SMMU_GERROR_IRQ_CFG0 and SMMU_S_GERROR_IRQ_CFG0 both check SMMU_IDR0.MSI */
+ if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
+ *data = 0; /* RES0 */
+ return MEMTX_OK;
+ }
+
+ if (reg_offset == A_GERROR_IRQ_CFG0) {
+ *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32);
+ } else {
+ *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32);
+ }
return MEMTX_OK;
case A_GERROR_IRQ_CFG1:
+ if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
+ *data = 0; /* RES0 */
+ return MEMTX_OK;
+ }
*data = s->bank[reg_sec_idx].gerror_irq_cfg1;
return MEMTX_OK;
case A_GERROR_IRQ_CFG2:
+ if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
+ *data = 0; /* RES0 */
+ return MEMTX_OK;
+ }
*data = s->bank[reg_sec_idx].gerror_irq_cfg2;
return MEMTX_OK;
case A_STRTAB_BASE: /* 64b */
@@ -1962,6 +2203,33 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
case A_EVENTQ_CONS:
*data = s->bank[reg_sec_idx].eventq.cons;
return MEMTX_OK;
+ case A_EVENTQ_IRQ_CFG0:
+ case A_EVENTQ_IRQ_CFG0 + 4:
+ if (!smmu_msi_supported(s, reg_sec_idx)) {
+ *data = 0;
+ return MEMTX_OK;
+ }
+
+ if (reg_offset == A_EVENTQ_IRQ_CFG0) {
+ *data = extract64(s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32);
+ } else {
+ *data = extract64(s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32);
+ }
+ return MEMTX_OK;
+ case A_EVENTQ_IRQ_CFG1:
+ if (!smmu_msi_supported(s, reg_sec_idx)) {
+ *data = 0;
+ return MEMTX_OK;
+ }
+ *data = s->bank[reg_sec_idx].eventq_irq_cfg1;
+ return MEMTX_OK;
+ case A_EVENTQ_IRQ_CFG2:
+ if (!smmu_msi_supported(s, reg_sec_idx)) {
+ *data = 0;
+ return MEMTX_OK;
+ }
+ *data = s->bank[reg_sec_idx].eventq_irq_cfg2;
+ return MEMTX_OK;
case A_S_INIT_ALIAS:
*data = 0;
return MEMTX_OK;
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* Re: [PATCH v2 10/14] hw/arm/smmuv3: Add banked support for queues and error handling
2025-09-25 16:26 ` [PATCH v2 10/14] hw/arm/smmuv3: Add banked support for queues and error handling Tao Tang
@ 2025-09-29 15:07 ` Eric Auger
2025-09-29 15:45 ` Tao Tang
2025-09-29 15:09 ` Eric Auger
1 sibling, 1 reply; 48+ messages in thread
From: Eric Auger @ 2025-09-29 15:07 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
On 9/25/25 6:26 PM, Tao Tang wrote:
> This commit extends the banked register model to the Command Queue
> (CMDQ), Event Queue (EVTQ), and global error handling logic.
>
> By leveraging the banked structure, the SMMUv3 model now supports
> separate, parallel queues and error status registers for the Secure and
> Non-secure states. This is essential for correctly modeling the
> parallel programming interfaces defined by the SMMUv3 architecture.
>
> Additionally, this patch hardens the MMIO implementation by adding
> several checks that were incomplete in the previous version.
> For instance, writes to the Command Queue registers are now correctly
> gated by the IDR1.QUEUES_PRESET bit, ensuring compliance with the
> architectural rules for pre-configured queues.
This must be split as well. This is really hard to review as is. Try to
isolate the different functional changes.
1) addition of sec_idx to queue_read/write_eventq + modification of
queue_write_read can be moved in a separate patch
2) smmu_cmdq_base_writable, smmu_eventq_irq_cfg_writable,
smmu_gerror_irq_cfg_writable can go in a separate patch
3) MMIO hardening can go in a separate patch too
../..
Thanks
Eric
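For context on the gating the commit message describes: architecturally,
SMMU_CMDQ_BASE only accepts writes while the command queue is disabled in
both CR0 and CR0ACK (and IDR1.QUEUES_PRESET is 0). The guest-side handshake
that the smmu_cmdqen_disabled()/smmu_cmdq_base_writable() helpers model looks
roughly like the minimal sketch below; smmu_read32()/smmu_write32() are
hypothetical MMIO accessors, not QEMU APIs, and the offsets and bit position
follow the SMMUv3 spec:

    #include <stdint.h>

    #define SMMU_CR0        0x20u
    #define SMMU_CR0ACK     0x24u
    #define SMMU_CMDQ_BASE  0x90u
    #define CR0_CMDQEN      (1u << 3)

    /* Hypothetical 32-bit MMIO accessors for the SMMU register page. */
    uint32_t smmu_read32(uint32_t off);
    void smmu_write32(uint32_t off, uint32_t val);

    /* base[4:0] holds LOG2SIZE, matching the extract64(base, 0, 5)
     * in the patch. */
    static void guest_program_cmdq_base(uint64_t base)
    {
        /* 1. Request CMDQEN = 0. */
        smmu_write32(SMMU_CR0, smmu_read32(SMMU_CR0) & ~CR0_CMDQEN);

        /* 2. Wait for the ack: once CR0ACK.CMDQEN is also 0, the
         *    smmu_cmdqen_disabled() condition in the patch holds. */
        while (smmu_read32(SMMU_CR0ACK) & CR0_CMDQEN) {
            /* spin */
        }

        /* 3. CMDQ_BASE is now writable; program it as two 32-bit
         *    halves, the path the patch's smmu_writel() handles. */
        smmu_write32(SMMU_CMDQ_BASE, (uint32_t)base);
        smmu_write32(SMMU_CMDQ_BASE + 4, (uint32_t)(base >> 32));

        /* 4. Re-enable the queue and wait for the ack again. */
        smmu_write32(SMMU_CR0, smmu_read32(SMMU_CR0) | CR0_CMDQEN);
        while (!(smmu_read32(SMMU_CR0ACK) & CR0_CMDQEN)) {
            /* spin */
        }
    }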
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3-internal.h | 2 +
> hw/arm/smmuv3.c | 374 +++++++++++++++++++++++++++++++++------
> 2 files changed, 323 insertions(+), 53 deletions(-)
>
> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
> index af2936cf16..6bff504219 100644
> --- a/hw/arm/smmuv3-internal.h
> +++ b/hw/arm/smmuv3-internal.h
> @@ -517,6 +517,8 @@ typedef struct SMMUEventInfo {
> uint32_t sid;
> bool recorded;
> bool inval_ste_allowed;
> + AddressSpace *as;
> + MemTxAttrs txattrs;
> SMMUSecurityIndex sec_idx;
> union {
> struct {
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 3835b9e79f..53c7eff0e3 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -106,14 +106,15 @@ static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn,
> trace_smmuv3_write_gerrorn(toggled & pending, s->bank[sec_idx].gerrorn);
> }
>
> -static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
> +static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd,
> + SMMUSecurityIndex sec_idx)
> {
> dma_addr_t addr = Q_CONS_ENTRY(q);
> MemTxResult ret;
> int i;
>
> - ret = dma_memory_read(&address_space_memory, addr, cmd, sizeof(Cmd),
> - MEMTXATTRS_UNSPECIFIED);
> + ret = dma_memory_read(smmu_get_address_space(sec_idx), addr, cmd,
> + sizeof(Cmd), smmu_get_txattrs(sec_idx));
> if (ret != MEMTX_OK) {
> return ret;
> }
> @@ -123,7 +124,8 @@ static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
> return ret;
> }
>
> -static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
> +static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in,
> + SMMUSecurityIndex sec_idx)
> {
> dma_addr_t addr = Q_PROD_ENTRY(q);
> MemTxResult ret;
> @@ -133,8 +135,8 @@ static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
> for (i = 0; i < ARRAY_SIZE(evt.word); i++) {
> cpu_to_le32s(&evt.word[i]);
> }
> - ret = dma_memory_write(&address_space_memory, addr, &evt, sizeof(Evt),
> - MEMTXATTRS_UNSPECIFIED);
> + ret = dma_memory_write(smmu_get_address_space(sec_idx), addr, &evt,
> + sizeof(Evt), smmu_get_txattrs(sec_idx));
> if (ret != MEMTX_OK) {
> return ret;
> }
> @@ -157,7 +159,7 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt,
> return MEMTX_ERROR;
> }
>
> - r = queue_write(q, evt);
> + r = queue_write(q, evt, sec_idx);
> if (r != MEMTX_OK) {
> return r;
> }
> @@ -1025,6 +1027,8 @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
> */
> class = ptw_info.is_ipa_descriptor ? SMMU_CLASS_TT : class;
> event->sec_idx = cfg->sec_idx;
> + event->txattrs = cfg->txattrs;
> + event->sec_idx = cfg->sec_idx;
> switch (ptw_info.type) {
> case SMMU_PTW_ERR_WALK_EABT:
> event->type = SMMU_EVT_F_WALK_EABT;
> @@ -1376,6 +1380,110 @@ static bool smmu_strtab_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> return smmuv3_is_smmu_enabled(s, sec_idx);
> }
>
> +static inline int smmuv3_get_cr0_cmdqen(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, CMDQEN);
> +}
> +
> +static inline int smmuv3_get_cr0ack_cmdqen(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].cr0ack, CR0, CMDQEN);
> +}
> +
> +static inline int smmuv3_get_cr0_eventqen(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, EVENTQEN);
> +}
> +
> +static inline int smmuv3_get_cr0ack_eventqen(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].cr0ack, CR0, EVENTQEN);
> +}
> +
> +/* Check if MSI is supported */
> +static inline bool smmu_msi_supported(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].idr[0], IDR0, MSI);
> +}
> +
> +/* Check if the GERROR_IRQ_CFGx registers are writable */
> +static inline bool smmu_gerror_irq_cfg_writable(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
> + return false;
> + }
> +
> + bool ctrl_en = FIELD_EX32(s->bank[sec_idx].irq_ctrl,
> + IRQ_CTRL, GERROR_IRQEN);
> +
> + return !ctrl_en;
> +}
> +
> +/* Check if CMDQEN is disabled */
> +static bool smmu_cmdqen_disabled(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> +{
> + int cr0_cmdqen = smmuv3_get_cr0_cmdqen(s, sec_idx);
> + int cr0ack_cmdqen = smmuv3_get_cr0ack_cmdqen(s, sec_idx);
> + return (cr0_cmdqen == 0 && cr0ack_cmdqen == 0);
> +}
> +
> +/* Check if CMDQ_BASE register is writable */
> +static bool smmu_cmdq_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> +{
> + /* Check QUEUES_PRESET - use NS bank as it's the global setting */
> + if (FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, QUEUES_PRESET)) {
> + return false;
> + }
> +
> + return smmu_cmdqen_disabled(s, sec_idx);
> +}
> +
> +/* Check if EVENTQEN is disabled */
> +static bool smmu_eventqen_disabled(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> +{
> + int cr0_eventqen = smmuv3_get_cr0_eventqen(s, sec_idx);
> + int cr0ack_eventqen = smmuv3_get_cr0ack_eventqen(s, sec_idx);
> + return (cr0_eventqen == 0 && cr0ack_eventqen == 0);
> +}
> +
> +static bool smmu_idr1_queue_preset(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].idr[1], IDR1, QUEUES_PRESET);
> +}
> +
> +/* Check if EVENTQ_BASE register is writable */
> +static bool smmu_eventq_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> +{
> + /* Check QUEUES_PRESET - use NS bank as it's the global setting */
> + if (smmu_idr1_queue_preset(s, SMMU_SEC_IDX_NS)) {
> + return false;
> + }
> +
> + return smmu_eventqen_disabled(s, sec_idx);
> +}
> +
> +static bool smmu_irq_ctl_evtq_irqen_disabled(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
> +}
> +
> +/* Check if EVENTQ_IRQ_CFGx is writable */
> +static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + if (smmu_msi_supported(s, sec_idx)) {
> + return false;
> + }
> +
> + return smmu_irq_ctl_evtq_irqen_disabled(s, sec_idx);
> +}
> +
> static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> {
> SMMUState *bs = ARM_SMMU(s);
> @@ -1405,7 +1513,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> break;
> }
>
> - if (queue_read(q, &cmd) != MEMTX_OK) {
> + if (queue_read(q, &cmd, sec_idx) != MEMTX_OK) {
> cmd_error = SMMU_CERROR_ABT;
> break;
> }
> @@ -1430,8 +1538,11 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> SMMUDevice *sdev = smmu_find_sdev(bs, sid);
>
> if (CMD_SSEC(&cmd)) {
> - cmd_error = SMMU_CERROR_ILL;
> - break;
> + if (sec_idx != SMMU_SEC_IDX_S) {
> + /* Secure Stream with Non-Secure command */
> + cmd_error = SMMU_CERROR_ILL;
> + break;
> + }
> }
>
> if (!sdev) {
> @@ -1450,8 +1561,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> SMMUSIDRange sid_range;
>
> if (CMD_SSEC(&cmd)) {
> - cmd_error = SMMU_CERROR_ILL;
> - break;
> + if (sec_idx != SMMU_SEC_IDX_S) {
> + cmd_error = SMMU_CERROR_ILL;
> + break;
> + }
> }
>
> mask = (1ULL << (range + 1)) - 1;
> @@ -1469,8 +1582,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> SMMUDevice *sdev = smmu_find_sdev(bs, sid);
>
> if (CMD_SSEC(&cmd)) {
> - cmd_error = SMMU_CERROR_ILL;
> - break;
> + if (sec_idx != SMMU_SEC_IDX_S) {
> + cmd_error = SMMU_CERROR_ILL;
> + break;
> + }
> }
>
> if (!sdev) {
> @@ -1624,6 +1739,13 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> s->bank[reg_sec_idx].strtab_base = data & SMMU_STRTAB_BASE_RESERVED;
> return MEMTX_OK;
> case A_CMDQ_BASE:
> + if (!smmu_cmdq_base_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "CMDQ_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_QUEUE_BASE_RESERVED;
> s->bank[reg_sec_idx].cmdq.base = data;
> s->bank[reg_sec_idx].cmdq.log2size = extract64(
> s->bank[reg_sec_idx].cmdq.base, 0, 5);
> @@ -1632,6 +1754,13 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> }
> return MEMTX_OK;
> case A_EVENTQ_BASE:
> + if (!smmu_eventq_base_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_QUEUE_BASE_RESERVED;
> s->bank[reg_sec_idx].eventq.base = data;
> s->bank[reg_sec_idx].eventq.log2size = extract64(
> s->bank[reg_sec_idx].eventq.base, 0, 5);
> @@ -1640,8 +1769,25 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> }
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0:
> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_IRQ_CFG0 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_EVENTQ_IRQ_CFG0_RESERVED;
> s->bank[reg_sec_idx].eventq_irq_cfg0 = data;
> return MEMTX_OK;
> + case A_GERROR_IRQ_CFG0:
> + if (!smmu_gerror_irq_cfg_writable(s, reg_sec_idx)) {
> + /* SMMU_(*_)IRQ_CTRL.GERROR_IRQEN == 1: this write is IGNORED */
> + qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 write ignored: "
> + "register is RO when IRQ enabled\n");
> + return MEMTX_OK;
> + }
> + s->bank[reg_sec_idx].gerror_irq_cfg0 =
> + data & SMMU_GERROR_IRQ_CFG0_RESERVED;
> + return MEMTX_OK;
> default:
> qemu_log_mask(LOG_UNIMP,
> "%s Unexpected 64-bit access to 0x%"PRIx64" (WI)\n",
> @@ -1666,7 +1812,15 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> s->bank[reg_sec_idx].cr[1] = data;
> return MEMTX_OK;
> case A_CR2:
> - s->bank[reg_sec_idx].cr[2] = data;
> + if (smmuv3_is_smmu_enabled(s, reg_sec_idx)) {
> + /* Allow write: SMMUEN is 0 in both CR0 and CR0ACK */
> + s->bank[reg_sec_idx].cr[2] = data;
> + } else {
> + /* CONSTRAINED UNPREDICTABLE behavior: Ignore this write */
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "CR2 write ignored: register is read-only when "
> + "CR0.SMMUEN or CR0ACK.SMMUEN is set\n");
> + }
> return MEMTX_OK;
> case A_IRQ_CTRL:
> s->bank[reg_sec_idx].irq_ctrl = data;
> @@ -1680,14 +1834,28 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> smmuv3_cmdq_consume(s, reg_sec_idx);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0: /* 64b */
> - s->bank[reg_sec_idx].gerror_irq_cfg0 =
> - deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32, data);
> - return MEMTX_OK;
> case A_GERROR_IRQ_CFG0 + 4:
> - s->bank[reg_sec_idx].gerror_irq_cfg0 =
> - deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32, data);
> + if (!smmu_gerror_irq_cfg_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 write ignored: "
> + "register is RO when IRQ enabled\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_GERROR_IRQ_CFG0_RESERVED;
> + if (reg_offset == A_GERROR_IRQ_CFG0) {
> + s->bank[reg_sec_idx].gerror_irq_cfg0 = deposit64(
> + s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32, data);
> + } else {
> + s->bank[reg_sec_idx].gerror_irq_cfg0 = deposit64(
> + s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32, data);
> + }
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG1:
> + if (!smmu_gerror_irq_cfg_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG1 write ignored: "
> + "register is RO when IRQ enabled\n");
> + return MEMTX_OK;
> + }
> s->bank[reg_sec_idx].gerror_irq_cfg1 = data;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG2:
> @@ -1735,60 +1903,106 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> }
> return MEMTX_OK;
> case A_CMDQ_BASE: /* 64b */
> - s->bank[reg_sec_idx].cmdq.base =
> - deposit64(s->bank[reg_sec_idx].cmdq.base, 0, 32, data);
> - s->bank[reg_sec_idx].cmdq.log2size =
> - extract64(s->bank[reg_sec_idx].cmdq.base, 0, 5);
> - if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
> - s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
> - }
> - return MEMTX_OK;
> case A_CMDQ_BASE + 4: /* 64b */
> - s->bank[reg_sec_idx].cmdq.base =
> - deposit64(s->bank[reg_sec_idx].cmdq.base, 32, 32, data);
> - return MEMTX_OK;
> + if (!smmu_cmdq_base_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "CMDQ_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_QUEUE_BASE_RESERVED;
> + if (reg_offset == A_CMDQ_BASE) {
> + s->bank[reg_sec_idx].cmdq.base = deposit64(
> + s->bank[reg_sec_idx].cmdq.base, 0, 32, data);
> +
> + s->bank[reg_sec_idx].cmdq.log2size = extract64(
> + s->bank[reg_sec_idx].cmdq.base, 0, 5);
> + if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
> + s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
> + }
> + } else {
> + s->bank[reg_sec_idx].cmdq.base = deposit64(
> + s->bank[reg_sec_idx].cmdq.base, 32, 32, data);
> + }
> +
> return MEMTX_OK;
> case A_CMDQ_PROD:
> s->bank[reg_sec_idx].cmdq.prod = data;
> smmuv3_cmdq_consume(s, reg_sec_idx);
> return MEMTX_OK;
> case A_CMDQ_CONS:
> + if (!smmu_cmdqen_disabled(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "CMDQ_CONS write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> s->bank[reg_sec_idx].cmdq.cons = data;
> return MEMTX_OK;
> case A_EVENTQ_BASE: /* 64b */
> - s->bank[reg_sec_idx].eventq.base =
> - deposit64(s->bank[reg_sec_idx].eventq.base, 0, 32, data);
> - s->bank[reg_sec_idx].eventq.log2size =
> - extract64(s->bank[reg_sec_idx].eventq.base, 0, 5);
> - if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
> - s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
> - }
> - s->bank[reg_sec_idx].eventq.cons = data;
> - return MEMTX_OK;
> case A_EVENTQ_BASE + 4:
> - s->bank[reg_sec_idx].eventq.base =
> - deposit64(s->bank[reg_sec_idx].eventq.base, 32, 32, data);
> - return MEMTX_OK;
> + if (!smmu_eventq_base_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_QUEUE_BASE_RESERVED;
> + if (reg_offset == A_EVENTQ_BASE) {
> + s->bank[reg_sec_idx].eventq.base = deposit64(
> + s->bank[reg_sec_idx].eventq.base, 0, 32, data);
> +
> + s->bank[reg_sec_idx].eventq.log2size = extract64(
> + s->bank[reg_sec_idx].eventq.base, 0, 5);
> + if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
> + s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
> + }
> + } else {
> + s->bank[reg_sec_idx].eventq.base = deposit64(
> + s->bank[reg_sec_idx].eventq.base, 32, 32, data);
> + }
> return MEMTX_OK;
> case A_EVENTQ_PROD:
> + if (!smmu_eventqen_disabled(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_PROD write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> s->bank[reg_sec_idx].eventq.prod = data;
> return MEMTX_OK;
> case A_EVENTQ_CONS:
> s->bank[reg_sec_idx].eventq.cons = data;
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0: /* 64b */
> - s->bank[reg_sec_idx].eventq_irq_cfg0 =
> - deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32, data);
> - return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0 + 4:
> - s->bank[reg_sec_idx].eventq_irq_cfg0 =
> - deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32, data);
> - return MEMTX_OK;
> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_IRQ_CFG0 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_EVENTQ_IRQ_CFG0_RESERVED;
> + if (reg_offset == A_EVENTQ_IRQ_CFG0) {
> + s->bank[reg_sec_idx].eventq_irq_cfg0 = deposit64(
> + s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32, data);
> + } else {
> + s->bank[reg_sec_idx].eventq_irq_cfg0 = deposit64(
> + s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32, data);
> + }
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG1:
> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_IRQ_CFG1 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> s->bank[reg_sec_idx].eventq_irq_cfg1 = data;
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG2:
> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_IRQ_CFG2 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> s->bank[reg_sec_idx].eventq_irq_cfg2 = data;
> return MEMTX_OK;
> case A_S_INIT_ALIAS:
> @@ -1848,6 +2062,11 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
> uint32_t reg_offset = offset & 0xfff;
> switch (reg_offset) {
> case A_GERROR_IRQ_CFG0:
> + /* SMMU_GERROR_IRQ_CFG0 and SMMU_S_GERROR_IRQ_CFG0 both check SMMU_IDR0.MSI */
> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
> + *data = 0; /* RES0 */
> + return MEMTX_OK;
> + }
> *data = s->bank[reg_sec_idx].gerror_irq_cfg0;
> return MEMTX_OK;
> case A_STRTAB_BASE:
> @@ -1859,6 +2078,13 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
> case A_EVENTQ_BASE:
> *data = s->bank[reg_sec_idx].eventq.base;
> return MEMTX_OK;
> + case A_EVENTQ_IRQ_CFG0:
> + /* MSI support depends on the register's security domain */
> + if (!smmu_msi_supported(s, reg_sec_idx)) {
> + *data = 0;
> + return MEMTX_OK;
> + }
> +
> *data = s->bank[reg_sec_idx].eventq_irq_cfg0;
> return MEMTX_OK;
> default:
> @@ -1917,16 +2143,31 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
> *data = s->bank[reg_sec_idx].gerrorn;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0: /* 64b */
> - *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32);
> - return MEMTX_OK;
> case A_GERROR_IRQ_CFG0 + 4:
> - *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32);
> - return MEMTX_OK;
> + /* SMMU_GERROR_IRQ_CFG0 and SMMU_S_GERROR_IRQ_CFG0 both check SMMU_IDR0.MSI */
> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
> + *data = 0; /* RES0 */
> + return MEMTX_OK;
> + }
> +
> + if (reg_offset == A_GERROR_IRQ_CFG0) {
> + *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32);
> + } else {
> + *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32);
> + }
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG1:
> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
> + *data = 0; /* RES0 */
> + return MEMTX_OK;
> + }
> *data = s->bank[reg_sec_idx].gerror_irq_cfg1;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG2:
> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
> + *data = 0; /* RES0 */
> + return MEMTX_OK;
> + }
> *data = s->bank[reg_sec_idx].gerror_irq_cfg2;
> return MEMTX_OK;
> case A_STRTAB_BASE: /* 64b */
> @@ -1962,6 +2203,33 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
> case A_EVENTQ_CONS:
> *data = s->bank[reg_sec_idx].eventq.cons;
> return MEMTX_OK;
> + case A_EVENTQ_IRQ_CFG0:
> + case A_EVENTQ_IRQ_CFG0 + 4:
> + if (!smmu_msi_supported(s, reg_sec_idx)) {
> + *data = 0;
> + return MEMTX_OK;
> + }
> +
> + if (reg_offset == A_EVENTQ_IRQ_CFG0) {
> + *data = extract64(s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32);
> + } else {
> + *data = extract64(s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32);
> + }
> + return MEMTX_OK;
> + case A_EVENTQ_IRQ_CFG1:
> + if (!smmu_msi_supported(s, reg_sec_idx)) {
> + *data = 0;
> + return MEMTX_OK;
> + }
> + *data = s->bank[reg_sec_idx].eventq_irq_cfg1;
> + return MEMTX_OK;
> + case A_EVENTQ_IRQ_CFG2:
> + if (!smmu_msi_supported(s, reg_sec_idx)) {
> + *data = 0;
> + return MEMTX_OK;
> + }
> + *data = s->bank[reg_sec_idx].eventq_irq_cfg2;
> + return MEMTX_OK;
> case A_S_INIT_ALIAS:
> *data = 0;
> return MEMTX_OK;
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v2 10/14] hw/arm/smmuv3: Add banked support for queues and error handling
2025-09-29 15:07 ` Eric Auger
@ 2025-09-29 15:45 ` Tao Tang
0 siblings, 0 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-29 15:45 UTC (permalink / raw)
To: Peter Maydell, Eric Auger
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/9/29 23:07, Eric Auger wrote:
>
> On 9/25/25 6:26 PM, Tao Tang wrote:
>> This commit extends the banked register model to the Command Queue
>> (CMDQ), Event Queue (EVTQ), and global error handling logic.
>>
>> By leveraging the banked structure, the SMMUv3 model now supports
>> separate, parallel queues and error status registers for the Secure and
>> Non-secure states. This is essential for correctly modeling the
>> parallel programming interfaces defined by the SMMUv3 architecture.
>>
>> Additionally, this patch hardens the MMIO implementation by adding
>> several checks that were incomplete in the previous version.
>> For instance, writes to the Command Queue registers are now correctly
>> gated by the IDR1.QUEUES_PRESET bit, ensuring compliance with the
>> architectural rules for pre-configured queues.
> if this is a fix for the current code, you can put it in a preamble patch
> with a Fixes tag.
Thanks for the guidance. I'll add a Fixes tag.
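For reference, the conventional trailer is the abbreviated hash plus the
subject of the commit being fixed; the hash and subject below are
placeholders, since the offending commit isn't named in this thread:

    Fixes: 0123456789ab ("hw/arm/smmuv3: earlier subject line")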
> This must be split as well. This is really hard to review as is. Try to
> isolate the different functional changes.
> 1) addition of sec_idx to queue_read/write_eventq + modification of
> queue_write_read can be moved in a separate patch
> 2) smmu_cmdq_base_writable, smmu_eventq_irq_cfg_writable,
> smmu_gerror_irq_cfg_writable can go in a separate patch
> 3) MMIO hardening can go in a separate patch too
> ../..
>
> Thanks
>
> Eric
I am very sorry for making the review process so difficult and
time-consuming for you. I'll try to split it as you suggested in v3.
Thanks,
Tao
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3-internal.h | 2 +
>> hw/arm/smmuv3.c | 374 +++++++++++++++++++++++++++++++++------
>> 2 files changed, 323 insertions(+), 53 deletions(-)
>>
>> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
>> index af2936cf16..6bff504219 100644
>> --- a/hw/arm/smmuv3-internal.h
>> +++ b/hw/arm/smmuv3-internal.h
>> @@ -517,6 +517,8 @@ typedef struct SMMUEventInfo {
>> uint32_t sid;
>> bool recorded;
>> bool inval_ste_allowed;
>> + AddressSpace *as;
>> + MemTxAttrs txattrs;
>> SMMUSecurityIndex sec_idx;
>> union {
>> struct {
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index 3835b9e79f..53c7eff0e3 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -106,14 +106,15 @@ static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn,
>> trace_smmuv3_write_gerrorn(toggled & pending, s->bank[sec_idx].gerrorn);
>> }
>>
>> -static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
>> +static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd,
>> + SMMUSecurityIndex sec_idx)
>> {
>> dma_addr_t addr = Q_CONS_ENTRY(q);
>> MemTxResult ret;
>> int i;
>>
>> - ret = dma_memory_read(&address_space_memory, addr, cmd, sizeof(Cmd),
>> - MEMTXATTRS_UNSPECIFIED);
>> + ret = dma_memory_read(smmu_get_address_space(sec_idx), addr, cmd,
>> + sizeof(Cmd), smmu_get_txattrs(sec_idx));
>> if (ret != MEMTX_OK) {
>> return ret;
>> }
>> @@ -123,7 +124,8 @@ static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
>> return ret;
>> }
>>
>> -static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
>> +static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in,
>> + SMMUSecurityIndex sec_idx)
>> {
>> dma_addr_t addr = Q_PROD_ENTRY(q);
>> MemTxResult ret;
>> @@ -133,8 +135,8 @@ static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
>> for (i = 0; i < ARRAY_SIZE(evt.word); i++) {
>> cpu_to_le32s(&evt.word[i]);
>> }
>> - ret = dma_memory_write(&address_space_memory, addr, &evt, sizeof(Evt),
>> - MEMTXATTRS_UNSPECIFIED);
>> + ret = dma_memory_write(smmu_get_address_space(sec_idx), addr, &evt,
>> + sizeof(Evt), smmu_get_txattrs(sec_idx));
>> if (ret != MEMTX_OK) {
>> return ret;
>> }
>> @@ -157,7 +159,7 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt,
>> return MEMTX_ERROR;
>> }
>>
>> - r = queue_write(q, evt);
>> + r = queue_write(q, evt, sec_idx);
>> if (r != MEMTX_OK) {
>> return r;
>> }
>> @@ -1025,6 +1027,8 @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
>> */
>> class = ptw_info.is_ipa_descriptor ? SMMU_CLASS_TT : class;
>> event->sec_idx = cfg->sec_idx;
>> + event->txattrs = cfg->txattrs;
>> + event->sec_idx = cfg->sec_idx;
>> switch (ptw_info.type) {
>> case SMMU_PTW_ERR_WALK_EABT:
>> event->type = SMMU_EVT_F_WALK_EABT;
>> @@ -1376,6 +1380,110 @@ static bool smmu_strtab_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> return smmuv3_is_smmu_enabled(s, sec_idx);
>> }
>>
>> +static inline int smmuv3_get_cr0_cmdqen(SMMUv3State *s,
>> + SMMUSecurityIndex sec_idx)
>> +{
>> + return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, CMDQEN);
>> +}
>> +
>> +static inline int smmuv3_get_cr0ack_cmdqen(SMMUv3State *s,
>> + SMMUSecurityIndex sec_idx)
>> +{
>> + return FIELD_EX32(s->bank[sec_idx].cr0ack, CR0, CMDQEN);
>> +}
>> +
>> +static inline int smmuv3_get_cr0_eventqen(SMMUv3State *s,
>> + SMMUSecurityIndex sec_idx)
>> +{
>> + return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, EVENTQEN);
>> +}
>> +
>> +static inline int smmuv3_get_cr0ack_eventqen(SMMUv3State *s,
>> + SMMUSecurityIndex sec_idx)
>> +{
>> + return FIELD_EX32(s->bank[sec_idx].cr0ack, CR0, EVENTQEN);
>> +}
>> +
>> +/* Check if MSI is supported */
>> +static inline bool smmu_msi_supported(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> +{
>> + return FIELD_EX32(s->bank[sec_idx].idr[0], IDR0, MSI);
>> +}
>> +
>> +/* Check if the GERROR_IRQ_CFGx registers are writable */
>> +static inline bool smmu_gerror_irq_cfg_writable(SMMUv3State *s,
>> + SMMUSecurityIndex sec_idx)
>> +{
>> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
>> + return false;
>> + }
>> +
>> + bool ctrl_en = FIELD_EX32(s->bank[sec_idx].irq_ctrl,
>> + IRQ_CTRL, GERROR_IRQEN);
>> +
>> + return !ctrl_en;
>> +}
>> +
>> +/* Check if CMDQEN is disabled */
>> +static bool smmu_cmdqen_disabled(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> +{
>> + int cr0_cmdqen = smmuv3_get_cr0_cmdqen(s, sec_idx);
>> + int cr0ack_cmdqen = smmuv3_get_cr0ack_cmdqen(s, sec_idx);
>> + return (cr0_cmdqen == 0 && cr0ack_cmdqen == 0);
>> +}
>> +
>> +/* Check if CMDQ_BASE register is writable */
>> +static bool smmu_cmdq_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> +{
>> + /* Check QUEUES_PRESET - use NS bank as it's the global setting */
>> + if (FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, QUEUES_PRESET)) {
>> + return false;
>> + }
>> +
>> + return smmu_cmdqen_disabled(s, sec_idx);
>> +}
>> +
>> +/* Check if EVENTQEN is disabled */
>> +static bool smmu_eventqen_disabled(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> +{
>> + int cr0_eventqen = smmuv3_get_cr0_eventqen(s, sec_idx);
>> + int cr0ack_eventqen = smmuv3_get_cr0ack_eventqen(s, sec_idx);
>> + return (cr0_eventqen == 0 && cr0ack_eventqen == 0);
>> +}
>> +
>> +static bool smmu_idr1_queue_preset(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> +{
>> + return FIELD_EX32(s->bank[sec_idx].idr[1], IDR1, QUEUES_PRESET);
>> +}
>> +
>> +/* Check if EVENTQ_BASE register is writable */
>> +static bool smmu_eventq_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> +{
>> + /* Check QUEUES_PRESET - use NS bank as it's the global setting */
>> + if (smmu_idr1_queue_preset(s, SMMU_SEC_IDX_NS)) {
>> + return false;
>> + }
>> +
>> + return smmu_eventqen_disabled(s, sec_idx);
>> +}
>> +
>> +static bool smmu_irq_ctl_evtq_irqen_disabled(SMMUv3State *s,
>> + SMMUSecurityIndex sec_idx)
>> +{
>> + return FIELD_EX32(s->bank[sec_idx].irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
>> +}
>> +
>> +/* Check if EVENTQ_IRQ_CFGx is writable */
>> +static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s,
>> + SMMUSecurityIndex sec_idx)
>> +{
>> + if (smmu_msi_supported(s, sec_idx)) {
>> + return false;
>> + }
>> +
>> + return smmu_irq_ctl_evtq_irqen_disabled(s, sec_idx);
>> +}
>> +
>> static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> {
>> SMMUState *bs = ARM_SMMU(s);
>> @@ -1405,7 +1513,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> break;
>> }
>>
>> - if (queue_read(q, &cmd) != MEMTX_OK) {
>> + if (queue_read(q, &cmd, sec_idx) != MEMTX_OK) {
>> cmd_error = SMMU_CERROR_ABT;
>> break;
>> }
>> @@ -1430,8 +1538,11 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> SMMUDevice *sdev = smmu_find_sdev(bs, sid);
>>
>> if (CMD_SSEC(&cmd)) {
>> - cmd_error = SMMU_CERROR_ILL;
>> - break;
>> + if (sec_idx != SMMU_SEC_IDX_S) {
>> + /* Secure Stream with Non-Secure command */
>> + cmd_error = SMMU_CERROR_ILL;
>> + break;
>> + }
>> }
>>
>> if (!sdev) {
>> @@ -1450,8 +1561,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> SMMUSIDRange sid_range;
>>
>> if (CMD_SSEC(&cmd)) {
>> - cmd_error = SMMU_CERROR_ILL;
>> - break;
>> + if (sec_idx != SMMU_SEC_IDX_S) {
>> + cmd_error = SMMU_CERROR_ILL;
>> + break;
>> + }
>> }
>>
>> mask = (1ULL << (range + 1)) - 1;
>> @@ -1469,8 +1582,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> SMMUDevice *sdev = smmu_find_sdev(bs, sid);
>>
>> if (CMD_SSEC(&cmd)) {
>> - cmd_error = SMMU_CERROR_ILL;
>> - break;
>> + if (sec_idx != SMMU_SEC_IDX_S) {
>> + cmd_error = SMMU_CERROR_ILL;
>> + break;
>> + }
>> }
>>
>> if (!sdev) {
>> @@ -1624,6 +1739,13 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
>> s->bank[reg_sec_idx].strtab_base = data & SMMU_STRTAB_BASE_RESERVED;
>> return MEMTX_OK;
>> case A_CMDQ_BASE:
>> + if (!smmu_cmdq_base_writable(s, reg_sec_idx)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "CMDQ_BASE write ignored: register is RO\n");
>> + return MEMTX_OK;
>> + }
>> +
>> + data &= SMMU_QUEUE_BASE_RESERVED;
>> s->bank[reg_sec_idx].cmdq.base = data;
>> s->bank[reg_sec_idx].cmdq.log2size = extract64(
>> s->bank[reg_sec_idx].cmdq.base, 0, 5);
>> @@ -1632,6 +1754,13 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
>> }
>> return MEMTX_OK;
>> case A_EVENTQ_BASE:
>> + if (!smmu_eventq_base_writable(s, reg_sec_idx)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "EVENTQ_BASE write ignored: register is RO\n");
>> + return MEMTX_OK;
>> + }
>> +
>> + data &= SMMU_QUEUE_BASE_RESERVED;
>> s->bank[reg_sec_idx].eventq.base = data;
>> s->bank[reg_sec_idx].eventq.log2size = extract64(
>> s->bank[reg_sec_idx].eventq.base, 0, 5);
>> @@ -1640,8 +1769,25 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
>> }
>> return MEMTX_OK;
>> case A_EVENTQ_IRQ_CFG0:
>> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "EVENTQ_IRQ_CFG0 write ignored: register is RO\n");
>> + return MEMTX_OK;
>> + }
>> +
>> + data &= SMMU_EVENTQ_IRQ_CFG0_RESERVED;
>> s->bank[reg_sec_idx].eventq_irq_cfg0 = data;
>> return MEMTX_OK;
>> + case A_GERROR_IRQ_CFG0:
>> + if (!smmu_gerror_irq_cfg_writable(s, reg_sec_idx)) {
>> + /* SMMU_(*_)IRQ_CTRL.GERROR_IRQEN == 1: this write is IGNORED */
>> + qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 write ignored: "
>> + "register is RO when IRQ enabled\n");
>> + return MEMTX_OK;
>> + }
>> + s->bank[reg_sec_idx].gerror_irq_cfg0 =
>> + data & SMMU_GERROR_IRQ_CFG0_RESERVED;
>> + return MEMTX_OK;
>> default:
>> qemu_log_mask(LOG_UNIMP,
>> "%s Unexpected 64-bit access to 0x%"PRIx64" (WI)\n",
>> @@ -1666,7 +1812,15 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
>> s->bank[reg_sec_idx].cr[1] = data;
>> return MEMTX_OK;
>> case A_CR2:
>> - s->bank[reg_sec_idx].cr[2] = data;
>> + if (smmuv3_is_smmu_enabled(s, reg_sec_idx)) {
>> + /* Allow write: SMMUEN is 0 in both CR0 and CR0ACK */
>> + s->bank[reg_sec_idx].cr[2] = data;
>> + } else {
>> + /* CONSTRAINED UNPREDICTABLE behavior: Ignore this write */
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "CR2 write ignored: register is read-only when "
>> + "CR0.SMMUEN or CR0ACK.SMMUEN is set\n");
>> + }
>> return MEMTX_OK;
>> case A_IRQ_CTRL:
>> s->bank[reg_sec_idx].irq_ctrl = data;
>> @@ -1680,14 +1834,28 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
>> smmuv3_cmdq_consume(s, reg_sec_idx);
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG0: /* 64b */
>> - s->bank[reg_sec_idx].gerror_irq_cfg0 =
>> - deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32, data);
>> - return MEMTX_OK;
>> case A_GERROR_IRQ_CFG0 + 4:
>> - s->bank[reg_sec_idx].gerror_irq_cfg0 =
>> - deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32, data);
>> + if (!smmu_gerror_irq_cfg_writable(s, reg_sec_idx)) {
>> + qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 write ignored: "
>> + "register is RO when IRQ enabled\n");
>> + return MEMTX_OK;
>> + }
>> +
>> + data &= SMMU_GERROR_IRQ_CFG0_RESERVED;
>> + if (reg_offset == A_GERROR_IRQ_CFG0) {
>> + s->bank[reg_sec_idx].gerror_irq_cfg0 = deposit64(
>> + s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32, data);
>> + } else {
>> + s->bank[reg_sec_idx].gerror_irq_cfg0 = deposit64(
>> + s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32, data);
>> + }
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG1:
>> + if (!smmu_gerror_irq_cfg_writable(s, reg_sec_idx)) {
>> + qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG1 write ignored: "
>> + "register is RO when IRQ enabled\n");
>> + return MEMTX_OK;
>> + }
>> s->bank[reg_sec_idx].gerror_irq_cfg1 = data;
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG2:
>> @@ -1735,60 +1903,106 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
>> }
>> return MEMTX_OK;
>> case A_CMDQ_BASE: /* 64b */
>> - s->bank[reg_sec_idx].cmdq.base =
>> - deposit64(s->bank[reg_sec_idx].cmdq.base, 0, 32, data);
>> - s->bank[reg_sec_idx].cmdq.log2size =
>> - extract64(s->bank[reg_sec_idx].cmdq.base, 0, 5);
>> - if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
>> - s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
>> - }
>> - return MEMTX_OK;
>> case A_CMDQ_BASE + 4: /* 64b */
>> - s->bank[reg_sec_idx].cmdq.base =
>> - deposit64(s->bank[reg_sec_idx].cmdq.base, 32, 32, data);
>> - return MEMTX_OK;
>> + if (!smmu_cmdq_base_writable(s, reg_sec_idx)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "CMDQ_BASE write ignored: register is RO\n");
>> + return MEMTX_OK;
>> + }
>> +
>> + data &= SMMU_QUEUE_BASE_RESERVED;
>> + if (reg_offset == A_CMDQ_BASE) {
>> + s->bank[reg_sec_idx].cmdq.base = deposit64(
>> + s->bank[reg_sec_idx].cmdq.base, 0, 32, data);
>> +
>> + s->bank[reg_sec_idx].cmdq.log2size = extract64(
>> + s->bank[reg_sec_idx].cmdq.base, 0, 5);
>> + if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
>> + s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
>> + }
>> + } else {
>> + s->bank[reg_sec_idx].cmdq.base = deposit64(
>> + s->bank[reg_sec_idx].cmdq.base, 32, 32, data);
>> + }
>> +
>> return MEMTX_OK;
>> case A_CMDQ_PROD:
>> s->bank[reg_sec_idx].cmdq.prod = data;
>> smmuv3_cmdq_consume(s, reg_sec_idx);
>> return MEMTX_OK;
>> case A_CMDQ_CONS:
>> + if (!smmu_cmdqen_disabled(s, reg_sec_idx)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "CMDQ_CONS write ignored: register is RO\n");
>> + return MEMTX_OK;
>> + }
>> s->bank[reg_sec_idx].cmdq.cons = data;
>> return MEMTX_OK;
>> case A_EVENTQ_BASE: /* 64b */
>> - s->bank[reg_sec_idx].eventq.base =
>> - deposit64(s->bank[reg_sec_idx].eventq.base, 0, 32, data);
>> - s->bank[reg_sec_idx].eventq.log2size =
>> - extract64(s->bank[reg_sec_idx].eventq.base, 0, 5);
>> - if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
>> - s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
>> - }
>> - s->bank[reg_sec_idx].eventq.cons = data;
>> - return MEMTX_OK;
>> case A_EVENTQ_BASE + 4:
>> - s->bank[reg_sec_idx].eventq.base =
>> - deposit64(s->bank[reg_sec_idx].eventq.base, 32, 32, data);
>> - return MEMTX_OK;
>> + if (!smmu_eventq_base_writable(s, reg_sec_idx)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "EVENTQ_BASE write ignored: register is RO\n");
>> + return MEMTX_OK;
>> + }
>> +
>> + data &= SMMU_QUEUE_BASE_RESERVED;
>> + if (reg_offset == A_EVENTQ_BASE) {
>> + s->bank[reg_sec_idx].eventq.base = deposit64(
>> + s->bank[reg_sec_idx].eventq.base, 0, 32, data);
>> +
>> + s->bank[reg_sec_idx].eventq.log2size = extract64(
>> + s->bank[reg_sec_idx].eventq.base, 0, 5);
>> + if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
>> + s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
>> + }
>> + } else {
>> + s->bank[reg_sec_idx].eventq.base = deposit64(
>> + s->bank[reg_sec_idx].eventq.base, 32, 32, data);
>> + }
>> return MEMTX_OK;
>> case A_EVENTQ_PROD:
>> + if (!smmu_eventqen_disabled(s, reg_sec_idx)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "EVENTQ_PROD write ignored: register is RO\n");
>> + return MEMTX_OK;
>> + }
>> s->bank[reg_sec_idx].eventq.prod = data;
>> return MEMTX_OK;
>> case A_EVENTQ_CONS:
>> s->bank[reg_sec_idx].eventq.cons = data;
>> return MEMTX_OK;
>> case A_EVENTQ_IRQ_CFG0: /* 64b */
>> - s->bank[reg_sec_idx].eventq_irq_cfg0 =
>> - deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32, data);
>> - return MEMTX_OK;
>> case A_EVENTQ_IRQ_CFG0 + 4:
>> - s->bank[reg_sec_idx].eventq_irq_cfg0 =
>> - deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32, data);
>> - return MEMTX_OK;
>> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "EVENTQ_IRQ_CFG0 write ignored: register is RO\n");
>> + return MEMTX_OK;
>> + }
>> +
>> + data &= SMMU_EVENTQ_IRQ_CFG0_RESERVED;
>> + if (reg_offset == A_EVENTQ_IRQ_CFG0) {
>> + s->bank[reg_sec_idx].eventq_irq_cfg0 = deposit64(
>> + s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32, data);
>> + } else {
>> + s->bank[reg_sec_idx].eventq_irq_cfg0 = deposit64(
>> + s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32, data);
>> + }
>> return MEMTX_OK;
>> case A_EVENTQ_IRQ_CFG1:
>> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "EVENTQ_IRQ_CFG1 write ignored: register is RO\n");
>> + return MEMTX_OK;
>> + }
>> s->bank[reg_sec_idx].eventq_irq_cfg1 = data;
>> return MEMTX_OK;
>> case A_EVENTQ_IRQ_CFG2:
>> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "EVENTQ_IRQ_CFG2 write ignored: register is RO\n");
>> + return MEMTX_OK;
>> + }
>> s->bank[reg_sec_idx].eventq_irq_cfg2 = data;
>> return MEMTX_OK;
>> case A_S_INIT_ALIAS:
>> @@ -1848,6 +2062,11 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
>> uint32_t reg_offset = offset & 0xfff;
>> switch (reg_offset) {
>> case A_GERROR_IRQ_CFG0:
>> + /* SMMU_GERROR_IRQ_CFG0 and SMMU_S_GERROR_IRQ_CFG0 both check SMMU_IDR0.MSI */
>> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
>> + *data = 0; /* RES0 */
>> + return MEMTX_OK;
>> + }
>> *data = s->bank[reg_sec_idx].gerror_irq_cfg0;
>> return MEMTX_OK;
>> case A_STRTAB_BASE:
>> @@ -1859,6 +2078,13 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
>> case A_EVENTQ_BASE:
>> *data = s->bank[reg_sec_idx].eventq.base;
>> return MEMTX_OK;
>> + case A_EVENTQ_IRQ_CFG0:
>> + /* MSI support depends on the register's security domain */
>> + if (!smmu_msi_supported(s, reg_sec_idx)) {
>> + *data = 0;
>> + return MEMTX_OK;
>> + }
>> +
>> *data = s->bank[reg_sec_idx].eventq_irq_cfg0;
>> return MEMTX_OK;
>> default:
>> @@ -1917,16 +2143,31 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
>> *data = s->bank[reg_sec_idx].gerrorn;
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG0: /* 64b */
>> - *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32);
>> - return MEMTX_OK;
>> case A_GERROR_IRQ_CFG0 + 4:
>> - *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32);
>> - return MEMTX_OK;
>> + /* SMMU_GERROR_IRQ_CFG0 and SMMU_S_GERROR_IRQ_CFG0 both check SMMU_IDR0.MSI */
>> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
>> + *data = 0; /* RES0 */
>> + return MEMTX_OK;
>> + }
>> +
>> + if (reg_offset == A_GERROR_IRQ_CFG0) {
>> + *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32);
>> + } else {
>> + *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32);
>> + }
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG1:
>> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
>> + *data = 0; /* RES0 */
>> + return MEMTX_OK;
>> + }
>> *data = s->bank[reg_sec_idx].gerror_irq_cfg1;
>> return MEMTX_OK;
>> case A_GERROR_IRQ_CFG2:
>> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
>> + *data = 0; /* RES0 */
>> + return MEMTX_OK;
>> + }
>> *data = s->bank[reg_sec_idx].gerror_irq_cfg2;
>> return MEMTX_OK;
>> case A_STRTAB_BASE: /* 64b */
>> @@ -1962,6 +2203,33 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
>> case A_EVENTQ_CONS:
>> *data = s->bank[reg_sec_idx].eventq.cons;
>> return MEMTX_OK;
>> + case A_EVENTQ_IRQ_CFG0:
>> + case A_EVENTQ_IRQ_CFG0 + 4:
>> + if (!smmu_msi_supported(s, reg_sec_idx)) {
>> + *data = 0;
>> + return MEMTX_OK;
>> + }
>> +
>> + if (reg_offset == A_EVENTQ_IRQ_CFG0) {
>> + *data = extract64(s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32);
>> + } else {
>> + *data = extract64(s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32);
>> + }
>> + return MEMTX_OK;
>> + case A_EVENTQ_IRQ_CFG1:
>> + if (!smmu_msi_supported(s, reg_sec_idx)) {
>> + *data = 0;
>> + return MEMTX_OK;
>> + }
>> + *data = s->bank[reg_sec_idx].eventq_irq_cfg1;
>> + return MEMTX_OK;
>> + case A_EVENTQ_IRQ_CFG2:
>> + if (!smmu_msi_supported(s, reg_sec_idx)) {
>> + *data = 0;
>> + return MEMTX_OK;
>> + }
>> + *data = s->bank[reg_sec_idx].eventq_irq_cfg2;
>> + return MEMTX_OK;
>> case A_S_INIT_ALIAS:
>> *data = 0;
>> return MEMTX_OK;
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v2 10/14] hw/arm/smmuv3: Add banked support for queues and error handling
2025-09-25 16:26 ` [PATCH v2 10/14] hw/arm/smmuv3: Add banked support for queues and error handling Tao Tang
2025-09-29 15:07 ` Eric Auger
@ 2025-09-29 15:09 ` Eric Auger
1 sibling, 0 replies; 48+ messages in thread
From: Eric Auger @ 2025-09-29 15:09 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
On 9/25/25 6:26 PM, Tao Tang wrote:
> This commit extends the banked register model to the Command Queue
> (CMDQ), Event Queue (EVTQ), and global error handling logic.
>
> By leveraging the banked structure, the SMMUv3 model now supports
> separate, parallel queues and error status registers for the Secure and
> Non-secure states. This is essential for correctly modeling the
> parallel programming interfaces defined by the SMMUv3 architecture.
>
> Additionally, this patch hardens the MMIO implementation by adding
> several checks that were incomplete in the previous version.
> For instance, writes to the Command Queue registers are now correctly
> gated by the IDR1.QUEUES_PRESET bit, ensuring compliance with the
> architectural rules for pre-configured queues.
if this is a fix for the current code, you can put it in a preamble patch
with a Fixes tag.
Eric
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3-internal.h | 2 +
> hw/arm/smmuv3.c | 374 +++++++++++++++++++++++++++++++++------
> 2 files changed, 323 insertions(+), 53 deletions(-)
>
> diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
> index af2936cf16..6bff504219 100644
> --- a/hw/arm/smmuv3-internal.h
> +++ b/hw/arm/smmuv3-internal.h
> @@ -517,6 +517,8 @@ typedef struct SMMUEventInfo {
> uint32_t sid;
> bool recorded;
> bool inval_ste_allowed;
> + AddressSpace *as;
> + MemTxAttrs txattrs;
> SMMUSecurityIndex sec_idx;
> union {
> struct {
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 3835b9e79f..53c7eff0e3 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -106,14 +106,15 @@ static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn,
> trace_smmuv3_write_gerrorn(toggled & pending, s->bank[sec_idx].gerrorn);
> }
>
> -static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
> +static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd,
> + SMMUSecurityIndex sec_idx)
> {
> dma_addr_t addr = Q_CONS_ENTRY(q);
> MemTxResult ret;
> int i;
>
> - ret = dma_memory_read(&address_space_memory, addr, cmd, sizeof(Cmd),
> - MEMTXATTRS_UNSPECIFIED);
> + ret = dma_memory_read(smmu_get_address_space(sec_idx), addr, cmd,
> + sizeof(Cmd), smmu_get_txattrs(sec_idx));
> if (ret != MEMTX_OK) {
> return ret;
> }
> @@ -123,7 +124,8 @@ static inline MemTxResult queue_read(SMMUQueue *q, Cmd *cmd)
> return ret;
> }
>
> -static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
> +static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in,
> + SMMUSecurityIndex sec_idx)
> {
> dma_addr_t addr = Q_PROD_ENTRY(q);
> MemTxResult ret;
> @@ -133,8 +135,8 @@ static MemTxResult queue_write(SMMUQueue *q, Evt *evt_in)
> for (i = 0; i < ARRAY_SIZE(evt.word); i++) {
> cpu_to_le32s(&evt.word[i]);
> }
> - ret = dma_memory_write(&address_space_memory, addr, &evt, sizeof(Evt),
> - MEMTXATTRS_UNSPECIFIED);
> + ret = dma_memory_write(smmu_get_address_space(sec_idx), addr, &evt,
> + sizeof(Evt), smmu_get_txattrs(sec_idx));
> if (ret != MEMTX_OK) {
> return ret;
> }
> @@ -157,7 +159,7 @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt,
> return MEMTX_ERROR;
> }
>
> - r = queue_write(q, evt);
> + r = queue_write(q, evt, sec_idx);
> if (r != MEMTX_OK) {
> return r;
> }
> @@ -1025,6 +1027,8 @@ static SMMUTranslationStatus smmuv3_do_translate(SMMUv3State *s, hwaddr addr,
> */
> class = ptw_info.is_ipa_descriptor ? SMMU_CLASS_TT : class;
> event->sec_idx = cfg->sec_idx;
> + event->txattrs = cfg->txattrs;
> + event->sec_idx = cfg->sec_idx;
> switch (ptw_info.type) {
> case SMMU_PTW_ERR_WALK_EABT:
> event->type = SMMU_EVT_F_WALK_EABT;
> @@ -1376,6 +1380,110 @@ static bool smmu_strtab_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> return smmuv3_is_smmu_enabled(s, sec_idx);
> }
>
> +static inline int smmuv3_get_cr0_cmdqen(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, CMDQEN);
> +}
> +
> +static inline int smmuv3_get_cr0ack_cmdqen(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].cr0ack, CR0, CMDQEN);
> +}
> +
> +static inline int smmuv3_get_cr0_eventqen(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].cr[0], CR0, EVENTQEN);
> +}
> +
> +static inline int smmuv3_get_cr0ack_eventqen(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].cr0ack, CR0, EVENTQEN);
> +}
> +
> +/* Check if MSI is supported */
> +static inline bool smmu_msi_supported(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].idr[0], IDR0, MSI);
> +}
> +
> +/* Check if the GERROR_IRQ_CFGx registers are writable */
> +static inline bool smmu_gerror_irq_cfg_writable(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
> + return false;
> + }
> +
> + bool ctrl_en = FIELD_EX32(s->bank[sec_idx].irq_ctrl,
> + IRQ_CTRL, GERROR_IRQEN);
> +
> + return !ctrl_en;
> +}
> +
> +/* Check if CMDQEN is disabled */
> +static bool smmu_cmdqen_disabled(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> +{
> + int cr0_cmdqen = smmuv3_get_cr0_cmdqen(s, sec_idx);
> + int cr0ack_cmdqen = smmuv3_get_cr0ack_cmdqen(s, sec_idx);
> + return (cr0_cmdqen == 0 && cr0ack_cmdqen == 0);
> +}
> +
> +/* Check if CMDQ_BASE register is writable */
> +static bool smmu_cmdq_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> +{
> + /* Check QUEUES_PRESET - use NS bank as it's the global setting */
> + if (FIELD_EX32(s->bank[SMMU_SEC_IDX_NS].idr[1], IDR1, QUEUES_PRESET)) {
> + return false;
> + }
> +
> + return smmu_cmdqen_disabled(s, sec_idx);
> +}
> +
> +/* Check if EVENTQEN is disabled */
> +static bool smmu_eventqen_disabled(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> +{
> + int cr0_eventqen = smmuv3_get_cr0_eventqen(s, sec_idx);
> + int cr0ack_eventqen = smmuv3_get_cr0ack_eventqen(s, sec_idx);
> + return (cr0_eventqen == 0 && cr0ack_eventqen == 0);
> +}
> +
> +static bool smmu_idr1_queue_preset(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].idr[1], IDR1, QUEUES_PRESET);
> +}
> +
> +/* Check if EVENTQ_BASE register is writable */
> +static bool smmu_eventq_base_writable(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> +{
> + /* Check QUEUES_PRESET - use NS bank as it's the global setting */
> + if (smmu_idr1_queue_preset(s, SMMU_SEC_IDX_NS)) {
> + return false;
> + }
> +
> + return smmu_eventqen_disabled(s, sec_idx);
> +}
> +
> +static bool smmu_irq_ctl_evtq_irqen_disabled(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + return FIELD_EX32(s->bank[sec_idx].irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
> +}
> +
> +/* Check if EVENTQ_IRQ_CFGx is writable */
> +static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s,
> + SMMUSecurityIndex sec_idx)
> +{
> + if (smmu_msi_supported(s, sec_idx)) {
> + return false;
> + }
> +
> + return smmu_irq_ctl_evtq_irqen_disabled(s, sec_idx);
> +}
> +
> static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> {
> SMMUState *bs = ARM_SMMU(s);
> @@ -1405,7 +1513,7 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> break;
> }
>
> - if (queue_read(q, &cmd) != MEMTX_OK) {
> + if (queue_read(q, &cmd, sec_idx) != MEMTX_OK) {
> cmd_error = SMMU_CERROR_ABT;
> break;
> }
> @@ -1430,8 +1538,11 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> SMMUDevice *sdev = smmu_find_sdev(bs, sid);
>
> if (CMD_SSEC(&cmd)) {
> - cmd_error = SMMU_CERROR_ILL;
> - break;
> + if (sec_idx != SMMU_SEC_IDX_S) {
> + /* Secure Stream with Non-Secure command */
> + cmd_error = SMMU_CERROR_ILL;
> + break;
> + }
> }
>
> if (!sdev) {
> @@ -1450,8 +1561,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> SMMUSIDRange sid_range;
>
> if (CMD_SSEC(&cmd)) {
> - cmd_error = SMMU_CERROR_ILL;
> - break;
> + if (sec_idx != SMMU_SEC_IDX_S) {
> + cmd_error = SMMU_CERROR_ILL;
> + break;
> + }
> }
>
> mask = (1ULL << (range + 1)) - 1;
> @@ -1469,8 +1582,10 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> SMMUDevice *sdev = smmu_find_sdev(bs, sid);
>
> if (CMD_SSEC(&cmd)) {
> - cmd_error = SMMU_CERROR_ILL;
> - break;
> + if (sec_idx != SMMU_SEC_IDX_S) {
> + cmd_error = SMMU_CERROR_ILL;
> + break;
> + }
> }
>
> if (!sdev) {
> @@ -1624,6 +1739,13 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> s->bank[reg_sec_idx].strtab_base = data & SMMU_STRTAB_BASE_RESERVED;
> return MEMTX_OK;
> case A_CMDQ_BASE:
> + if (!smmu_cmdq_base_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "CMDQ_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_QUEUE_BASE_RESERVED;
> s->bank[reg_sec_idx].cmdq.base = data;
> s->bank[reg_sec_idx].cmdq.log2size = extract64(
> s->bank[reg_sec_idx].cmdq.base, 0, 5);
> @@ -1632,6 +1754,13 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> }
> return MEMTX_OK;
> case A_EVENTQ_BASE:
> + if (!smmu_eventq_base_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_QUEUE_BASE_RESERVED;
> s->bank[reg_sec_idx].eventq.base = data;
> s->bank[reg_sec_idx].eventq.log2size = extract64(
> s->bank[reg_sec_idx].eventq.base, 0, 5);
> @@ -1640,8 +1769,25 @@ static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> }
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0:
> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_IRQ_CFG0 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_EVENTQ_IRQ_CFG0_RESERVED;
> s->bank[reg_sec_idx].eventq_irq_cfg0 = data;
> return MEMTX_OK;
> + case A_GERROR_IRQ_CFG0:
> + if (!smmu_gerror_irq_cfg_writable(s, reg_sec_idx)) {
> + /* SMMU_(*_)IRQ_CTRL.GERROR_IRQEN == 1: this write is IGNORED */
> + qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 write ignored: "
> + "register is RO when IRQ enabled\n");
> + return MEMTX_OK;
> + }
> + s->bank[reg_sec_idx].gerror_irq_cfg0 =
> + data & SMMU_GERROR_IRQ_CFG0_RESERVED;
> + return MEMTX_OK;
> default:
> qemu_log_mask(LOG_UNIMP,
> "%s Unexpected 64-bit access to 0x%"PRIx64" (WI)\n",
> @@ -1666,7 +1812,15 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> s->bank[reg_sec_idx].cr[1] = data;
> return MEMTX_OK;
> case A_CR2:
> - s->bank[reg_sec_idx].cr[2] = data;
> + if (smmuv3_is_smmu_enabled(s, reg_sec_idx)) {
> + /* Allow write: SMMUEN is 0 in both CR0 and CR0ACK */
> + s->bank[reg_sec_idx].cr[2] = data;
> + } else {
> + /* CONSTRAINED UNPREDICTABLE behavior: Ignore this write */
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "CR2 write ignored: register is read-only when "
> + "CR0.SMMUEN or CR0ACK.SMMUEN is set\n");
> + }
> return MEMTX_OK;
> case A_IRQ_CTRL:
> s->bank[reg_sec_idx].irq_ctrl = data;
> @@ -1680,14 +1834,28 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> smmuv3_cmdq_consume(s, reg_sec_idx);
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0: /* 64b */
> - s->bank[reg_sec_idx].gerror_irq_cfg0 =
> - deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32, data);
> - return MEMTX_OK;
> case A_GERROR_IRQ_CFG0 + 4:
> - s->bank[reg_sec_idx].gerror_irq_cfg0 =
> - deposit64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32, data);
> + if (!smmu_gerror_irq_cfg_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG0 write ignored: "
> + "register is RO when IRQ enabled\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_GERROR_IRQ_CFG0_RESERVED;
> + if (reg_offset == A_GERROR_IRQ_CFG0) {
> + s->bank[reg_sec_idx].gerror_irq_cfg0 = deposit64(
> + s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32, data);
> + } else {
> + s->bank[reg_sec_idx].gerror_irq_cfg0 = deposit64(
> + s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32, data);
> + }
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG1:
> + if (!smmu_gerror_irq_cfg_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "GERROR_IRQ_CFG1 write ignored: "
> + "register is RO when IRQ enabled\n");
> + return MEMTX_OK;
> + }
> s->bank[reg_sec_idx].gerror_irq_cfg1 = data;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG2:
> @@ -1735,60 +1903,106 @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
> }
> return MEMTX_OK;
> case A_CMDQ_BASE: /* 64b */
> - s->bank[reg_sec_idx].cmdq.base =
> - deposit64(s->bank[reg_sec_idx].cmdq.base, 0, 32, data);
> - s->bank[reg_sec_idx].cmdq.log2size =
> - extract64(s->bank[reg_sec_idx].cmdq.base, 0, 5);
> - if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
> - s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
> - }
> - return MEMTX_OK;
> case A_CMDQ_BASE + 4: /* 64b */
> - s->bank[reg_sec_idx].cmdq.base =
> - deposit64(s->bank[reg_sec_idx].cmdq.base, 32, 32, data);
> - return MEMTX_OK;
> + if (!smmu_cmdq_base_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "CMDQ_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_QUEUE_BASE_RESERVED;
> + if (reg_offset == A_CMDQ_BASE) {
> + s->bank[reg_sec_idx].cmdq.base = deposit64(
> + s->bank[reg_sec_idx].cmdq.base, 0, 32, data);
> +
> + s->bank[reg_sec_idx].cmdq.log2size = extract64(
> + s->bank[reg_sec_idx].cmdq.base, 0, 5);
> + if (s->bank[reg_sec_idx].cmdq.log2size > SMMU_CMDQS) {
> + s->bank[reg_sec_idx].cmdq.log2size = SMMU_CMDQS;
> + }
> + } else {
> + s->bank[reg_sec_idx].cmdq.base = deposit64(
> + s->bank[reg_sec_idx].cmdq.base, 32, 32, data);
> + }
> +
> return MEMTX_OK;
> case A_CMDQ_PROD:
> s->bank[reg_sec_idx].cmdq.prod = data;
> smmuv3_cmdq_consume(s, reg_sec_idx);
> return MEMTX_OK;
> case A_CMDQ_CONS:
> + if (!smmu_cmdqen_disabled(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "CMDQ_CONS write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> s->bank[reg_sec_idx].cmdq.cons = data;
> return MEMTX_OK;
> case A_EVENTQ_BASE: /* 64b */
> - s->bank[reg_sec_idx].eventq.base =
> - deposit64(s->bank[reg_sec_idx].eventq.base, 0, 32, data);
> - s->bank[reg_sec_idx].eventq.log2size =
> - extract64(s->bank[reg_sec_idx].eventq.base, 0, 5);
> - if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
> - s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
> - }
> - s->bank[reg_sec_idx].eventq.cons = data;
> - return MEMTX_OK;
> case A_EVENTQ_BASE + 4:
> - s->bank[reg_sec_idx].eventq.base =
> - deposit64(s->bank[reg_sec_idx].eventq.base, 32, 32, data);
> - return MEMTX_OK;
> + if (!smmu_eventq_base_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_BASE write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_QUEUE_BASE_RESERVED;
> + if (reg_offset == A_EVENTQ_BASE) {
> + s->bank[reg_sec_idx].eventq.base = deposit64(
> + s->bank[reg_sec_idx].eventq.base, 0, 32, data);
> +
> + s->bank[reg_sec_idx].eventq.log2size = extract64(
> + s->bank[reg_sec_idx].eventq.base, 0, 5);
> + if (s->bank[reg_sec_idx].eventq.log2size > SMMU_EVENTQS) {
> + s->bank[reg_sec_idx].eventq.log2size = SMMU_EVENTQS;
> + }
> + } else {
> + s->bank[reg_sec_idx].eventq.base = deposit64(
> + s->bank[reg_sec_idx].eventq.base, 32, 32, data);
> + }
> return MEMTX_OK;
> case A_EVENTQ_PROD:
> + if (!smmu_eventqen_disabled(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_PROD write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> s->bank[reg_sec_idx].eventq.prod = data;
> return MEMTX_OK;
> case A_EVENTQ_CONS:
> s->bank[reg_sec_idx].eventq.cons = data;
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0: /* 64b */
> - s->bank[reg_sec_idx].eventq_irq_cfg0 =
> - deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32, data);
> - return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG0 + 4:
> - s->bank[reg_sec_idx].eventq_irq_cfg0 =
> - deposit64(s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32, data);
> - return MEMTX_OK;
> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_IRQ_CFG0 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> +
> + data &= SMMU_EVENTQ_IRQ_CFG0_RESERVED;
> + if (reg_offset == A_EVENTQ_IRQ_CFG0) {
> + s->bank[reg_sec_idx].eventq_irq_cfg0 = deposit64(
> + s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32, data);
> + } else {
> + s->bank[reg_sec_idx].eventq_irq_cfg0 = deposit64(
> + s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32, data);
> + }
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG1:
> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_IRQ_CFG1 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> s->bank[reg_sec_idx].eventq_irq_cfg1 = data;
> return MEMTX_OK;
> case A_EVENTQ_IRQ_CFG2:
> + if (!smmu_eventq_irq_cfg_writable(s, reg_sec_idx)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "EVENTQ_IRQ_CFG2 write ignored: register is RO\n");
> + return MEMTX_OK;
> + }
> s->bank[reg_sec_idx].eventq_irq_cfg2 = data;
> return MEMTX_OK;
> case A_S_INIT_ALIAS:
> @@ -1848,6 +2062,11 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
> uint32_t reg_offset = offset & 0xfff;
> switch (reg_offset) {
> case A_GERROR_IRQ_CFG0:
> + /* Both SMMU_GERROR_IRQ_CFG0 and SMMU_S_GERROR_IRQ_CFG0 check SMMU_IDR0.MSI */
> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
> + *data = 0; /* RES0 */
> + return MEMTX_OK;
> + }
> *data = s->bank[reg_sec_idx].gerror_irq_cfg0;
> return MEMTX_OK;
> case A_STRTAB_BASE:
> @@ -1859,6 +2078,13 @@ static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
> case A_EVENTQ_BASE:
> *data = s->bank[reg_sec_idx].eventq.base;
> return MEMTX_OK;
> + case A_EVENTQ_IRQ_CFG0:
> + /* MSI support depends on the register's security domain */
> + if (!smmu_msi_supported(s, reg_sec_idx)) {
> + *data = 0;
> + return MEMTX_OK;
> + }
> +
> *data = s->bank[reg_sec_idx].eventq_irq_cfg0;
> return MEMTX_OK;
> default:
> @@ -1917,16 +2143,31 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
> *data = s->bank[reg_sec_idx].gerrorn;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG0: /* 64b */
> - *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32);
> - return MEMTX_OK;
> case A_GERROR_IRQ_CFG0 + 4:
> - *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32);
> - return MEMTX_OK;
> + /* Both SMMU_GERROR_IRQ_CFG0 and SMMU_S_GERROR_IRQ_CFG0 check SMMU_IDR0.MSI */
> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
> + *data = 0; /* RES0 */
> + return MEMTX_OK;
> + }
> +
> + if (reg_offset == A_GERROR_IRQ_CFG0) {
> + *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 0, 32);
> + } else {
> + *data = extract64(s->bank[reg_sec_idx].gerror_irq_cfg0, 32, 32);
> + }
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG1:
> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
> + *data = 0; /* RES0 */
> + return MEMTX_OK;
> + }
> *data = s->bank[reg_sec_idx].gerror_irq_cfg1;
> return MEMTX_OK;
> case A_GERROR_IRQ_CFG2:
> + if (!smmu_msi_supported(s, SMMU_SEC_IDX_NS)) {
> + *data = 0; /* RES0 */
> + return MEMTX_OK;
> + }
> *data = s->bank[reg_sec_idx].gerror_irq_cfg2;
> return MEMTX_OK;
> case A_STRTAB_BASE: /* 64b */
> @@ -1962,6 +2203,33 @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
> case A_EVENTQ_CONS:
> *data = s->bank[reg_sec_idx].eventq.cons;
> return MEMTX_OK;
> + case A_EVENTQ_IRQ_CFG0:
> + case A_EVENTQ_IRQ_CFG0 + 4:
> + if (!smmu_msi_supported(s, reg_sec_idx)) {
> + *data = 0;
> + return MEMTX_OK;
> + }
> +
> + if (reg_offset == A_EVENTQ_IRQ_CFG0) {
> + *data = extract64(s->bank[reg_sec_idx].eventq_irq_cfg0, 0, 32);
> + } else {
> + *data = extract64(s->bank[reg_sec_idx].eventq_irq_cfg0, 32, 32);
> + }
> + return MEMTX_OK;
> + case A_EVENTQ_IRQ_CFG1:
> + if (!smmu_msi_supported(s, reg_sec_idx)) {
> + *data = 0;
> + return MEMTX_OK;
> + }
> + *data = s->bank[reg_sec_idx].eventq_irq_cfg1;
> + return MEMTX_OK;
> + case A_EVENTQ_IRQ_CFG2:
> + if (!smmu_msi_supported(s, reg_sec_idx)) {
> + *data = 0;
> + return MEMTX_OK;
> + }
> + *data = s->bank[reg_sec_idx].eventq_irq_cfg2;
> + return MEMTX_OK;
> case A_S_INIT_ALIAS:
> *data = 0;
> return MEMTX_OK;
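Taken together, the three SSEC hunks above implement a single rule; a
minimal sketch of the intended decision (illustrative only, reusing the
names from the diff):

    /* An SSEC=1 command is only legal on the Secure command queue */
    if (CMD_SSEC(&cmd) && sec_idx != SMMU_SEC_IDX_S) {
        cmd_error = SMMU_CERROR_ILL;  /* Secure stream, Non-secure queue */
        break;
    }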
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v2 11/14] hw/arm/smmuv3: Harden security checks in MMIO handlers
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (9 preceding siblings ...)
2025-09-25 16:26 ` [PATCH v2 10/14] hw/arm/smmuv3: Add banked support for queues and error handling Tao Tang
@ 2025-09-25 16:26 ` Tao Tang
2025-09-29 15:30 ` Eric Auger
2025-09-26 3:08 ` [PATCH v2 12/14] hw/arm/smmuv3: Use iommu_index to represent the security context Tao Tang
` (4 subsequent siblings)
15 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-25 16:26 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa, Tao Tang
This patch hardens the security validation within the main MMIO
dispatcher functions (smmu_read_mmio and smmu_write_mmio).
First, accesses to the secure register space are now correctly gated by
whether the SECURE_IMPL feature is enabled in the model. This prevents
guest software from accessing the secure programming interface when it is
disabled, though some registers are exempt from this check as per the
architecture.
Second, the check for the input stream's security is made more robust.
It now validates not only the legacy MemTxAttrs.secure bit, but also
the .space field. This brings the SMMU's handling of security spaces
into full alignment with the PE.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 58 insertions(+)
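To illustrate the second point, the attributes of an incoming MMIO
access carry both the legacy bit and the security-space enum; a sketch
of what the new check accepts (field values are illustrative, not taken
from this series):

    MemTxAttrs attrs = {
        .secure = 1,            /* legacy Secure/Non-secure bit */
        .space = ARMSS_Secure,  /* ARMSecuritySpace, as used by the PE */
    };

The check below treats the access as Secure if either signal says so.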
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 53c7eff0e3..eec36d5fd2 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -1484,6 +1484,12 @@ static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s,
return smmu_irq_ctl_evtq_irqen_disabled(s, sec_idx);
}
+/* Check if the SMMU hardware itself implements secure state features */
+static inline bool smmu_hw_secure_implemented(SMMUv3State *s)
+{
+ return FIELD_EX32(s->bank[SMMU_SEC_IDX_S].idr[1], S_IDR1, SECURE_IMPL);
+}
+
static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
{
SMMUState *bs = ARM_SMMU(s);
@@ -1723,6 +1729,43 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
return 0;
}
+static bool is_secure_impl_exempt_reg(hwaddr offset)
+{
+ switch (offset) {
+ case A_S_EVENTQ_IRQ_CFG0:
+ case A_S_EVENTQ_IRQ_CFG1:
+ case A_S_EVENTQ_IRQ_CFG2:
+ return true;
+ default:
+ return false;
+ }
+}
+
+/* Helper function for Secure register access validation */
+static bool smmu_check_secure_access(SMMUv3State *s, MemTxAttrs attrs,
+ hwaddr offset, bool is_read)
+{ /* Check if the access is secure */
+ if (!(attrs.space == ARMSS_Secure || attrs.space == ARMSS_Root ||
+ attrs.secure == 1)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "%s: Non-secure %s attempt at offset 0x%" PRIx64 " (%s)\n",
+ __func__, is_read ? "read" : "write", offset,
+ is_read ? "RAZ" : "WI");
+ return false;
+ }
+
+ /* Check if the secure state is implemented. Some registers are exempted */
+ /* from this check. */
+ if (!is_secure_impl_exempt_reg(offset) && !smmu_hw_secure_implemented(s)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "%s: Secure %s attempt at offset 0x%" PRIx64 ". But Secure state "
+ "is not implemented (RES0)\n",
+ __func__, is_read ? "read" : "write", offset);
+ return false;
+ }
+ return true;
+}
+
static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
uint64_t data, MemTxAttrs attrs,
SMMUSecurityIndex reg_sec_idx)
@@ -2038,6 +2081,13 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
/* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
offset &= ~0x10000;
SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
+ if (offset >= SMMU_SECURE_BASE_OFFSET) {
+ if (!smmu_check_secure_access(s, attrs, offset, false)) {
+ trace_smmuv3_write_mmio(offset, data, size, MEMTX_OK);
+ return MEMTX_OK;
+ }
+ reg_sec_idx = SMMU_SEC_IDX_S;
+ }
switch (size) {
case 8:
@@ -2252,6 +2302,14 @@ static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
/* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
offset &= ~0x10000;
SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
+ if (offset >= SMMU_SECURE_BASE_OFFSET) {
+ if (!smmu_check_secure_access(s, attrs, offset, true)) {
+ *data = 0;
+ trace_smmuv3_read_mmio(offset, *data, size, MEMTX_OK);
+ return MEMTX_OK;
+ }
+ reg_sec_idx = SMMU_SEC_IDX_S;
+ }
switch (size) {
case 8:
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread* Re: [PATCH v2 11/14] hw/arm/smmuv3: Harden security checks in MMIO handlers
2025-09-25 16:26 ` [PATCH v2 11/14] hw/arm/smmuv3: Harden security checks in MMIO handlers Tao Tang
@ 2025-09-29 15:30 ` Eric Auger
2025-09-29 15:56 ` Tao Tang
0 siblings, 1 reply; 48+ messages in thread
From: Eric Auger @ 2025-09-29 15:30 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
Hi Tao,
On 9/25/25 6:26 PM, Tao Tang wrote:
> This patch hardens the security validation within the main MMIO
> dispatcher functions (smmu_read_mmio and smmu_write_mmio).
>
> First, accesses to the secure register space are now correctly gated by
> whether the SECURE_IMPL feature is enabled in the model. This prevents
> guest software from accessing the secure programming interface when it is
> disabled, though some registers are exempt from this check as per the
> architecture.
>
> Second, the check for the input stream's security is made more robust.
> It now validates not only the legacy MemTxAttrs.secure bit, but also
> the .space field. This brings the SMMU's handling of security spaces
> into full alignment with the PE.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 58 insertions(+)
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 53c7eff0e3..eec36d5fd2 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -1484,6 +1484,12 @@ static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s,
> return smmu_irq_ctl_evtq_irqen_disabled(s, sec_idx);
> }
>
> +/* Check if the SMMU hardware itself implements secure state features */
> +static inline bool smmu_hw_secure_implemented(SMMUv3State *s)
> +{
> + return FIELD_EX32(s->bank[SMMU_SEC_IDX_S].idr[1], S_IDR1, SECURE_IMPL);
> +}
> +
> static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> {
> SMMUState *bs = ARM_SMMU(s);
> @@ -1723,6 +1729,43 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
> return 0;
> }
>
> +static bool is_secure_impl_exempt_reg(hwaddr offset)
Worth a comment: some secure registers can be accessed even if secure HW
is not implemented. Returns true if this is the case or something alike.
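Something along these lines perhaps (illustrative wording only):

    /*
     * Some Secure registers remain accessible even when the Secure
     * state is not implemented (S_IDR1.SECURE_IMPL == 0). Return true
     * for registers exempt from the SECURE_IMPL gating below.
     */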
> +{
> + switch (offset) {
> + case A_S_EVENTQ_IRQ_CFG0:
> + case A_S_EVENTQ_IRQ_CFG1:
> + case A_S_EVENTQ_IRQ_CFG2:
> + return true;
> + default:
> + return false;
> + }
> +}
> +
> +/* Helper function for Secure register access validation */
> +static bool smmu_check_secure_access(SMMUv3State *s, MemTxAttrs attrs,
> + hwaddr offset, bool is_read)
> +{ /* Check if the access is secure */
> + if (!(attrs.space == ARMSS_Secure || attrs.space == ARMSS_Root ||
First occurrence of ARMSS_Root in hw dir? Is it needed?
> + attrs.secure == 1)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "%s: Non-secure %s attempt at offset 0x%" PRIx64 " (%s)\n",
> + __func__, is_read ? "read" : "write", offset,
> + is_read ? "RAZ" : "WI");
> + return false;
> + }
> +
> + /* Check if the secure state is implemented. Some registers are exempted */
> + /* from this check. */
> + if (!is_secure_impl_exempt_reg(offset) && !smmu_hw_secure_implemented(s)) {
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "%s: Secure %s attempt at offset 0x%" PRIx64 ". But Secure state "
> + "is not implemented (RES0)\n",
> + __func__, is_read ? "read" : "write", offset);
> + return false;
> + }
> + return true;
> +}
> +
> static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
> uint64_t data, MemTxAttrs attrs,
> SMMUSecurityIndex reg_sec_idx)
> @@ -2038,6 +2081,13 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
> offset &= ~0x10000;
> SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
> + if (offset >= SMMU_SECURE_BASE_OFFSET) {
> + if (!smmu_check_secure_access(s, attrs, offset, false)) {
> + trace_smmuv3_write_mmio(offset, data, size, MEMTX_OK);
> + return MEMTX_OK;
> + }
> + reg_sec_idx = SMMU_SEC_IDX_S;
> + }
>
> switch (size) {
> case 8:
> @@ -2252,6 +2302,14 @@ static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
> offset &= ~0x10000;
> SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
> + if (offset >= SMMU_SECURE_BASE_OFFSET) {
> + if (!smmu_check_secure_access(s, attrs, offset, true)) {
> + *data = 0;
> + trace_smmuv3_read_mmio(offset, *data, size, MEMTX_OK);
> + return MEMTX_OK;
> + }
> + reg_sec_idx = SMMU_SEC_IDX_S;
> + }
>
> switch (size) {
> case 8:
Thanks
Eric
^ permalink raw reply [flat|nested] 48+ messages in thread* Re: [PATCH v2 11/14] hw/arm/smmuv3: Harden security checks in MMIO handlers
2025-09-29 15:30 ` Eric Auger
@ 2025-09-29 15:56 ` Tao Tang
2025-09-30 13:13 ` Eric Auger
0 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-29 15:56 UTC (permalink / raw)
To: Peter Maydell, Eric Auger
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/9/29 23:30, Eric Auger wrote:
> Hi Tao,
>
> On 9/25/25 6:26 PM, Tao Tang wrote:
>> This patch hardens the security validation within the main MMIO
>> dispatcher functions (smmu_read_mmio and smmu_write_mmio).
>>
>> First, accesses to the secure register space are now correctly gated by
>> whether the SECURE_IMPL feature is enabled in the model. This prevents
>> guest software from accessing the secure programming interface when it is
>> disabled, though some registers are exempt from this check as per the
>> architecture.
>>
>> Second, the check for the input stream's security is made more robust.
>> It now validates not only the legacy MemTxAttrs.secure bit, but also
>> the .space field. This brings the SMMU's handling of security spaces
>> into full alignment with the PE.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 58 insertions(+)
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index 53c7eff0e3..eec36d5fd2 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -1484,6 +1484,12 @@ static bool smmu_eventq_irq_cfg_writable(SMMUv3State *s,
>> return smmu_irq_ctl_evtq_irqen_disabled(s, sec_idx);
>> }
>>
>> +/* Check if the SMMU hardware itself implements secure state features */
>> +static inline bool smmu_hw_secure_implemented(SMMUv3State *s)
>> +{
>> + return FIELD_EX32(s->bank[SMMU_SEC_IDX_S].idr[1], S_IDR1, SECURE_IMPL);
>> +}
>> +
>> static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> {
>> SMMUState *bs = ARM_SMMU(s);
>> @@ -1723,6 +1729,43 @@ static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex sec_idx)
>> return 0;
>> }
>>
>> +static bool is_secure_impl_exempt_reg(hwaddr offset)
> Worth a comment: some secure registers can be accessed even if secure HW
> is not implemented. Returns true if this is the case or something alike.
You're right, that function definitely needs a comment to explain the
architectural exception it handles. I will add one in the next version
to improve clarity.
>> +{
>> + switch (offset) {
>> + case A_S_EVENTQ_IRQ_CFG0:
>> + case A_S_EVENTQ_IRQ_CFG1:
>> + case A_S_EVENTQ_IRQ_CFG2:
>> + return true;
>> + default:
>> + return false;
>> + }
>> +}
>> +
>> +/* Helper function for Secure register access validation */
>> +static bool smmu_check_secure_access(SMMUv3State *s, MemTxAttrs attrs,
>> + hwaddr offset, bool is_read)
>> +{ /* Check if the access is secure */
>> + if (!(attrs.space == ARMSS_Secure || attrs.space == ARMSS_Root ||
> First occurrence of ARMSS_Root in hw dir? Is it needed?
This is a good question, and I'd like to clarify your expectation. My
thinking was that if we are using ARMSecuritySpace to propagate the
security context at the device level, then ARMSS_Root will eventually be
part of this check.
Is your suggestion that I should remove the ARMSS_Root check for now, as
it's not strictly necessary for the current Secure-state implementation,
and only re-introduce it when full Realm/Root support is added to the
SMMU model? I'm happy to do that to keep this patch focused.
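For concreteness, the reduced check would then be something like
(sketch, keeping only the states the current model supports):

    /* Secure access check without ARMSS_Root (illustrative only) */
    if (!(attrs.space == ARMSS_Secure || attrs.secure == 1)) {
        /* reads are RAZ, writes are WI */
        return false;
    }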
Thanks,
Tao
>> + attrs.secure == 1)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "%s: Non-secure %s attempt at offset 0x%" PRIx64 " (%s)\n",
>> + __func__, is_read ? "read" : "write", offset,
>> + is_read ? "RAZ" : "WI");
>> + return false;
>> + }
>> +
>> + /* Check if the secure state is implemented. Some registers are exempted */
>> + /* from this check. */
>> + if (!is_secure_impl_exempt_reg(offset) && !smmu_hw_secure_implemented(s)) {
>> + qemu_log_mask(LOG_GUEST_ERROR,
>> + "%s: Secure %s attempt at offset 0x%" PRIx64 ". But Secure state "
>> + "is not implemented (RES0)\n",
>> + __func__, is_read ? "read" : "write", offset);
>> + return false;
>> + }
>> + return true;
>> +}
>> +
>> static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
>> uint64_t data, MemTxAttrs attrs,
>> SMMUSecurityIndex reg_sec_idx)
>> @@ -2038,6 +2081,13 @@ static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
>> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
>> offset &= ~0x10000;
>> SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
>> + if (offset >= SMMU_SECURE_BASE_OFFSET) {
>> + if (!smmu_check_secure_access(s, attrs, offset, false)) {
>> + trace_smmuv3_write_mmio(offset, data, size, MEMTX_OK);
>> + return MEMTX_OK;
>> + }
>> + reg_sec_idx = SMMU_SEC_IDX_S;
>> + }
>>
>> switch (size) {
>> case 8:
>> @@ -2252,6 +2302,14 @@ static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
>> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
>> offset &= ~0x10000;
>> SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
>> + if (offset >= SMMU_SECURE_BASE_OFFSET) {
>> + if (!smmu_check_secure_access(s, attrs, offset, true)) {
>> + *data = 0;
>> + trace_smmuv3_read_mmio(offset, *data, size, MEMTX_OK);
>> + return MEMTX_OK;
>> + }
>> + reg_sec_idx = SMMU_SEC_IDX_S;
>> + }
>>
>> switch (size) {
>> case 8:
> Thanks
>
> Eric
^ permalink raw reply [flat|nested] 48+ messages in thread* Re: [PATCH v2 11/14] hw/arm/smmuv3: Harden security checks in MMIO handlers
2025-09-29 15:56 ` Tao Tang
@ 2025-09-30 13:13 ` Eric Auger
0 siblings, 0 replies; 48+ messages in thread
From: Eric Auger @ 2025-09-30 13:13 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Tao,
On 9/29/25 5:56 PM, Tao Tang wrote:
> Hi Eric,
>
> On 2025/9/29 23:30, Eric Auger wrote:
>> Hi Tao,
>>
>> On 9/25/25 6:26 PM, Tao Tang wrote:
>>> This patch hardens the security validation within the main MMIO
>>> dispatcher functions (smmu_read_mmio and smmu_write_mmio).
>>>
>>> First, accesses to the secure register space are now correctly gated by
>>> whether the SECURE_IMPL feature is enabled in the model. This prevents
>>> guest software from accessing the secure programming interface when
>>> it is
>>> disabled, though some registers are exempt from this check as per the
>>> architecture.
>>>
>>> Second, the check for the input stream's security is made more robust.
>>> It now validates not only the legacy MemTxAttrs.secure bit, but also
>>> the .space field. This brings the SMMU's handling of security spaces
>>> into full alignment with the PE.
>>>
>>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>>> ---
>>> hw/arm/smmuv3.c | 58
>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>> 1 file changed, 58 insertions(+)
>>>
>>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>>> index 53c7eff0e3..eec36d5fd2 100644
>>> --- a/hw/arm/smmuv3.c
>>> +++ b/hw/arm/smmuv3.c
>>> @@ -1484,6 +1484,12 @@ static bool
>>> smmu_eventq_irq_cfg_writable(SMMUv3State *s,
>>> return smmu_irq_ctl_evtq_irqen_disabled(s, sec_idx);
>>> }
>>> +/* Check if the SMMU hardware itself implements secure state
>>> features */
>>> +static inline bool smmu_hw_secure_implemented(SMMUv3State *s)
>>> +{
>>> + return FIELD_EX32(s->bank[SMMU_SEC_IDX_S].idr[1], S_IDR1,
>>> SECURE_IMPL);
>>> +}
>>> +
>>> static int smmuv3_cmdq_consume(SMMUv3State *s, SMMUSecurityIndex
>>> sec_idx)
>>> {
>>> SMMUState *bs = ARM_SMMU(s);
>>> @@ -1723,6 +1729,43 @@ static int smmuv3_cmdq_consume(SMMUv3State
>>> *s, SMMUSecurityIndex sec_idx)
>>> return 0;
>>> }
>>> +static bool is_secure_impl_exempt_reg(hwaddr offset)
>> Worth a comment: some secure registers can be accessed even if secure HW
>> is not implemented. Returns true if this is the case or something alike.
>
>
> You're right, that function definitely needs a comment to explain the
> architectural exception it handles. I will add one in the next version
> to improve clarity.
>
>>> +{
>>> + switch (offset) {
>>> + case A_S_EVENTQ_IRQ_CFG0:
>>> + case A_S_EVENTQ_IRQ_CFG1:
>>> + case A_S_EVENTQ_IRQ_CFG2:
>>> + return true;
>>> + default:
>>> + return false;
>>> + }
>>> +}
>>> +
>>> +/* Helper function for Secure register access validation */
>>> +static bool smmu_check_secure_access(SMMUv3State *s, MemTxAttrs attrs,
>>> + hwaddr offset, bool is_read)
>>> +{ /* Check if the access is secure */
>>> + if (!(attrs.space == ARMSS_Secure || attrs.space == ARMSS_Root ||
>> First occurrence of ARMSS_Root in hw dir? Is it needed?
>
>
> This is a good question, and I'd like to clarify your expectation. My
> thinking was that if we are using ARMSecuritySpace to propagate the
> security context at the device level, then ARMSS_Root will eventually
> be part of this check.
>
> Is your suggestion that I should remove the ARMSS_Root check for now,
> as it's not strictly necessary for the current Secure-state
> implementation, and only re-introduce it when full Realm/Root support
> is added to the SMMU model? I'm happy to do that to keep this patch
> focused.
Well I think I would remove it if not supported anywhere. As an
alternative, if we can get this value and it is definitively not
supported by the code, we can assert.
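i.e. something like (sketch):

    /* Root is not handled anywhere in the SMMU model yet */
    g_assert(attrs.space != ARMSS_Root);
    if (!(attrs.space == ARMSS_Secure || attrs.secure == 1)) {
        /* RAZ/WI as before */
        return false;
    }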
Thanks
Eric
>
> Thanks,
> Tao
>
>>> + attrs.secure == 1)) {
>>> + qemu_log_mask(LOG_GUEST_ERROR,
>>> + "%s: Non-secure %s attempt at offset 0x%" PRIx64 "
>>> (%s)\n",
>>> + __func__, is_read ? "read" : "write", offset,
>>> + is_read ? "RAZ" : "WI");
>>> + return false;
>>> + }
>>> +
>>> + /* Check if the secure state is implemented. Some registers are
>>> exempted */
>>> + /* from this check. */
>>> + if (!is_secure_impl_exempt_reg(offset) &&
>>> !smmu_hw_secure_implemented(s)) {
>>> + qemu_log_mask(LOG_GUEST_ERROR,
>>> + "%s: Secure %s attempt at offset 0x%" PRIx64 ". But
>>> Secure state "
>>> + "is not implemented (RES0)\n",
>>> + __func__, is_read ? "read" : "write", offset);
>>> + return false;
>>> + }
>>> + return true;
>>> +}
>>> +
>>> static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
>>> uint64_t data, MemTxAttrs attrs,
>>> SMMUSecurityIndex reg_sec_idx)
>>> @@ -2038,6 +2081,13 @@ static MemTxResult smmu_write_mmio(void
>>> *opaque, hwaddr offset, uint64_t data,
>>> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact
>>> aliases */
>>> offset &= ~0x10000;
>>> SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
>>> + if (offset >= SMMU_SECURE_BASE_OFFSET) {
>>> + if (!smmu_check_secure_access(s, attrs, offset, false)) {
>>> + trace_smmuv3_write_mmio(offset, data, size, MEMTX_OK);
>>> + return MEMTX_OK;
>>> + }
>>> + reg_sec_idx = SMMU_SEC_IDX_S;
>>> + }
>>> switch (size) {
>>> case 8:
>>> @@ -2252,6 +2302,14 @@ static MemTxResult smmu_read_mmio(void
>>> *opaque, hwaddr offset, uint64_t *data,
>>> /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact
>>> aliases */
>>> offset &= ~0x10000;
>>> SMMUSecurityIndex reg_sec_idx = SMMU_SEC_IDX_NS;
>>> + if (offset >= SMMU_SECURE_BASE_OFFSET) {
>>> + if (!smmu_check_secure_access(s, attrs, offset, true)) {
>>> + *data = 0;
>>> + trace_smmuv3_read_mmio(offset, *data, size, MEMTX_OK);
>>> + return MEMTX_OK;
>>> + }
>>> + reg_sec_idx = SMMU_SEC_IDX_S;
>>> + }
>>> switch (size) {
>>> case 8:
>> Thanks
>>
>> Eric
>
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v2 12/14] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (10 preceding siblings ...)
2025-09-25 16:26 ` [PATCH v2 11/14] hw/arm/smmuv3: Harden security checks in MMIO handlers Tao Tang
@ 2025-09-26 3:08 ` Tao Tang
2025-09-26 3:08 ` [PATCH v2 13/14] hw/arm/smmuv3: Add property to enable Secure SMMU support Tao Tang
` (2 more replies)
2025-09-26 3:23 ` [PATCH v2 13/14] hw/arm/smmuv3: Add property to enable Secure SMMU support Tao Tang
` (3 subsequent siblings)
15 siblings, 3 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-26 3:08 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
Resending patches 12–14/14 that were missing due to a send issue. Sorry
for the noise.
The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to select
the programming interface. To support future extensions like RME, which
defines four security states (Non-secure, Secure, Realm, and Root), the
QEMU model must cleanly separate these contexts for all operations.
This commit leverages the generic iommu_index to represent this
security context. The core IOMMU layer now uses the SMMU's .attrs_to_index
callback to map a transaction's ARMSecuritySpace attribute to the
corresponding iommu_index.
This index is then passed down to smmuv3_translate and used throughout
the model to select the correct register bank and processing logic. This
makes the iommu_index the clear QEMU equivalent of the architectural
SEC_SID, cleanly separating the contexts for all subsequent lookups.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 37 ++++++++++++++++++++++++++++++++++++-
1 file changed, 36 insertions(+), 1 deletion(-)
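For reference, the generic memory layer resolves the index before the
translate callback is invoked; roughly (simplified sketch of the
existing core code path, not part of this patch):

    int iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr, attrs);
    ...
    entry = imrc->translate(iommu_mr, addr, flag, iommu_idx);

so once .attrs_to_index is wired up, smmuv3_translate receives the
SEC_SID-equivalent index for every transaction.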
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index eec36d5fd2..c92cc0f06a 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -1099,6 +1099,38 @@ static void smmuv3_fixup_event(SMMUEventInfo *event, hwaddr iova)
}
}
+static SMMUSecurityIndex smmuv3_attrs_to_security_index(MemTxAttrs attrs)
+{
+ switch (attrs.space) {
+ case ARMSS_Secure:
+ return SMMU_SEC_IDX_S;
+ case ARMSS_NonSecure:
+ default:
+ return SMMU_SEC_IDX_NS;
+ }
+}
+
+/*
+ * ARM SMMU IOMMU index mapping (implements SEC_SID from ARM SMMU):
+ * iommu_idx = 0: Non-secure transactions
+ * iommu_idx = 1: Secure transactions
+ *
+ * The iommu_idx parameter effectively implements the SEC_SID
+ * (Security Stream ID) attribute from the ARM SMMU architecture
+ * specification, which allows the SMMU to differentiate between
+ * secure and non-secure transactions at the hardware level.
+ */
+static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu, MemTxAttrs attrs)
+{
+ return smmuv3_attrs_to_security_index(attrs);
+}
+
+static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
+{
+ /* Support 2 IOMMU indexes for now: NS/S */
+ return SMMU_SEC_IDX_NUM;
+}
+
/* Entry point to SMMU, does everything. */
static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
IOMMUAccessFlags flag, int iommu_idx)
@@ -1111,7 +1143,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
.inval_ste_allowed = false};
SMMUTranslationStatus status;
SMMUTransCfg *cfg = NULL;
- SMMUSecurityIndex sec_idx = SMMU_SEC_IDX_NS;
+ SMMUSecurityIndex sec_idx = iommu_idx;
IOMMUTLBEntry entry = {
.target_as = &address_space_memory,
.iova = addr,
@@ -1155,6 +1187,7 @@ epilogue:
entry.perm = cached_entry->entry.perm;
entry.translated_addr = CACHED_ENTRY_TO_ADDR(cached_entry, addr);
entry.addr_mask = cached_entry->entry.addr_mask;
+ entry.target_as = cached_entry->entry.target_as;
trace_smmuv3_translate_success(mr->parent_obj.name, sid, addr,
entry.translated_addr, entry.perm,
cfg->stage);
@@ -2534,6 +2567,8 @@ static void smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
imrc->translate = smmuv3_translate;
imrc->notify_flag_changed = smmuv3_notify_flag_changed;
+ imrc->attrs_to_index = smmuv3_attrs_to_index;
+ imrc->num_indexes = smmuv3_num_indexes;
}
static const TypeInfo smmuv3_type_info = {
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread* [PATCH v2 13/14] hw/arm/smmuv3: Add property to enable Secure SMMU support
2025-09-26 3:08 ` [PATCH v2 12/14] hw/arm/smmuv3: Use iommu_index to represent the security context Tao Tang
@ 2025-09-26 3:08 ` Tao Tang
2025-09-26 3:08 ` [PATCH v2 14/14] hw/arm/smmuv3: Optional Secure bank migration via subsections Tao Tang
2025-09-29 15:33 ` [PATCH v2 12/14] hw/arm/smmuv3: Use iommu_index to represent the security context Eric Auger
2 siblings, 0 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-26 3:08 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
Resending patches 12–14/14 that were missing due to a send issue. Sorry
for the noise.
This commit completes the initial implementation of the Secure SMMUv3
model by making the feature user-configurable.
A new boolean property, "secure-impl", is introduced to the device.
This property defaults to false, ensuring backward compatibility for
existing machine types that do not expect the secure programming
interface.
When "secure-impl" is set to true, the smmuv3_init_regs function now
initializes the secure register bank (bank[SMMU_SEC_IDX_S]). Crucially,
the S_IDR1.SECURE_IMPL bit is set according to this property,
correctly advertising the presence of the secure functionality to the
guest.
This patch ties together all the previous refactoring work. With this
property enabled, the banked registers, security-aware queues, and
other secure features become active, allowing a guest to probe and
configure the Secure SMMU.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
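Usage (the virt machine creates the SMMU itself, so the property is
set with -global rather than -device):

    -global arm-smmuv3,secure-impl=on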
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index c92cc0f06a..80fbc25cf5 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -351,6 +351,16 @@ static void smmuv3_init_regs(SMMUv3State *s)
s->statusr = 0;
s->bank[SMMU_SEC_IDX_NS].gbpa = SMMU_GBPA_RESET_VAL;
+ /* Initialize Secure bank (SMMU_SEC_IDX_S) */
+ memset(s->bank[SMMU_SEC_IDX_S].idr, 0, sizeof(s->bank[SMMU_SEC_IDX_S].idr));
+ s->bank[SMMU_SEC_IDX_S].idr[1] = FIELD_DP32(s->bank[SMMU_SEC_IDX_S].idr[1],
+ S_IDR1, SECURE_IMPL,
+ s->secure_impl);
+ s->bank[SMMU_SEC_IDX_S].idr[1] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_S].idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
+ s->bank[SMMU_SEC_IDX_S].gbpa = SMMU_GBPA_RESET_VAL;
+ s->bank[SMMU_SEC_IDX_S].cmdq.entry_size = sizeof(struct Cmd);
+ s->bank[SMMU_SEC_IDX_S].eventq.entry_size = sizeof(struct Evt);
}
static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
@@ -2505,6 +2515,12 @@ static const Property smmuv3_properties[] = {
* Defaults to stage 1
*/
DEFINE_PROP_STRING("stage", SMMUv3State, stage),
+ /*
+ * SECURE_IMPL field in S_IDR1 register.
+ * Indicates whether secure state is implemented.
+ * Defaults to false (0)
+ */
+ DEFINE_PROP_BOOL("secure-impl", SMMUv3State, secure_impl, false),
};
static void smmuv3_instance_init(Object *obj)
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread* [PATCH v2 14/14] hw/arm/smmuv3: Optional Secure bank migration via subsections
2025-09-26 3:08 ` [PATCH v2 12/14] hw/arm/smmuv3: Use iommu_index to represent the security context Tao Tang
2025-09-26 3:08 ` [PATCH v2 13/14] hw/arm/smmuv3: Add property to enable Secure SMMU support Tao Tang
@ 2025-09-26 3:08 ` Tao Tang
2025-09-29 15:33 ` [PATCH v2 12/14] hw/arm/smmuv3: Use iommu_index to represent the security context Eric Auger
2 siblings, 0 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-26 3:08 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
Resending patches 12–14/14 that were missing due to a send issue. Sorry
for the noise.
Introduce a generic vmstate_smmuv3_bank that serializes a single SMMUv3
bank (registers and queues). Add a 'smmuv3/bank_s' subsection guarded by
secure_impl and a new 'migrate-secure-bank' property; when enabled, the S
bank is migrated. Add a 'smmuv3/gbpa_secure' subsection which is only sent
when GBPA differs from its reset value.
This keeps the existing migration stream unchanged by default and remains
backward compatible: the new subsections are only sent when needed, and with
'migrate-secure-bank' defaulting to off, the stream is identical to before.
This also prepares for future RME extensions (Realm/Root) by reusing the
bank subsection pattern.
Usage:
-global arm-smmuv3,secure-impl=on,migrate-secure-bank=on
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 70 +++++++++++++++++++++++++++++++++++++++++
include/hw/arm/smmuv3.h | 1 +
2 files changed, 71 insertions(+)
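For reference, a subsection is only written to the stream when its
.needed callback fires at save time; roughly (simplified sketch of the
generic vmstate behaviour):

    if (vmsd->needed && vmsd->needed(opaque)) {
        /* the subsection and its fields are emitted */
    }

which is why both new subsections stay invisible unless the conditions
above hold.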
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 80fbc25cf5..2a1e80d179 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -2450,6 +2450,53 @@ static const VMStateDescription vmstate_smmuv3_queue = {
},
};
+static const VMStateDescription vmstate_smmuv3_bank = {
+ .name = "smmuv3_bank",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .fields = (const VMStateField[]) {
+ VMSTATE_UINT32(features, SMMUv3RegBank),
+ VMSTATE_UINT8(sid_split, SMMUv3RegBank),
+ VMSTATE_UINT32_ARRAY(cr, SMMUv3RegBank, 3),
+ VMSTATE_UINT32(cr0ack, SMMUv3RegBank),
+ VMSTATE_UINT32(irq_ctrl, SMMUv3RegBank),
+ VMSTATE_UINT32(gerror, SMMUv3RegBank),
+ VMSTATE_UINT32(gerrorn, SMMUv3RegBank),
+ VMSTATE_UINT64(gerror_irq_cfg0, SMMUv3RegBank),
+ VMSTATE_UINT32(gerror_irq_cfg1, SMMUv3RegBank),
+ VMSTATE_UINT32(gerror_irq_cfg2, SMMUv3RegBank),
+ VMSTATE_UINT64(strtab_base, SMMUv3RegBank),
+ VMSTATE_UINT32(strtab_base_cfg, SMMUv3RegBank),
+ VMSTATE_UINT64(eventq_irq_cfg0, SMMUv3RegBank),
+ VMSTATE_UINT32(eventq_irq_cfg1, SMMUv3RegBank),
+ VMSTATE_UINT32(eventq_irq_cfg2, SMMUv3RegBank),
+ VMSTATE_STRUCT(cmdq, SMMUv3RegBank, 0,
+ vmstate_smmuv3_queue, SMMUQueue),
+ VMSTATE_STRUCT(eventq, SMMUv3RegBank, 0,
+ vmstate_smmuv3_queue, SMMUQueue),
+ VMSTATE_END_OF_LIST(),
+ },
+};
+
+static bool smmuv3_secure_bank_needed(void *opaque)
+{
+ SMMUv3State *s = opaque;
+
+ return s->secure_impl && s->migrate_secure_bank;
+}
+
+static const VMStateDescription vmstate_smmuv3_bank_s = {
+ .name = "smmuv3/bank_s",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .needed = smmuv3_secure_bank_needed,
+ .fields = (const VMStateField[]) {
+ VMSTATE_STRUCT(bank[SMMU_SEC_IDX_S], SMMUv3State, 0,
+ vmstate_smmuv3_bank, SMMUv3RegBank),
+ VMSTATE_END_OF_LIST(),
+ },
+};
+
static bool smmuv3_gbpa_needed(void *opaque)
{
SMMUv3State *s = opaque;
@@ -2469,6 +2516,25 @@ static const VMStateDescription vmstate_gbpa = {
}
};
+static bool smmuv3_gbpa_secure_needed(void *opaque)
+{
+ SMMUv3State *s = opaque;
+
+ return s->secure_impl && s->migrate_secure_bank &&
+ s->bank[SMMU_SEC_IDX_S].gbpa != SMMU_GBPA_RESET_VAL;
+}
+
+static const VMStateDescription vmstate_gbpa_secure = {
+ .name = "smmuv3/gbpa_secure",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .needed = smmuv3_gbpa_secure_needed,
+ .fields = (const VMStateField[]) {
+ VMSTATE_UINT32(bank[SMMU_SEC_IDX_S].gbpa, SMMUv3State),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
static const VMStateDescription vmstate_smmuv3 = {
.name = "smmuv3",
.version_id = 1,
@@ -2502,6 +2568,8 @@ static const VMStateDescription vmstate_smmuv3 = {
},
.subsections = (const VMStateDescription * const []) {
&vmstate_gbpa,
+ &vmstate_smmuv3_bank_s,
+ &vmstate_gbpa_secure,
NULL
}
};
@@ -2521,6 +2589,8 @@ static const Property smmuv3_properties[] = {
* Defaults to false (0)
*/
DEFINE_PROP_BOOL("secure-impl", SMMUv3State, secure_impl, false),
+ DEFINE_PROP_BOOL("migrate-secure-bank", SMMUv3State,
+ migrate_secure_bank, false),
};
static void smmuv3_instance_init(Object *obj)
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
index 572f15251e..5ffb609fa2 100644
--- a/include/hw/arm/smmuv3.h
+++ b/include/hw/arm/smmuv3.h
@@ -71,6 +71,7 @@ struct SMMUv3State {
QemuMutex mutex;
char *stage;
bool secure_impl;
+ bool migrate_secure_bank;
};
typedef enum {
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread* Re: [PATCH v2 12/14] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-09-26 3:08 ` [PATCH v2 12/14] hw/arm/smmuv3: Use iommu_index to represent the security context Tao Tang
2025-09-26 3:08 ` [PATCH v2 13/14] hw/arm/smmuv3: Add property to enable Secure SMMU support Tao Tang
2025-09-26 3:08 ` [PATCH v2 14/14] hw/arm/smmuv3: Optional Secure bank migration via subsections Tao Tang
@ 2025-09-29 15:33 ` Eric Auger
2025-09-29 16:02 ` Tao Tang
2 siblings, 1 reply; 48+ messages in thread
From: Eric Auger @ 2025-09-29 15:33 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 9/26/25 5:08 AM, Tao Tang wrote:
> Resending patches 12–14/14 that were missing due to a send issue. Sorry
> for the noise.
>
> The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to select
> the programming interface. To support future extensions like RME, which
> defines four security states (Non-secure, Secure, Realm, and Root), the
> QEMU model must cleanly separate these contexts for all operations.
>
> This commit leverages the generic iommu_index to represent this
> security context. The core IOMMU layer now uses the SMMU's .attrs_to_index
> callback to map a transaction's ARMSecuritySpace attribute to the
> corresponding iommu_index.
>
> This index is then passed down to smmuv3_translate and used throughout
> the model to select the correct register bank and processing logic. This
> makes the iommu_index the clear QEMU equivalent of the architectural
> SEC_SID, cleanly separating the contexts for all subsequent lookups.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3.c | 37 ++++++++++++++++++++++++++++++++++++-
> 1 file changed, 36 insertions(+), 1 deletion(-)
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index eec36d5fd2..c92cc0f06a 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -1099,6 +1099,38 @@ static void smmuv3_fixup_event(SMMUEventInfo *event, hwaddr iova)
> }
> }
>
> +static SMMUSecurityIndex smmuv3_attrs_to_security_index(MemTxAttrs attrs)
> +{
> + switch (attrs.space) {
> + case ARMSS_Secure:
> + return SMMU_SEC_IDX_S;
> + case ARMSS_NonSecure:
> + default:
> + return SMMU_SEC_IDX_NS;
> + }
> +}
> +
> +/*
> + * ARM SMMU IOMMU index mapping (implements SEC_SID from ARM SMMU):
> + * iommu_idx = 0: Non-secure transactions
> + * iommu_idx = 1: Secure transactions
> + *
> + * The iommu_idx parameter effectively implements the SEC_SID
> + * (Security Stream ID) attribute from the ARM SMMU architecture
> + * specification, which allows the SMMU to differentiate between
> + * secure and non-secure transactions at the hardware level.
> + */
> +static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu, MemTxAttrs attrs)
> +{
> + return smmuv3_attrs_to_security_index(attrs);
> +}
> +
> +static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
> +{
> + /* Support 2 IOMMU indexes for now: NS/S */
> + return SMMU_SEC_IDX_NUM;
> +}
> +
> /* Entry point to SMMU, does everything. */
> static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
> IOMMUAccessFlags flag, int iommu_idx)
> @@ -1111,7 +1143,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
> .inval_ste_allowed = false};
> SMMUTranslationStatus status;
> SMMUTransCfg *cfg = NULL;
> - SMMUSecurityIndex sec_idx = SMMU_SEC_IDX_NS;
> + SMMUSecurityIndex sec_idx = iommu_idx;
> IOMMUTLBEntry entry = {
> .target_as = &address_space_memory,
> .iova = addr,
> @@ -1155,6 +1187,7 @@ epilogue:
> entry.perm = cached_entry->entry.perm;
> entry.translated_addr = CACHED_ENTRY_TO_ADDR(cached_entry, addr);
> entry.addr_mask = cached_entry->entry.addr_mask;
> + entry.target_as = cached_entry->entry.target_as;
this change looks unrelated to the commit desc.
Eric
> trace_smmuv3_translate_success(mr->parent_obj.name, sid, addr,
> entry.translated_addr, entry.perm,
> cfg->stage);
> @@ -2534,6 +2567,8 @@ static void smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
>
> imrc->translate = smmuv3_translate;
> imrc->notify_flag_changed = smmuv3_notify_flag_changed;
> + imrc->attrs_to_index = smmuv3_attrs_to_index;
> + imrc->num_indexes = smmuv3_num_indexes;
> }
>
> static const TypeInfo smmuv3_type_info = {
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 48+ messages in thread* Re: [PATCH v2 12/14] hw/arm/smmuv3: Use iommu_index to represent the security context
2025-09-29 15:33 ` [PATCH v2 12/14] hw/arm/smmuv3: Use iommu_index to represent the security context Eric Auger
@ 2025-09-29 16:02 ` Tao Tang
0 siblings, 0 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-29 16:02 UTC (permalink / raw)
To: Peter Maydell, Eric Auger
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/9/29 23:33, Eric Auger wrote:
>
> On 9/26/25 5:08 AM, Tao Tang wrote:
>> Resending patches 12–14/14 that were missing due to a send issue. Sorry
>> for the noise.
>>
>> The Arm SMMUv3 architecture uses a SEC_SID (Secure StreamID) to select
>> the programming interface. To support future extensions like RME, which
>> defines four security states (Non-secure, Secure, Realm, and Root), the
>> QEMU model must cleanly separate these contexts for all operations.
>>
>> This commit leverages the generic iommu_index to represent this
>> security context. The core IOMMU layer now uses the SMMU's .attrs_to_index
>> callback to map a transaction's ARMSecuritySpace attribute to the
>> corresponding iommu_index.
>>
>> This index is then passed down to smmuv3_translate and used throughout
>> the model to select the correct register bank and processing logic. This
>> makes the iommu_index the clear QEMU equivalent of the architectural
>> SEC_SID, cleanly separating the contexts for all subsequent lookups.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3.c | 37 ++++++++++++++++++++++++++++++++++++-
>> 1 file changed, 36 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index eec36d5fd2..c92cc0f06a 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -1099,6 +1099,38 @@ static void smmuv3_fixup_event(SMMUEventInfo *event, hwaddr iova)
>> }
>> }
>>
>> +static SMMUSecurityIndex smmuv3_attrs_to_security_index(MemTxAttrs attrs)
>> +{
>> + switch (attrs.space) {
>> + case ARMSS_Secure:
>> + return SMMU_SEC_IDX_S;
>> + case ARMSS_NonSecure:
>> + default:
>> + return SMMU_SEC_IDX_NS;
>> + }
>> +}
>> +
>> +/*
>> + * ARM SMMU IOMMU index mapping (implements SEC_SID from ARM SMMU):
>> + * iommu_idx = 0: Non-secure transactions
>> + * iommu_idx = 1: Secure transactions
>> + *
>> + * The iommu_idx parameter effectively implements the SEC_SID
>> + * (Security Stream ID) attribute from the ARM SMMU architecture
>> + * specification, which allows the SMMU to differentiate between
>> + * secure and non-secure transactions at the hardware level.
>> + */
>> +static int smmuv3_attrs_to_index(IOMMUMemoryRegion *iommu, MemTxAttrs attrs)
>> +{
>> + return smmuv3_attrs_to_security_index(attrs);
>> +}
>> +
>> +static int smmuv3_num_indexes(IOMMUMemoryRegion *iommu)
>> +{
>> + /* Support 2 IOMMU indexes for now: NS/S */
>> + return SMMU_SEC_IDX_NUM;
>> +}
>> +
>> /* Entry point to SMMU, does everything. */
>> static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>> IOMMUAccessFlags flag, int iommu_idx)
>> @@ -1111,7 +1143,7 @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
>> .inval_ste_allowed = false};
>> SMMUTranslationStatus status;
>> SMMUTransCfg *cfg = NULL;
>> - SMMUSecurityIndex sec_idx = SMMU_SEC_IDX_NS;
>> + SMMUSecurityIndex sec_idx = iommu_idx;
>> IOMMUTLBEntry entry = {
>> .target_as = &address_space_memory,
>> .iova = addr,
>> @@ -1155,6 +1187,7 @@ epilogue:
>> entry.perm = cached_entry->entry.perm;
>> entry.translated_addr = CACHED_ENTRY_TO_ADDR(cached_entry, addr);
>> entry.addr_mask = cached_entry->entry.addr_mask;
>> + entry.target_as = cached_entry->entry.target_as;
> this change looks unrelated to the commit desc.
>
> Eric
You are absolutely right. That line of code is clearly part of the IOTLB
cache logic and doesn't belong in this commit.
I will move this change to the relevant IOTLB cache commit in the next
version of the series.
Thanks for catching that.
Best,
Tao
>> trace_smmuv3_translate_success(mr->parent_obj.name, sid, addr,
>> entry.translated_addr, entry.perm,
>> cfg->stage);
>> @@ -2534,6 +2567,8 @@ static void smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
>>
>> imrc->translate = smmuv3_translate;
>> imrc->notify_flag_changed = smmuv3_notify_flag_changed;
>> + imrc->attrs_to_index = smmuv3_attrs_to_index;
>> + imrc->num_indexes = smmuv3_num_indexes;
>> }
>>
>> static const TypeInfo smmuv3_type_info = {
>> --
>> 2.34.1
>>
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v2 13/14] hw/arm/smmuv3: Add property to enable Secure SMMU support
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (11 preceding siblings ...)
2025-09-26 3:08 ` [PATCH v2 12/14] hw/arm/smmuv3: Use iommu_index to represent the security context Tao Tang
@ 2025-09-26 3:23 ` Tao Tang
2025-09-29 15:42 ` Eric Auger
2025-09-26 3:30 ` [PATCH v2 14/14] hw/arm/smmuv3: Optional Secure bank migration via subsections Tao Tang
` (2 subsequent siblings)
15 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-26 3:23 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
My apologies, resending patches 13-14/14 to fix a threading mistake from
my previous attempt.
This commit completes the initial implementation of the Secure SMMUv3
model by making the feature user-configurable.
A new boolean property, "secure-impl", is introduced to the device.
This property defaults to false, ensuring backward compatibility for
existing machine types that do not expect the secure programming
interface.
When "secure-impl" is set to true, the smmuv3_init_regs function now
initializes the secure register bank (bank[SMMU_SEC_IDX_S]). Crucially,
the S_IDR1.SECURE_IMPL bit is set according to this property,
correctly advertising the presence of the secure functionality to the
guest.
This patch ties together all the previous refactoring work. With this
property enabled, the banked registers, security-aware queues, and
other secure features become active, allowing a guest to probe and
configure the Secure SMMU.
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
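With the property enabled, Secure software can probe the bit directly;
e.g. (bare-metal sketch; the S_IDR1 offset is per the SMMUv3 spec and
the mask name is a guest-side assumption, not from this series):

    uint32_t s_idr1 = *(volatile uint32_t *)(smmu_base + 0x8004);
    if (s_idr1 & S_IDR1_SECURE_IMPL_MASK) {
        /* Secure programming interface is present */
    }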
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index c92cc0f06a..80fbc25cf5 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -351,6 +351,16 @@ static void smmuv3_init_regs(SMMUv3State *s)
s->statusr = 0;
s->bank[SMMU_SEC_IDX_NS].gbpa = SMMU_GBPA_RESET_VAL;
+ /* Initialize Secure bank (SMMU_SEC_IDX_S) */
+ memset(s->bank[SMMU_SEC_IDX_S].idr, 0, sizeof(s->bank[SMMU_SEC_IDX_S].idr));
+ s->bank[SMMU_SEC_IDX_S].idr[1] = FIELD_DP32(s->bank[SMMU_SEC_IDX_S].idr[1],
+ S_IDR1, SECURE_IMPL,
+ s->secure_impl);
+ s->bank[SMMU_SEC_IDX_S].idr[1] = FIELD_DP32(
+ s->bank[SMMU_SEC_IDX_S].idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
+ s->bank[SMMU_SEC_IDX_S].gbpa = SMMU_GBPA_RESET_VAL;
+ s->bank[SMMU_SEC_IDX_S].cmdq.entry_size = sizeof(struct Cmd);
+ s->bank[SMMU_SEC_IDX_S].eventq.entry_size = sizeof(struct Evt);
}
static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
@@ -2505,6 +2515,12 @@ static const Property smmuv3_properties[] = {
* Defaults to stage 1
*/
DEFINE_PROP_STRING("stage", SMMUv3State, stage),
+ /*
+ * SECURE_IMPL field in S_IDR1 register.
+ * Indicates whether secure state is implemented.
+ * Defaults to false (0)
+ */
+ DEFINE_PROP_BOOL("secure-impl", SMMUv3State, secure_impl, false),
};
static void smmuv3_instance_init(Object *obj)
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread* Re: [PATCH v2 13/14] hw/arm/smmuv3: Add property to enable Secure SMMU support
2025-09-26 3:23 ` [PATCH v2 13/14] hw/arm/smmuv3: Add property to enable Secure SMMU support Tao Tang
@ 2025-09-29 15:42 ` Eric Auger
2025-09-29 16:15 ` Tao Tang
0 siblings, 1 reply; 48+ messages in thread
From: Eric Auger @ 2025-09-29 15:42 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
On 9/26/25 5:23 AM, Tao Tang wrote:
> My apologies, resending patches 13-14/14 to fix a threading mistake from
> my previous attempt.
>
> This commit completes the initial implementation of the Secure SMMUv3
> model by making the feature user-configurable.
>
> A new boolean property, "secure-impl", is introduced to the device.
> This property defaults to false, ensuring backward compatibility for
> existing machine types that do not expect the secure programming
> interface.
>
> When "secure-impl" is set to true, the smmuv3_init_regs function now
> initializes the secure register bank (bank[SMMU_SEC_IDX_S]). Crucially,
> the S_IDR1.SECURE_IMPL bit is set according to this property,
> correctly advertising the presence of the secure functionality to the
> guest.
>
> This patch ties together all the previous refactoring work. With this
> property enabled, the banked registers, security-aware queues, and
> other secure features become active, allowing a guest to probe and
> configure the Secure SMMU.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3.c | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index c92cc0f06a..80fbc25cf5 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -351,6 +351,16 @@ static void smmuv3_init_regs(SMMUv3State *s)
> s->statusr = 0;
> s->bank[SMMU_SEC_IDX_NS].gbpa = SMMU_GBPA_RESET_VAL;
>
> + /* Initialize Secure bank (SMMU_SEC_IDX_S) */
same comment as before, use a local pointer to the secure bank.
> + memset(s->bank[SMMU_SEC_IDX_S].idr, 0, sizeof(s->bank[SMMU_SEC_IDX_S].idr));
> + s->bank[SMMU_SEC_IDX_S].idr[1] = FIELD_DP32(s->bank[SMMU_SEC_IDX_S].idr[1],
> + S_IDR1, SECURE_IMPL,
> + s->secure_impl);
> + s->bank[SMMU_SEC_IDX_S].idr[1] = FIELD_DP32(
> + s->bank[SMMU_SEC_IDX_S].idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
> + s->bank[SMMU_SEC_IDX_S].gbpa = SMMU_GBPA_RESET_VAL;
> + s->bank[SMMU_SEC_IDX_S].cmdq.entry_size = sizeof(struct Cmd);
> + s->bank[SMMU_SEC_IDX_S].eventq.entry_size = sizeof(struct Evt);
> }
>
> static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
> @@ -2505,6 +2515,12 @@ static const Property smmuv3_properties[] = {
> * Defaults to stage 1
> */
> DEFINE_PROP_STRING("stage", SMMUv3State, stage),
> + /*
> + * SECURE_IMPL field in S_IDR1 register.
> + * Indicates whether secure state is implemented.
> + * Defaults to false (0)
> + */
> + DEFINE_PROP_BOOL("secure-impl", SMMUv3State, secure_impl, false),
> };
I would introduce the secure-impl property at the very end of the series
because at this point migration is not yet supported.
By the way, the secure_impl field can be introduced in the first patch
that uses it; I don't think "hw/arm/smmuv3: Introduce banked registers for
SMMUv3 state" does.
Thanks
Eric
>
> static void smmuv3_instance_init(Object *obj)
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 48+ messages in thread

* Re: [PATCH v2 13/14] hw/arm/smmuv3: Add property to enable Secure SMMU support
2025-09-29 15:42 ` Eric Auger
@ 2025-09-29 16:15 ` Tao Tang
0 siblings, 0 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-29 16:15 UTC (permalink / raw)
To: Peter Maydell, Eric Auger
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/9/29 23:42, Eric Auger wrote:
>
> On 9/26/25 5:23 AM, Tao Tang wrote:
>> My apologies, resending patches 13-14/14 to fix a threading mistake from
>> my previous attempt.
>>
>> This commit completes the initial implementation of the Secure SMMUv3
>> model by making the feature user-configurable.
>>
>> A new boolean property, "secure-impl", is introduced to the device.
>> This property defaults to false, ensuring backward compatibility for
>> existing machine types that do not expect the secure programming
>> interface.
>>
>> When "secure-impl" is set to true, the smmuv3_init_regs function now
>> initializes the secure register bank (bank[SMMU_SEC_IDX_S]). Crucially,
>> the S_IDR1.SECURE_IMPL bit is set according to this property,
>> correctly advertising the presence of the secure functionality to the
>> guest.
>>
>> This patch ties together all the previous refactoring work. With this
>> property enabled, the banked registers, security-aware queues, and
>> other secure features become active, allowing a guest to probe and
>> configure the Secure SMMU.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3.c | 16 ++++++++++++++++
>> 1 file changed, 16 insertions(+)
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index c92cc0f06a..80fbc25cf5 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -351,6 +351,16 @@ static void smmuv3_init_regs(SMMUv3State *s)
>> s->statusr = 0;
>> s->bank[SMMU_SEC_IDX_NS].gbpa = SMMU_GBPA_RESET_VAL;
>>
>> + /* Initialize Secure bank (SMMU_SEC_IDX_S) */
> same comment as before, use a local pointer to the secure bank.
Of course, I will fix the code style by using a local pointer for the
secure bank access.
>> + memset(s->bank[SMMU_SEC_IDX_S].idr, 0, sizeof(s->bank[SMMU_SEC_IDX_S].idr));
>> + s->bank[SMMU_SEC_IDX_S].idr[1] = FIELD_DP32(s->bank[SMMU_SEC_IDX_S].idr[1],
>> + S_IDR1, SECURE_IMPL,
>> + s->secure_impl);
>> + s->bank[SMMU_SEC_IDX_S].idr[1] = FIELD_DP32(
>> + s->bank[SMMU_SEC_IDX_S].idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
>> + s->bank[SMMU_SEC_IDX_S].gbpa = SMMU_GBPA_RESET_VAL;
>> + s->bank[SMMU_SEC_IDX_S].cmdq.entry_size = sizeof(struct Cmd);
>> + s->bank[SMMU_SEC_IDX_S].eventq.entry_size = sizeof(struct Evt);
>> }
>>
>> static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
>> @@ -2505,6 +2515,12 @@ static const Property smmuv3_properties[] = {
>> * Defaults to stage 1
>> */
>> DEFINE_PROP_STRING("stage", SMMUv3State, stage),
>> + /*
>> + * SECURE_IMPL field in S_IDR1 register.
>> + * Indicates whether secure state is implemented.
>> + * Defaults to false (0)
>> + */
>> + DEFINE_PROP_BOOL("secure-impl", SMMUv3State, secure_impl, false),
>> };
> I would introduce the secure-impl property at the very end of the series
> because at this point migration is not yet supported.
> By the way the secure_impl field can be introduced in the first which
> uses it. I don't think "hw/arm/smmuv3: Introduce banked registers for
> SMMUv3 state" does
>
> Thanks
>
> Eric
You are absolutely right. Introducing the "secure-impl" property before
the migration support is complete would expose an unfinished feature and
could lead to serious issues for users. It was a mistake to place it here.
Also, the secure_impl field is only used once at init, to set the
SECURE_IMPL register bit. After that, all checks correctly use the
register bit, not the field.
I understand this is inconsistent. In the next version, I will make the
logic more direct to improve readability.
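To keep that behaviour explicit, the checks could go through a small
helper that reads the register bit (a sketch; the helper name is mine):

    static inline bool smmu_secure_impl(SMMUv3State *s)
    {
        /* SECURE_IMPL is set once at init from the property; all
         * later checks read the register bit, not the field. */
        return FIELD_EX32(s->bank[SMMU_SEC_IDX_S].idr[1],
                          S_IDR1, SECURE_IMPL);
    }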
Thanks,
Tao
>> static void smmuv3_instance_init(Object *obj)
>> --
>> 2.34.1
>>
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v2 14/14] hw/arm/smmuv3: Optional Secure bank migration via subsections
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (12 preceding siblings ...)
2025-09-26 3:23 ` [PATCH v2 13/14] hw/arm/smmuv3: Add property to enable Secure SMMU support Tao Tang
@ 2025-09-26 3:30 ` Tao Tang
2025-09-29 15:47 ` Eric Auger
2025-09-26 12:24 ` [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Eric Auger
2025-10-11 0:31 ` Pierrick Bouvier
15 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-26 3:30 UTC (permalink / raw)
To: Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh,
Tao Tang
My apologies, resending patches 13-14/14 to fix a threading mistake from
my previous attempt.
Introduce a generic vmstate_smmuv3_bank that serializes a single SMMUv3
bank (registers and queues). Add a 'smmuv3/bank_s' subsection guarded by
secure_impl and a new 'migrate-secure-bank' property; when enabled, the S
bank is migrated. Add a 'smmuv3/gbpa_secure' subsection which is only sent
when GBPA differs from its reset value.
This keeps the existing migration stream unchanged by default and remains
backward compatible: with 'migrate-secure-bank' defaulting to off, no new
subsections are emitted and the stream is identical to before.
This also prepares for future RME extensions (Realm/Root) by reusing the
bank subsection pattern.
Usage:
-global arm-smmuv3,secure-impl=on,migrate-secure-bank=on
Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
---
hw/arm/smmuv3.c | 70 +++++++++++++++++++++++++++++++++++++++++
include/hw/arm/smmuv3.h | 1 +
2 files changed, 71 insertions(+)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 80fbc25cf5..2a1e80d179 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -2450,6 +2450,53 @@ static const VMStateDescription vmstate_smmuv3_queue = {
},
};
+static const VMStateDescription vmstate_smmuv3_bank = {
+ .name = "smmuv3_bank",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .fields = (const VMStateField[]) {
+ VMSTATE_UINT32(features, SMMUv3RegBank),
+ VMSTATE_UINT8(sid_split, SMMUv3RegBank),
+ VMSTATE_UINT32_ARRAY(cr, SMMUv3RegBank, 3),
+ VMSTATE_UINT32(cr0ack, SMMUv3RegBank),
+ VMSTATE_UINT32(irq_ctrl, SMMUv3RegBank),
+ VMSTATE_UINT32(gerror, SMMUv3RegBank),
+ VMSTATE_UINT32(gerrorn, SMMUv3RegBank),
+ VMSTATE_UINT64(gerror_irq_cfg0, SMMUv3RegBank),
+ VMSTATE_UINT32(gerror_irq_cfg1, SMMUv3RegBank),
+ VMSTATE_UINT32(gerror_irq_cfg2, SMMUv3RegBank),
+ VMSTATE_UINT64(strtab_base, SMMUv3RegBank),
+ VMSTATE_UINT32(strtab_base_cfg, SMMUv3RegBank),
+ VMSTATE_UINT64(eventq_irq_cfg0, SMMUv3RegBank),
+ VMSTATE_UINT32(eventq_irq_cfg1, SMMUv3RegBank),
+ VMSTATE_UINT32(eventq_irq_cfg2, SMMUv3RegBank),
+ VMSTATE_STRUCT(cmdq, SMMUv3RegBank, 0,
+ vmstate_smmuv3_queue, SMMUQueue),
+ VMSTATE_STRUCT(eventq, SMMUv3RegBank, 0,
+ vmstate_smmuv3_queue, SMMUQueue),
+ VMSTATE_END_OF_LIST(),
+ },
+};
+
+static bool smmuv3_secure_bank_needed(void *opaque)
+{
+ SMMUv3State *s = opaque;
+
+ return s->secure_impl && s->migrate_secure_bank;
+}
+
+static const VMStateDescription vmstate_smmuv3_bank_s = {
+ .name = "smmuv3/bank_s",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .needed = smmuv3_secure_bank_needed,
+ .fields = (const VMStateField[]) {
+ VMSTATE_STRUCT(bank[SMMU_SEC_IDX_S], SMMUv3State, 0,
+ vmstate_smmuv3_bank, SMMUv3RegBank),
+ VMSTATE_END_OF_LIST(),
+ },
+};
+
static bool smmuv3_gbpa_needed(void *opaque)
{
SMMUv3State *s = opaque;
@@ -2469,6 +2516,25 @@ static const VMStateDescription vmstate_gbpa = {
}
};
+static bool smmuv3_gbpa_secure_needed(void *opaque)
+{
+ SMMUv3State *s = opaque;
+
+ return s->secure_impl && s->migrate_secure_bank &&
+ s->bank[SMMU_SEC_IDX_S].gbpa != SMMU_GBPA_RESET_VAL;
+}
+
+static const VMStateDescription vmstate_gbpa_secure = {
+ .name = "smmuv3/gbpa_secure",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .needed = smmuv3_gbpa_secure_needed,
+ .fields = (const VMStateField[]) {
+ VMSTATE_UINT32(bank[SMMU_SEC_IDX_S].gbpa, SMMUv3State),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
static const VMStateDescription vmstate_smmuv3 = {
.name = "smmuv3",
.version_id = 1,
@@ -2502,6 +2568,8 @@ static const VMStateDescription vmstate_smmuv3 = {
},
.subsections = (const VMStateDescription * const []) {
&vmstate_gbpa,
+ &vmstate_smmuv3_bank_s,
+ &vmstate_gbpa_secure,
NULL
}
};
@@ -2521,6 +2589,8 @@ static const Property smmuv3_properties[] = {
* Defaults to false (0)
*/
DEFINE_PROP_BOOL("secure-impl", SMMUv3State, secure_impl, false),
+ DEFINE_PROP_BOOL("migrate-secure-bank", SMMUv3State,
+ migrate_secure_bank, false),
};
static void smmuv3_instance_init(Object *obj)
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
index 572f15251e..5ffb609fa2 100644
--- a/include/hw/arm/smmuv3.h
+++ b/include/hw/arm/smmuv3.h
@@ -71,6 +71,7 @@ struct SMMUv3State {
QemuMutex mutex;
char *stage;
bool secure_impl;
+ bool migrate_secure_bank;
};
typedef enum {
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread

* Re: [PATCH v2 14/14] hw/arm/smmuv3: Optional Secure bank migration via subsections
2025-09-26 3:30 ` [PATCH v2 14/14] hw/arm/smmuv3: Optional Secure bank migration via subsections Tao Tang
@ 2025-09-29 15:47 ` Eric Auger
2025-09-30 3:35 ` Tao Tang
0 siblings, 1 reply; 48+ messages in thread
From: Eric Auger @ 2025-09-29 15:47 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Tao,
On 9/26/25 5:30 AM, Tao Tang wrote:
> My apologies, resending patches 13-14/14 to fix a threading mistake from
> my previous attempt.
>
> Introduce a generic vmstate_smmuv3_bank that serializes a single SMMUv3
> bank (registers and queues). Add a 'smmuv3/bank_s' subsection guarded by
> secure_impl and a new 'migrate-secure-bank' property; when enabled, the S
> bank is migrated. Add a 'smmuv3/gbpa_secure' subsection which is only sent
> when GBPA differs from its reset value.
>
> This keeps the existing migration stream unchanged by default and remains
> backward compatible: with 'migrate-secure-bank' defaulting to off, no new
> subsections are emitted and the stream is identical to before.
>
> This also prepares for future RME extensions (Realm/Root) by reusing the
> bank subsection pattern.
>
> Usage:
> -global arm-smmuv3,secure-impl=on,migrate-secure-bank=on
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmuv3.c | 70 +++++++++++++++++++++++++++++++++++++++++
> include/hw/arm/smmuv3.h | 1 +
> 2 files changed, 71 insertions(+)
>
> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
> index 80fbc25cf5..2a1e80d179 100644
> --- a/hw/arm/smmuv3.c
> +++ b/hw/arm/smmuv3.c
> @@ -2450,6 +2450,53 @@ static const VMStateDescription vmstate_smmuv3_queue = {
> },
> };
>
> +static const VMStateDescription vmstate_smmuv3_bank = {
I would name this vmstate_smmuv3_secure_bank
> + .name = "smmuv3_bank",
> + .version_id = 1,
> + .minimum_version_id = 1,
> + .fields = (const VMStateField[]) {
> + VMSTATE_UINT32(features, SMMUv3RegBank),
> + VMSTATE_UINT8(sid_split, SMMUv3RegBank),
> + VMSTATE_UINT32_ARRAY(cr, SMMUv3RegBank, 3),
> + VMSTATE_UINT32(cr0ack, SMMUv3RegBank),
> + VMSTATE_UINT32(irq_ctrl, SMMUv3RegBank),
> + VMSTATE_UINT32(gerror, SMMUv3RegBank),
> + VMSTATE_UINT32(gerrorn, SMMUv3RegBank),
> + VMSTATE_UINT64(gerror_irq_cfg0, SMMUv3RegBank),
> + VMSTATE_UINT32(gerror_irq_cfg1, SMMUv3RegBank),
> + VMSTATE_UINT32(gerror_irq_cfg2, SMMUv3RegBank),
> + VMSTATE_UINT64(strtab_base, SMMUv3RegBank),
> + VMSTATE_UINT32(strtab_base_cfg, SMMUv3RegBank),
> + VMSTATE_UINT64(eventq_irq_cfg0, SMMUv3RegBank),
> + VMSTATE_UINT32(eventq_irq_cfg1, SMMUv3RegBank),
> + VMSTATE_UINT32(eventq_irq_cfg2, SMMUv3RegBank),
> + VMSTATE_STRUCT(cmdq, SMMUv3RegBank, 0,
> + vmstate_smmuv3_queue, SMMUQueue),
> + VMSTATE_STRUCT(eventq, SMMUv3RegBank, 0,
> + vmstate_smmuv3_queue, SMMUQueue),
> + VMSTATE_END_OF_LIST(),
> + },
> +};
> +
> +static bool smmuv3_secure_bank_needed(void *opaque)
> +{
> + SMMUv3State *s = opaque;
> +
> + return s->secure_impl && s->migrate_secure_bank;
why is it needed to check s->migrate_secure_bank?
> +}
> +
> +static const VMStateDescription vmstate_smmuv3_bank_s = {
> + .name = "smmuv3/bank_s",
> + .version_id = 1,
> + .minimum_version_id = 1,
> + .needed = smmuv3_secure_bank_needed,
> + .fields = (const VMStateField[]) {
> + VMSTATE_STRUCT(bank[SMMU_SEC_IDX_S], SMMUv3State, 0,
> + vmstate_smmuv3_bank, SMMUv3RegBank),
> + VMSTATE_END_OF_LIST(),
> + },
> +};
> +
> static bool smmuv3_gbpa_needed(void *opaque)
> {
> SMMUv3State *s = opaque;
> @@ -2469,6 +2516,25 @@ static const VMStateDescription vmstate_gbpa = {
> }
> };
>
> +static bool smmuv3_gbpa_secure_needed(void *opaque)
> +{
> + SMMUv3State *s = opaque;
> +
> + return s->secure_impl && s->migrate_secure_bank &&
same
> + s->bank[SMMU_SEC_IDX_S].gbpa != SMMU_GBPA_RESET_VAL;
> +}
> +
> +static const VMStateDescription vmstate_gbpa_secure = {
> + .name = "smmuv3/gbpa_secure",
> + .version_id = 1,
> + .minimum_version_id = 1,
> + .needed = smmuv3_gbpa_secure_needed,
> + .fields = (const VMStateField[]) {
> + VMSTATE_UINT32(bank[SMMU_SEC_IDX_S].gbpa, SMMUv3State),
> + VMSTATE_END_OF_LIST()
> + }
> +};
> +
> static const VMStateDescription vmstate_smmuv3 = {
> .name = "smmuv3",
> .version_id = 1,
> @@ -2502,6 +2568,8 @@ static const VMStateDescription vmstate_smmuv3 = {
> },
> .subsections = (const VMStateDescription * const []) {
> &vmstate_gbpa,
> + &vmstate_smmuv3_bank_s,
> + &vmstate_gbpa_secure,
> NULL
> }
> };
> @@ -2521,6 +2589,8 @@ static const Property smmuv3_properties[] = {
> * Defaults to false (0)
> */
> DEFINE_PROP_BOOL("secure-impl", SMMUv3State, secure_impl, false),
> + DEFINE_PROP_BOOL("migrate-secure-bank", SMMUv3State,
> + migrate_secure_bank, false),
I don't get why you need another migrate_secure_bank prop. You need to
migrate the subsection if secure_impl is implemented, don't you?
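i.e. something like (sketch):

    static bool smmuv3_secure_bank_needed(void *opaque)
    {
        SMMUv3State *s = opaque;

        /* migrate the secure bank whenever secure state is implemented */
        return s->secure_impl;
    }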
> };
>
> static void smmuv3_instance_init(Object *obj)
> diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
> index 572f15251e..5ffb609fa2 100644
> --- a/include/hw/arm/smmuv3.h
> +++ b/include/hw/arm/smmuv3.h
> @@ -71,6 +71,7 @@ struct SMMUv3State {
> QemuMutex mutex;
> char *stage;
> bool secure_impl;
> + bool migrate_secure_bank;
> };
>
> typedef enum {
> --
> 2.34.1
>
Eric
^ permalink raw reply [flat|nested] 48+ messages in thread* Re: [PATCH v2 14/14] hw/arm/smmuv3: Optional Secure bank migration via subsections
2025-09-29 15:47 ` Eric Auger
@ 2025-09-30 3:35 ` Tao Tang
0 siblings, 0 replies; 48+ messages in thread
From: Tao Tang @ 2025-09-30 3:35 UTC (permalink / raw)
To: Peter Maydell, Eric Auger
Cc: qemu-devel, qemu-arm, Chen Baozi, Pierrick Bouvier,
Philippe Mathieu-Daudé, Jean-Philippe Brucker, Mostafa Saleh
Hi Eric,
On 2025/9/29 23:47, Eric Auger wrote:
> Hi Tao,
>
> On 9/26/25 5:30 AM, Tao Tang wrote:
>> My apologies, resending patches 13-14/14 to fix a threading mistake from
>> my previous attempt.
>>
>> Introduce a generic vmstate_smmuv3_bank that serializes a single SMMUv3
>> bank (registers and queues). Add a 'smmuv3/bank_s' subsection guarded by
>> secure_impl and a new 'migrate-secure-bank' property; when enabled, the S
>> bank is migrated. Add a 'smmuv3/gbpa_secure' subsection which is only sent
>> when GBPA differs from its reset value.
>>
>> This keeps the existing migration stream unchanged by default and remains
>> backward compatible: with 'migrate-secure-bank' defaulting to off, no new
>> subsections are emitted and the stream is identical to before.
>>
>> This also prepares for future RME extensions (Realm/Root) by reusing the
>> bank subsection pattern.
>>
>> Usage:
>> -global arm-smmuv3,secure-impl=on,migrate-secure-bank=on
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>> ---
>> hw/arm/smmuv3.c | 70 +++++++++++++++++++++++++++++++++++++++++
>> include/hw/arm/smmuv3.h | 1 +
>> 2 files changed, 71 insertions(+)
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index 80fbc25cf5..2a1e80d179 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -2450,6 +2450,53 @@ static const VMStateDescription vmstate_smmuv3_queue = {
>> },
>> };
>>
>> +static const VMStateDescription vmstate_smmuv3_bank = {
> I would name this vmstate_smmuv3_secure_bank
Thank you for the excellent feedback on the migration implementation.
Your points are spot on.
My original thought for using the generic vmstate_smmuv3_bank was to
potentially reuse it for future Realm state migration. However, I agree
with you that naming it vmstate_smmuv3_secure_bank for now is a better
choice for clarity and precision. I will make that change.
>> +
>> +static bool smmuv3_secure_bank_needed(void *opaque)
>> +{
>> + SMMUv3State *s = opaque;
>> +
>> + return s->secure_impl && s->migrate_secure_bank;
> why is it needed to check s->migrate_secure_bank?
>>
>> +static bool smmuv3_gbpa_secure_needed(void *opaque)
>> +{
>> + SMMUv3State *s = opaque;
>> +
>> + return s->secure_impl && s->migrate_secure_bank &&
> same
>> @@ -2521,6 +2589,8 @@ static const Property smmuv3_properties[] = {
>> * Defaults to false (0)
>> */
>> DEFINE_PROP_BOOL("secure-impl", SMMUv3State, secure_impl, false),
>> + DEFINE_PROP_BOOL("migrate-secure-bank", SMMUv3State,
>> + migrate_secure_bank, false),
> I don't get why you need another migrate_secure_bank prop. You need to
> migrate the subsection if secure_impl is implemented, don't you?
You are absolutely right. My intention with the migrate_secure_bank
property was to keep as much flexibility as possible.
However, I completely agree with your logic now. Forcing the migration
of the secure state whenever secure-impl is enabled is the only correct
approach to prevent inconsistent states and ensure robustness. I will
remove the migrate_secure_bank property and tie the migration directly
to secure-impl.
Thanks for helping me correct this design flaw.
Best,
Tao
>> };
>>
>> static void smmuv3_instance_init(Object *obj)
>> diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
>> index 572f15251e..5ffb609fa2 100644
>> --- a/include/hw/arm/smmuv3.h
>> +++ b/include/hw/arm/smmuv3.h
>> @@ -71,6 +71,7 @@ struct SMMUv3State {
>> QemuMutex mutex;
>> char *stage;
>> bool secure_impl;
>> + bool migrate_secure_bank;
>> };
>>
>> typedef enum {
>> --
>> 2.34.1
>>
> Eric
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (13 preceding siblings ...)
2025-09-26 3:30 ` [PATCH v2 14/14] hw/arm/smmuv3: Optional Secure bank migration via subsections Tao Tang
@ 2025-09-26 12:24 ` Eric Auger
2025-09-26 14:54 ` Tao Tang
2025-10-11 0:31 ` Pierrick Bouvier
15 siblings, 1 reply; 48+ messages in thread
From: Eric Auger @ 2025-09-26 12:24 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
Hi,
On 9/25/25 6:26 PM, Tao Tang wrote:
> Hi all,
>
> This is the second version of the patch series to introduce initial
> support for Secure SMMUv3 emulation in QEMU.
>
> This version has been significantly restructured based on the excellent
> feedback received on the RFC.
>
> This version addresses the major points raised during the RFC review.
> Nearly all issues identified in v1 have been resolved. The most
> significant changes include:
>
> - The entire series has been refactored to use a "banked register"
> architecture. This new design serves as a solid base for all secure
> functionality and significantly reduces code duplication.
>
> - The large refactoring patch from v1 has been split into smaller, more
> focused commits (e.g., STE parsing, page table handling, and TLB
> management) to make the review process easier.
>
> - Support for the complex SEL2 feature (Secure Stage 2) has been
> deferred to a future series to reduce the scope of this RFC.
>
> - The mechanism for propagating the security context now correctly uses
> the ARMSecuritySpace attribute from the incoming transaction. This
> ensures the SMMU's handling of security is aligned with the rest of the
> QEMU ARM architecture.
>
>
> The series now begins with two preparatory patches that fix pre-existing
> bugs in the SMMUv3 model. The first of these, which corrects the CR0
> reserved mask, has already been reviewed by Eric.
>
> - hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register
> - hw/arm/smmuv3: Correct SMMUEN field name in CR0
>
> The subsequent patches implement the Secure SMMUv3 feature, refactored
> to address the feedback from the v1 RFC.
Could you share a branch? It does not seem to apply on master.
Thanks
Eric
>
>
> Changes from v1 RFC:
>
> - The entire feature implementation has been refactored to use a "banked
> register" approach. This significantly reduces code duplication.
>
> - Support for the SEL2 feature (Secure Stage 2) has been deferred. As
> Mostafa pointed out, a correct implementation is complex and depends on
> FEAT_TTST. This will be addressed in a separate, future patch series.
> As a result, this series now supports the following flows:
>
> - Non-secure Stage 1, Stage 2, and nested translations.
>
> - Secure Stage 1-only translations.
>
> - Nested translations (Secure Stage 1 + Non-secure Stage 2), with a
> fault generated if a Secure Stage 2 translation is required.
>
> - Writability checks for various registers (both secure and non-secure)
> have been hardened to ensure that enable bits are correctly checked.
>
> The series has been successfully validated with several test setups:
>
> - An environment using OP-TEE, Hafnium, and a custom platform
> device as V1 series described.
>
> - A new, self-contained test device (smmu-testdev) built upon the
> QTest framework, which will be submitted as a separate series as
> discussed here:
> https://lists.nongnu.org/archive/html/qemu-devel/2025-09/msg05365.html
>
> - The existing non-secure functionality was regression-tested using
> PCIe passthrough to a KVM guest running inside a TCG guest.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>
> Tao Tang (14):
> hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register
> hw/arm/smmuv3: Correct SMMUEN field name in CR0
> hw/arm/smmuv3: Introduce secure registers and commands
> refactor: Move ARMSecuritySpace to a common header
> hw/arm/smmuv3: Introduce banked registers for SMMUv3 state
> hw/arm/smmuv3: Add separate address space for secure SMMU accesses
> hw/arm/smmuv3: Make Configuration Cache security-state aware
> hw/arm/smmuv3: Add security-state handling for page table walks
> hw/arm/smmuv3: Add secure TLB entry management
> hw/arm/smmuv3: Add banked support for queues and error handling
> hw/arm/smmuv3: Harden security checks in MMIO handlers
> hw/arm/smmuv3: Use iommu_index to represent the security context
> hw/arm/smmuv3: Add property to enable Secure SMMU support
> hw/arm/smmuv3: Optional Secure bank migration via subsections
>
> hw/arm/smmu-common.c | 151 ++++-
> hw/arm/smmu-internal.h | 7 +
> hw/arm/smmuv3-internal.h | 114 +++-
> hw/arm/smmuv3.c | 1130 +++++++++++++++++++++++++--------
> hw/arm/trace-events | 9 +-
> hw/arm/virt.c | 5 +
> include/hw/arm/arm-security.h | 54 ++
> include/hw/arm/smmu-common.h | 60 +-
> include/hw/arm/smmuv3.h | 35 +-
> target/arm/cpu.h | 25 +-
> 10 files changed, 1257 insertions(+), 333 deletions(-)
> create mode 100644 include/hw/arm/arm-security.h
>
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 48+ messages in thread

* Re: [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State
2025-09-26 12:24 ` [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Eric Auger
@ 2025-09-26 14:54 ` Tao Tang
2025-09-26 16:12 ` Eric Auger
0 siblings, 1 reply; 48+ messages in thread
From: Tao Tang @ 2025-09-26 14:54 UTC (permalink / raw)
To: eric.auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
On 2025/9/26 20:24, Eric Auger wrote:
> Hi,
>
> On 9/25/25 6:26 PM, Tao Tang wrote:
>> Hi all,
>>
>> This is the second version of the patch series to introduce initial
>> support for Secure SMMUv3 emulation in QEMU.
>>
>> This version has been significantly restructured based on the excellent
>> feedback received on the RFC.
>>
>> This version addresses the major points raised during the RFC review.
>> Nearly all issues identified in v1 have been resolved. The most
>> significant changes include:
>>
>> - The entire series has been refactored to use a "banked register"
>> architecture. This new design serves as a solid base for all secure
>> functionality and significantly reduces code duplication.
>>
>> - The large refactoring patch from v1 has been split into smaller, more
>> focused commits (e.g., STE parsing, page table handling, and TLB
>> management) to make the review process easier.
>>
>> - Support for the complex SEL2 feature (Secure Stage 2) has been
>> deferred to a future series to reduce the scope of this RFC.
>>
>> - The mechanism for propagating the security context now correctly uses
>> the ARMSecuritySpace attribute from the incoming transaction. This
>> ensures the SMMU's handling of security is aligned with the rest of the
>> QEMU ARM architecture.
>>
>>
>> The series now begins with two preparatory patches that fix pre-existing
>> bugs in the SMMUv3 model. The first of these, which corrects the CR0
>> reserved mask, has already been reviewed by Eric.
>>
>> - hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register
>> - hw/arm/smmuv3: Correct SMMUEN field name in CR0
>>
>> The subsequent patches implement the Secure SMMUv3 feature, refactored
>> to address the feedback from the v1 RFC.
> could you shared a branch? It does not seem to apply on master.
>
> Thanks
>
> Eric
Hi Eric,
Thanks for the feedback. I've rebased the patch series onto the latest
master and pushed it to a branch as you requested.
Interestingly, the rebase completed cleanly without any conflicts on my
end, so I'm not sure what the initial issue might have been. In any
case, this branch should be up-to-date.
You can find the updated branch here for review:
- [v1-rebased] https://github.com/hnusdr/qemu/tree/secure-smmu-v1-community-newer
For historical reference, the original branch is here:
- [v1-original] https://github.com/hnusdr/qemu/tree/secure-smmu-v1-community
Thanks,
Tao
>>
>> Changes from v1 RFC:
>>
>> - The entire feature implementation has been refactored to use a "banked
>> register" approach. This significantly reduces code duplication.
>>
>> - Support for the SEL2 feature (Secure Stage 2) has been deferred. As
>> Mostafa pointed out, a correct implementation is complex and depends on
>> FEAT_TTST. This will be addressed in a separate, future patch series.
>> As a result, this series now supports the following flows:
>>
>> - Non-secure Stage 1, Stage 2, and nested translations.
>>
>> - Secure Stage 1-only translations.
>>
>> - Nested translations (Secure Stage 1 + Non-secure Stage 2), with a
>> fault generated if a Secure Stage 2 translation is required.
>>
>> - Writability checks for various registers (both secure and non-secure)
>> have been hardened to ensure that enable bits are correctly checked.
>>
>> The series has been successfully validated with several test setups:
>>
>> - An environment using OP-TEE, Hafnium, and a custom platform
>> device as V1 series described.
>>
>> - A new, self-contained test device (smmu-testdev) built upon the
>> QTest framework, which will be submitted as a separate series as
>> discussed here:
>> https://lists.nongnu.org/archive/html/qemu-devel/2025-09/msg05365.html
>>
>> - The existing non-secure functionality was regression-tested using
>> PCIe passthrough to a KVM guest running inside a TCG guest.
>>
>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>>
>> Tao Tang (14):
>> hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register
>> hw/arm/smmuv3: Correct SMMUEN field name in CR0
>> hw/arm/smmuv3: Introduce secure registers and commands
>> refactor: Move ARMSecuritySpace to a common header
>> hw/arm/smmuv3: Introduce banked registers for SMMUv3 state
>> hw/arm/smmuv3: Add separate address space for secure SMMU accesses
>> hw/arm/smmuv3: Make Configuration Cache security-state aware
>> hw/arm/smmuv3: Add security-state handling for page table walks
>> hw/arm/smmuv3: Add secure TLB entry management
>> hw/arm/smmuv3: Add banked support for queues and error handling
>> hw/arm/smmuv3: Harden security checks in MMIO handlers
>> hw/arm/smmuv3: Use iommu_index to represent the security context
>> hw/arm/smmuv3: Add property to enable Secure SMMU support
>> hw/arm/smmuv3: Optional Secure bank migration via subsections
>>
>> hw/arm/smmu-common.c | 151 ++++-
>> hw/arm/smmu-internal.h | 7 +
>> hw/arm/smmuv3-internal.h | 114 +++-
>> hw/arm/smmuv3.c | 1130 +++++++++++++++++++++++++--------
>> hw/arm/trace-events | 9 +-
>> hw/arm/virt.c | 5 +
>> include/hw/arm/arm-security.h | 54 ++
>> include/hw/arm/smmu-common.h | 60 +-
>> include/hw/arm/smmuv3.h | 35 +-
>> target/arm/cpu.h | 25 +-
>> 10 files changed, 1257 insertions(+), 333 deletions(-)
>> create mode 100644 include/hw/arm/arm-security.h
>>
>> --
>> 2.34.1
>>
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State
2025-09-26 14:54 ` Tao Tang
@ 2025-09-26 16:12 ` Eric Auger
0 siblings, 0 replies; 48+ messages in thread
From: Eric Auger @ 2025-09-26 16:12 UTC (permalink / raw)
To: Tao Tang, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, pierrick.bouvier, philmd,
jean-philippe, smostafa
On 9/26/25 4:54 PM, Tao Tang wrote:
>
> On 2025/9/26 20:24, Eric Auger wrote:
>> Hi,
>>
>> On 9/25/25 6:26 PM, Tao Tang wrote:
>>> Hi all,
>>>
>>> This is the second version of the patch series to introduce initial
>>> support for Secure SMMUv3 emulation in QEMU.
>>>
>>> This version has been significantly restructured based on the excellent
>>> feedback received on the RFC.
>>>
>>> This version addresses the major points raised during the RFC review.
>>> Nearly all issues identified in v1 have been resolved. The most
>>> significant changes include:
>>>
>>> - The entire series has been refactored to use a "banked register"
>>> architecture. This new design serves as a solid base for all secure
>>> functionality and significantly reduces code duplication.
>>>
>>> - The large refactoring patch from v1 has been split into
>>> smaller, more
>>> focused commits (e.g., STE parsing, page table handling, and TLB
>>> management) to make the review process easier.
>>>
>>> - Support for the complex SEL2 feature (Secure Stage 2) has been
>>> deferred to a future series to reduce the scope of this RFC.
>>>
>>> - The mechanism for propagating the security context now
>>> correctly uses
>>> the ARMSecuritySpace attribute from the incoming transaction. This
>>> ensures the SMMU's handling of security is aligned with the rest
>>> of the
>>> QEMU ARM architecture.
>>>
>>>
>>> The series now begins with two preparatory patches that fix
>>> pre-existing
>>> bugs in the SMMUv3 model. The first of these, which corrects the CR0
>>> reserved mask, has already been reviewed by Eric.
>>>
>>> - hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register
>>> - hw/arm/smmuv3: Correct SMMUEN field name in CR0
>>>
>>> The subsequent patches implement the Secure SMMUv3 feature, refactored
>>> to address the feedback from the v1 RFC.
>> could you shared a branch? It does not seem to apply on master.
>>
>> Thanks
>>
>> Eric
>
>
> Hi Eric,
>
> Thanks for the feedback. I've rebased the patch series onto the latest
> master and pushed it to a branch as you requested.
>
> Interestingly, the rebase completed cleanly without any conflicts on
> my end, so I'm not sure what the initial issue might have been. In any
> case, this branch should be up-to-date.
>
>
> You can find the updated branch here for review:
>
> - [v1-rebased]
> https://github.com/hnusdr/qemu/tree/secure-smmu-v1-community-newer
Thanks for the branches. I guess it is due to
[PATCH v9 00/11] hw/arm/virt: Add support for user creatable SMMUv3 device <https://lore.kernel.org/all/20250829082543.7680-1-skolothumtho@nvidia.com/#r>
which landed ~ 10d ago.
Thanks
Eric
>
>
> For historical reference, the original branch is here.
>
> -
> [v1-original] https://github.com/hnusdr/qemu/tree/secure-smmu-v1-community
>
>
> Thanks,
>
> Tao
>
>
>>>
>>> Changes from v1 RFC:
>>>
>>> - The entire feature implementation has been refactored to use a
>>> "banked
>>> register" approach. This significantly reduces code duplication.
>>>
>>> - Support for the SEL2 feature (Secure Stage 2) has been
>>> deferred. As
>>> Mostafa pointed out, a correct implementation is complex and
>>> depends on
>>> FEAT_TTST. This will be addressed in a separate, future patch
>>> series.
>>> As a result, this series now supports the following flows:
>>>
>>> - Non-secure Stage 1, Stage 2, and nested translations.
>>>
>>> - Secure Stage 1-only translations.
>>>
>>> - Nested translations (Secure Stage 1 + Non-secure Stage 2),
>>> with a
>>> fault generated if a Secure Stage 2 translation is required.
>>>
>>> - Writability checks for various registers (both secure and
>>> non-secure)
>>> have been hardened to ensure that enable bits are correctly checked.
>>>
>>> The series has been successfully validated with several test setups:
>>>
>>> - An environment using OP-TEE, Hafnium, and a custom platform
>>> device as V1 series described.
>>>
>>> - A new, self-contained test device (smmu-testdev) built upon the
>>> QTest framework, which will be submitted as a separate series as
>>> discussed here:
>>>
>>> https://lists.nongnu.org/archive/html/qemu-devel/2025-09/msg05365.html
>>>
>>> - The existing non-secure functionality was regression-tested using
>>> PCIe passthrough to a KVM guest running inside a TCG guest.
>>>
>>> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>>>
>>> Tao Tang (14):
>>> hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register
>>> hw/arm/smmuv3: Correct SMMUEN field name in CR0
>>> hw/arm/smmuv3: Introduce secure registers and commands
>>> refactor: Move ARMSecuritySpace to a common header
>>> hw/arm/smmuv3: Introduce banked registers for SMMUv3 state
>>> hw/arm/smmuv3: Add separate address space for secure SMMU accesses
>>> hw/arm/smmuv3: Make Configuration Cache security-state aware
>>> hw/arm/smmuv3: Add security-state handling for page table walks
>>> hw/arm/smmuv3: Add secure TLB entry management
>>> hw/arm/smmuv3: Add banked support for queues and error handling
>>> hw/arm/smmuv3: Harden security checks in MMIO handlers
>>> hw/arm/smmuv3: Use iommu_index to represent the security context
>>> hw/arm/smmuv3: Add property to enable Secure SMMU support
>>> hw/arm/smmuv3: Optional Secure bank migration via subsections
>>>
>>> hw/arm/smmu-common.c | 151 ++++-
>>> hw/arm/smmu-internal.h | 7 +
>>> hw/arm/smmuv3-internal.h | 114 +++-
>>> hw/arm/smmuv3.c | 1130
>>> +++++++++++++++++++++++++--------
>>> hw/arm/trace-events | 9 +-
>>> hw/arm/virt.c | 5 +
>>> include/hw/arm/arm-security.h | 54 ++
>>> include/hw/arm/smmu-common.h | 60 +-
>>> include/hw/arm/smmuv3.h | 35 +-
>>> target/arm/cpu.h | 25 +-
>>> 10 files changed, 1257 insertions(+), 333 deletions(-)
>>> create mode 100644 include/hw/arm/arm-security.h
>>>
>>> --
>>> 2.34.1
>>>
>
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State
2025-09-25 16:26 [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Tao Tang
` (14 preceding siblings ...)
2025-09-26 12:24 ` [PATCH v2 00/14] hw/arm/smmuv3: Add initial support for Secure State Eric Auger
@ 2025-10-11 0:31 ` Pierrick Bouvier
15 siblings, 0 replies; 48+ messages in thread
From: Pierrick Bouvier @ 2025-10-11 0:31 UTC (permalink / raw)
To: Tao Tang, Eric Auger, Peter Maydell
Cc: qemu-devel, qemu-arm, Chen Baozi, philmd, jean-philippe, smostafa
Hi Tao and Eric,
On 9/25/25 9:26 AM, Tao Tang wrote:
> Hi all,
>
> This is the second version of the patch series to introduce initial
> support for Secure SMMUv3 emulation in QEMU.
>
> This version has been significantly restructured based on the excellent
> feedback received on the RFC.
>
> This version addresses the major points raised during the RFC review.
> Nearly all issues identified in v1 have been resolved. The most
> significant changes include:
>
> - The entire series has been refactored to use a "banked register"
> architecture. This new design serves as a solid base for all secure
> functionality and significantly reduces code duplication.
>
> - The large refactoring patch from v1 has been split into smaller, more
> focused commits (e.g., STE parsing, page table handling, and TLB
> management) to make the review process easier.
>
> - Support for the complex SEL2 feature (Secure Stage 2) has been
> deferred to a future series to reduce the scope of this RFC.
>
> - The mechanism for propagating the security context now correctly uses
> the ARMSecuritySpace attribute from the incoming transaction. This
> ensures the SMMU's handling of security is aligned with the rest of the
> QEMU ARM architecture.
>
>
> The series now begins with two preparatory patches that fix pre-existing
> bugs in the SMMUv3 model. The first of these, which corrects the CR0
> reserved mask, has already been reviewed by Eric.
>
> - hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register
> - hw/arm/smmuv3: Correct SMMUEN field name in CR0
>
> The subsequent patches implement the Secure SMMUv3 feature, refactored
> to address the feedback from the v1 RFC.
>
>
> Changes from v1 RFC:
>
> - The entire feature implementation has been refactored to use a "banked
> register" approach. This significantly reduces code duplication.
>
> - Support for the SEL2 feature (Secure Stage 2) has been deferred. As
> Mostafa pointed out, a correct implementation is complex and depends on
> FEAT_TTST. This will be addressed in a separate, future patch series.
> As a result, this series now supports the following flows:
>
> - Non-secure Stage 1, Stage 2, and nested translations.
>
> - Secure Stage 1-only translations.
>
> - Nested translations (Secure Stage 1 + Non-secure Stage 2), with a
> fault generated if a Secure Stage 2 translation is required.
>
> - Writability checks for various registers (both secure and non-secure)
> have been hardened to ensure that enable bits are correctly checked.
>
> The series has been successfully validated with several test setups:
>
> - An environment using OP-TEE, Hafnium, and a custom platform
> device as V1 series described.
>
> - A new, self-contained test device (smmu-testdev) built upon the
> QTest framework, which will be submitted as a separate series as
> discussed here:
> https://lists.nongnu.org/archive/html/qemu-devel/2025-09/msg05365.html
>
> - The existing non-secure functionality was regression-tested using
> PCIe passthrough to a KVM guest running inside a TCG guest.
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
>
> Tao Tang (14):
> hw/arm/smmuv3: Fix incorrect reserved mask for SMMU CR0 register
> hw/arm/smmuv3: Correct SMMUEN field name in CR0
> hw/arm/smmuv3: Introduce secure registers and commands
> refactor: Move ARMSecuritySpace to a common header
> hw/arm/smmuv3: Introduce banked registers for SMMUv3 state
> hw/arm/smmuv3: Add separate address space for secure SMMU accesses
> hw/arm/smmuv3: Make Configuration Cache security-state aware
> hw/arm/smmuv3: Add security-state handling for page table walks
> hw/arm/smmuv3: Add secure TLB entry management
> hw/arm/smmuv3: Add banked support for queues and error handling
> hw/arm/smmuv3: Harden security checks in MMIO handlers
> hw/arm/smmuv3: Use iommu_index to represent the security context
> hw/arm/smmuv3: Add property to enable Secure SMMU support
> hw/arm/smmuv3: Optional Secure bank migration via subsections
>
> hw/arm/smmu-common.c | 151 ++++-
> hw/arm/smmu-internal.h | 7 +
> hw/arm/smmuv3-internal.h | 114 +++-
> hw/arm/smmuv3.c | 1130 +++++++++++++++++++++++++--------
> hw/arm/trace-events | 9 +-
> hw/arm/virt.c | 5 +
> include/hw/arm/arm-security.h | 54 ++
> include/hw/arm/smmu-common.h | 60 +-
> include/hw/arm/smmuv3.h | 35 +-
> target/arm/cpu.h | 25 +-
> 10 files changed, 1257 insertions(+), 333 deletions(-)
> create mode 100644 include/hw/arm/arm-security.h
>
> --
> 2.34.1
>
I've been working with this on the Device Assignment software stack
recently published by Arm, to run it under QEMU.
[1]
https://git.trustedfirmware.org/plugins/gitiles/TF-RMM/tf-rmm/+/refs/heads/topics/da_alp12_v2
As part of the implementation, I had to define SMMU Realm registers and
some Root registers as well.
I based the work on this series, and the banked approach works well for
adding Realm registers. For Root registers, since they have different
offsets from the NonSecure, Secure and Realm ones, they need their own
{read,write}_mmio functions.
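For illustration, the Root dispatch in my tree looks roughly like this
(all names below are placeholders from my work-in-progress, not from
this series):

    static MemTxResult smmu_root_read_mmio(void *opaque, hwaddr offset,
                                           uint64_t *data, unsigned size,
                                           MemTxAttrs attrs)
    {
        /* Root registers use their own offsets, so decode them here
         * instead of in the banked NS/Secure/Realm handler. */
        *data = 0;
        return MEMTX_OK;
    }

    static MemTxResult smmu_root_write_mmio(void *opaque, hwaddr offset,
                                            uint64_t data, unsigned size,
                                            MemTxAttrs attrs)
    {
        return MEMTX_OK;
    }

    static const MemoryRegionOps smmu_root_mem_ops = {
        .read_with_attrs = smmu_root_read_mmio,
        .write_with_attrs = smmu_root_write_mmio,
        .endianness = DEVICE_NATIVE_ENDIAN,
        .valid = {
            .min_access_size = 4,
            .max_access_size = 8,
        },
    };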
Just wanted to say that it's a great start, and I'm looking forward to
working with the v3.
Regards,
Pierrick
^ permalink raw reply [flat|nested] 48+ messages in thread