From: Eric Auger <eric.auger@redhat.com>
To: Tao Tang <tangtao1634@phytium.com.cn>,
Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-devel@nongnu.org, qemu-arm@nongnu.org,
"Chen Baozi" <chenbaozi@phytium.com.cn>,
"Pierrick Bouvier" <pierrick.bouvier@linaro.org>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Jean-Philippe Brucker" <jean-philippe@linaro.org>,
"Mostafa Saleh" <smostafa@google.com>
Subject: Re: [RFC v3 12/21] hw/arm/smmu-common: Implement secure state handling in ptw
Date: Tue, 2 Dec 2025 16:53:21 +0100
Message-ID: <274e3061-c4ab-48f6-ba95-ed0eed0e2ce2@redhat.com>
In-Reply-To: <20251012151200.4129164-1-tangtao1634@phytium.com.cn>
On 10/12/25 5:12 PM, Tao Tang wrote:
> Enhance the page table walker to correctly handle secure and non-secure
> memory accesses. This change introduces logic to select the appropriate
> address space and enforce architectural security policies during walks.
>
> The page table walker now correctly processes Secure Stage 1
> translations. Key changes include:
>
> - The get_pte function now uses the security context to fetch table
> entries from either the Secure or Non-secure address space.
>
> - The stage 1 walker tracks the security state, respecting the NSCFG
> and NSTable attributes. It correctly handles the hierarchical security
> model: if a table descriptor in a secure walk has NSTable=1, all
> subsequent lookups for that walk are forced into the Non-secure space.
> This is a one-way transition, as specified by the architecture.
>
> - A check is added to fault nested translations that produce a Secure
> IPA when Secure stage 2 is not supported (SMMU_S_IDR1.SEL2 == 0).
>
> - The final TLB entry is tagged with the correct output address space,
> ensuring proper memory isolation.
>
> Stage 2 translations are currently limited to Non-secure lookups. Full
> support for Secure Stage 2 translation will be added in a future series.
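Just to check I follow the hierarchical NSTable rule described above:
taken in isolation, my understanding is that the transition boils down
to something like the snippet below (untested pseudo-C with made-up
names, only meant to restate the rule):

    typedef enum { WALK_SECURE, WALK_NONSECURE } walk_space_t;

    /*
     * One-way transition: a Secure walk moves to Non-secure as soon as
     * a table descriptor has NSTable=1, and later NSTable values are
     * ignored. A Non-secure walk never goes back to Secure.
     */
    static walk_space_t next_table_space(walk_space_t cur, bool nstable)
    {
        if (cur == WALK_NONSECURE) {
            return WALK_NONSECURE;
        }
        return nstable ? WALK_NONSECURE : WALK_SECURE;
    }

If that matches the intent, it may be worth saying so in a one-line
comment next to the forced_ns handling below.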
>
> Signed-off-by: Tao Tang <tangtao1634@phytium.com.cn>
> ---
> hw/arm/smmu-common.c | 64 +++++++++++++++++++++++++++++++++++++++-----
> hw/arm/trace-events | 2 +-
> 2 files changed, 59 insertions(+), 7 deletions(-)
>
> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
> index 5fabe30c75..a092bb5a8d 100644
> --- a/hw/arm/smmu-common.c
> +++ b/hw/arm/smmu-common.c
> @@ -399,20 +399,26 @@ void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid)
> * @base_addr[@index]
> */
> static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
> - SMMUPTWEventInfo *info)
> + SMMUPTWEventInfo *info, SMMUSecSID sec_sid)
> {
> int ret;
> dma_addr_t addr = baseaddr + index * sizeof(*pte);
> -
> + MemTxAttrs attrs = smmu_get_txattrs(sec_sid);
> + AddressSpace *as = smmu_get_address_space(sec_sid);
> + if (!as) {
> + info->type = SMMU_PTW_ERR_WALK_EABT;
Is it WALK_EABT or PERMISSION in that case? I failed to find where this
is specified in the spec. Could you add a reference?
> + info->addr = addr;
> + return -EINVAL;
> + }
> /* TODO: guarantee 64-bit single-copy atomicity */
> - ret = ldq_le_dma(&address_space_memory, addr, pte, MEMTXATTRS_UNSPECIFIED);
> + ret = ldq_le_dma(as, addr, pte, attrs);
>
> if (ret != MEMTX_OK) {
> info->type = SMMU_PTW_ERR_WALK_EABT;
> info->addr = addr;
> return -EINVAL;
> }
> - trace_smmu_get_pte(baseaddr, index, addr, *pte);
> + trace_smmu_get_pte(sec_sid, baseaddr, index, addr, *pte);
> return 0;
> }
>
> @@ -543,6 +549,8 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
>
> baseaddr = extract64(tt->ttb, 0, cfg->oas);
> baseaddr &= ~indexmask;
> + int nscfg = tt->nscfg;
> + bool forced_ns = false; /* Track if NSTable=1 forced NS mode */
>
> while (level < VMSA_LEVELS) {
> uint64_t subpage_size = 1ULL << level_shift(level, granule_sz);
> @@ -552,7 +560,10 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
> dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
> uint8_t ap;
>
> - if (get_pte(baseaddr, offset, &pte, info)) {
> + /* Use NS if forced by previous NSTable=1 or current nscfg */
> + int current_ns = forced_ns || nscfg;
> + SMMUSecSID sec_sid = current_ns ? SMMU_SEC_SID_NS : SMMU_SEC_SID_S;
> + if (get_pte(baseaddr, offset, &pte, info, sec_sid)) {
> goto error;
> }
> trace_smmu_ptw_level(stage, level, iova, subpage_size,
> @@ -577,6 +588,26 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
> goto error;
> }
> }
> +
> + /*
> + * Hierarchical control of Secure/Non-secure accesses:
> + * If NSTable=1 from Secure space, force all subsequent lookups to
> + * Non-secure space and ignore future NSTable according to
> + * (IHI 0070G.b)13.4.1 Stage 1 page permissions and
> + * (DDI 0487H.a)D8.4.2 Control of Secure or Non-secure memory access
> + */
> + if (!forced_ns) {
> + int new_nstable = PTE_NSTABLE(pte);
> + if (!current_ns && new_nstable) {
> + /* First transition from Secure to Non-secure */
> + forced_ns = true;
> + nscfg = 1;
> + } else if (!forced_ns) {
> + /* Still in original mode, update nscfg normally */
> + nscfg = new_nstable;
> + }
> + /* If forced_ns is already true, ignore NSTable bit */
> + }
> level++;
> continue;
> } else if (is_page_pte(pte, level)) {
> @@ -619,6 +650,13 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
> goto error;
> }
>
> + tlbe->sec_sid = PTE_NS(pte) ? SMMU_SEC_SID_NS : SMMU_SEC_SID_S;
> + tlbe->entry.target_as = smmu_get_address_space(tlbe->sec_sid);
> + if (!tlbe->entry.target_as) {
> + info->type = SMMU_PTW_ERR_WALK_EABT;
> + info->addr = gpa;
> + goto error;
> + }
> tlbe->entry.translated_addr = gpa;
> tlbe->entry.iova = iova & ~mask;
> tlbe->entry.addr_mask = mask;
> @@ -688,7 +726,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
> dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
> uint8_t s2ap;
>
> - if (get_pte(baseaddr, offset, &pte, info)) {
> + /* Use NS as Secure Stage 2 is not implemented (SMMU_S_IDR1.SEL2 == 0)*/
I don't really get this, as you already pass sel2 in the cfg.
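For instance I would have expected something along these lines
(untested, just to make the question concrete; identifiers taken from
the patch):

    /* Secure stage 2 is not implemented: SMMU_S_IDR1.SEL2 == 0 */
    g_assert(!cfg->sel2);
    if (get_pte(baseaddr, offset, &pte, info, SMMU_SEC_SID_NS)) {
        goto error;
    }

or, eventually, a SEC_SID derived from the cfg once secure stage 2 gets
implemented. Maybe I am just missing where sel2 is taken into account
on this path.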
> + if (get_pte(baseaddr, offset, &pte, info, SMMU_SEC_SID_NS)) {
> goto error;
> }
> trace_smmu_ptw_level(stage, level, ipa, subpage_size,
> @@ -741,6 +780,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
> goto error_ipa;
> }
>
> + tlbe->sec_sid = SMMU_SEC_SID_NS;
> + tlbe->entry.target_as = &address_space_memory;
> tlbe->entry.translated_addr = gpa;
> tlbe->entry.iova = ipa & ~mask;
> tlbe->entry.addr_mask = mask;
> @@ -825,6 +866,17 @@ int smmu_ptw(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t iova,
> return ret;
> }
>
> + if (!cfg->sel2 && tlbe->sec_sid > SMMU_SEC_SID_NS) {
> + /*
> + * Nested translation with Secure IPA output is not supported if
> + * Secure Stage 2 is not implemented.
> + */
> + info->type = SMMU_PTW_ERR_TRANSLATION;
Could you also add a pointer to the spec for the TRANSLATION error?

Otherwise looks good.

Eric
> + info->stage = SMMU_STAGE_1;
> + tlbe->entry.perm = IOMMU_NONE;
> + return -EINVAL;
> + }
> +
> ipa = CACHED_ENTRY_TO_ADDR(tlbe, iova);
> ret = smmu_ptw_64_s2(cfg, ipa, perm, &tlbe_s2, info);
> if (ret) {
> diff --git a/hw/arm/trace-events b/hw/arm/trace-events
> index 96ebd1b11b..a37e894766 100644
> --- a/hw/arm/trace-events
> +++ b/hw/arm/trace-events
> @@ -16,7 +16,7 @@ smmu_ptw_level(int stage, int level, uint64_t iova, size_t subpage_size, uint64_
> smmu_ptw_invalid_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint32_t offset, uint64_t pte) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" offset=%d pte=0x%"PRIx64
> smmu_ptw_page_pte(int stage, int level, uint64_t iova, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t address) "stage=%d level=%d iova=0x%"PRIx64" base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" page address = 0x%"PRIx64
> smmu_ptw_block_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t iova, uint64_t gpa, int bsize_mb) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" iova=0x%"PRIx64" block address = 0x%"PRIx64" block size = %d MiB"
> -smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64
> +smmu_get_pte(int sec_sid, uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "sec_sid=%d baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64""
> smmu_iotlb_inv_all(void) "IOTLB invalidate all"
> smmu_iotlb_inv_asid_vmid(int asid, int vmid) "IOTLB invalidate asid=%d vmid=%d"
> smmu_iotlb_inv_vmid(int vmid) "IOTLB invalidate vmid=%d"