From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, Peter Maydell <peter.maydell@linaro.org>
Subject: [PATCH v4 03/24] target/arm: Use probe_access_full for BTI
Date: Mon, 10 Oct 2022 20:18:50 -0700
Message-ID: <20221011031911.2408754-4-richard.henderson@linaro.org>
In-Reply-To: <20221011031911.2408754-1-richard.henderson@linaro.org>

Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit.
In is_guarded_page, use probe_access_full instead of just guessing
that the TLB entry is still present.  This also resolves the FIXME
about executing from device memory.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
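For reference, a minimal sketch of the resulting softmmu path, not part
of the commit message: the standalone helper shape below is illustrative,
while the probe_access_full() call, the flags handling, and the 'guarded'
field are taken from the translate-a64.c hunk in this patch.

static bool is_guarded_page_sketch(CPUARMState *env, uint64_t addr,
                                   int mmu_idx)
{
    CPUTLBEntryFull *full;
    void *host;
    int flags;

    /*
     * The insn at @addr has just been fetched, so its TLB entry must
     * be present and valid and the probe cannot fault.  Unlike the old
     * addr_code comparison, this also works when executing from device
     * memory, which is what resolves the previous FIXME.
     */
    flags = probe_access_full(env, addr, MMU_INST_FETCH, mmu_idx,
                              false, &host, &full, 0);
    assert(!(flags & TLB_INVALID_MASK));

    /* GP bit cached by get_phys_addr_lpae() via TARGET_PAGE_ENTRY_EXTRA. */
    return full->guarded;
}
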
 target/arm/cpu-param.h     |  9 +++++----
 target/arm/cpu.h           | 13 -------------
 target/arm/internals.h     |  1 +
 target/arm/ptw.c           |  7 ++++---
 target/arm/translate-a64.c | 21 ++++++++++-----------
 5 files changed, 20 insertions(+), 31 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index 38347b0d20..f4338fd10e 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -36,12 +36,13 @@
  *
  * For ARMMMUIdx_Stage2*, pte_attrs is the S2 descriptor bits [5:2].
  * Otherwise, pte_attrs is the same as the MAIR_EL1 8-bit format.
- * For shareability, as in the SH field of the VMSAv8-64 PTEs.
+ * For shareability and guarded, as in the SH and GP fields respectively
+ * of the VMSAv8-64 PTEs.
  */
 # define TARGET_PAGE_ENTRY_EXTRA  \
-     uint8_t pte_attrs;           \
-     uint8_t shareability;
-
+    uint8_t pte_attrs;            \
+    uint8_t shareability;         \
+    bool guarded;
 #endif
 
 #define NB_MMU_MODES 8
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index f09ec8aa03..a34d496c5b 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3388,19 +3388,6 @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
 /* Shared between translate-sve.c and sve_helper.c.  */
 extern const uint64_t pred_esz_masks[5];
 
-/* Helper for the macros below, validating the argument type. */
-static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
-{
-    return x;
-}
-
-/*
- * Lvalue macros for ARM TLB bits that we must cache in the TCG TLB.
- * Using these should be a bit more self-documenting than using the
- * generic target bits directly.
- */
-#define arm_tlb_bti_gp(x) (typecheck_memtxattrs(x)->target_tlb_bit0)
-
 /*
  * AArch64 usage of the PAGE_TARGET_* bits for linux-user.
  * Note that with the Linux kernel, PROT_MTE may not be cleared by mprotect
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 9566364dca..c3c3920ded 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1095,6 +1095,7 @@ typedef struct ARMCacheAttrs {
     unsigned int attrs:8;
     unsigned int shareability:2; /* as in the SH field of the VMSAv8-64 PTEs */
     bool is_s2_format:1;
+    bool guarded:1;              /* guarded bit of the v8-64 PTE */
 } ARMCacheAttrs;
 
 /* Fields that are valid upon success. */
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index 23f16f4ff7..2d182d62e5 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -1313,9 +1313,10 @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
          */
         result->f.attrs.secure = false;
     }
-    /* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB.  */
-    if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) {
-        arm_tlb_bti_gp(&result->f.attrs) = true;
+
+    /* When in aarch64 mode, and BTI is enabled, remember GP in the TLB.  */
+    if (aarch64 && cpu_isar_feature(aa64_bti, cpu)) {
+        result->f.guarded = guarded;
     }
 
     if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 5b67375f4e..60ff753d81 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -14601,22 +14601,21 @@ static bool is_guarded_page(CPUARMState *env, DisasContext *s)
 #ifdef CONFIG_USER_ONLY
     return page_get_flags(addr) & PAGE_BTI;
 #else
+    CPUTLBEntryFull *full;
+    void *host;
     int mmu_idx = arm_to_core_mmu_idx(s->mmu_idx);
-    unsigned int index = tlb_index(env, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    int flags;
 
     /*
      * We test this immediately after reading an insn, which means
-     * that any normal page must be in the TLB.  The only exception
-     * would be for executing from flash or device memory, which
-     * does not retain the TLB entry.
-     *
-     * FIXME: Assume false for those, for now.  We could use
-     * arm_cpu_get_phys_page_attrs_debug to re-read the page
-     * table entry even for that case.
+     * that the TLB entry must be present and valid, and thus this
+     * access will never raise an exception.
      */
-    return (tlb_hit(entry->addr_code, addr) &&
-            arm_tlb_bti_gp(&env_tlb(env)->d[mmu_idx].fulltlb[index].attrs));
+    flags = probe_access_full(env, addr, MMU_INST_FETCH, mmu_idx,
+                              false, &host, &full, 0);
+    assert(!(flags & TLB_INVALID_MASK));
+
+    return full->guarded;
 #endif
 }
 
-- 
2.34.1
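
A short contextual note, not part of the patch: TARGET_PAGE_ENTRY_EXTRA
is assumed here to be expanded inside the softmmu CPUTLBEntryFull
structure (include/exec/cpu-defs.h), so after the cpu-param.h hunk above
the Arm-specific tail of that structure would look roughly like this
(generic fields elided; the struct name is illustrative):

typedef struct CPUTLBEntryFullTailSketch {
    /* ... generic softmmu fields (section/offset, attrs, prot, page size) ... */
    uint8_t pte_attrs;     /* MAIR_EL1 format, or S2 descriptor bits [5:2] */
    uint8_t shareability;  /* SH field of the VMSAv8-64 PTE */
    bool guarded;          /* GP field of the VMSAv8-64 PTE, added by this patch */
} CPUTLBEntryFullTailSketch;

get_phys_addr_lpae() fills result->f (a CPUTLBEntryFull), so caching the
GP bit reduces to the single assignment shown in the ptw.c hunk, and the
same structure is what probe_access_full() hands back to is_guarded_page().
Caching the bit here rather than in MemTxAttrs is also what allows the
arm_tlb_bti_gp()/target_tlb_bit0 machinery in cpu.h to be removed.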




Thread overview: 33+ messages
2022-10-11  3:18 [PATCH v4 00/24] target/arm: Implement FEAT_HAFDBS Richard Henderson
2022-10-11  3:18 ` [PATCH v4 01/24] target/arm: Enable TARGET_PAGE_ENTRY_EXTRA Richard Henderson
2022-10-11  3:18 ` [PATCH v4 02/24] target/arm: Use probe_access_full for MTE Richard Henderson
2022-10-11  3:18 ` Richard Henderson [this message]
2022-10-11  3:18 ` [PATCH v4 04/24] target/arm: Add ARMMMUIdx_Phys_{S,NS} Richard Henderson
2022-10-11  3:18 ` [PATCH v4 05/24] target/arm: Move ARMMMUIdx_Stage2 to a real tlb mmu_idx Richard Henderson
2022-10-14 18:09   ` Peter Maydell
2022-10-11  3:18 ` [PATCH v4 06/24] target/arm: Restrict tlb flush from vttbr_write to vmid change Richard Henderson
2022-10-14 18:12   ` Peter Maydell
2022-10-11  3:18 ` [PATCH v4 07/24] target/arm: Split out S1Translate type Richard Henderson
2022-10-17  9:53   ` Peter Maydell
2022-10-11  3:18 ` [PATCH v4 08/24] target/arm: Plumb debug into S1Translate Richard Henderson
2022-10-11  3:18 ` [PATCH v4 09/24] target/arm: Move be test for regime into S1TranslateResult Richard Henderson
2022-10-11  3:18 ` [PATCH v4 10/24] target/arm: Use softmmu tlbs for page table walking Richard Henderson
2022-10-11  3:18 ` [PATCH v4 11/24] target/arm: Split out get_phys_addr_twostage Richard Henderson
2022-10-11  3:18 ` [PATCH v4 12/24] target/arm: Use bool consistently for get_phys_addr subroutines Richard Henderson
2022-10-11  3:19 ` [PATCH v4 13/24] target/arm: Add ptw_idx to S1Translate Richard Henderson
2022-10-17 10:01   ` Peter Maydell
2022-10-20  3:16     ` Richard Henderson
2022-10-11  3:19 ` [PATCH v4 14/24] target/arm: Add isar predicates for FEAT_HAFDBS Richard Henderson
2022-10-11  3:19 ` [PATCH v4 15/24] target/arm: Extract HA and HD in aa64_va_parameters Richard Henderson
2022-10-11  3:19 ` [PATCH v4 16/24] target/arm: Move S1_ptw_translate outside arm_ld[lq]_ptw Richard Henderson
2022-10-11  3:19 ` [PATCH v4 17/24] target/arm: Add ARMFault_UnsuppAtomicUpdate Richard Henderson
2022-10-11  3:19 ` [PATCH v4 18/24] target/arm: Remove loop from get_phys_addr_lpae Richard Henderson
2022-10-11  3:19 ` [PATCH v4 19/24] target/arm: Fix fault reporting in get_phys_addr_lpae Richard Henderson
2022-10-11  3:19 ` [PATCH v4 20/24] target/arm: Don't shift attrs " Richard Henderson
2022-10-11  3:19 ` [PATCH v4 21/24] target/arm: Consider GP an attribute " Richard Henderson
2022-10-11  3:19 ` [PATCH v4 22/24] target/arm: Implement FEAT_HAFDBS, access flag portion Richard Henderson
2022-10-17 10:45   ` Peter Maydell
2022-10-11  3:19 ` [PATCH v4 23/24] target/arm: Implement FEAT_HAFDBS, dirty bit portion Richard Henderson
2022-10-17 11:01   ` Peter Maydell
2022-10-11  3:19 ` [PATCH v4 24/24] target/arm: Use the max page size in a 2-stage ptw Richard Henderson
2022-10-17 12:49 ` [PATCH v4 00/24] target/arm: Implement FEAT_HAFDBS Peter Maydell
