* [PATCH v4 00/13] target/arm: add support for MTE4
@ 2026-03-09 21:59 Gabriel Brookman
  2026-03-09 21:59 ` [PATCH v4 01/13] target/arm: implement MTE_PERM Gabriel Brookman
                   ` (13 more replies)
  0 siblings, 14 replies; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

This series implements ARM's Enhanced Memory Tagging Extension
(MTE4). MTE4 implies the presence of several subfeatures:
FEAT_MTE_CANONICAL_TAGS, FEAT_MTE_TAGGED_FAR, FEAT_MTE_STORE_ONLY,
FEAT_MTE_NO_ADDRESS_TAGS, and FEAT_MTE_PERM, none of which are
currently implemented in QEMU. This series implements all five.

Testing:
  - Tests are included for FAR and STORE_ONLY.
  - The MTE_CANONICAL/NAT test from the previous email, modified so
    that MTE_CANONICAL is enabled in user mode.
  - A bare-metal testsuite that sets up page tables for S1 and S2
    translation, to exercise the Tagged NoTagAccess fault.
  - The bare-metal testsuite was also used to test LDGM and similar
    instructions not permitted in user mode.
  - The bare-metal testsuite was also used to test the MTX-related
    patches.

Thanks,
Gabriel Brookman

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3116
Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
---
Changes in v4:
- MTX now interacts with PAuth.
- Canonical tag checking only takes place in canonically tagged regions.
- MTX bits enable tag checking.
- MTX bits are placed in MTEDESC for access in the mte_check helper.
- Separate feature bits are used to delineate each feature.
- PRCTL functions renamed and refactored as per Richard's suggestion.
- Link to v3: https://lore.kernel.org/qemu-devel/20260105-feat-mte4-v3-0-86a0d99ef2e4@gmail.com

Changes in v3:
- Added prctl for MTE_STORE_ONLY to linux-user
- mte_check is no longer generated on read when STORE_ONLY enabled
- Implemented LDGM instruction
- Removed "long" datatype as per Richard's suggestion
- Implemented masking for VA range checks when MTX bit enabled
- Implemented MTE_PERM, with NoTagAccess attribute
- Removed user-mode test for MTE_CANONICAL, since it cannot be enabled
  in user mode.
- Removed TBI from mte_check generation logic
- Link to v2: https://lore.kernel.org/qemu-devel/20251116-feat-mte4-v2-0-9a7122b7fa76@gmail.com

Changes in v2:
- Added tests for STORE_ONLY.
- Refined commit messages.
- Added FEAT_MTE_CANONICAL_TAGS and FEAT_MTE_NO_ADDRESS_TAGS + tests.
- Fixed TCSO bit macro names.
- Link to v1: https://lore.kernel.org/qemu-devel/20251111-feat-mte4-v1-0-72ef5cf276f9@gmail.com

---
Gabriel Brookman (13):
      target/arm: implement MTE_PERM
      target/arm: add TCSO bitmasks to SCTLR
      target/arm: mte_check unemitted on STORE_ONLY load
      linux-user: add MTE_STORE_ONLY to prctl
      target/arm: tag check emitted when MTX and not TBI
      target/arm: add canonical tag check logic
      target/arm: ldg on canonical tag loads the tag
      target/arm: storing to canonical tag faults
      target/arm: with MTX, no tag bit bounds check
      target/arm: with MTX, tag is not a part of PAuth
      docs: add MTE4 features to docs
      tests/tcg: add test for MTE FAR
      tests/tcg: add test for MTE_STORE_ONLY

 docs/system/arm/emulation.rst        |   5 ++
 linux-user/aarch64/mte_user_helper.c |  11 ++-
 linux-user/aarch64/mte_user_helper.h |  14 +--
 linux-user/aarch64/target_prctl.h    |   6 +-
 target/arm/cpu-features.h            |  15 ++++
 target/arm/cpu.h                     |   5 ++
 target/arm/gdbstub64.c               |   2 +-
 target/arm/helper.c                  |  36 ++++++--
 target/arm/internals.h               |  47 +++++++++-
 target/arm/ptw.c                     |  53 +++++++++--
 target/arm/tcg/cpu64.c               |   8 ++
 target/arm/tcg/helper-a64.c          |   9 +-
 target/arm/tcg/hflags.c              |  25 +++++-
 target/arm/tcg/mte_helper.c          | 164 ++++++++++++++++++++++++++++++++++-
 target/arm/tcg/pauth_helper.c        |  14 ++-
 target/arm/tcg/translate-a64.c       |  15 +++-
 target/arm/tcg/translate.h           |   3 +
 tests/tcg/aarch64/Makefile.target    |   2 +-
 tests/tcg/aarch64/mte-10.c           |  49 +++++++++++
 tests/tcg/aarch64/mte-9.c            |  48 ++++++++++
 tests/tcg/aarch64/mte.h              |   7 +-
 21 files changed, 497 insertions(+), 41 deletions(-)
---
base-commit: de61484ec39f418e5c0d4603017695f9ffb8fe24
change-id: 20251109-feat-mte4-6740a6202e83

Best regards,
-- 
Gabriel Brookman <brookmangabriel@gmail.com>



^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH v4 01/13] target/arm: implement MTE_PERM
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
@ 2026-03-09 21:59 ` Gabriel Brookman
  2026-04-04 23:17   ` Richard Henderson
  2026-03-09 21:59 ` [PATCH v4 02/13] target/arm: add TCSO bitmasks to SCTLR Gabriel Brookman
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

Introduces a new stage 2 memory attribute, NoTagAccess, that raises a
stage 2 data abort on a tag check, tag read, or tag write.

Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
---
 target/arm/cpu-features.h   |  5 +++++
 target/arm/ptw.c            | 25 ++++++++++++++++++++++---
 target/arm/tcg/mte_helper.c | 30 ++++++++++++++++++++++++++++++
 3 files changed, 57 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
index b683c9551a..1f09d01713 100644
--- a/target/arm/cpu-features.h
+++ b/target/arm/cpu-features.h
@@ -1144,6 +1144,11 @@ static inline bool isar_feature_aa64_mte3(const ARMISARegisters *id)
     return FIELD_EX64_IDREG(id, ID_AA64PFR1, MTE) >= 3;
 }
 
+static inline bool isar_feature_aa64_mteperm(const ARMISARegisters *id)
+{
+    return FIELD_EX64_IDREG(id, ID_AA64PFR2, MTEPERM) >= 1;
+}
+
 static inline bool isar_feature_aa64_sme(const ARMISARegisters *id)
 {
     return FIELD_EX64_IDREG(id, ID_AA64PFR1, SME) != 0;
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index 8b8dc09e72..d381413ef7 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -3383,7 +3383,7 @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
                                         ARMCacheAttrs s1, ARMCacheAttrs s2)
 {
     ARMCacheAttrs ret;
-    bool tagged = false;
+    bool tagged, notagaccess = false;
 
     assert(!s1.is_s2_format);
     ret.is_s2_format = false;
@@ -3393,6 +3393,18 @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
         s1.attrs = 0xff;
     }
 
+    if (hcr & HCR_FWB) {
+        if (s2.attrs >= 0xe) {
+            notagaccess = true;
+            s2.attrs = 0x7;
+        }
+    } else {
+        if (s2.attrs == 0x4) {
+            notagaccess = true;
+            s2.attrs = 0xf;
+        }
+    }
+
     /* Combine shareability attributes (table D4-43) */
     if (s1.shareability == 2 || s2.shareability == 2) {
         /* if either are outer-shareable, the result is outer-shareable */
@@ -3424,9 +3436,16 @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
         ret.shareability = 2;
     }
 
-    /* TODO: CombineS1S2Desc does not consider transient, only WB, RWA. */
+    /*
+     * The attr encoding 0xe0 corresponds to Tagged NoTagAccess and is only
+     * valid with FEAT_MTE_PERM (otherwise RESERVED, constrained
+     * unpredictable). The presence of this feature is checked in
+     * allocation_tag_mem_probe, where Tagged NoTagAccess has its effect. See
+     * J1.3.5.2 EncodePARAttrs.
+     * TODO: CombineS1S2Desc does not consider transient, only WB, RWA.
+     */
     if (tagged && ret.attrs == 0xff) {
-        ret.attrs = 0xf0;
+        ret.attrs = notagaccess ? 0xe0 : 0xf0;
     }
 
     return ret;
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index a9fb979f63..4deec80208 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -58,6 +58,27 @@ static int choose_nonexcluded_tag(int tag, int offset, uint16_t exclude)
     return tag;
 }
 
+#ifndef CONFIG_USER_ONLY
+/*
+ * Constructs S2 Permission Fault as described in ARM ARM "Stage 2 Memory
+ * Tagging Attributes".
+ */
+static void mte_perm_check_fail(CPUARMState *env, uint64_t dirty_ptr,
+                                uintptr_t ra, bool is_write)
+{
+    uint64_t syn;
+
+    env->exception.vaddress = dirty_ptr;
+
+    syn = syn_data_abort_no_iss(0, 0, 0, 0, 0, is_write, 0);
+
+    syn |= BIT_ULL(41); /* TagAccess is bit 41 */
+
+    raise_exception_ra(env, EXCP_DATA_ABORT, syn, 2, ra);
+    g_assert_not_reached();
+}
+#endif
+
 uint8_t *allocation_tag_mem_probe(CPUARMState *env, int ptr_mmu_idx,
                                   uint64_t ptr, MMUAccessType ptr_access,
                                   int ptr_size, MMUAccessType tag_access,
@@ -117,6 +138,15 @@ uint8_t *allocation_tag_mem_probe(CPUARMState *env, int ptr_mmu_idx,
     }
     assert(!(flags & TLB_INVALID_MASK));
 
+    /*
+     * If the virtual page MemAttr == Tagged NoTagAccess, throw S2 permission
+     * fault (conditional on mteperm being implemented and RA != 0).
+     */
+    if (ra && cpu_isar_feature(aa64_mteperm, env_archcpu(env))
+        && full->extra.arm.pte_attrs == 0xe0) {
+        mte_perm_check_fail(env, ptr, ra, tag_access == MMU_DATA_STORE);
+    }
+
     /* If the virtual page MemAttr != Tagged, access unchecked. */
     if (full->extra.arm.pte_attrs != 0xf0) {
         return NULL;

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 02/13] target/arm: add TCSO bitmasks to SCTLR
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
  2026-03-09 21:59 ` [PATCH v4 01/13] target/arm: implement MTE_PERM Gabriel Brookman
@ 2026-03-09 21:59 ` Gabriel Brookman
  2026-04-04 23:27   ` Richard Henderson
  2026-03-09 21:59 ` [PATCH v4 03/13] target/arm: mte_check unemitted on STORE_ONLY load Gabriel Brookman
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

Name the SCTLR bitmasks used to control the FEAT_MTE_STORE_ONLY
feature. Writes to these SCTLR fields are now ignored if MTE or
FEAT_MTE_STORE_ONLY is not implemented, as per convention.

Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
---
 target/arm/cpu-features.h |  5 +++++
 target/arm/cpu.h          |  2 ++
 target/arm/helper.c       | 20 ++++++++++++++------
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
index 1f09d01713..38fc56b52e 100644
--- a/target/arm/cpu-features.h
+++ b/target/arm/cpu-features.h
@@ -1149,6 +1149,11 @@ static inline bool isar_feature_aa64_mteperm(const ARMISARegisters *id)
     return FIELD_EX64_IDREG(id, ID_AA64PFR2, MTEPERM) >= 1;
 }
 
+static inline bool isar_feature_aa64_mte_store_only(const ARMISARegisters *id)
+{
+    return FIELD_EX64_IDREG(id, ID_AA64PFR2, MTESTOREONLY) == 1;
+}
+
 static inline bool isar_feature_aa64_sme(const ARMISARegisters *id)
 {
     return FIELD_EX64_IDREG(id, ID_AA64PFR1, SME) != 0;
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 657ff4ab20..677ac18f6f 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1476,6 +1476,8 @@ void pmu_init(ARMCPU *cpu);
 #define SCTLR_EnAS0   (1ULL << 55) /* FEAT_LS64_ACCDATA */
 #define SCTLR_EnALS   (1ULL << 56) /* FEAT_LS64 */
 #define SCTLR_EPAN    (1ULL << 57) /* FEAT_PAN3 */
+#define SCTLR_TCSO0   (1ULL << 58) /* FEAT_MTE_STORE_ONLY */
+#define SCTLR_TCSO    (1ULL << 59) /* FEAT_MTE_STORE_ONLY */
 #define SCTLR_EnTP2   (1ULL << 60) /* FEAT_SME */
 #define SCTLR_NMI     (1ULL << 61) /* FEAT_NMI */
 #define SCTLR_SPINTMASK (1ULL << 62) /* FEAT_NMI */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 7389f2988c..987539524a 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -3351,12 +3351,20 @@ static void sctlr_write(CPUARMState *env, const ARMCPRegInfo *ri,
 
     /* ??? Lots of these bits are not implemented.  */
 
-    if (ri->state == ARM_CP_STATE_AA64 && !cpu_isar_feature(aa64_mte, cpu)) {
-        if (ri->opc1 == 6) { /* SCTLR_EL3 */
-            value &= ~(SCTLR_ITFSB | SCTLR_TCF | SCTLR_ATA);
-        } else {
-            value &= ~(SCTLR_ITFSB | SCTLR_TCF0 | SCTLR_TCF |
-                       SCTLR_ATA0 | SCTLR_ATA);
+    if (ri->state == ARM_CP_STATE_AA64) {
+        if (!cpu_isar_feature(aa64_mte, cpu)) {
+            if (ri->opc1 == 6) { /* SCTLR_EL3 */
+                value &= ~(SCTLR_ITFSB | SCTLR_TCF | SCTLR_ATA | SCTLR_TCSO);
+            } else {
+                value &= ~(SCTLR_ITFSB | SCTLR_TCF0 | SCTLR_TCF |
+                           SCTLR_ATA0 | SCTLR_ATA | SCTLR_TCSO | SCTLR_TCSO0);
+            }
+        } else if (!cpu_isar_feature(aa64_mte_store_only, cpu)) { /* not mte4 */
+            if (ri->opc1 == 6) { /* SCTLR_EL3 */
+                value &= ~SCTLR_TCSO;
+            } else {
+                value &= ~(SCTLR_TCSO | SCTLR_TCSO0);
+            }
         }
     }
 

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 03/13] target/arm: mte_check unemitted on STORE_ONLY load
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
  2026-03-09 21:59 ` [PATCH v4 01/13] target/arm: implement MTE_PERM Gabriel Brookman
  2026-03-09 21:59 ` [PATCH v4 02/13] target/arm: add TCSO bitmasks to SCTLR Gabriel Brookman
@ 2026-03-09 21:59 ` Gabriel Brookman
  2026-04-04 23:37   ` Richard Henderson
  2026-03-09 21:59 ` [PATCH v4 04/13] linux-user: add MTE_STORE_ONLY to prctl Gabriel Brookman
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

Stop emitting the mte_check helper on loads when the STORE_ONLY tag
checking mode is enabled, since such loads are never tag checked.

Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
---
 target/arm/cpu.h               |  2 ++
 target/arm/tcg/hflags.c        | 12 ++++++++++++
 target/arm/tcg/translate-a64.c |  8 ++++++--
 target/arm/tcg/translate.h     |  2 ++
 4 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 677ac18f6f..7911912c3e 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2525,6 +2525,8 @@ FIELD(TBFLAG_A64, ZT0EXC_EL, 39, 2)
 FIELD(TBFLAG_A64, GCS_EN, 41, 1)
 FIELD(TBFLAG_A64, GCS_RVCEN, 42, 1)
 FIELD(TBFLAG_A64, GCSSTR_EL, 43, 2)
+FIELD(TBFLAG_A64, MTE_STORE_ONLY, 45, 1)
+FIELD(TBFLAG_A64, MTE0_STORE_ONLY, 46, 1)
 
 /*
  * Helpers for using the above. Note that only the A64 accessors use
diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
index 7e6f8d3647..75c55b1a6d 100644
--- a/target/arm/tcg/hflags.c
+++ b/target/arm/tcg/hflags.c
@@ -423,6 +423,15 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
                      */
                     DP_TBFLAG_A64(flags, MTE0_ACTIVE, 1);
                 }
+                /*
+                 * Repeat for MTE_STORE_ONLY
+                 */
+                if ((el == 0 ? SCTLR_TCSO0 : SCTLR_TCSO) & sctlr) {
+                    DP_TBFLAG_A64(flags, MTE_STORE_ONLY, 1);
+                    if (!EX_TBFLAG_A64(flags, UNPRIV)) {
+                        DP_TBFLAG_A64(flags, MTE0_STORE_ONLY, 1);
+                    }
+                }
             }
         }
         /* And again for unprivileged accesses, if required.  */
@@ -432,6 +441,9 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
             && (sctlr & SCTLR_TCF0)
             && allocation_tag_access_enabled(env, 0, sctlr)) {
             DP_TBFLAG_A64(flags, MTE0_ACTIVE, 1);
+            if (SCTLR_TCSO0 & sctlr) {
+                DP_TBFLAG_A64(flags, MTE0_STORE_ONLY, 1);
+            }
         }
         /*
          * For unpriv tag-setting accesses we also need ATA0. Again, in
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 5d261a5e32..874174a15b 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -301,7 +301,8 @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
                                       MemOp memop, bool is_unpriv,
                                       int core_idx)
 {
-    if (tag_checked && s->mte_active[is_unpriv]) {
+    if (tag_checked && s->mte_active[is_unpriv] &&
+        (is_write || !s->mte_store_only[is_unpriv])) {
         TCGv_i64 ret;
         int desc = 0;
 
@@ -333,7 +334,8 @@ TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
 TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
                         bool tag_checked, int total_size, MemOp single_mop)
 {
-    if (tag_checked && s->mte_active[0]) {
+    if (tag_checked && s->mte_active[0] &&
+        (is_write || !s->mte_store_only[0])) {
         TCGv_i64 ret;
         int desc = 0;
 
@@ -10696,6 +10698,8 @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
     dc->ata[1] = EX_TBFLAG_A64(tb_flags, ATA0);
     dc->mte_active[0] = EX_TBFLAG_A64(tb_flags, MTE_ACTIVE);
     dc->mte_active[1] = EX_TBFLAG_A64(tb_flags, MTE0_ACTIVE);
+    dc->mte_store_only[0] = EX_TBFLAG_A64(tb_flags, MTE_STORE_ONLY);
+    dc->mte_store_only[1] = EX_TBFLAG_A64(tb_flags, MTE0_STORE_ONLY);
     dc->pstate_sm = EX_TBFLAG_A64(tb_flags, PSTATE_SM);
     dc->pstate_za = EX_TBFLAG_A64(tb_flags, PSTATE_ZA);
     dc->sme_trap_nonstreaming = EX_TBFLAG_A64(tb_flags, SME_TRAP_NONSTREAMING);
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index 3e3094a463..74143161f4 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -140,6 +140,8 @@ typedef struct DisasContext {
     bool ata[2];
     /* True if v8.5-MTE tag checks affect the PE; index with is_unpriv.  */
     bool mte_active[2];
+    /* True if v8.5-MTE tag checks disabled for reads; index with is_unpriv. */
+    bool mte_store_only[2];
     /* True with v8.5-BTI and SCTLR_ELx.BT* set.  */
     bool bt;
     /* True if any CP15 access is trapped by HSTR_EL2 */

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 04/13] linux-user: add MTE_STORE_ONLY to prctl
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
                   ` (2 preceding siblings ...)
  2026-03-09 21:59 ` [PATCH v4 03/13] target/arm: mte_check unemitted on STORE_ONLY load Gabriel Brookman
@ 2026-03-09 21:59 ` Gabriel Brookman
  2026-04-04 23:39   ` Richard Henderson
  2026-03-09 21:59 ` [PATCH v4 05/13] target/arm: tag check emitted when MTX and not TBI Gabriel Brookman
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

Linux-user processes can now control whether MTE_STORE_ONLY is enabled
using the prctl syscall.

Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
---
 linux-user/aarch64/mte_user_helper.c | 11 ++++++++++-
 linux-user/aarch64/mte_user_helper.h | 14 +++++++++-----
 linux-user/aarch64/target_prctl.h    |  6 +++++-
 target/arm/gdbstub64.c               |  2 +-
 tests/tcg/aarch64/mte.h              |  3 +++
 5 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/linux-user/aarch64/mte_user_helper.c b/linux-user/aarch64/mte_user_helper.c
index a5b1c8503b..b5c4dafcda 100644
--- a/linux-user/aarch64/mte_user_helper.c
+++ b/linux-user/aarch64/mte_user_helper.c
@@ -10,7 +10,7 @@
 #include "qemu.h"
 #include "mte_user_helper.h"
 
-void arm_set_mte_tcf0(CPUArchState *env, abi_long value)
+void arm_set_tagged_addr_ctrl(CPUArchState *env, abi_long value)
 {
     /*
      * Write PR_MTE_TCF to SCTLR_EL1[TCF0].
@@ -32,4 +32,13 @@ void arm_set_mte_tcf0(CPUArchState *env, abi_long value)
         tcf = 2;
     }
     env->cp15.sctlr_el[1] = deposit64(env->cp15.sctlr_el[1], 38, 2, tcf);
+
+    /*
+     * If MTE_STORE_ONLY is enabled, set the corresponding sctlr_el1 bit
+     */
+    if (value & PR_MTE_STORE_ONLY) {
+        env->cp15.sctlr_el[1] |= SCTLR_TCSO0;
+    } else {
+        env->cp15.sctlr_el[1] &= ~SCTLR_TCSO0;
+    }
 }
diff --git a/linux-user/aarch64/mte_user_helper.h b/linux-user/aarch64/mte_user_helper.h
index 0c53abda22..8a46f743f4 100644
--- a/linux-user/aarch64/mte_user_helper.h
+++ b/linux-user/aarch64/mte_user_helper.h
@@ -20,15 +20,19 @@
 # define PR_MTE_TAG_SHIFT       3
 # define PR_MTE_TAG_MASK        (0xffffUL << PR_MTE_TAG_SHIFT)
 #endif
+#ifndef PR_MTE_STORE_ONLY
+# define PR_MTE_STORE_ONLY      (1UL << 19)
+#endif
 
 /**
- * arm_set_mte_tcf0 - Set TCF0 field in SCTLR_EL1 register
+ * arm_set_tagged_addr_ctrl - Set TCF0 and TCSO0 fields in SCTLR_EL1 register
  * @env: The CPU environment
- * @value: The value to be set for the Tag Check Fault in EL0 field.
+ * @value: The value to be set for the Tag Check Fault and Tag Check Store Only
+ * in EL0 field.
  *
- * Only SYNC and ASYNC modes can be selected. If ASYMM mode is given, the SYNC
- * mode is selected instead. So, there is no way to set the ASYMM mode.
+ * Only SYNC and ASYNC modes can be selected for TCF0. If ASYMM mode is given,
+ * the SYNC mode is selected instead. So, there is no way to set the ASYMM mode.
  */
-void arm_set_mte_tcf0(CPUArchState *env, abi_long value);
+void arm_set_tagged_addr_ctrl(CPUArchState *env, abi_long value);
 
 #endif /* AARCH64_MTE_USER_HELPER_H */
diff --git a/linux-user/aarch64/target_prctl.h b/linux-user/aarch64/target_prctl.h
index 621be5727f..d91e75d60d 100644
--- a/linux-user/aarch64/target_prctl.h
+++ b/linux-user/aarch64/target_prctl.h
@@ -168,6 +168,9 @@ static abi_long do_prctl_set_tagged_addr_ctrl(CPUArchState *env, abi_long arg2)
     if (cpu_isar_feature(aa64_mte, cpu)) {
         valid_mask |= PR_MTE_TCF_MASK;
         valid_mask |= PR_MTE_TAG_MASK;
+        if (cpu_isar_feature(aa64_mte_store_only, cpu)) {
+            valid_mask |= PR_MTE_STORE_ONLY;
+        }
     }
 
     if (arg2 & ~valid_mask) {
@@ -176,7 +179,7 @@ static abi_long do_prctl_set_tagged_addr_ctrl(CPUArchState *env, abi_long arg2)
     env->tagged_addr_enable = arg2 & PR_TAGGED_ADDR_ENABLE;
 
     if (cpu_isar_feature(aa64_mte, cpu)) {
-        arm_set_mte_tcf0(env, arg2);
+        arm_set_tagged_addr_ctrl(env, arg2);
 
         /*
          * Write PR_MTE_TAG to GCR_EL1[Exclude].
@@ -185,6 +188,7 @@ static abi_long do_prctl_set_tagged_addr_ctrl(CPUArchState *env, abi_long arg2)
          */
         env->cp15.gcr_el1 =
             deposit64(env->cp15.gcr_el1, 0, 16, ~arg2 >> PR_MTE_TAG_SHIFT);
+
         arm_rebuild_hflags(env);
     }
     return 0;
diff --git a/target/arm/gdbstub64.c b/target/arm/gdbstub64.c
index b71666c3a1..3d24c09ccc 100644
--- a/target/arm/gdbstub64.c
+++ b/target/arm/gdbstub64.c
@@ -684,7 +684,7 @@ int aarch64_gdb_set_tag_ctl_reg(CPUState *cs, uint8_t *buf, int reg)
      * expose options regarding the type of MTE fault that can be controlled at
      * runtime.
      */
-    arm_set_mte_tcf0(env, tcf);
+    arm_set_tagged_addr_ctrl(env, tcf);
 
     return 1;
 #else
diff --git a/tests/tcg/aarch64/mte.h b/tests/tcg/aarch64/mte.h
index 0805676b11..17b932f3f1 100644
--- a/tests/tcg/aarch64/mte.h
+++ b/tests/tcg/aarch64/mte.h
@@ -20,6 +20,9 @@
 #ifndef PR_TAGGED_ADDR_ENABLE
 # define PR_TAGGED_ADDR_ENABLE    (1UL << 0)
 #endif
+#ifndef PR_MTE_STORE_ONLY
+# define PR_MTE_STORE_ONLY        (1UL << 19)
+#endif
 #ifndef PR_MTE_TCF_SHIFT
 # define PR_MTE_TCF_SHIFT         1
 # define PR_MTE_TCF_NONE          (0UL << PR_MTE_TCF_SHIFT)

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 05/13] target/arm: tag check emitted when MTX and not TBI
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
                   ` (3 preceding siblings ...)
  2026-03-09 21:59 ` [PATCH v4 04/13] linux-user: add MTE_STORE_ONLY to prctl Gabriel Brookman
@ 2026-03-09 21:59 ` Gabriel Brookman
  2026-04-05  0:31   ` Richard Henderson
  2026-03-09 21:59 ` [PATCH v4 06/13] target/arm: add canonical tag check logic Gabriel Brookman
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

Previously, the TBI bit alone determined whether tag checks happened.
With MTE4, if the MTX bits are set, then tag checking happens even
when TBI is disabled. See the ARM ARM pseudocode function
AccessIsTagChecked.

Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
---
 target/arm/helper.c         | 10 ++++++++++
 target/arm/internals.h      | 10 +++++++++-
 target/arm/tcg/helper-a64.c |  9 +++++----
 target/arm/tcg/hflags.c     |  9 +++++----
 target/arm/tcg/mte_helper.c |  9 ++++++---
 5 files changed, 35 insertions(+), 12 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 987539524a..56858367fd 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -9613,6 +9613,16 @@ uint64_t arm_sctlr(CPUARMState *env, int el)
     return env->cp15.sctlr_el[el];
 }
 
+int aa64_va_parameter_mtx(uint64_t tcr, ARMMMUIdx mmu_idx)
+{
+    if (regime_has_2_ranges(mmu_idx)) {
+        return extract64(tcr, 60, 2);
+    } else {
+        /* Replicate the single MTX bit so we always have 2 bits.  */
+        return extract64(tcr, 33, 1) * 3;
+    }
+}
+
 int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
 {
     if (regime_has_2_ranges(mmu_idx)) {
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 8ec2750847..a45119caa2 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1411,6 +1411,7 @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
                                    ARMMMUIdx mmu_idx, bool data,
                                    bool el1_is_aa32);
 
+int aa64_va_parameter_mtx(uint64_t tcr, ARMMMUIdx mmu_idx);
 int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx);
 int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx);
 int aa64_va_parameter_tcma(uint64_t tcr, ARMMMUIdx mmu_idx);
@@ -1546,7 +1547,8 @@ FIELD(MTEDESC, TBI,   4, 2)
 FIELD(MTEDESC, TCMA,  6, 2)
 FIELD(MTEDESC, WRITE, 8, 1)
 FIELD(MTEDESC, ALIGN, 9, 3)
-FIELD(MTEDESC, SIZEM1, 12, 32 - 12)  /* size - 1 */
+FIELD(MTEDESC, MTX,   12, 2)
+FIELD(MTEDESC, SIZEM1, 14, 32 - 14)  /* size - 1 */
 
 bool mte_probe(CPUARMState *env, uint32_t desc, uint64_t ptr);
 uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra);
@@ -1622,6 +1624,12 @@ static inline bool tbi_check(uint32_t desc, int bit55)
     return (desc >> (R_MTEDESC_TBI_SHIFT + bit55)) & 1;
 }
 
+/* Return true if mtx bits mean that the access is canonically checked.  */
+static inline bool mtx_check(uint32_t desc, int bit55)
+{
+    return (desc >> (R_MTEDESC_MTX_SHIFT + bit55)) & 1;
+}
+
 /* Return true if tcma bits mean that the access is unchecked.  */
 static inline bool tcma_check(uint32_t desc, int bit55, int ptr_tag)
 {
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index 2dec587d38..5f739d999c 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -1054,7 +1054,7 @@ static int mops_sizereg(uint32_t syndrome)
 }
 
 /*
- * Return true if TCMA and TBI bits mean we need to do MTE checks.
+ * Return true if the TCMA, TBI, and MTX bits mean we need to do MTE checks.
  * We only need to do this once per MOPS insn, not for every page.
  */
 static bool mte_checks_needed(uint64_t ptr, uint32_t desc)
@@ -1062,12 +1062,13 @@ static bool mte_checks_needed(uint64_t ptr, uint32_t desc)
     int bit55 = extract64(ptr, 55, 1);
 
     /*
-     * Note that tbi_check() returns true for "access checked" but
-     * tcma_check() returns true for "access unchecked".
+     * Note that tbi_check() and mtx_check() return true for "access checked",
+     * but tcma_check() returns true for "access unchecked".
      */
-    if (!tbi_check(desc, bit55)) {
+    if (!tbi_check(desc, bit55) && !mtx_check(desc, bit55)) {
         return false;
     }
+
     return !tcma_check(desc, bit55, allocation_tag_from_addr(ptr));
 }
 
diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
index 75c55b1a6d..e753124c4c 100644
--- a/target/arm/tcg/hflags.c
+++ b/target/arm/tcg/hflags.c
@@ -245,13 +245,14 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
     uint64_t tcr = regime_tcr(env, mmu_idx);
     uint64_t hcr = arm_hcr_el2_eff(env);
     uint64_t sctlr;
-    int tbii, tbid;
+    int tbii, tbid, mtx;
 
     DP_TBFLAG_ANY(flags, AARCH64_STATE, 1);
 
     /* Get control bits for tagged addresses.  */
     tbid = aa64_va_parameter_tbi(tcr, mmu_idx);
     tbii = tbid & ~aa64_va_parameter_tbid(tcr, mmu_idx);
+    mtx = aa64_va_parameter_mtx(tcr, mmu_idx);
 
     DP_TBFLAG_A64(flags, TBII, tbii);
     DP_TBFLAG_A64(flags, TBID, tbid);
@@ -403,14 +404,14 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
         /*
          * Set MTE_ACTIVE if any access may be Checked, and leave clear
          * if all accesses must be Unchecked:
-         * 1) If no TBI, then there are no tags in the address to check,
+         * 1) If TBI and MTX are both unset, accesses are Unchecked.
          * 2) If Tag Check Override, then all accesses are Unchecked,
          * 3) If Tag Check Fail == 0, then Checked access have no effect,
          * 4) If no Allocation Tag Access, then all accesses are Unchecked.
          */
         if (allocation_tag_access_enabled(env, el, sctlr)) {
             DP_TBFLAG_A64(flags, ATA, 1);
-            if (tbid
+            if ((tbid || mtx)
                 && !(env->pstate & PSTATE_TCO)
                 && (sctlr & (el == 0 ? SCTLR_TCF0 : SCTLR_TCF))) {
                 DP_TBFLAG_A64(flags, MTE_ACTIVE, 1);
@@ -436,7 +437,7 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
         }
         /* And again for unprivileged accesses, if required.  */
         if (EX_TBFLAG_A64(flags, UNPRIV)
-            && tbid
+            && (tbid || mtx)
             && !(env->pstate & PSTATE_TCO)
             && (sctlr & SCTLR_TCF0)
             && allocation_tag_access_enabled(env, 0, sctlr)) {
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index 4deec80208..1484087a19 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -819,8 +819,11 @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
     bit55 = extract64(ptr, 55, 1);
     *fault = ptr;
 
-    /* If TBI is disabled, the access is unchecked, and ptr is not dirty. */
-    if (unlikely(!tbi_check(desc, bit55))) {
+    /*
+     * If TBI and MTX are disabled, the access is unchecked, and ptr is not
+     * dirty.
+     */
+    if (unlikely(!tbi_check(desc, bit55) && !mtx_check(desc, bit55))) {
         return -1;
     }
 
@@ -961,7 +964,7 @@ uint64_t HELPER(mte_check_zva)(CPUARMState *env, uint32_t desc, uint64_t ptr)
     bit55 = extract64(ptr, 55, 1);
 
     /* If TBI is disabled, the access is unchecked, and ptr is not dirty. */
-    if (unlikely(!tbi_check(desc, bit55))) {
+    if (unlikely(!tbi_check(desc, bit55) && !mtx_check(desc, bit55))) {
         return ptr;
     }
 

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 06/13] target/arm: add canonical tag check logic
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
                   ` (4 preceding siblings ...)
  2026-03-09 21:59 ` [PATCH v4 05/13] target/arm: tag check emitted when MTX and not TBI Gabriel Brookman
@ 2026-03-09 21:59 ` Gabriel Brookman
  2026-04-05 21:46   ` Richard Henderson
  2026-03-09 21:59 ` [PATCH v4 07/13] target/arm: ldg on canonical tag loads the tag Gabriel Brookman
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

With this feature, a tag check performed in a canonically tagged memory
region compares the logical address tag against its canonical form (the
sign-extension of bit 55) rather than against the allocation tag. This is
described in the ARM ARM section "Logical Address Tagging".

Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
---
 target/arm/cpu-features.h      |  5 +++++
 target/arm/cpu.h               |  1 +
 target/arm/internals.h         | 31 ++++++++++++++++++++++++++++++-
 target/arm/tcg/hflags.c        |  4 ++++
 target/arm/tcg/mte_helper.c    | 21 +++++++++++++++++++++
 target/arm/tcg/translate-a64.c |  7 +++++++
 target/arm/tcg/translate.h     |  1 +
 7 files changed, 69 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
index 38fc56b52e..5e3dc5256f 100644
--- a/target/arm/cpu-features.h
+++ b/target/arm/cpu-features.h
@@ -1154,6 +1154,11 @@ static inline bool isar_feature_aa64_mte_store_only(const ARMISARegisters *id)
     return FIELD_EX64_IDREG(id, ID_AA64PFR2, MTESTOREONLY) == 1;
 }
 
+static inline bool isar_feature_aa64_mte_mtx(const ARMISARegisters *id)
+{
+    return FIELD_EX64_IDREG(id, ID_AA64PFR1, MTEX) == 1;
+}
+
 static inline bool isar_feature_aa64_sme(const ARMISARegisters *id)
 {
     return FIELD_EX64_IDREG(id, ID_AA64PFR1, SME) != 0;
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 7911912c3e..1f33c0d163 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2527,6 +2527,7 @@ FIELD(TBFLAG_A64, GCS_RVCEN, 42, 1)
 FIELD(TBFLAG_A64, GCSSTR_EL, 43, 2)
 FIELD(TBFLAG_A64, MTE_STORE_ONLY, 45, 1)
 FIELD(TBFLAG_A64, MTE0_STORE_ONLY, 46, 1)
+FIELD(TBFLAG_A64, MTX, 47, 2)
 
 /*
  * Helpers for using the above. Note that only the A64 accessors use
diff --git a/target/arm/internals.h b/target/arm/internals.h
index a45119caa2..52597a351c 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1630,6 +1630,12 @@ static inline bool mtx_check(uint32_t desc, int bit55)
     return (desc >> (R_MTEDESC_MTX_SHIFT + bit55)) & 1;
 }
 
+/* Return true if VA bits [59:56] are the sign-extension of bit 55. */
+static inline bool tag_is_canonical(int ptr_tag, int bit55)
+{
+    return ((ptr_tag + bit55) & 0xf) == 0;
+}
+
 /* Return true if tcma bits mean that the access is unchecked.  */
 static inline bool tcma_check(uint32_t desc, int bit55, int ptr_tag)
 {
@@ -1637,11 +1643,34 @@ static inline bool tcma_check(uint32_t desc, int bit55, int ptr_tag)
      * We had extracted bit55 and ptr_tag for other reasons, so fold
      * (ptr<59:55> == 00000 || ptr<59:55> == 11111) into a single test.
      */
-    bool match = ((ptr_tag + bit55) & 0xf) == 0;
+    bool match = tag_is_canonical(ptr_tag, bit55);
     bool tcma = (desc >> (R_MTEDESC_TCMA_SHIFT + bit55)) & 1;
     return tcma && match;
 }
 
+/* Return true if Canonical Tagging is enabled. */
+static inline bool canonical_tagging_enabled(CPUARMState *env, bool selector)
+{
+    int mmu_idx;
+    uint64_t tcr, mtx_bit;
+
+    /* If MTE4 is not implemented, then MTX cannot be enabled. */
+    if (!cpu_isar_feature(aa64_mte_mtx, env_archcpu(env))) {
+        return false;
+    }
+
+    mmu_idx = arm_mmu_idx_el(env, arm_current_el(env));
+    tcr = regime_tcr(env, mmu_idx);
+
+    /*
+     * In two-range regimes, mtx is governed by bit 60 or 61 of TCR, and in
+     * one-range regimes, bit 33 is used.
+     */
+    mtx_bit = regime_has_2_ranges(mmu_idx) ? 60 + selector : 33;
+
+    return extract64(tcr, mtx_bit, 1);
+}
+
 /*
  * For TBI, ideally, we would do nothing.  Proper behaviour on fault is
  * for the tag to be present in the FAR_ELx register.  But for user-only
diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
index e753124c4c..40a934a8af 100644
--- a/target/arm/tcg/hflags.c
+++ b/target/arm/tcg/hflags.c
@@ -460,6 +460,10 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
         }
         /* Cache TCMA as well as TBI. */
         DP_TBFLAG_A64(flags, TCMA, aa64_va_parameter_tcma(tcr, mmu_idx));
+        /* Cache MTX. */
+        if (cpu_isar_feature(aa64_mte_mtx, env_archcpu(env))) {
+            DP_TBFLAG_A64(flags, MTX, mtx);
+        }
     }
 
     if (cpu_isar_feature(aa64_gcs, env_archcpu(env))) {
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index 1484087a19..b54fbd11c0 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -854,6 +854,13 @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
         mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, sizem1 + 1,
                                   MMU_DATA_LOAD, ra);
         if (!mem1) {
+            /*
+             * If mtx is enabled, then the access is MemTag_CanonicallyTagged,
+             * otherwise it is Untagged. See AArch64.CheckTag.
+             */
+            if (mtx_check(desc, bit55)) {
+                return tag_is_canonical(ptr_tag, bit55);
+            }
             return 1;
         }
         /* Perform all of the comparisons. */
@@ -867,6 +874,12 @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
                                   ptr_last - next_page + 1,
                                   MMU_DATA_LOAD, ra);
 
+        /* If either region is canonically tagged, do a canonical tag check */
+        if (mtx_check(desc, bit55) && (!mem1 || !mem2)
+            && (!tag_is_canonical(ptr_tag, bit55))) {
+            return 0;
+        }
+
         /*
          * Perform all of the comparisons.
          * Note the possible but unlikely case of the operation spanning
@@ -974,6 +987,7 @@ uint64_t HELPER(mte_check_zva)(CPUARMState *env, uint32_t desc, uint64_t ptr)
         goto done;
     }
 
+
     /*
      * In arm_cpu_realizefn, we asserted that dcz > LOG2_TAG_GRANULE+1,
      * i.e. 32 bytes, which is an unreasonably small dcz anyway, to make
@@ -995,6 +1009,13 @@ uint64_t HELPER(mte_check_zva)(CPUARMState *env, uint32_t desc, uint64_t ptr)
     mem = allocation_tag_mem(env, mmu_idx, align_ptr, MMU_DATA_STORE,
                              dcz_bytes, MMU_DATA_LOAD, ra);
     if (!mem) {
+        /*
+         * If mtx is enabled, then the access is MemTag_CanonicallyTagged,
+         * otherwise it is Untagged. See AArch64.CheckTag.
+         */
+        if (mtx_check(desc, bit55) && !tag_is_canonical(ptr_tag, bit55)) {
+            mte_check_fail(env, desc, ptr, ra);
+        }
         goto done;
     }
 
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 874174a15b..366830f7f0 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -311,6 +311,7 @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
         desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
         desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
         desc = FIELD_DP32(desc, MTEDESC, ALIGN, memop_alignment_bits(memop));
+        desc = FIELD_DP32(desc, MTEDESC, MTX, s->mtx);
         desc = FIELD_DP32(desc, MTEDESC, SIZEM1, memop_size(memop) - 1);
 
         ret = tcg_temp_new_i64();
@@ -344,6 +345,7 @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
         desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
         desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
         desc = FIELD_DP32(desc, MTEDESC, ALIGN, memop_alignment_bits(single_mop));
+        desc = FIELD_DP32(desc, MTEDESC, MTX, s->mtx);
         desc = FIELD_DP32(desc, MTEDESC, SIZEM1, total_size - 1);
 
         ret = tcg_temp_new_i64();
@@ -3002,6 +3004,7 @@ static void handle_sys(DisasContext *s, bool isread,
             desc = FIELD_DP32(desc, MTEDESC, MIDX, get_mem_index(s));
             desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
             desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
+            desc = FIELD_DP32(desc, MTEDESC, MTX, s->mtx);
 
             tcg_rt = tcg_temp_new_i64();
             gen_helper_mte_check_zva(tcg_rt, tcg_env,
@@ -4872,6 +4875,7 @@ static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue,
         desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
         desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
         desc = FIELD_DP32(desc, MTEDESC, WRITE, true);
+        desc = FIELD_DP32(desc, MTEDESC, MTX, s->mtx);
         /* SIZEM1 and ALIGN we leave 0 (byte write) */
     }
     /* The helper function always needs the memidx even with MTE disabled */
@@ -4926,11 +4930,13 @@ static bool do_CPY(DisasContext *s, arg_cpy *a, bool is_epilogue, CpyFn fn)
     if (s->mte_active[runpriv]) {
         rdesc = FIELD_DP32(rdesc, MTEDESC, TBI, s->tbid);
         rdesc = FIELD_DP32(rdesc, MTEDESC, TCMA, s->tcma);
+        rdesc = FIELD_DP32(rdesc, MTEDESC, MTX, s->mtx);
     }
     if (s->mte_active[wunpriv]) {
         wdesc = FIELD_DP32(wdesc, MTEDESC, TBI, s->tbid);
         wdesc = FIELD_DP32(wdesc, MTEDESC, TCMA, s->tcma);
         wdesc = FIELD_DP32(wdesc, MTEDESC, WRITE, true);
+        wdesc = FIELD_DP32(wdesc, MTEDESC, MTX, s->mtx);
     }
     /* The helper function needs these parts of the descriptor regardless */
     rdesc = FIELD_DP32(rdesc, MTEDESC, MIDX, rmemidx);
@@ -10700,6 +10706,7 @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
     dc->mte_active[1] = EX_TBFLAG_A64(tb_flags, MTE0_ACTIVE);
     dc->mte_store_only[0] = EX_TBFLAG_A64(tb_flags, MTE_STORE_ONLY);
     dc->mte_store_only[1] = EX_TBFLAG_A64(tb_flags, MTE0_STORE_ONLY);
+    dc->mtx = EX_TBFLAG_A64(tb_flags, MTX);
     dc->pstate_sm = EX_TBFLAG_A64(tb_flags, PSTATE_SM);
     dc->pstate_za = EX_TBFLAG_A64(tb_flags, PSTATE_ZA);
     dc->sme_trap_nonstreaming = EX_TBFLAG_A64(tb_flags, SME_TRAP_NONSTREAMING);
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index 74143161f4..846e383c70 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -82,6 +82,7 @@ typedef struct DisasContext {
     uint8_t tbii;      /* TBI1|TBI0 for insns */
     uint8_t tbid;      /* TBI1|TBI0 for data */
     uint8_t tcma;      /* TCMA1|TCMA0 for MTE */
+    uint8_t mtx;       /* MTX1|MTX0 for MTE */
     bool ns;        /* Use non-secure CPREG bank on access */
     int fp_excp_el; /* FP exception EL or 0 if enabled */
     int sve_excp_el; /* SVE exception EL or 0 if enabled */

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 07/13] target/arm: ldg on canonical tag loads the tag
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
                   ` (5 preceding siblings ...)
  2026-03-09 21:59 ` [PATCH v4 06/13] target/arm: add canonical tag check logic Gabriel Brookman
@ 2026-03-09 21:59 ` Gabriel Brookman
  2026-04-05 22:20   ` Richard Henderson
  2026-03-09 21:59 ` [PATCH v4 08/13] target/arm: storing to canonical tag faults Gabriel Brookman
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

According to the ARM ARM section "Memory Tagging Region Types", tag loads
from canonically tagged regions must return the canonical tag (the
sign-extension of bit 55), not an allocation tag.

Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
---
 target/arm/tcg/mte_helper.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index b54fbd11c0..07797aecf9 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -313,6 +313,11 @@ uint64_t HELPER(ldg)(CPUARMState *env, uint64_t ptr, uint64_t xt)
     /* Load if page supports tags. */
     if (mem) {
         rtag = load_tag1(ptr, mem);
+    } else {
+        uint64_t bit55 = extract64(ptr, 55, 1);
+        if (canonical_tagging_enabled(env, bit55)) {
+            rtag = 0xF * bit55;
+        }
     }
 
     return address_with_allocation_tag(xt, rtag);
@@ -463,8 +468,10 @@ uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
     void *tag_mem;
     uint64_t ret;
     int shift;
+    bool bit55;
 
     ptr = QEMU_ALIGN_DOWN(ptr, gm_bs_bytes);
+    bit55 = extract64(ptr, 55, 1);
 
     /* Trap if accessing an invalid page.  */
     tag_mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_LOAD,
@@ -472,6 +479,34 @@ uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
 
     /* The tag is squashed to zero if the page does not support tags.  */
     if (!tag_mem) {
+        /* In a canonically tagged region, synthesize the canonical tags. */
+        if (canonical_tagging_enabled(env, bit55)) {
+            switch (gm_bs) {
+            case 3:
+                /* 32 bytes -> 2 tags -> 8 result bits */
+                ret = bit55 * 0xff;
+                break;
+            case 4:
+                /* 64 bytes -> 4 tags -> 16 result bits */
+                ret = bit55 * 0xffff;
+                break;
+            case 5:
+                /* 128 bytes -> 8 tags -> 32 result bits */
+                ret = bit55 * 0xffffffffu;
+                break;
+            case 6:
+                /* 256 bytes -> 16 tags -> 64 result bits */
+                return -(uint64_t)bit55;
+            default:
+                /*
+                 * CPU configured with unsupported/invalid gm blocksize.
+                 * This is detected early in arm_cpu_realizefn.
+                 */
+                g_assert_not_reached();
+            }
+            shift = extract64(ptr, LOG2_TAG_GRANULE, 4) * 4;
+            return ret << shift;
+        }
         return 0;
     }
 

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 08/13] target/arm: storing to canonical tag faults
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
                   ` (6 preceding siblings ...)
  2026-03-09 21:59 ` [PATCH v4 07/13] target/arm: ldg on canonical tag loads the tag Gabriel Brookman
@ 2026-03-09 21:59 ` Gabriel Brookman
  2026-04-05 22:37   ` Richard Henderson
  2026-03-09 21:59 ` [PATCH v4 09/13] target/arm: with MTX, no tag bit bounds check Gabriel Brookman
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

According to the ARM ARM section "Memory region tagging types", tag-store
instructions targeting canonically tagged regions cause a stage 1
permission fault when MTX is enabled.

Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
---
 target/arm/tcg/mte_helper.c | 69 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index 07797aecf9..ddf4ffc51b 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -227,6 +227,20 @@ uint8_t *allocation_tag_mem_probe(CPUARMState *env, int ptr_mmu_idx,
 #endif
 }
 
+static void canonical_tag_write_fail(CPUARMState *env,
+                                     uint64_t dirty_ptr, uintptr_t ra)
+{
+    uint64_t syn;
+
+    env->exception.vaddress = dirty_ptr;
+
+    syn = syn_data_abort_no_iss(arm_current_el(env) != 0, 0, 0, 0, 0, 1, 0);
+    syn |= BIT_ULL(42); /* TnD is bit 42 */
+
+    raise_exception_ra(env, EXCP_DATA_ABORT, syn, exception_target_el(env), ra);
+    g_assert_not_reached();
+}
+
 static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
                                    uint64_t ptr, MMUAccessType ptr_access,
                                    int ptr_size, MMUAccessType tag_access,
@@ -372,7 +386,11 @@ static inline void do_stg(CPUARMState *env, uint64_t ptr, uint64_t xt,
     /* Store if page supports tags. */
     if (mem) {
         store1(ptr, mem, allocation_tag_from_addr(xt));
+    } else if (canonical_tagging_enabled(env, 1 & (ptr >> 55))) {
+        canonical_tag_write_fail(env, ptr, ra);
+        return;
     }
+
 }
 
 void HELPER(stg)(CPUARMState *env, uint64_t ptr, uint64_t xt)
@@ -389,9 +407,19 @@ void HELPER(stg_stub)(CPUARMState *env, uint64_t ptr)
 {
     int mmu_idx = arm_env_mmu_index(env);
     uintptr_t ra = GETPC();
+    uint8_t *mem;
 
     check_tag_aligned(env, ptr, ra);
     probe_write(env, ptr, TAG_GRANULE, mmu_idx, ra);
+
+    /* If we are storing to a canonically tagged memory region, fault. */
+    if (canonical_tagging_enabled(env, 1 & (ptr >> 55))) {
+        mem = allocation_tag_mem_probe(env, mmu_idx, ptr, MMU_DATA_STORE,
+                                       TAG_GRANULE, MMU_DATA_STORE, true, ra);
+        if (!mem) {
+            canonical_tag_write_fail(env, ptr, ra);
+        }
+    }
 }
 
 static inline void do_st2g(CPUARMState *env, uint64_t ptr, uint64_t xt,
@@ -415,6 +443,11 @@ static inline void do_st2g(CPUARMState *env, uint64_t ptr, uint64_t xt,
                                   MMU_DATA_STORE, TAG_GRANULE,
                                   MMU_DATA_STORE, ra);
 
+        if (!(mem1 && mem2) && canonical_tagging_enabled(env, 1 & (ptr >> 55))) {
+            canonical_tag_write_fail(env, ptr, ra);
+            return;
+        }
+
         /* Store if page(s) support tags. */
         if (mem1) {
             store1(TAG_GRANULE, mem1, tag);
@@ -426,9 +459,14 @@ static inline void do_st2g(CPUARMState *env, uint64_t ptr, uint64_t xt,
         /* Two stores aligned mod TAG_GRANULE*2 -- modify one byte. */
         mem1 = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE,
                                   2 * TAG_GRANULE, MMU_DATA_STORE, ra);
+
         if (mem1) {
             tag |= tag << 4;
             qatomic_set(mem1, tag);
+        } else if (canonical_tagging_enabled(env, 1 & (ptr >> 55))) {
+            /* Writing tags to canonically tagged memory region: faults */
+            canonical_tag_write_fail(env, ptr, ra);
+            return;
         }
     }
 }
@@ -448,6 +486,7 @@ void HELPER(st2g_stub)(CPUARMState *env, uint64_t ptr)
     int mmu_idx = arm_env_mmu_index(env);
     uintptr_t ra = GETPC();
     int in_page = -(ptr | TARGET_PAGE_MASK);
+    uint8_t *mem1, *mem2;
 
     check_tag_aligned(env, ptr, ra);
 
@@ -457,6 +496,29 @@ void HELPER(st2g_stub)(CPUARMState *env, uint64_t ptr)
         probe_write(env, ptr, TAG_GRANULE, mmu_idx, ra);
         probe_write(env, ptr + TAG_GRANULE, TAG_GRANULE, mmu_idx, ra);
     }
+
+    /* If we are storing to a canonically tagged memory region, fault. */
+    if (canonical_tagging_enabled(env, 1 & (ptr >> 55))) {
+        if (likely(in_page >= 2 * TAG_GRANULE)) {
+            mem1 = allocation_tag_mem_probe(env, mmu_idx, ptr, MMU_DATA_STORE,
+                                           2 * TAG_GRANULE, MMU_DATA_STORE,
+                                           true, ra);
+            if (!mem1) {
+                canonical_tag_write_fail(env, ptr, ra);
+            }
+        } else {
+            mem1 = allocation_tag_mem_probe(env, mmu_idx, ptr, MMU_DATA_STORE,
+                                           TAG_GRANULE, MMU_DATA_STORE,
+                                           true, ra);
+            mem2 = allocation_tag_mem_probe(env, mmu_idx,
+                                            ptr + TAG_GRANULE,
+                                            MMU_DATA_STORE, TAG_GRANULE,
+                                            MMU_DATA_STORE, true, ra);
+            if (!mem1 || !mem2) {
+                canonical_tag_write_fail(env, ptr, ra);
+            }
+        }
+    }
 }
 
 uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
@@ -569,6 +631,10 @@ void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
      * and if the OS has enabled access to the tags.
      */
     if (!tag_mem) {
+        /* Storing tags to canonically tagged region: fault. */
+        if (canonical_tagging_enabled(env, 1 & (ptr >> 55))) {
+            canonical_tag_write_fail(env, ptr, ra);
+        }
         return;
     }
 
@@ -619,9 +685,12 @@ void HELPER(stzgm_tags)(CPUARMState *env, uint64_t ptr, uint64_t val)
 
     mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE, dcz_bytes,
                              MMU_DATA_STORE, ra);
+
     if (mem) {
         int tag_pair = (val & 0xf) * 0x11;
         memset(mem, tag_pair, tag_bytes);
+    } else if (canonical_tagging_enabled(env, 1 & (ptr >> 55))) {
+        canonical_tag_write_fail(env, ptr, ra);
     }
 }
 

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 09/13] target/arm: with MTX, no tag bit bounds check
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
                   ` (7 preceding siblings ...)
  2026-03-09 21:59 ` [PATCH v4 08/13] target/arm: storing to canonical tag faults Gabriel Brookman
@ 2026-03-09 21:59 ` Gabriel Brookman
  2026-03-09 21:59 ` [PATCH v4 10/13] target/arm: with MTX, tag is not a part of PAuth Gabriel Brookman
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

With MTX set, the virtual address canonicity check performed during
translation must ignore mismatches in the tag bits VA[59:56].

Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
---
 target/arm/helper.c    |  6 +++++-
 target/arm/internals.h |  1 +
 target/arm/ptw.c       | 28 +++++++++++++++++++++++++---
 3 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 56858367fd..a61944dedd 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -9747,7 +9747,7 @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
 {
     uint64_t tcr = regime_tcr(env, mmu_idx);
     bool epd, hpd, tsz_oob, ds, ha, hd, pie = false;
-    bool aie = false;
+    bool aie = false, mtx = false;
     int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
     ARMGranuleSize gran;
     ARMCPU *cpu = env_archcpu(env);
@@ -9784,6 +9784,7 @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
         ha = extract32(tcr, 21, 1) && cpu_isar_feature(aa64_hafs, cpu);
         hd = extract32(tcr, 22, 1) && cpu_isar_feature(aa64_hdbs, cpu);
         ds = extract64(tcr, 32, 1);
+        mtx = extract64(tcr, 33, 1) && cpu_isar_feature(aa64_mte_mtx, cpu);
     } else {
         bool e0pd;
 
@@ -9799,6 +9800,7 @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
             sh = extract32(tcr, 12, 2);
             hpd = extract64(tcr, 41, 1);
             e0pd = extract64(tcr, 55, 1);
+            mtx = extract64(tcr, 60, 1) && cpu_isar_feature(aa64_mte_mtx, cpu);
         } else {
             tsz = extract32(tcr, 16, 6);
             gran = tg1_to_gran_size(extract32(tcr, 30, 2));
@@ -9806,6 +9808,7 @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
             sh = extract32(tcr, 28, 2);
             hpd = extract64(tcr, 42, 1);
             e0pd = extract64(tcr, 56, 1);
+            mtx = extract64(tcr, 61, 1) && cpu_isar_feature(aa64_mte_mtx, cpu);
         }
         ps = extract64(tcr, 32, 3);
         ha = extract64(tcr, 39, 1) && cpu_isar_feature(aa64_hafs, cpu);
@@ -9905,6 +9908,7 @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
         .gran = gran,
         .pie = pie,
         .aie = aie,
+        .mtx = mtx,
     };
 }
 
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 52597a351c..2c4369cc16 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1396,6 +1396,7 @@ typedef struct ARMVAParameters {
     ARMGranuleSize gran : 2;
     bool pie        : 1;
     bool aie        : 1;
+    bool mtx        : 1;
 } ARMVAParameters;
 
 /**
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index d381413ef7..e31b3085f8 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -1929,7 +1929,16 @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
      * validation to do here.
      */
     if (inputsize < addrsize) {
-        uint64_t top_bits = sextract64(address, inputsize,
+        /*
+         * If MTX is enabled, bits 56-59 aren't checked for canonicity
+         * during translation, since they will later be checked during
+         * the tag check step.
+         */
+        uint64_t masked_address = address;
+        if (param.mtx) {
+            masked_address = deposit64(address, 56, 4, param.select * 0xf);
+        }
+        uint64_t top_bits = sextract64(masked_address, inputsize,
                                            addrsize - inputsize);
         if (-top_bits != param.select) {
             /* The gap between the two regions is a Translation fault */
@@ -3481,15 +3490,28 @@ static bool get_phys_addr_disabled(CPUARMState *env,
         if (arm_el_is_aa64(env, r_el)) {
             int pamax = arm_pamax(env_archcpu(env));
             uint64_t tcr = env->cp15.tcr_el[r_el];
-            int addrtop, tbi;
+            int addrtop, tbi, mtx;
+            bool bit55;
 
             tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
+            mtx = aa64_va_parameter_mtx(tcr, mmu_idx);
             if (access_type == MMU_INST_FETCH) {
                 tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
             }
-            tbi = (tbi >> extract64(address, 55, 1)) & 1;
+            bit55 = extract64(address, 55, 1);
+            tbi = (tbi >> bit55) & 1;
+            mtx = (mtx >> bit55) & 1;
             addrtop = (tbi ? 55 : 63);
 
+            /*
+             * With MTX enabled, bits 56-59 are not checked according to
+             * AArch64.S1DisabledOutput.
+             */
+            if (cpu_isar_feature(aa64_mte_mtx, env_archcpu(env)) && mtx &&
+                access_type != MMU_INST_FETCH) {
+                address = deposit64(address, 56, 4, bit55 * 0xf);
+            }
+
             if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
                 fi->type = ARMFault_AddressSize;
                 fi->level = 0;

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 10/13] target/arm: with MTX, tag is not a part of PAuth
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
                   ` (8 preceding siblings ...)
  2026-03-09 21:59 ` [PATCH v4 09/13] target/arm: with MTX, no tag bit bounds check Gabriel Brookman
@ 2026-03-09 21:59 ` Gabriel Brookman
  2026-03-09 21:59 ` [PATCH v4 11/13] docs: add MTE4 features to docs Gabriel Brookman
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

As described in the ARM ARM section on MTX, the tag bits VA[59:56] must
neither be overwritten by nor included in the PAC computation when MTX is
set. See the Authenticate(), InsertPAC(), and Strip() pseudocode
functions.

Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
---
 target/arm/internals.h        |  5 ++++-
 target/arm/tcg/pauth_helper.c | 14 +++++++++++++-
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index 2c4369cc16..71d8b419e2 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1820,7 +1820,10 @@ static inline uint64_t pauth_ptr_mask(ARMVAParameters param)
     int bot_pac_bit = 64 - param.tsz;
     int top_pac_bit = 64 - 8 * param.tbi;
 
-    return MAKE_64BIT_MASK(bot_pac_bit, top_pac_bit - bot_pac_bit);
+    uint64_t mask = MAKE_64BIT_MASK(bot_pac_bit, top_pac_bit - bot_pac_bit);
+
+    /* If mtx is enabled, second nibble is not part of PAC */
+    return mask & ~(-(uint64_t)param.mtx & MAKE_64BIT_MASK(56, 4));
 }
 
 /* Add the cpreg definitions for debug related system registers */
diff --git a/target/arm/tcg/pauth_helper.c b/target/arm/tcg/pauth_helper.c
index 67c0d59d9e..08dd230614 100644
--- a/target/arm/tcg/pauth_helper.c
+++ b/target/arm/tcg/pauth_helper.c
@@ -342,9 +342,12 @@ static uint64_t pauth_addpac(CPUARMState *env, uint64_t ptr, uint64_t modifier,
     }
 
     /* Build a pointer with known good extension bits.  */
-    top_bit = 64 - 8 * param.tbi;
+    top_bit = 64 - 8 * (param.tbi || param.mtx);
     bot_bit = 64 - param.tsz;
     ext_ptr = deposit64(ptr, bot_bit, top_bit - bot_bit, ext);
+    if (param.mtx && !param.tbi) {
+        ext_ptr = deposit64(ext_ptr, 60, 4, ext);
+    }
 
     pac = pauth_computepac(env, ext_ptr, modifier, *key);
 
@@ -377,6 +380,11 @@ static uint64_t pauth_addpac(CPUARMState *env, uint64_t ptr, uint64_t modifier,
     if (param.tbi) {
         ptr &= ~MAKE_64BIT_MASK(bot_bit, 55 - bot_bit + 1);
         pac &= MAKE_64BIT_MASK(bot_bit, 54 - bot_bit + 1);
+    } else if (param.mtx) {
+        ptr &= ~(MAKE_64BIT_MASK(60, 4)
+                | MAKE_64BIT_MASK(bot_bit, 55 - bot_bit + 1));
+        pac &= MAKE_64BIT_MASK(60, 4)
+            | MAKE_64BIT_MASK(bot_bit, 54 - bot_bit + 1);
     } else {
         ptr &= MAKE_64BIT_MASK(0, bot_bit);
         pac &= ~(MAKE_64BIT_MASK(55, 1) | MAKE_64BIT_MASK(0, bot_bit));
@@ -424,6 +432,10 @@ static uint64_t pauth_auth(CPUARMState *env, uint64_t ptr, uint64_t modifier,
     cmp_mask = MAKE_64BIT_MASK(bot_bit, top_bit - bot_bit);
     cmp_mask &= ~MAKE_64BIT_MASK(55, 1);
 
+    if (param.mtx) {
+        cmp_mask &= ~MAKE_64BIT_MASK(56, 4);
+    }
+
     if (pauth_feature >= PauthFeat_2) {
         ARMPauthFeature fault_feature =
             is_combined ? PauthFeat_FPACCOMBINED : PauthFeat_FPAC;

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 11/13] docs: add MTE4 features to docs
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
                   ` (9 preceding siblings ...)
  2026-03-09 21:59 ` [PATCH v4 10/13] target/arm: with MTX, tag is not a part of PAuth Gabriel Brookman
@ 2026-03-09 21:59 ` Gabriel Brookman
  2026-03-09 21:59 ` [PATCH v4 12/13] tests/tcg: add test for MTE FAR Gabriel Brookman
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

List the implemented MTE4 features in docs/system/arm/emulation.rst and
advertise them in the ID registers of the 'max' CPU model.

Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 docs/system/arm/emulation.rst | 5 +++++
 target/arm/tcg/cpu64.c        | 8 ++++++++
 2 files changed, 13 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index 7787691853..0f529ba6bb 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -109,6 +109,11 @@ the following architecture extensions:
 - FEAT_MTE3 (MTE Asymmetric Fault Handling)
 - FEAT_MTE_ASYM_FAULT (Memory tagging asymmetric faults)
 - FEAT_MTE_ASYNC (Asynchronous reporting of Tag Check Fault)
+- FEAT_MTE_PERM (NoTagAccess memory attribute)
+- FEAT_MTE_TAGGED_FAR (Full address reporting of Tag Check Fault)
+- FEAT_MTE_STORE_ONLY (Store-only tag checking)
+- FEAT_MTE_CANONICAL_TAGS (Canonical tag checking)
+- FEAT_MTE_NO_ADDRESS_TAGS (Address tagging disabled)
 - FEAT_NMI (Non-maskable Interrupt)
 - FEAT_NV (Nested Virtualization)
 - FEAT_NV2 (Enhanced nested virtualization support)
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
index 84857fb706..7838bb52ea 100644
--- a/target/arm/tcg/cpu64.c
+++ b/target/arm/tcg/cpu64.c
@@ -1281,8 +1281,16 @@ void aarch64_max_tcg_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_3 */
     t = FIELD_DP64(t, ID_AA64PFR1, NMI, 1);       /* FEAT_NMI */
     t = FIELD_DP64(t, ID_AA64PFR1, GCS, 1);       /* FEAT_GCS */
+    t = FIELD_DP64(t, ID_AA64PFR1,
+            MTEX, 1);   /* FEAT_MTE_NO_ADDRESS_TAGS + FEAT_MTE_CANONICAL_TAGS */
     SET_IDREG(isar, ID_AA64PFR1, t);
 
+    t = GET_IDREG(isar, ID_AA64PFR2);
+    t = FIELD_DP64(t, ID_AA64PFR2, MTEFAR, 1);    /* FEAT_MTE_TAGGED_FAR */
+    t = FIELD_DP64(t, ID_AA64PFR2, MTESTOREONLY, 1);   /* FEAT_MTE_STORE_ONLY */
+    t = FIELD_DP64(t, ID_AA64PFR2, MTEPERM, 1);    /* FEAT_MTE_PERM */
+    SET_IDREG(isar, ID_AA64PFR2, t);
+
     t = GET_IDREG(isar, ID_AA64MMFR0);
     t = FIELD_DP64(t, ID_AA64MMFR0, PARANGE, 6); /* FEAT_LPA: 52 bits */
     t = FIELD_DP64(t, ID_AA64MMFR0, TGRAN16, 1);   /* 16k pages supported */

-- 
2.52.0




* [PATCH v4 12/13] tests/tcg: add test for MTE FAR
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
                   ` (10 preceding siblings ...)
  2026-03-09 21:59 ` [PATCH v4 11/13] docs: add MTE4 features to docs Gabriel Brookman
@ 2026-03-09 21:59 ` Gabriel Brookman
  2026-03-09 21:59 ` [PATCH v4 13/13] tests/tcg: add test for MTE_STORE_ONLY Gabriel Brookman
  2026-04-04  1:20 ` [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
  13 siblings, 0 replies; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

This functionality was previously enabled but not advertised or tested.
This commit adds a new test, mte-9, which verifies proper full-address
reporting: FEAT_MTE_TAGGED_FAR requires that FAR_ELx report the full
logical address, including the tag bits.

Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 tests/tcg/aarch64/Makefile.target |  2 +-
 tests/tcg/aarch64/mte-9.c         | 48 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+), 1 deletion(-)

diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
index 9fa8687453..b491cfb5e1 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -64,7 +64,7 @@ AARCH64_TESTS += bti-2
 
 # MTE Tests
 ifneq ($(CROSS_CC_HAS_ARMV8_MTE),)
-AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4 mte-5 mte-6 mte-7 mte-8
+AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4 mte-5 mte-6 mte-7 mte-8 mte-9
 mte-%: CFLAGS += $(CROSS_CC_HAS_ARMV8_MTE)
 endif
 
diff --git a/tests/tcg/aarch64/mte-9.c b/tests/tcg/aarch64/mte-9.c
new file mode 100644
index 0000000000..9626a90c13
--- /dev/null
+++ b/tests/tcg/aarch64/mte-9.c
@@ -0,0 +1,48 @@
+/*
+ * Memory tagging, full-address reporting.
+ *
+ * Copyright (c) 2021 Linaro Ltd
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "mte.h"
+
+static void *faulting_ptr;
+
+void pass(int sig, siginfo_t *info, void *uc)
+{
+    assert(faulting_ptr == info->si_addr);
+    exit(0);
+}
+
+int main(int ac, char **av)
+{
+    struct sigaction sa;
+    int *p0, *p1, *p2;
+    long excl = 1;
+
+    enable_mte(PR_MTE_TCF_SYNC);
+    p0 = alloc_mte_mem(sizeof(*p0));
+
+    /* Create two differently tagged pointers. */
+    asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
+    asm("gmi %0,%1,%0" : "+r"(excl) : "r" (p1));
+    assert(excl != 1);
+    asm("irg %0,%1,%2" : "=r"(p2) : "r"(p0), "r"(excl));
+    assert(p1 != p2);
+
+    /* Store the tag from the first pointer.  */
+    asm("stg %0, [%0]" : : "r"(p1));
+
+    *p1 = 0;
+
+    memset(&sa, 0, sizeof(sa));
+    sa.sa_sigaction = pass;
+    sa.sa_flags = SA_SIGINFO;
+    sigaction(SIGSEGV, &sa, NULL);
+
+    faulting_ptr = p2;
+    *p2 = 0;
+
+    abort();
+}

-- 
2.52.0




* [PATCH v4 13/13] tests/tcg: add test for MTE_STORE_ONLY
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
                   ` (11 preceding siblings ...)
  2026-03-09 21:59 ` [PATCH v4 12/13] tests/tcg: add test for MTE FAR Gabriel Brookman
@ 2026-03-09 21:59 ` Gabriel Brookman
  2026-04-04  1:20 ` [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
  13 siblings, 0 replies; 23+ messages in thread
From: Gabriel Brookman @ 2026-03-09 21:59 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier, Gabriel Brookman

Add a test checking that MTE tag checks are not performed on loads when
MTE_STORE_ONLY is enabled.

Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 tests/tcg/aarch64/Makefile.target |  2 +-
 tests/tcg/aarch64/mte-10.c        | 49 +++++++++++++++++++++++++++++++++++++++
 tests/tcg/aarch64/mte.h           |  4 ++--
 3 files changed, 52 insertions(+), 3 deletions(-)

diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
index b491cfb5e1..6203ac9b51 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -64,7 +64,7 @@ AARCH64_TESTS += bti-2
 
 # MTE Tests
 ifneq ($(CROSS_CC_HAS_ARMV8_MTE),)
-AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4 mte-5 mte-6 mte-7 mte-8 mte-9
+AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4 mte-5 mte-6 mte-7 mte-8 mte-9 mte-10
 mte-%: CFLAGS += $(CROSS_CC_HAS_ARMV8_MTE)
 endif
 
diff --git a/tests/tcg/aarch64/mte-10.c b/tests/tcg/aarch64/mte-10.c
new file mode 100644
index 0000000000..46d26fe97f
--- /dev/null
+++ b/tests/tcg/aarch64/mte-10.c
@@ -0,0 +1,49 @@
+/*
+ * Memory tagging, write-only tag checking
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "mte.h"
+
+void pass(int sig, siginfo_t *info, void *uc)
+{
+    exit(0);
+}
+
+int main(int ac, char **av)
+{
+    struct sigaction sa;
+    int *p0, *p1, *p2;
+    long excl = 1;
+
+    enable_mte(PR_MTE_TCF_SYNC | PR_MTE_STORE_ONLY);
+    p0 = alloc_mte_mem(sizeof(*p0));
+
+    /* Create two differently tagged pointers. */
+    asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
+    asm("gmi %0,%1,%0" : "+r"(excl) : "r" (p1));
+    assert(excl != 1);
+    asm("irg %0,%1,%2" : "=r"(p2) : "r"(p0), "r"(excl));
+    assert(p1 != p2);
+
+    /* Store the tag from the first pointer.  */
+    asm("stg %0, [%0]" : : "r"(p1));
+
+    /*
+     * We write to p1 (stg above makes this check pass) and read from
+     * p2 (improperly tagged, but since it's a read, we don't care).
+     */
+    *p1 = *p2;
+
+    /* enable handler */
+    memset(&sa, 0, sizeof(sa));
+    sa.sa_sigaction = pass;
+    sa.sa_flags = SA_SIGINFO;
+    sigaction(SIGSEGV, &sa, NULL);
+
+    /* now we write to badly tagged p2, should fault. */
+    *p2 = 0;
+
+    abort();
+}
diff --git a/tests/tcg/aarch64/mte.h b/tests/tcg/aarch64/mte.h
index 17b932f3f1..7093b93dc7 100644
--- a/tests/tcg/aarch64/mte.h
+++ b/tests/tcg/aarch64/mte.h
@@ -40,10 +40,10 @@
 # define SEGV_MTESERR    9
 #endif
 
-static void enable_mte(int tcf)
+static void enable_mte(int flags)
 {
     int r = prctl(PR_SET_TAGGED_ADDR_CTRL,
-                  PR_TAGGED_ADDR_ENABLE | tcf | (0xfffe << PR_MTE_TAG_SHIFT),
+                  PR_TAGGED_ADDR_ENABLE | flags | (0xfffe << PR_MTE_TAG_SHIFT),
                   0, 0, 0);
     if (r < 0) {
         perror("PR_SET_TAGGED_ADDR_CTRL");

-- 
2.52.0




* Re: [PATCH v4 00/13] target/arm: add support for MTE4
  2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
                   ` (12 preceding siblings ...)
  2026-03-09 21:59 ` [PATCH v4 13/13] tests/tcg: add test for MTE_STORE_ONLY Gabriel Brookman
@ 2026-04-04  1:20 ` Gabriel Brookman
  13 siblings, 0 replies; 23+ messages in thread
From: Gabriel Brookman @ 2026-04-04  1:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Gustavo Romero, Richard Henderson, qemu-arm,
	Laurent Vivier, Pierrick Bouvier

Ping.

https://lore.kernel.org/qemu-devel/20260309-feat-mte4-v4-0-daaf0375620d@gmail.com/

On Mon, Mar 9, 2026 at 6:00 PM Gabriel Brookman
<brookmangabriel@gmail.com> wrote:
>
> This series implements ARM's Enhanced Memory Tagging Extension
> (MTE4). MTE4 implies the presence of several subfeatures:
> FEAT_MTE_CANONICAL_TAGS, FEAT_MTE_TAGGED_FAR, FEAT_MTE_STORE_ONLY,
> FEAT_MTE_NO_ADDRESS_TAGS, and FEAT_MTE_PERM, none of which are
> currently implemented in QEMU. This patch implements all five.
>
> Testing:
>   - Included for FAR and STORE_ONLY.
>   - The MTE_CANONICAL/NAT test from the previous email, modified so
>     MTE_CANONICAL is enabled in user mode.
>   - A bare-metal testsuite that sets up page tables for S1 and S2
>     translation, to test the Tagged NoTagAccess fault.
>   - The bare-metal testsuite also was used to test LDGM and similar
>     instructions not permitted in user-mode.
>   - The bare-metal testsuite also was used to test the mtx related
>     patches.
>
> Thanks,
> Gabriel Brookman
>
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3116
> Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
> ---
> Changes in v4:
> - MTX now interacts with PAuth.
> - Canonical tag checking only takes place in canonically tagged regions
> - MTX bits enable tag checking
> - MTX bits are placed in MTEDESC for access in mte_check helper
> - Separate feature bits are used to delineate each feature
> - PRCTL functions renamed and refactored as per Richard's suggestion
> - Link to v3: https://lore.kernel.org/qemu-devel/20260105-feat-mte4-v3-0-86a0d99ef2e4@gmail.com
>
> Changes in v3:
> - Added prctl for MTE_STORE_ONLY to linux-user
> - mte_check is no longer generated on read when STORE_ONLY enabled
> - Implemented LDGM instruction
> - Removed "long" datatype as per Richard's suggestion
> - Implemented masking for VA range checks when MTX bit enabled
> - Implemented MTE_PERM, with NoTagAccess attribute
> - Removed user-mode test for MTE_CANONICAL, since can't enable in
>   user-mode.
> - Removed TBI from mte_check generation logic
> - Link to v2: https://lore.kernel.org/qemu-devel/20251116-feat-mte4-v2-0-9a7122b7fa76@gmail.com
>
> Changes in v2:
> - Added tests for STORE_ONLY.
> - Refined commit messages.
> - Added FEAT_MTE_CANONICAL_TAGS and FEAT_MTE_NO_ADDRESS_TAGS + tests.
> - fixed TCSO bit macro names.
> - Link to v1: https://lore.kernel.org/qemu-devel/20251111-feat-mte4-v1-0-72ef5cf276f9@gmail.com
>
> ---
> Gabriel Brookman (13):
>       target/arm: implement MTE_PERM
>       target/arm: add TCSO bitmasks to SCTLR
>       target/arm: mte_check unemitted on STORE_ONLY load
>       linux-user: add MTE_STORE_ONLY to prctl
>       target/arm: tag check emitted when MTX and not TBI
>       target/arm: add canonical tag check logic
>       target/arm: ldg on canonical tag loads the tag
>       target/arm: storing to canonical tag faults
>       target/arm: with MTX, no tag bit bounds check
>       target/arm: with MTX, tag is not a part of PAuth
>       docs: add MTE4 features to docs
>       tests/tcg: add test for MTE FAR
>       tests/tcg: add test for MTE_STORE_ONLY
>
>  docs/system/arm/emulation.rst        |   5 ++
>  linux-user/aarch64/mte_user_helper.c |  11 ++-
>  linux-user/aarch64/mte_user_helper.h |  14 +--
>  linux-user/aarch64/target_prctl.h    |   6 +-
>  target/arm/cpu-features.h            |  15 ++++
>  target/arm/cpu.h                     |   5 ++
>  target/arm/gdbstub64.c               |   2 +-
>  target/arm/helper.c                  |  36 ++++++--
>  target/arm/internals.h               |  47 +++++++++-
>  target/arm/ptw.c                     |  53 +++++++++--
>  target/arm/tcg/cpu64.c               |   8 ++
>  target/arm/tcg/helper-a64.c          |   9 +-
>  target/arm/tcg/hflags.c              |  25 +++++-
>  target/arm/tcg/mte_helper.c          | 164 ++++++++++++++++++++++++++++++++++-
>  target/arm/tcg/pauth_helper.c        |  14 ++-
>  target/arm/tcg/translate-a64.c       |  15 +++-
>  target/arm/tcg/translate.h           |   3 +
>  tests/tcg/aarch64/Makefile.target    |   2 +-
>  tests/tcg/aarch64/mte-10.c           |  49 +++++++++++
>  tests/tcg/aarch64/mte-9.c            |  48 ++++++++++
>  tests/tcg/aarch64/mte.h              |   7 +-
>  21 files changed, 497 insertions(+), 41 deletions(-)
> ---
> base-commit: de61484ec39f418e5c0d4603017695f9ffb8fe24
> change-id: 20251109-feat-mte4-6740a6202e83
>
> Best regards,
> --
> Gabriel Brookman <brookmangabriel@gmail.com>
>



* Re: [PATCH v4 01/13] target/arm: implement MTE_PERM
  2026-03-09 21:59 ` [PATCH v4 01/13] target/arm: implement MTE_PERM Gabriel Brookman
@ 2026-04-04 23:17   ` Richard Henderson
  0 siblings, 0 replies; 23+ messages in thread
From: Richard Henderson @ 2026-04-04 23:17 UTC (permalink / raw)
  To: Gabriel Brookman, qemu-devel
  Cc: Peter Maydell, Gustavo Romero, qemu-arm, Laurent Vivier,
	Pierrick Bouvier

On 3/10/26 08:59, Gabriel Brookman wrote:
> @@ -117,6 +138,15 @@ uint8_t *allocation_tag_mem_probe(CPUARMState *env, int ptr_mmu_idx,
>       }
>       assert(!(flags & TLB_INVALID_MASK));
>   
> +    /*
> +     * If the virtual page MemAttr == Tagged NoTagAccess, throw S2 permission
> +     * fault (conditional on mteperm being implemented and RA != 0).
> +     */
> +    if (ra && cpu_isar_feature(aa64_mteperm, env_archcpu(env))
> +        && full->extra.arm.pte_attrs == 0xe0) {
> +        mte_perm_check_fail(env, ptr, ra, tag_access == MMU_DATA_STORE);
> +    }
> +
>       /* If the virtual page MemAttr != Tagged, access unchecked. */
>       if (full->extra.arm.pte_attrs != 0xf0) {
>           return NULL;
> 

This isn't quite right, as you're not checking for probe.

I suggest

     switch (full->extra.arm.pte_attrs) {
     case 0xf0: /* Tagged */
         break;

     case 0xe0: /* NoTagAccess */
         if (cpu_isar_feature(aa64_mteperm, env_archcpu(env))) {
             if (probe) {
                 return NULL;
             }
             assert(ra);
             mte_perm_check_fail(env, ptr, ra, tag_access == MMU_DATA_STORE);
         }
         /* fall through */

     default: /* Not Tagged */
         return NULL;
     }

See the block comment just below the #else that starts the system mode implementation.


r~



* Re: [PATCH v4 02/13] target/arm: add TCSO bitmasks to SCTLR
  2026-03-09 21:59 ` [PATCH v4 02/13] target/arm: add TCSO bitmasks to SCTLR Gabriel Brookman
@ 2026-04-04 23:27   ` Richard Henderson
  0 siblings, 0 replies; 23+ messages in thread
From: Richard Henderson @ 2026-04-04 23:27 UTC (permalink / raw)
  To: Gabriel Brookman, qemu-devel
  Cc: Peter Maydell, Gustavo Romero, qemu-arm, Laurent Vivier,
	Pierrick Bouvier

On 3/10/26 08:59, Gabriel Brookman wrote:
> These are the bitmasks used to control the FEAT_MTE_STORE_ONLY feature.
> They are now named, and setting these fields of SCTLR is ignored if
> MTE or MTE4 is disabled, as per convention.
> 
> Signed-off-by: Gabriel Brookman<brookmangabriel@gmail.com>
> ---
>   target/arm/cpu-features.h |  5 +++++
>   target/arm/cpu.h          |  2 ++
>   target/arm/helper.c       | 20 ++++++++++++++------
>   3 files changed, 21 insertions(+), 6 deletions(-)


Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~



* Re: [PATCH v4 03/13] target/arm: mte_check unemitted on STORE_ONLY load
  2026-03-09 21:59 ` [PATCH v4 03/13] target/arm: mte_check unemitted on STORE_ONLY load Gabriel Brookman
@ 2026-04-04 23:37   ` Richard Henderson
  0 siblings, 0 replies; 23+ messages in thread
From: Richard Henderson @ 2026-04-04 23:37 UTC (permalink / raw)
  To: Gabriel Brookman, qemu-devel
  Cc: Peter Maydell, Gustavo Romero, qemu-arm, Laurent Vivier,
	Pierrick Bouvier

On 3/10/26 08:59, Gabriel Brookman wrote:
> This feature disables generation of the mte check helper on loads when
> STORE_ONLY tag checking mode is enabled.
> 
> Signed-off-by: Gabriel Brookman<brookmangabriel@gmail.com>
> ---
>   target/arm/cpu.h               |  2 ++
>   target/arm/tcg/hflags.c        | 12 ++++++++++++
>   target/arm/tcg/translate-a64.c |  8 ++++++--
>   target/arm/tcg/translate.h     |  2 ++
>   4 files changed, 22 insertions(+), 2 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~



* Re: [PATCH v4 04/13] linux-user: add MTE_STORE_ONLY to prctl
  2026-03-09 21:59 ` [PATCH v4 04/13] linux-user: add MTE_STORE_ONLY to prctl Gabriel Brookman
@ 2026-04-04 23:39   ` Richard Henderson
  0 siblings, 0 replies; 23+ messages in thread
From: Richard Henderson @ 2026-04-04 23:39 UTC (permalink / raw)
  To: Gabriel Brookman, qemu-devel
  Cc: Peter Maydell, Gustavo Romero, qemu-arm, Laurent Vivier,
	Pierrick Bouvier

On 3/10/26 08:59, Gabriel Brookman wrote:
> Linux-user processes can now control whether MTE_STORE_ONLY is enabled
> using the prctl syscall.
> 
> Signed-off-by: Gabriel Brookman<brookmangabriel@gmail.com>
> ---
>   linux-user/aarch64/mte_user_helper.c | 11 ++++++++++-
>   linux-user/aarch64/mte_user_helper.h | 14 +++++++++-----
>   linux-user/aarch64/target_prctl.h    |  6 +++++-
>   target/arm/gdbstub64.c               |  2 +-
>   tests/tcg/aarch64/mte.h              |  3 +++
>   5 files changed, 28 insertions(+), 8 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~



* Re: [PATCH v4 05/13] target/arm: tag check emitted when MTX and not TBI
  2026-03-09 21:59 ` [PATCH v4 05/13] target/arm: tag check emitted when MTX and not TBI Gabriel Brookman
@ 2026-04-05  0:31   ` Richard Henderson
  0 siblings, 0 replies; 23+ messages in thread
From: Richard Henderson @ 2026-04-05  0:31 UTC (permalink / raw)
  To: Gabriel Brookman, qemu-devel
  Cc: Peter Maydell, Gustavo Romero, qemu-arm, Laurent Vivier,
	Pierrick Bouvier

On 3/10/26 08:59, Gabriel Brookman wrote:
> @@ -1062,12 +1062,13 @@ static bool mte_checks_needed(uint64_t ptr, uint32_t desc)
>       int bit55 = extract64(ptr, 55, 1);
>   
>       /*
> -     * Note that tbi_check() returns true for "access checked" but
> -     * tcma_check() returns true for "access unchecked".
> +     * Note that tbi_check() and mtx_check() return true for "access checked",
> +     * but tcma_check() returns true for "access unchecked".
>        */
> -    if (!tbi_check(desc, bit55)) {
> +    if (!tbi_check(desc, bit55) && !mtx_check(desc, bit55)) {

This pattern happens enough that it's probably worth creating a combined helper:

static inline bool tbi_or_mtx_check(uint32_t desc, int bit55)
{
     uint32_t mask = (1u << R_MTEDESC_TBI_SHIFT) | (1u << R_MTEDESC_MTX_SHIFT);
     return desc & (mask << bit55);
}

There seem to be several more uses that were not updated in sme_helper.c and sve_helper.c.

There seem to be no uses of tbi_check that shouldn't also test mtx_check, so I think the 
combined test should replace the existing tbi_check.


r~




* Re: [PATCH v4 06/13] target/arm: add canonical tag check logic
  2026-03-09 21:59 ` [PATCH v4 06/13] target/arm: add canonical tag check logic Gabriel Brookman
@ 2026-04-05 21:46   ` Richard Henderson
  0 siblings, 0 replies; 23+ messages in thread
From: Richard Henderson @ 2026-04-05 21:46 UTC (permalink / raw)
  To: Gabriel Brookman, qemu-devel
  Cc: Peter Maydell, Gustavo Romero, qemu-arm, Laurent Vivier,
	Pierrick Bouvier

On 3/10/26 08:59, Gabriel Brookman wrote:
> This feature causes tag checks to compare logical address tags against
> their canonical form rather than against allocation tags, when the check
> happens in a canonically tagged memory region. Described in the ARM ARM
> section "Logical Address Tagging".
> 
> Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
> ---
>   target/arm/cpu-features.h      |  5 +++++
>   target/arm/cpu.h               |  1 +
>   target/arm/internals.h         | 31 ++++++++++++++++++++++++++++++-
>   target/arm/tcg/hflags.c        |  4 ++++
>   target/arm/tcg/mte_helper.c    | 21 +++++++++++++++++++++
>   target/arm/tcg/translate-a64.c |  7 +++++++
>   target/arm/tcg/translate.h     |  1 +
>   7 files changed, 69 insertions(+), 1 deletion(-)
> 
> diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
> index 38fc56b52e..5e3dc5256f 100644
> --- a/target/arm/cpu-features.h
> +++ b/target/arm/cpu-features.h
> @@ -1154,6 +1154,11 @@ static inline bool isar_feature_aa64_mte_store_only(const ARMISARegisters *id)
>       return FIELD_EX64_IDREG(id, ID_AA64PFR2, MTESTOREONLY) == 1;
>   }
>   
> +static inline bool isar_feature_aa64_mte_mtx(const ARMISARegisters *id)
> +{
> +    return FIELD_EX64_IDREG(id, ID_AA64PFR1, MTEX) == 1;
> +}

!= 0.

> +/* Return true if Canonical Tagging is enabled. */
> +static inline bool canonical_tagging_enabled(CPUARMState *env, bool selector)
> +{
> +    int mmu_idx;
> +    uint64_t tcr, mtx_bit;
> +
> +    /* If mte4 is not implemented, then mtx is by definition not enabled */
> +    if (!cpu_isar_feature(aa64_mte_mtx, env_archcpu(env))) {
> +        return false;
> +    }
> +
> +    mmu_idx = arm_mmu_idx_el(env, arm_current_el(env));
> +    tcr = regime_tcr(env, mmu_idx);
> +
> +    /*
> +     * In two-range regimes, mtx is governed by bit 60 or 61 of TCR, and in
> +     * one-range regimes, bit 33 is used.
> +     */
> +    mtx_bit = regime_has_2_ranges(mmu_idx) ? 60 + selector : 33;
> +
> +    return extract64(tcr, mtx_bit, 1);
> +}

Unused?

> diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
> index 1484087a19..b54fbd11c0 100644
> --- a/target/arm/tcg/mte_helper.c
> +++ b/target/arm/tcg/mte_helper.c
> @@ -854,6 +854,13 @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
>           mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, sizem1 + 1,
>                                     MMU_DATA_LOAD, ra);
>           if (!mem1) {
> +            /*
> +             * If mtx is enabled, then the access is MemTag_CanonicallyTagged,
> +             * otherwise it is Untagged. See AArch64.CheckTag.

CheckTag is not where this is decided, but S1DecodeMemAttrs (and similarly in 
S1DisabledOutput).

> @@ -867,6 +874,12 @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
>                                     ptr_last - next_page + 1,
>                                     MMU_DATA_LOAD, ra);
>   
> +        /* If either region is canonically tagged, do a canonical tag check */
> +        if (mtx_check(desc, bit55) && (!mem1 || !mem2)
> +            && (!tag_is_canonical(ptr_tag, bit55))) {
> +            return 0;
> +        }
> +

This fails to set *fault correctly for the second page.
You need to split checks into the separate !mem1 and !mem2 tests below.

> @@ -974,6 +987,7 @@ uint64_t HELPER(mte_check_zva)(CPUARMState *env, uint32_t desc, uint64_t ptr)
>           goto done;
>       }
>   
> +
>       /*

Watch the spurious changes.

The whole patch could be split into smaller parts.


r~



* Re: [PATCH v4 07/13] target/arm: ldg on canonical tag loads the tag
  2026-03-09 21:59 ` [PATCH v4 07/13] target/arm: ldg on canonical tag loads the tag Gabriel Brookman
@ 2026-04-05 22:20   ` Richard Henderson
  0 siblings, 0 replies; 23+ messages in thread
From: Richard Henderson @ 2026-04-05 22:20 UTC (permalink / raw)
  To: Gabriel Brookman, qemu-devel
  Cc: Peter Maydell, Gustavo Romero, qemu-arm, Laurent Vivier,
	Pierrick Bouvier

On 3/10/26 08:59, Gabriel Brookman wrote:
> According to ARM ARM, section "Memory Tagging Region Types", loading
> tags from canonically tagged regions should use the canonical tags, not
> allocation tags.
> 
> Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
> ---
>   target/arm/tcg/mte_helper.c | 35 +++++++++++++++++++++++++++++++++++
>   1 file changed, 35 insertions(+)
> 
> diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
> index b54fbd11c0..07797aecf9 100644
> --- a/target/arm/tcg/mte_helper.c
> +++ b/target/arm/tcg/mte_helper.c
> @@ -313,6 +313,11 @@ uint64_t HELPER(ldg)(CPUARMState *env, uint64_t ptr, uint64_t xt)
>       /* Load if page supports tags. */
>       if (mem) {
>           rtag = load_tag1(ptr, mem);
> +    } else {
> +        uint64_t bit55 = extract64(ptr, 55, 1);
> +        if (canonical_tagging_enabled(env, bit55)) {
> +            rtag = 0xF * bit55;
> +        }
>       }

Rather than look up MTX again, just pass it down from the translator with a new argument.

> @@ -472,6 +479,34 @@ uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
>   
>       /* The tag is squashed to zero if the page does not support tags.  */
>       if (!tag_mem) {
> +        /* Load canonical value if mtx is set (untagged memory region) */
> +        if (canonical_tagging_enabled(env, bit55)) {

Likewise.

> +            switch (gm_bs) {
> +            case 3:
> +                /* 32 bytes -> 2 tags -> 8 result bits */
> +                ret = -(uint8_t)bit55;
> +                break;
> +            case 4:
> +                /* 64 bytes -> 4 tags -> 16 result bits */
> +                ret = -(uint16_t)bit55;
> +                break;

This doesn't do what you expect, because uint8_t and uint16_t promote to int.

   ret = (uint8_t)-bit55;

will promote bool to int, negate, then truncate to {0, 0xff}.
Whether or not that's better than

   ret = bit55 ? 0xff : 0;

I don't know off-hand.


r~



* Re: [PATCH v4 08/13] target/arm: storing to canonical tag faults
  2026-03-09 21:59 ` [PATCH v4 08/13] target/arm: storing to canonical tag faults Gabriel Brookman
@ 2026-04-05 22:37   ` Richard Henderson
  0 siblings, 0 replies; 23+ messages in thread
From: Richard Henderson @ 2026-04-05 22:37 UTC (permalink / raw)
  To: Gabriel Brookman, qemu-devel
  Cc: Peter Maydell, Gustavo Romero, qemu-arm, Laurent Vivier,
	Pierrick Bouvier

On 3/10/26 08:59, Gabriel Brookman wrote:
> According to ARM ARM, section "Memory region tagging types", tag-store
> instructions targeting canonically tagged regions cause a stage 1
> permission fault with MTX enabled.
> 
> Signed-off-by: Gabriel Brookman <brookmangabriel@gmail.com>
> ---
>   target/arm/tcg/mte_helper.c | 69 +++++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 69 insertions(+)
> 
> diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
> index 07797aecf9..ddf4ffc51b 100644
> --- a/target/arm/tcg/mte_helper.c
> +++ b/target/arm/tcg/mte_helper.c
> @@ -227,6 +227,20 @@ uint8_t *allocation_tag_mem_probe(CPUARMState *env, int ptr_mmu_idx,
>   #endif
>   }
>   
> +static void canonical_tag_write_fail(CPUARMState *env,
> +                                uint64_t dirty_ptr, uintptr_t ra)
> +{
> +    uint64_t syn;
> +
> +    env->exception.vaddress = dirty_ptr;
> +
> +    syn = syn_data_abort_no_iss(arm_current_el(env) != 0, 0, 0, 0, 0, 1, 0);
> +    syn |= BIT_ULL(42); /* TnD is bit 42 */
> +
> +    raise_exception_ra(env, EXCP_DATA_ABORT, syn, exception_target_el(env), ra);
> +    g_assert_not_reached();
> +}
> +
>   static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
>                                      uint64_t ptr, MMUAccessType ptr_access,
>                                      int ptr_size, MMUAccessType tag_access,
> @@ -372,7 +386,11 @@ static inline void do_stg(CPUARMState *env, uint64_t ptr, uint64_t xt,
>       /* Store if page supports tags. */
>       if (mem) {
>           store1(ptr, mem, allocation_tag_from_addr(xt));
> +    } else if (canonical_tagging_enabled(env, 1 & (ptr >> 55))) {

Pass mtx from translator, all instances.

> @@ -415,6 +443,11 @@ static inline void do_st2g(CPUARMState *env, uint64_t ptr, uint64_t xt,
>                                     MMU_DATA_STORE, TAG_GRANULE,
>                                     MMU_DATA_STORE, ra);
>   
> +        if (!(mem1 && mem2) && canonical_tagging_enabled(env, 1 & (ptr >> 55))) {
> +            canonical_tag_write_fail(env, ptr, ra);
> +            return;
> +        }
> +

This isn't correct.  The store to mem1 is independent of the store to mem2 and
might succeed; the fault from mem2 must increment the address.

> @@ -448,6 +486,7 @@ void HELPER(st2g_stub)(CPUARMState *env, uint64_t ptr)
>       int mmu_idx = arm_env_mmu_index(env);
>       uintptr_t ra = GETPC();
>       int in_page = -(ptr | TARGET_PAGE_MASK);
> +    uint8_t *mem1, *mem2;
>   
>       check_tag_aligned(env, ptr, ra);
>   
> @@ -457,6 +496,29 @@ void HELPER(st2g_stub)(CPUARMState *env, uint64_t ptr)
>           probe_write(env, ptr, TAG_GRANULE, mmu_idx, ra);
>           probe_write(env, ptr + TAG_GRANULE, TAG_GRANULE, mmu_idx, ra);
>       }
> +
> +    /* If we are storing to a canonically tagged memory region, fault. */
> +    if (canonical_tagging_enabled(env, 1 & (ptr >> 55))) {
> +        if (likely(in_page >= 2 * TAG_GRANULE)) {
> +            mem1 = allocation_tag_mem_probe(env, mmu_idx, ptr, MMU_DATA_STORE,
> +                                           2 * TAG_GRANULE, MMU_DATA_STORE,
> +                                           true, ra);
> +            if (!mem1) {
> +                canonical_tag_write_fail(env, ptr, ra);
> +            }
> +        } else {
> +            mem1 = allocation_tag_mem_probe(env, mmu_idx, ptr, MMU_DATA_STORE,
> +                                           TAG_GRANULE, MMU_DATA_STORE,
> +                                           true, ra);
> +            mem2 = allocation_tag_mem_probe(env, mmu_idx,
> +                                                  ptr + TAG_GRANULE,
> +                                                  MMU_DATA_STORE, TAG_GRANULE,
> +                                                  MMU_DATA_STORE, true, ra);
> +            if (!mem1 || !mem2) {
> +                canonical_tag_write_fail(env, ptr, ra);
> +            }
> +        }
> +    }
>   }

No changes required for st2g_stub, because that's only used when tagaccess is false.  See 
AArch64.S1CheckPermissions.


r~



end of thread, other threads:[~2026-04-05 22:38 UTC | newest]

Thread overview: 23+ messages
2026-03-09 21:59 [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
2026-03-09 21:59 ` [PATCH v4 01/13] target/arm: implement MTE_PERM Gabriel Brookman
2026-04-04 23:17   ` Richard Henderson
2026-03-09 21:59 ` [PATCH v4 02/13] target/arm: add TCSO bitmasks to SCTLR Gabriel Brookman
2026-04-04 23:27   ` Richard Henderson
2026-03-09 21:59 ` [PATCH v4 03/13] target/arm: mte_check unemitted on STORE_ONLY load Gabriel Brookman
2026-04-04 23:37   ` Richard Henderson
2026-03-09 21:59 ` [PATCH v4 04/13] linux-user: add MTE_STORE_ONLY to prctl Gabriel Brookman
2026-04-04 23:39   ` Richard Henderson
2026-03-09 21:59 ` [PATCH v4 05/13] target/arm: tag check emitted when MTX and not TBI Gabriel Brookman
2026-04-05  0:31   ` Richard Henderson
2026-03-09 21:59 ` [PATCH v4 06/13] target/arm: add canonical tag check logic Gabriel Brookman
2026-04-05 21:46   ` Richard Henderson
2026-03-09 21:59 ` [PATCH v4 07/13] target/arm: ldg on canonical tag loads the tag Gabriel Brookman
2026-04-05 22:20   ` Richard Henderson
2026-03-09 21:59 ` [PATCH v4 08/13] target/arm: storing to canonical tag faults Gabriel Brookman
2026-04-05 22:37   ` Richard Henderson
2026-03-09 21:59 ` [PATCH v4 09/13] target/arm: with MTX, no tag bit bounds check Gabriel Brookman
2026-03-09 21:59 ` [PATCH v4 10/13] target/arm: with MTX, tag is not a part of PAuth Gabriel Brookman
2026-03-09 21:59 ` [PATCH v4 11/13] docs: add MTE4 features to docs Gabriel Brookman
2026-03-09 21:59 ` [PATCH v4 12/13] tests/tcg: add test for MTE FAR Gabriel Brookman
2026-03-09 21:59 ` [PATCH v4 13/13] tests/tcg: add test for MTE_STORE_ONLY Gabriel Brookman
2026-04-04  1:20 ` [PATCH v4 00/13] target/arm: add support for MTE4 Gabriel Brookman
