From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, Peter Maydell <peter.maydell@linaro.org>
Subject: [PATCH v2 17/20] target/arm: Move mte check for store-exclusive
Date: Thu, 25 May 2023 16:25:55 -0700
Message-ID: <20230525232558.1758967-18-richard.henderson@linaro.org>
In-Reply-To: <20230525232558.1758967-1-richard.henderson@linaro.org>

Push the mte check behind the exclusive_addr check.
Document the several ways that we are still out of spec
with this implementation.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/translate-a64.c | 42 +++++++++++++++++++++++++++++-----
 1 file changed, 36 insertions(+), 6 deletions(-)

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 49cb7a7dd5..9654c5746a 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -2524,17 +2524,47 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
      */
     TCGLabel *fail_label = gen_new_label();
     TCGLabel *done_label = gen_new_label();
-    TCGv_i64 tmp, dirty_addr, clean_addr;
+    TCGv_i64 tmp, clean_addr;
     MemOp memop;
 
-    memop = (size + is_pair) | MO_ALIGN;
-    memop = finalize_memop(s, memop);
-
-    dirty_addr = cpu_reg_sp(s, rn);
-    clean_addr = gen_mte_check1(s, dirty_addr, true, rn != 31, memop);
+    /*
+     * FIXME: We are out of spec here. We have recorded only the address
+     * from load_exclusive, not the entire range, and we assume that the
+     * size of the access on both sides match. The architecture allows the
+     * store to be smaller than the load, so long as the stored bytes are
+     * within the range recorded by the load.
+     */
 
+    /* See AArch64.ExclusiveMonitorsPass() and AArch64.IsExclusiveVA(). */
+    clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
     tcg_gen_brcond_i64(TCG_COND_NE, clean_addr, cpu_exclusive_addr, fail_label);
 
+    /*
+     * The write, and any associated faults, only happen if the virtual
+     * and physical addresses pass the exclusive monitor check. These
+     * faults are exceedingly unlikely, because normally the guest uses
+     * the exact same address register for the load_exclusive, and we
+     * would have recognized these faults there.
+     *
+     * It is possible to trigger an alignment fault pre-LSE2, e.g. with an
+     * unaligned 4-byte write within the range of an aligned 8-byte load.
+     * With LSE2, the store would need to cross a 16-byte boundary when the
+     * load did not, which would mean the store is outside the range
+     * recorded for the monitor, which would have failed a corrected monitor
+     * check above. For now, we assume no size change and retain the
+     * MO_ALIGN to let tcg know what we checked in the load_exclusive.
+     *
+     * It is possible to trigger an MTE fault, by performing the load with
+     * a virtual address with a valid tag and performing the store with the
+     * same virtual address and a different invalid tag.
+     */
+    memop = size + is_pair;
+    if (memop == MO_128 || !dc_isar_feature(aa64_lse2, s)) {
+        memop |= MO_ALIGN;
+    }
+    memop = finalize_memop(s, memop);
+    gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);
+
     tmp = tcg_temp_new_i64();
     if (is_pair) {
         if (size == 2) {
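
To make the FIXME in the hunk concrete: the sketch below is illustrative
C, not QEMU code. The helper names and the range-recording monitor are
invented here, only to contrast the single-address equality this patch
emits with the range check that AArch64.ExclusiveMonitorsPass() describes.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Mirrors the generated tcg_gen_brcond_i64(TCG_COND_NE, clean_addr,
     * cpu_exclusive_addr, fail_label): only an exact match of the one
     * recorded (clean) address passes the monitor. */
    static bool monitor_pass_qemu(uint64_t store_addr, uint64_t mon_addr)
    {
        return store_addr == mon_addr;
    }

    /* What the architecture records is a range: a narrower store passes
     * so long as every stored byte lies within that range. */
    static bool monitor_pass_arch(uint64_t store_addr, uint64_t store_size,
                                  uint64_t mon_addr, uint64_t mon_size)
    {
        return store_addr >= mon_addr &&
               store_addr + store_size <= mon_addr + mon_size;
    }

    int main(void)
    {
        uint64_t a = 0x1000;

        /* 8-byte load-exclusive at a, then 4-byte store-exclusive at
         * a + 4: architecturally allowed to succeed, but it fails the
         * single-address comparison -- the deviation the FIXME records. */
        assert(monitor_pass_arch(a + 4, 4, a, 8));
        assert(!monitor_pass_qemu(a + 4, a));

        /* Both checks see "clean" addresses (tags stripped, as by
         * clean_data_tbi), which is why a pointer retagged with an
         * invalid MTE tag can pass the monitor and still fault at the
         * store, as the second comment in the hunk describes. */
        return 0;
    }
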
--
2.34.1
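
One more reading aid for the memop computation in the hunk above. This is
a hedged sketch: the helper name and the encoding assumption (MemOp's low
bits hold log2 of the access size, so a 16-byte access corresponds to
MO_128 == 4) are stated here for illustration, not taken from QEMU's
headers.

    #include <assert.h>
    #include <stdbool.h>

    /*
     * When does the store-exclusive keep an alignment check?  Mirrors
     * the hunk's condition: memop == MO_128 || !dc_isar_feature(aa64_lse2, s).
     * 'size' is log2(bytes) per register and 'is_pair' is 0 or 1, so
     * size + is_pair is log2 of the whole access; a pair of doublewords
     * gives 3 + 1 == 4, i.e. a 16-byte access.
     */
    static bool store_exclusive_needs_align(int size, int is_pair, bool lse2)
    {
        int total_log2 = size + is_pair;

        /* Without FEAT_LSE2, every exclusive keeps the pre-existing
         * MO_ALIGN; with LSE2, only the 16-byte LDXP/STXP case remains
         * alignment-checked. */
        return total_log2 == 4 || !lse2;
    }

    int main(void)
    {
        assert(store_exclusive_needs_align(2, 0, false)); /* 4-byte, no LSE2 */
        assert(!store_exclusive_needs_align(2, 0, true)); /* 4-byte, LSE2 */
        assert(store_exclusive_needs_align(3, 1, true));  /* 16-byte pair */
        return 0;
    }

Retaining MO_ALIGN in the no-size-change case also keeps TCG informed of
what the load_exclusive already checked, as the comment in the hunk notes.
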
Thread overview: 32+ messages:
2023-05-25 23:25 [PATCH v2 00/20] target/arm: Implement FEAT_LSE2 Richard Henderson
2023-05-25 23:25 ` [PATCH v2 01/20] target/arm: Add commentary for CPUARMState.exclusive_high Richard Henderson
2023-05-26 8:56 ` Philippe Mathieu-Daudé
2023-05-26 9:49 ` Juan Quintela
2023-05-26 14:43 ` Richard Henderson
2023-05-30 15:11 ` Peter Maydell
2023-05-25 23:25 ` [PATCH v2 02/20] target/arm: Add feature test for FEAT_LSE2 Richard Henderson
2023-05-30 12:50 ` Philippe Mathieu-Daudé
2023-05-25 23:25 ` [PATCH v2 03/20] target/arm: Introduce finalize_memop_{atom,pair} Richard Henderson
2023-05-30 12:48 ` [PATCH v2 03/20] target/arm: Introduce finalize_memop_{atom, pair} Philippe Mathieu-Daudé
2023-05-30 15:24 ` Peter Maydell
2023-05-25 23:25 ` [PATCH v2 04/20] target/arm: Use tcg_gen_qemu_ld_i128 for LDXP Richard Henderson
2023-05-30 15:26 ` Peter Maydell
2023-05-25 23:25 ` [PATCH v2 05/20] target/arm: Use tcg_gen_qemu_{st, ld}_i128 for do_fp_{st, ld} Richard Henderson
2023-05-30 15:29 ` Peter Maydell
2023-05-25 23:25 ` [PATCH v2 06/20] target/arm: Use tcg_gen_qemu_st_i128 for STZG, STZ2G Richard Henderson
2023-05-25 23:25 ` [PATCH v2 07/20] target/arm: Use tcg_gen_qemu_{ld, st}_i128 in gen_sve_{ld, st}r Richard Henderson
2023-05-25 23:25 ` [PATCH v2 08/20] target/arm: Sink gen_mte_check1 into load/store_exclusive Richard Henderson
2023-05-25 23:25 ` [PATCH v2 09/20] target/arm: Load/store integer pair with one tcg operation Richard Henderson
2023-05-25 23:25 ` [PATCH v2 10/20] target/arm: Hoist finalize_memop out of do_gpr_{ld, st} Richard Henderson
2023-05-25 23:25 ` [PATCH v2 11/20] target/arm: Hoist finalize_memop out of do_fp_{ld, st} Richard Henderson
2023-05-30 12:59 ` Philippe Mathieu-Daudé
2023-05-25 23:25 ` [PATCH v2 12/20] target/arm: Pass memop to gen_mte_check1* Richard Henderson
2023-05-25 23:25 ` [PATCH v2 13/20] target/arm: Pass single_memop to gen_mte_checkN Richard Henderson
2023-05-25 23:25 ` [PATCH v2 14/20] target/arm: Check alignment in helper_mte_check Richard Henderson
2023-05-25 23:25 ` [PATCH v2 15/20] target/arm: Add SCTLR.nAA to TBFLAG_A64 Richard Henderson
2023-05-25 23:25 ` [PATCH v2 16/20] target/arm: Relax ordered/atomic alignment checks for LSE2 Richard Henderson
2023-05-25 23:25 ` Richard Henderson [this message]
2023-05-25 23:25 ` [PATCH v2 18/20] tests/tcg/aarch64: Use stz2g in mte-7.c Richard Henderson
2023-05-30 15:30 ` Peter Maydell
2023-05-25 23:25 ` [PATCH v2 19/20] tests/tcg/multiarch: Adjust sigbus.c Richard Henderson
2023-05-25 23:25 ` [PATCH v2 20/20] target/arm: Enable FEAT_LSE2 for -cpu max Richard Henderson