* [PATCH v2 01/23] target/arm: Fix return value from LDSMIN/LDSMAX 8/16 bit atomics
From: Peter Maydell @ 2023-06-11 16:00 UTC
To: qemu-arm, qemu-devel
The atomic memory operations are supposed to return the old memory
data value in the destination register. This value is not
sign-extended, even if the operation is the signed minimum or
maximum. (In the pseudocode for the instructions the returned data
value is passed to ZeroExtend() to create the value in the register.)
We got this wrong because we were doing a 32-to-64-bit zero extend on
the result for 8- and 16-bit data values, rather than the correct
amount of zero extension.
Fix the bug by using ext8u and ext16u for the MO_8 and MO_16 data
sizes rather than ext32u.
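To illustrate why ext32u was not enough, here is a standalone C
sketch of the two behaviours (an illustration written for this email,
not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /*
         * Old memory value for an 8-bit LDSMINB: -128. The TCG
         * atomic op loads with MO_SIGN, so the 64-bit temp holds
         * the sign-extended value before the final extend.
         */
        int64_t loaded = (int8_t)0x80;        /* 0xffffffffffffff80 */

        uint64_t buggy   = (uint32_t)loaded;  /* ext32u -> 0xffffff80 */
        uint64_t correct = (uint8_t)loaded;   /* ext8u  -> 0x80 */

        printf("buggy=0x%llx correct=0x%llx\n",
               (unsigned long long)buggy, (unsigned long long)correct);
        return 0;
    }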
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-2-peter.maydell@linaro.org
---
target/arm/tcg/translate-a64.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index aa93f37e216..246e3c15145 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -3545,8 +3545,22 @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
*/
fn(tcg_rt, clean_addr, tcg_rs, get_mem_index(s), mop);
- if ((mop & MO_SIGN) && size != MO_64) {
- tcg_gen_ext32u_i64(tcg_rt, tcg_rt);
+ if (mop & MO_SIGN) {
+ switch (size) {
+ case MO_8:
+ tcg_gen_ext8u_i64(tcg_rt, tcg_rt);
+ break;
+ case MO_16:
+ tcg_gen_ext16u_i64(tcg_rt, tcg_rt);
+ break;
+ case MO_32:
+ tcg_gen_ext32u_i64(tcg_rt, tcg_rt);
+ break;
+ case MO_64:
+ break;
+ default:
+ g_assert_not_reached();
+ }
}
}
--
2.34.1
* [PATCH v2 02/23] target/arm: Return correct result for LDG when ATA=0
From: Peter Maydell @ 2023-06-11 16:00 UTC
To: qemu-arm, qemu-devel
The LDG instruction loads the tag from a memory address (identified
by [Xn + offset]), and then merges that tag into the destination
register Xt. We implemented this correctly for the case when
allocation tags are enabled, but didn't get it right when ATA=0:
instead of merging the tag bits into Xt, we merged them into the
memory address [Xn + offset] and then set Xt to that.
Merge the tag bits into the old Xt value, as they should be.
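For reference, a host-side sketch of the architected result (an
illustration written for this email, not QEMU code; the allocation
tag lives in bits [59:56] of the register):

    #include <stdint.h>

    /* LDG merges the loaded tag into bits [59:56] of the old Xt. */
    uint64_t ldg_result(uint64_t old_xt, uint64_t tag)
    {
        return (old_xt & ~(0xfull << 56)) | ((tag & 0xf) << 56);
    }

With ATA=0 the tag reads as zero, so the result must be the old Xt
with a zero tag -- not [Xn + offset] with a zero tag, which is what
the pre-fix code produced.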
Cc: qemu-stable@nongnu.org
Fixes: c15294c1e36a7dd9b25 ("target/arm: Implement LDG, STG, ST2G instructions")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 246e3c15145..4ec857bcd8d 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4201,9 +4201,13 @@ static void disas_ldst_tag(DisasContext *s, uint32_t insn)
if (s->ata) {
gen_helper_ldg(tcg_rt, cpu_env, addr, tcg_rt);
} else {
+ /*
+ * Tag access disabled: we must check for aborts on the
+ * load from [rn+offset], and then insert a 0 tag into rt.
+ */
clean_addr = clean_data_tbi(s, addr);
gen_probe_access(s, clean_addr, MMU_DATA_LOAD, MO_8);
- gen_address_with_allocation_tag0(tcg_rt, addr);
+ gen_address_with_allocation_tag0(tcg_rt, tcg_rt);
}
} else {
tcg_rt = cpu_reg_sp(s, rt);
--
2.34.1
* [PATCH v2 03/23] target/arm: Pass memop to gen_mte_check1_mmuidx() in reg_imm9 decode
From: Peter Maydell @ 2023-06-11 16:00 UTC
To: qemu-arm, qemu-devel
In disas_ldst_reg_imm9() we missed one place where a call to
a gen_mte_check* function should now be passed the memop we
have created rather than just being passed the size. Fix this.
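The distinction matters because a MemOp carries more than just the
size; a simplified sketch of the shape of the type (the real
definitions live in include/exec/memop.h and differ in detail):

    typedef int MemOp;
    enum {
        MO_8 = 0, MO_16 = 1, MO_32 = 2, MO_64 = 3,  /* size bits */
        MO_SIGN  = 1 << 3,   /* sign-extending load */
        MO_ALIGN = 1 << 4,   /* alignment checking */
    };

Passing only 'size' silently drops the extra bits that the finalized
memop carries, which is what this patch corrects.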
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 4ec857bcd8d..d271449431a 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -3226,7 +3226,7 @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
clean_addr = gen_mte_check1_mmuidx(s, dirty_addr, is_store,
writeback || rn != 31,
- size, is_unpriv, memidx);
+ memop, is_unpriv, memidx);
if (is_vector) {
if (is_store) {
--
2.34.1
* Re: [PATCH v2 03/23] target/arm: Pass memop to gen_mte_check1_mmuidx() in reg_imm9 decode
From: Philippe Mathieu-Daudé @ 2023-06-12 9:31 UTC
To: Peter Maydell, qemu-arm, qemu-devel
On 11/6/23 18:00, Peter Maydell wrote:
> In disas_ldst_reg_imm9() we missed one place where a call to
> a gen_mte_check* function should now be passed the memop we
> have created rather than just being passed the size. Fix this.
>
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
> target/arm/tcg/translate-a64.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
Fixes: 0a9091424d ("target/arm: Pass memop to gen_mte_check1*")
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* Re: [PATCH v2 03/23] target/arm: Pass memop to gen_mte_check1_mmuidx() in reg_imm9 decode
From: Richard Henderson @ 2023-06-14 5:18 UTC
To: Peter Maydell, qemu-arm, qemu-devel
On 6/11/23 18:00, Peter Maydell wrote:
> In disas_ldst_reg_imm9() we missed one place where a call to
> a gen_mte_check* function should now be passed the memop we
> have created rather than just being passed the size. Fix this.
>
> Signed-off-by: Peter Maydell<peter.maydell@linaro.org>
> ---
> target/arm/tcg/translate-a64.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
r~
* [PATCH v2 04/23] target/arm: Consistently use finalize_memop_asimd() for ASIMD loads/stores
From: Peter Maydell @ 2023-06-11 16:00 UTC
To: qemu-arm, qemu-devel
In the recent refactoring we missed a few places which should be
calling finalize_memop_asimd() for ASIMD loads and stores but
instead are just calling finalize_memop(); fix these.
For the disas_ldst_single_struct() and disas_ldst_multiple_struct()
cases, this is not a behaviour change because there the size
is never MO_128 and the two finalize functions do the same thing.
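A minimal sketch of the assumed difference between the two helpers,
based on the statement above that they only diverge for MO_128 (an
illustration only; the real definitions are in
target/arm/tcg/translate.h):

    MemOp finalize_memop_asimd_sketch(DisasContext *s, MemOp opc)
    {
        if ((opc & MO_SIZE) == MO_128) {
            /* vector-specific alignment/atomicity for 16-byte ops */
            return finalize_memop_atom(s, opc | MO_ALIGN_16,
                                       MO_ATOM_NONE);
        }
        /* below MO_128, identical to the scalar finalize_memop() */
        return finalize_memop(s, opc);
    }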
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index d271449431a..1108f8287b8 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -3309,6 +3309,7 @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
if (!fp_access_check(s)) {
return;
}
+ memop = finalize_memop_asimd(s, size);
} else {
if (size == 3 && opc == 2) {
/* PRFM - prefetch */
@@ -3321,6 +3322,7 @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
is_store = (opc == 0);
is_signed = !is_store && extract32(opc, 1, 1);
is_extended = (size < 3) && extract32(opc, 0, 1);
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
}
if (rn == 31) {
@@ -3333,7 +3335,6 @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
tcg_gen_add_i64(dirty_addr, dirty_addr, tcg_rm);
- memop = finalize_memop(s, size + is_signed * MO_SIGN);
clean_addr = gen_mte_check1(s, dirty_addr, is_store, true, memop);
if (is_vector) {
@@ -3398,6 +3399,7 @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
if (!fp_access_check(s)) {
return;
}
+ memop = finalize_memop_asimd(s, size);
} else {
if (size == 3 && opc == 2) {
/* PRFM - prefetch */
@@ -3410,6 +3412,7 @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
is_store = (opc == 0);
is_signed = !is_store && extract32(opc, 1, 1);
is_extended = (size < 3) && extract32(opc, 0, 1);
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
}
if (rn == 31) {
@@ -3419,7 +3422,6 @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
offset = imm12 << size;
tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
- memop = finalize_memop(s, size + is_signed * MO_SIGN);
clean_addr = gen_mte_check1(s, dirty_addr, is_store, rn != 31, memop);
if (is_vector) {
@@ -3861,7 +3863,7 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
* promote consecutive little-endian elements below.
*/
clean_addr = gen_mte_checkN(s, tcg_rn, is_store, is_postidx || rn != 31,
- total, finalize_memop(s, size));
+ total, finalize_memop_asimd(s, size));
/*
* Consecutive little-endian elements from a single register
@@ -4019,7 +4021,7 @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
total = selem << scale;
tcg_rn = cpu_reg_sp(s, rn);
- mop = finalize_memop(s, scale);
+ mop = finalize_memop_asimd(s, scale);
clean_addr = gen_mte_checkN(s, tcg_rn, !is_load, is_postidx || rn != 31,
total, mop);
--
2.34.1
* Re: [PATCH v2 04/23] target/arm: Consistently use finalize_memop_asimd() for ASIMD loads/stores
From: Richard Henderson @ 2023-06-14 5:20 UTC
To: Peter Maydell, qemu-arm, qemu-devel
On 6/11/23 18:00, Peter Maydell wrote:
> In the recent refactoring we missed a few places which should be
> calling finalize_memop_asimd() for ASIMD loads and stores but
> instead are just calling finalize_memop(); fix these.
>
> For the disas_ldst_single_struct() and disas_ldst_multiple_struct()
> cases, this is not a behaviour change because there the size
> is never MO_128 and the two finalize functions do the same thing.
>
> Signed-off-by: Peter Maydell<peter.maydell@linaro.org>
> ---
> target/arm/tcg/translate-a64.c | 10 ++++++----
> 1 file changed, 6 insertions(+), 4 deletions(-)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
r~
* [PATCH v2 05/23] target/arm: Convert hint instruction space to decodetree
From: Peter Maydell @ 2023-06-11 16:00 UTC
To: qemu-arm, qemu-devel
Convert the various instructions in the hint instruction space
to decodetree.
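For anyone new to decodetree: each pattern line becomes a decoder
case that, on match, calls a trans_<NAME>() function with the
pattern's extracted fields; returning false sends the insn to the
unallocated-encoding path. A sketch of the generated shape
(approximate, not what the decodetree script literally emits):

    /*
     * Fixed pattern with no variable fields, e.g.
     *     WFI 1101 0101 0000 0011 0010 0000 011 11111
     * (encoding 0xd503207f). decodetree generates approximately:
     */
    typedef struct arg_WFI { int unused; } arg_WFI; /* no real fields */

    static bool trans_WFI(DisasContext *s, arg_WFI *a);

    /*
     * and in the decoder:
     *     if (insn == 0xd503207f && trans_WFI(s, &u.f_WFI)) {
     *         return true;    (insn handled)
     *     }
     * A false return falls through to unallocated_encoding().
     */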
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-3-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 31 ++++
target/arm/tcg/translate-a64.c | 277 ++++++++++++++++++---------------
2 files changed, 185 insertions(+), 123 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 12a310d0a31..1efd436e175 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -150,3 +150,34 @@ ERETA 1101011 0100 11111 00001 m:1 11111 11111 &reta # ERETAA, ERETAB
# the processor is in halting debug state (which we don't implement).
# The pattern is listed here as documentation.
# DRPS 1101011 0101 11111 000000 11111 00000
+
+# Hint instruction group
+{
+ [
+ YIELD 1101 0101 0000 0011 0010 0000 001 11111
+ WFE 1101 0101 0000 0011 0010 0000 010 11111
+ WFI 1101 0101 0000 0011 0010 0000 011 11111
+ # We implement WFE to never block, so our SEV/SEVL are NOPs
+ # SEV 1101 0101 0000 0011 0010 0000 100 11111
+ # SEVL 1101 0101 0000 0011 0010 0000 101 11111
+ # Our DGH is a NOP because we don't merge memory accesses anyway.
+ # DGH 1101 0101 0000 0011 0010 0000 110 11111
+ XPACLRI 1101 0101 0000 0011 0010 0000 111 11111
+ PACIA1716 1101 0101 0000 0011 0010 0001 000 11111
+ PACIB1716 1101 0101 0000 0011 0010 0001 010 11111
+ AUTIA1716 1101 0101 0000 0011 0010 0001 100 11111
+ AUTIB1716 1101 0101 0000 0011 0010 0001 110 11111
+ ESB 1101 0101 0000 0011 0010 0010 000 11111
+ PACIAZ 1101 0101 0000 0011 0010 0011 000 11111
+ PACIASP 1101 0101 0000 0011 0010 0011 001 11111
+ PACIBZ 1101 0101 0000 0011 0010 0011 010 11111
+ PACIBSP 1101 0101 0000 0011 0010 0011 011 11111
+ AUTIAZ 1101 0101 0000 0011 0010 0011 100 11111
+ AUTIASP 1101 0101 0000 0011 0010 0011 101 11111
+ AUTIBZ 1101 0101 0000 0011 0010 0011 110 11111
+ AUTIBSP 1101 0101 0000 0011 0010 0011 111 11111
+ ]
+ # The canonical NOP has CRm == op2 == 0, but all of the space
+ # that isn't specifically allocated to an instruction must NOP
+ NOP 1101 0101 0000 0011 0010 ---- --- 11111
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 1108f8287b8..eb8addac1b3 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -1649,133 +1649,167 @@ static bool trans_ERETA(DisasContext *s, arg_reta *a)
return true;
}
-/* HINT instruction group, including various allocated HINTs */
-static void handle_hint(DisasContext *s, uint32_t insn,
- unsigned int op1, unsigned int op2, unsigned int crm)
+static bool trans_NOP(DisasContext *s, arg_NOP *a)
{
- unsigned int selector = crm << 3 | op2;
+ return true;
+}
- if (op1 != 3) {
- unallocated_encoding(s);
- return;
+static bool trans_YIELD(DisasContext *s, arg_YIELD *a)
+{
+ /*
+ * When running in MTTCG we don't generate jumps to the yield and
+ * WFE helpers as it won't affect the scheduling of other vCPUs.
+ * If we wanted to more completely model WFE/SEV so we don't busy
+ * spin unnecessarily we would need to do something more involved.
+ */
+ if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
+ s->base.is_jmp = DISAS_YIELD;
}
+ return true;
+}
- switch (selector) {
- case 0b00000: /* NOP */
- break;
- case 0b00011: /* WFI */
- s->base.is_jmp = DISAS_WFI;
- break;
- case 0b00001: /* YIELD */
- /* When running in MTTCG we don't generate jumps to the yield and
- * WFE helpers as it won't affect the scheduling of other vCPUs.
- * If we wanted to more completely model WFE/SEV so we don't busy
- * spin unnecessarily we would need to do something more involved.
+static bool trans_WFI(DisasContext *s, arg_WFI *a)
+{
+ s->base.is_jmp = DISAS_WFI;
+ return true;
+}
+
+static bool trans_WFE(DisasContext *s, arg_WFI *a)
+{
+ /*
+ * When running in MTTCG we don't generate jumps to the yield and
+ * WFE helpers as it won't affect the scheduling of other vCPUs.
+ * If we wanted to more completely model WFE/SEV so we don't busy
+ * spin unnecessarily we would need to do something more involved.
+ */
+ if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
+ s->base.is_jmp = DISAS_WFE;
+ }
+ return true;
+}
+
+static bool trans_XPACLRI(DisasContext *s, arg_XPACLRI *a)
+{
+ if (s->pauth_active) {
+ gen_helper_xpaci(cpu_X[30], cpu_env, cpu_X[30]);
+ }
+ return true;
+}
+
+static bool trans_PACIA1716(DisasContext *s, arg_PACIA1716 *a)
+{
+ if (s->pauth_active) {
+ gen_helper_pacia(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
+ }
+ return true;
+}
+
+static bool trans_PACIB1716(DisasContext *s, arg_PACIB1716 *a)
+{
+ if (s->pauth_active) {
+ gen_helper_pacib(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
+ }
+ return true;
+}
+
+static bool trans_AUTIA1716(DisasContext *s, arg_AUTIA1716 *a)
+{
+ if (s->pauth_active) {
+ gen_helper_autia(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
+ }
+ return true;
+}
+
+static bool trans_AUTIB1716(DisasContext *s, arg_AUTIB1716 *a)
+{
+ if (s->pauth_active) {
+ gen_helper_autib(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
+ }
+ return true;
+}
+
+static bool trans_ESB(DisasContext *s, arg_ESB *a)
+{
+ /* Without RAS, we must implement this as NOP. */
+ if (dc_isar_feature(aa64_ras, s)) {
+ /*
+ * QEMU does not have a source of physical SErrors,
+ * so we are only concerned with virtual SErrors.
+ * The pseudocode in the ARM for this case is
+ * if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then
+ * AArch64.vESBOperation();
+ * Most of the condition can be evaluated at translation time.
+ * Test for EL2 present, and defer test for SEL2 to runtime.
*/
- if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
- s->base.is_jmp = DISAS_YIELD;
+ if (s->current_el <= 1 && arm_dc_feature(s, ARM_FEATURE_EL2)) {
+ gen_helper_vesb(cpu_env);
}
- break;
- case 0b00010: /* WFE */
- if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
- s->base.is_jmp = DISAS_WFE;
- }
- break;
- case 0b00100: /* SEV */
- case 0b00101: /* SEVL */
- case 0b00110: /* DGH */
- /* we treat all as NOP at least for now */
- break;
- case 0b00111: /* XPACLRI */
- if (s->pauth_active) {
- gen_helper_xpaci(cpu_X[30], cpu_env, cpu_X[30]);
- }
- break;
- case 0b01000: /* PACIA1716 */
- if (s->pauth_active) {
- gen_helper_pacia(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
- }
- break;
- case 0b01010: /* PACIB1716 */
- if (s->pauth_active) {
- gen_helper_pacib(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
- }
- break;
- case 0b01100: /* AUTIA1716 */
- if (s->pauth_active) {
- gen_helper_autia(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
- }
- break;
- case 0b01110: /* AUTIB1716 */
- if (s->pauth_active) {
- gen_helper_autib(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
- }
- break;
- case 0b10000: /* ESB */
- /* Without RAS, we must implement this as NOP. */
- if (dc_isar_feature(aa64_ras, s)) {
- /*
- * QEMU does not have a source of physical SErrors,
- * so we are only concerned with virtual SErrors.
- * The pseudocode in the ARM for this case is
- * if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then
- * AArch64.vESBOperation();
- * Most of the condition can be evaluated at translation time.
- * Test for EL2 present, and defer test for SEL2 to runtime.
- */
- if (s->current_el <= 1 && arm_dc_feature(s, ARM_FEATURE_EL2)) {
- gen_helper_vesb(cpu_env);
- }
- }
- break;
- case 0b11000: /* PACIAZ */
- if (s->pauth_active) {
- gen_helper_pacia(cpu_X[30], cpu_env, cpu_X[30],
- tcg_constant_i64(0));
- }
- break;
- case 0b11001: /* PACIASP */
- if (s->pauth_active) {
- gen_helper_pacia(cpu_X[30], cpu_env, cpu_X[30], cpu_X[31]);
- }
- break;
- case 0b11010: /* PACIBZ */
- if (s->pauth_active) {
- gen_helper_pacib(cpu_X[30], cpu_env, cpu_X[30],
- tcg_constant_i64(0));
- }
- break;
- case 0b11011: /* PACIBSP */
- if (s->pauth_active) {
- gen_helper_pacib(cpu_X[30], cpu_env, cpu_X[30], cpu_X[31]);
- }
- break;
- case 0b11100: /* AUTIAZ */
- if (s->pauth_active) {
- gen_helper_autia(cpu_X[30], cpu_env, cpu_X[30],
- tcg_constant_i64(0));
- }
- break;
- case 0b11101: /* AUTIASP */
- if (s->pauth_active) {
- gen_helper_autia(cpu_X[30], cpu_env, cpu_X[30], cpu_X[31]);
- }
- break;
- case 0b11110: /* AUTIBZ */
- if (s->pauth_active) {
- gen_helper_autib(cpu_X[30], cpu_env, cpu_X[30],
- tcg_constant_i64(0));
- }
- break;
- case 0b11111: /* AUTIBSP */
- if (s->pauth_active) {
- gen_helper_autib(cpu_X[30], cpu_env, cpu_X[30], cpu_X[31]);
- }
- break;
- default:
- /* default specified as NOP equivalent */
- break;
}
+ return true;
+}
+
+static bool trans_PACIAZ(DisasContext *s, arg_PACIAZ *a)
+{
+ if (s->pauth_active) {
+ gen_helper_pacia(cpu_X[30], cpu_env, cpu_X[30], tcg_constant_i64(0));
+ }
+ return true;
+}
+
+static bool trans_PACIASP(DisasContext *s, arg_PACIASP *a)
+{
+ if (s->pauth_active) {
+ gen_helper_pacia(cpu_X[30], cpu_env, cpu_X[30], cpu_X[31]);
+ }
+ return true;
+}
+
+static bool trans_PACIBZ(DisasContext *s, arg_PACIBZ *a)
+{
+ if (s->pauth_active) {
+ gen_helper_pacib(cpu_X[30], cpu_env, cpu_X[30], tcg_constant_i64(0));
+ }
+ return true;
+}
+
+static bool trans_PACIBSP(DisasContext *s, arg_PACIBSP *a)
+{
+ if (s->pauth_active) {
+ gen_helper_pacib(cpu_X[30], cpu_env, cpu_X[30], cpu_X[31]);
+ }
+ return true;
+}
+
+static bool trans_AUTIAZ(DisasContext *s, arg_AUTIAZ *a)
+{
+ if (s->pauth_active) {
+ gen_helper_autia(cpu_X[30], cpu_env, cpu_X[30], tcg_constant_i64(0));
+ }
+ return true;
+}
+
+static bool trans_AUTIASP(DisasContext *s, arg_AUTIASP *a)
+{
+ if (s->pauth_active) {
+ gen_helper_autia(cpu_X[30], cpu_env, cpu_X[30], cpu_X[31]);
+ }
+ return true;
+}
+
+static bool trans_AUTIBZ(DisasContext *s, arg_AUTIBZ *a)
+{
+ if (s->pauth_active) {
+ gen_helper_autib(cpu_X[30], cpu_env, cpu_X[30], tcg_constant_i64(0));
+ }
+ return true;
+}
+
+static bool trans_AUTIBSP(DisasContext *s, arg_AUTIBSP *a)
+{
+ if (s->pauth_active) {
+ gen_helper_autib(cpu_X[30], cpu_env, cpu_X[30], cpu_X[31]);
+ }
+ return true;
}
static void gen_clrex(DisasContext *s, uint32_t insn)
@@ -2302,9 +2336,6 @@ static void disas_system(DisasContext *s, uint32_t insn)
return;
}
switch (crn) {
- case 2: /* HINT (including allocated hints like NOP, YIELD, etc) */
- handle_hint(s, insn, op1, op2, crm);
- break;
case 3: /* CLREX, DSB, DMB, ISB */
handle_sync(s, insn, op1, op2, crm);
break;
--
2.34.1
* [PATCH v2 06/23] target/arm: Convert barrier insns to decodetree
From: Peter Maydell @ 2023-06-11 16:00 UTC
To: qemu-arm, qemu-devel
Convert the insns in the "Barriers" instruction class to
decodetree: CLREX, DSB, DMB, ISB and SB.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-4-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 7 +++
target/arm/tcg/translate-a64.c | 92 ++++++++++++++--------------------
2 files changed, 46 insertions(+), 53 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 1efd436e175..b3608d38dc9 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -181,3 +181,10 @@ ERETA 1101011 0100 11111 00001 m:1 11111 11111 &reta # ERETAA, ERETAB
# that isn't specifically allocated to an instruction must NOP
NOP 1101 0101 0000 0011 0010 ---- --- 11111
}
+
+# Barriers
+
+CLREX 1101 0101 0000 0011 0011 ---- 010 11111
+DSB_DMB 1101 0101 0000 0011 0011 domain:2 types:2 10- 11111
+ISB 1101 0101 0000 0011 0011 ---- 110 11111
+SB 1101 0101 0000 0011 0011 0000 111 11111
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index eb8addac1b3..088dfd8b1fd 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -1812,67 +1812,56 @@ static bool trans_AUTIBSP(DisasContext *s, arg_AUTIBSP *a)
return true;
}
-static void gen_clrex(DisasContext *s, uint32_t insn)
+static bool trans_CLREX(DisasContext *s, arg_CLREX *a)
{
tcg_gen_movi_i64(cpu_exclusive_addr, -1);
+ return true;
}
-/* CLREX, DSB, DMB, ISB */
-static void handle_sync(DisasContext *s, uint32_t insn,
- unsigned int op1, unsigned int op2, unsigned int crm)
+static bool trans_DSB_DMB(DisasContext *s, arg_DSB_DMB *a)
{
+ /* We handle DSB and DMB the same way */
TCGBar bar;
- if (op1 != 3) {
- unallocated_encoding(s);
- return;
+ switch (a->types) {
+ case 1: /* MBReqTypes_Reads */
+ bar = TCG_BAR_SC | TCG_MO_LD_LD | TCG_MO_LD_ST;
+ break;
+ case 2: /* MBReqTypes_Writes */
+ bar = TCG_BAR_SC | TCG_MO_ST_ST;
+ break;
+ default: /* MBReqTypes_All */
+ bar = TCG_BAR_SC | TCG_MO_ALL;
+ break;
}
+ tcg_gen_mb(bar);
+ return true;
+}
- switch (op2) {
- case 2: /* CLREX */
- gen_clrex(s, insn);
- return;
- case 4: /* DSB */
- case 5: /* DMB */
- switch (crm & 3) {
- case 1: /* MBReqTypes_Reads */
- bar = TCG_BAR_SC | TCG_MO_LD_LD | TCG_MO_LD_ST;
- break;
- case 2: /* MBReqTypes_Writes */
- bar = TCG_BAR_SC | TCG_MO_ST_ST;
- break;
- default: /* MBReqTypes_All */
- bar = TCG_BAR_SC | TCG_MO_ALL;
- break;
- }
- tcg_gen_mb(bar);
- return;
- case 6: /* ISB */
- /* We need to break the TB after this insn to execute
- * a self-modified code correctly and also to take
- * any pending interrupts immediately.
- */
- reset_btype(s);
- gen_goto_tb(s, 0, 4);
- return;
+static bool trans_ISB(DisasContext *s, arg_ISB *a)
+{
+ /*
+ * We need to break the TB after this insn to execute
+ * self-modifying code correctly and also to take
+ * any pending interrupts immediately.
+ */
+ reset_btype(s);
+ gen_goto_tb(s, 0, 4);
+ return true;
+}
- case 7: /* SB */
- if (crm != 0 || !dc_isar_feature(aa64_sb, s)) {
- goto do_unallocated;
- }
- /*
- * TODO: There is no speculation barrier opcode for TCG;
- * MB and end the TB instead.
- */
- tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
- gen_goto_tb(s, 0, 4);
- return;
-
- default:
- do_unallocated:
- unallocated_encoding(s);
- return;
+static bool trans_SB(DisasContext *s, arg_SB *a)
+{
+ if (!dc_isar_feature(aa64_sb, s)) {
+ return false;
}
+ /*
+ * TODO: There is no speculation barrier opcode for TCG;
+ * MB and end the TB instead.
+ */
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
+ gen_goto_tb(s, 0, 4);
+ return true;
}
static void gen_xaflag(void)
@@ -2336,9 +2325,6 @@ static void disas_system(DisasContext *s, uint32_t insn)
return;
}
switch (crn) {
- case 3: /* CLREX, DSB, DMB, ISB */
- handle_sync(s, insn, op1, op2, crm);
- break;
case 4: /* MSR (immediate) */
handle_msr_i(s, insn, op1, op2, crm);
break;
--
2.34.1
* Re: [PATCH v2 06/23] target/arm: Convert barrier insns to decodetree
From: Philippe Mathieu-Daudé @ 2023-06-12 9:37 UTC
To: Peter Maydell, qemu-arm, qemu-devel
On 11/6/23 18:00, Peter Maydell wrote:
> Convert the insns in the "Barriers" instruction class to
> decodetree: CLREX, DSB, DMB, ISB and SB.
>
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Message-id: 20230602155223.2040685-4-peter.maydell@linaro.org
> ---
> target/arm/tcg/a64.decode | 7 +++
> target/arm/tcg/translate-a64.c | 92 ++++++++++++++--------------------
> 2 files changed, 46 insertions(+), 53 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* [PATCH v2 07/23] target/arm: Convert CFINV, XAFLAG and AXFLAG to decodetree
From: Peter Maydell @ 2023-06-11 16:00 UTC
To: qemu-arm, qemu-devel
Convert the CFINV, XAFLAG and AXFLAG insns to decodetree.
The old decoder handles these in handle_msr_i(), but
the architecture defines them as separate instructions
from MSR (immediate).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-5-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 6 ++++
target/arm/tcg/translate-a64.c | 53 +++++++++++++++++-----------------
2 files changed, 32 insertions(+), 27 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index b3608d38dc9..fd23fc3e0ff 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -188,3 +188,9 @@ CLREX 1101 0101 0000 0011 0011 ---- 010 11111
DSB_DMB 1101 0101 0000 0011 0011 domain:2 types:2 10- 11111
ISB 1101 0101 0000 0011 0011 ---- 110 11111
SB 1101 0101 0000 0011 0011 0000 111 11111
+
+# PSTATE
+
+CFINV 1101 0101 0000 0 000 0100 0000 000 11111
+XAFLAG 1101 0101 0000 0 000 0100 0000 001 11111
+AXFLAG 1101 0101 0000 0 000 0100 0000 010 11111
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 088dfd8b1fd..c1b02b96183 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -1864,9 +1864,24 @@ static bool trans_SB(DisasContext *s, arg_SB *a)
return true;
}
-static void gen_xaflag(void)
+static bool trans_CFINV(DisasContext *s, arg_CFINV *a)
{
- TCGv_i32 z = tcg_temp_new_i32();
+ if (!dc_isar_feature(aa64_condm_4, s)) {
+ return false;
+ }
+ tcg_gen_xori_i32(cpu_CF, cpu_CF, 1);
+ return true;
+}
+
+static bool trans_XAFLAG(DisasContext *s, arg_XAFLAG *a)
+{
+ TCGv_i32 z;
+
+ if (!dc_isar_feature(aa64_condm_5, s)) {
+ return false;
+ }
+
+ z = tcg_temp_new_i32();
tcg_gen_setcondi_i32(TCG_COND_EQ, z, cpu_ZF, 0);
@@ -1890,10 +1905,16 @@ static void gen_xaflag(void)
/* C | Z */
tcg_gen_or_i32(cpu_CF, cpu_CF, z);
+
+ return true;
}
-static void gen_axflag(void)
+static bool trans_AXFLAG(DisasContext *s, arg_AXFLAG *a)
{
+ if (!dc_isar_feature(aa64_condm_5, s)) {
+ return false;
+ }
+
tcg_gen_sari_i32(cpu_VF, cpu_VF, 31); /* V ? -1 : 0 */
tcg_gen_andc_i32(cpu_CF, cpu_CF, cpu_VF); /* C & !V */
@@ -1902,6 +1923,8 @@ static void gen_axflag(void)
tcg_gen_movi_i32(cpu_NF, 0);
tcg_gen_movi_i32(cpu_VF, 0);
+
+ return true;
}
/* MSR (immediate) - move immediate to processor state field */
@@ -1914,30 +1937,6 @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
s->base.is_jmp = DISAS_TOO_MANY;
switch (op) {
- case 0x00: /* CFINV */
- if (crm != 0 || !dc_isar_feature(aa64_condm_4, s)) {
- goto do_unallocated;
- }
- tcg_gen_xori_i32(cpu_CF, cpu_CF, 1);
- s->base.is_jmp = DISAS_NEXT;
- break;
-
- case 0x01: /* XAFlag */
- if (crm != 0 || !dc_isar_feature(aa64_condm_5, s)) {
- goto do_unallocated;
- }
- gen_xaflag();
- s->base.is_jmp = DISAS_NEXT;
- break;
-
- case 0x02: /* AXFlag */
- if (crm != 0 || !dc_isar_feature(aa64_condm_5, s)) {
- goto do_unallocated;
- }
- gen_axflag();
- s->base.is_jmp = DISAS_NEXT;
- break;
-
case 0x03: /* UAO */
if (!dc_isar_feature(aa64_uao, s) || s->current_el == 0) {
goto do_unallocated;
--
2.34.1
* [PATCH v2 08/23] target/arm: Convert MSR (immediate) to decodetree
From: Peter Maydell @ 2023-06-11 16:00 UTC
To: qemu-arm, qemu-devel
Convert the MSR (immediate) insn to decodetree. Our implementation
has basically no commonality between the different destinations,
so we decode the destination register in a64.decode.
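For reviewers unfamiliar with decodetree formats: every pattern
tagged @msr_i shares one argument struct, with imm extracted from
the CRm field (insn bits [11:8]). Schematically (a sketch, not the
generated code):

    typedef struct {
        int imm;   /* CRm field, insn bits [11:8] */
    } arg_i;

so each trans_MSR_i_*() receives a->imm ready-made; MSR_i_SVCR is
the one exception, declaring its own mask:2 and imm:1 fields inline.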
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-6-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 13 ++
target/arm/tcg/translate-a64.c | 251 ++++++++++++++++-----------------
2 files changed, 136 insertions(+), 128 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index fd23fc3e0ff..4f94a08907b 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -194,3 +194,16 @@ SB 1101 0101 0000 0011 0011 0000 111 11111
CFINV 1101 0101 0000 0 000 0100 0000 000 11111
XAFLAG 1101 0101 0000 0 000 0100 0000 001 11111
AXFLAG 1101 0101 0000 0 000 0100 0000 010 11111
+
+# These are architecturally all "MSR (immediate)"; we decode the destination
+# register too because there is no commonality in our implementation.
+@msr_i .... .... .... . ... .... imm:4 ... .....
+MSR_i_UAO 1101 0101 0000 0 000 0100 .... 011 11111 @msr_i
+MSR_i_PAN 1101 0101 0000 0 000 0100 .... 100 11111 @msr_i
+MSR_i_SPSEL 1101 0101 0000 0 000 0100 .... 101 11111 @msr_i
+MSR_i_SBSS 1101 0101 0000 0 011 0100 .... 001 11111 @msr_i
+MSR_i_DIT 1101 0101 0000 0 011 0100 .... 010 11111 @msr_i
+MSR_i_TCO 1101 0101 0000 0 011 0100 .... 100 11111 @msr_i
+MSR_i_DAIFSET 1101 0101 0000 0 011 0100 .... 110 11111 @msr_i
+MSR_i_DAIFCLEAR 1101 0101 0000 0 011 0100 .... 111 11111 @msr_i
+MSR_i_SVCR 1101 0101 0000 0 011 0100 0 mask:2 imm:1 011 11111
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index c1b02b96183..8c57b48d81f 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -1927,124 +1927,130 @@ static bool trans_AXFLAG(DisasContext *s, arg_AXFLAG *a)
return true;
}
-/* MSR (immediate) - move immediate to processor state field */
-static void handle_msr_i(DisasContext *s, uint32_t insn,
- unsigned int op1, unsigned int op2, unsigned int crm)
+static bool trans_MSR_i_UAO(DisasContext *s, arg_i *a)
{
- int op = op1 << 3 | op2;
-
- /* End the TB by default, chaining is ok. */
- s->base.is_jmp = DISAS_TOO_MANY;
-
- switch (op) {
- case 0x03: /* UAO */
- if (!dc_isar_feature(aa64_uao, s) || s->current_el == 0) {
- goto do_unallocated;
- }
- if (crm & 1) {
- set_pstate_bits(PSTATE_UAO);
- } else {
- clear_pstate_bits(PSTATE_UAO);
- }
- gen_rebuild_hflags(s);
- break;
-
- case 0x04: /* PAN */
- if (!dc_isar_feature(aa64_pan, s) || s->current_el == 0) {
- goto do_unallocated;
- }
- if (crm & 1) {
- set_pstate_bits(PSTATE_PAN);
- } else {
- clear_pstate_bits(PSTATE_PAN);
- }
- gen_rebuild_hflags(s);
- break;
-
- case 0x05: /* SPSel */
- if (s->current_el == 0) {
- goto do_unallocated;
- }
- gen_helper_msr_i_spsel(cpu_env, tcg_constant_i32(crm & PSTATE_SP));
- break;
-
- case 0x19: /* SSBS */
- if (!dc_isar_feature(aa64_ssbs, s)) {
- goto do_unallocated;
- }
- if (crm & 1) {
- set_pstate_bits(PSTATE_SSBS);
- } else {
- clear_pstate_bits(PSTATE_SSBS);
- }
- /* Don't need to rebuild hflags since SSBS is a nop */
- break;
-
- case 0x1a: /* DIT */
- if (!dc_isar_feature(aa64_dit, s)) {
- goto do_unallocated;
- }
- if (crm & 1) {
- set_pstate_bits(PSTATE_DIT);
- } else {
- clear_pstate_bits(PSTATE_DIT);
- }
- /* There's no need to rebuild hflags because DIT is a nop */
- break;
-
- case 0x1e: /* DAIFSet */
- gen_helper_msr_i_daifset(cpu_env, tcg_constant_i32(crm));
- break;
-
- case 0x1f: /* DAIFClear */
- gen_helper_msr_i_daifclear(cpu_env, tcg_constant_i32(crm));
- /* For DAIFClear, exit the cpu loop to re-evaluate pending IRQs. */
- s->base.is_jmp = DISAS_UPDATE_EXIT;
- break;
-
- case 0x1c: /* TCO */
- if (dc_isar_feature(aa64_mte, s)) {
- /* Full MTE is enabled -- set the TCO bit as directed. */
- if (crm & 1) {
- set_pstate_bits(PSTATE_TCO);
- } else {
- clear_pstate_bits(PSTATE_TCO);
- }
- gen_rebuild_hflags(s);
- /* Many factors, including TCO, go into MTE_ACTIVE. */
- s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
- } else if (dc_isar_feature(aa64_mte_insn_reg, s)) {
- /* Only "instructions accessible at EL0" -- PSTATE.TCO is WI. */
- s->base.is_jmp = DISAS_NEXT;
- } else {
- goto do_unallocated;
- }
- break;
-
- case 0x1b: /* SVCR* */
- if (!dc_isar_feature(aa64_sme, s) || crm < 2 || crm > 7) {
- goto do_unallocated;
- }
- if (sme_access_check(s)) {
- int old = s->pstate_sm | (s->pstate_za << 1);
- int new = (crm & 1) * 3;
- int msk = (crm >> 1) & 3;
-
- if ((old ^ new) & msk) {
- /* At least one bit changes. */
- gen_helper_set_svcr(cpu_env, tcg_constant_i32(new),
- tcg_constant_i32(msk));
- } else {
- s->base.is_jmp = DISAS_NEXT;
- }
- }
- break;
-
- default:
- do_unallocated:
- unallocated_encoding(s);
- return;
+ if (!dc_isar_feature(aa64_uao, s) || s->current_el == 0) {
+ return false;
}
+ if (a->imm & 1) {
+ set_pstate_bits(PSTATE_UAO);
+ } else {
+ clear_pstate_bits(PSTATE_UAO);
+ }
+ gen_rebuild_hflags(s);
+ s->base.is_jmp = DISAS_TOO_MANY;
+ return true;
+}
+
+static bool trans_MSR_i_PAN(DisasContext *s, arg_i *a)
+{
+ if (!dc_isar_feature(aa64_pan, s) || s->current_el == 0) {
+ return false;
+ }
+ if (a->imm & 1) {
+ set_pstate_bits(PSTATE_PAN);
+ } else {
+ clear_pstate_bits(PSTATE_PAN);
+ }
+ gen_rebuild_hflags(s);
+ s->base.is_jmp = DISAS_TOO_MANY;
+ return true;
+}
+
+static bool trans_MSR_i_SPSEL(DisasContext *s, arg_i *a)
+{
+ if (s->current_el == 0) {
+ return false;
+ }
+ gen_helper_msr_i_spsel(cpu_env, tcg_constant_i32(a->imm & PSTATE_SP));
+ s->base.is_jmp = DISAS_TOO_MANY;
+ return true;
+}
+
+static bool trans_MSR_i_SBSS(DisasContext *s, arg_i *a)
+{
+ if (!dc_isar_feature(aa64_ssbs, s)) {
+ return false;
+ }
+ if (a->imm & 1) {
+ set_pstate_bits(PSTATE_SSBS);
+ } else {
+ clear_pstate_bits(PSTATE_SSBS);
+ }
+ /* Don't need to rebuild hflags since SSBS is a nop */
+ s->base.is_jmp = DISAS_TOO_MANY;
+ return true;
+}
+
+static bool trans_MSR_i_DIT(DisasContext *s, arg_i *a)
+{
+ if (!dc_isar_feature(aa64_dit, s)) {
+ return false;
+ }
+ if (a->imm & 1) {
+ set_pstate_bits(PSTATE_DIT);
+ } else {
+ clear_pstate_bits(PSTATE_DIT);
+ }
+ /* There's no need to rebuild hflags because DIT is a nop */
+ s->base.is_jmp = DISAS_TOO_MANY;
+ return true;
+}
+
+static bool trans_MSR_i_TCO(DisasContext *s, arg_i *a)
+{
+ if (dc_isar_feature(aa64_mte, s)) {
+ /* Full MTE is enabled -- set the TCO bit as directed. */
+ if (a->imm & 1) {
+ set_pstate_bits(PSTATE_TCO);
+ } else {
+ clear_pstate_bits(PSTATE_TCO);
+ }
+ gen_rebuild_hflags(s);
+ /* Many factors, including TCO, go into MTE_ACTIVE. */
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
+ return true;
+ } else if (dc_isar_feature(aa64_mte_insn_reg, s)) {
+ /* Only "instructions accessible at EL0" -- PSTATE.TCO is WI. */
+ return true;
+ } else {
+ /* Insn not present */
+ return false;
+ }
+}
+
+static bool trans_MSR_i_DAIFSET(DisasContext *s, arg_i *a)
+{
+ gen_helper_msr_i_daifset(cpu_env, tcg_constant_i32(a->imm));
+ s->base.is_jmp = DISAS_TOO_MANY;
+ return true;
+}
+
+static bool trans_MSR_i_DAIFCLEAR(DisasContext *s, arg_i *a)
+{
+ gen_helper_msr_i_daifclear(cpu_env, tcg_constant_i32(a->imm));
+ /* Exit the cpu loop to re-evaluate pending IRQs. */
+ s->base.is_jmp = DISAS_UPDATE_EXIT;
+ return true;
+}
+
+static bool trans_MSR_i_SVCR(DisasContext *s, arg_MSR_i_SVCR *a)
+{
+ if (!dc_isar_feature(aa64_sme, s) || a->mask == 0) {
+ return false;
+ }
+ if (sme_access_check(s)) {
+ int old = s->pstate_sm | (s->pstate_za << 1);
+ int new = a->imm * 3;
+
+ if ((old ^ new) & a->mask) {
+ /* At least one bit changes. */
+ gen_helper_set_svcr(cpu_env, tcg_constant_i32(new),
+ tcg_constant_i32(a->mask));
+ s->base.is_jmp = DISAS_TOO_MANY;
+ }
+ }
+ return true;
}
static void gen_get_nzcv(TCGv_i64 tcg_rt)
@@ -2319,18 +2325,7 @@ static void disas_system(DisasContext *s, uint32_t insn)
rt = extract32(insn, 0, 5);
if (op0 == 0) {
- if (l || rt != 31) {
- unallocated_encoding(s);
- return;
- }
- switch (crn) {
- case 4: /* MSR (immediate) */
- handle_msr_i(s, insn, op1, op2, crm);
- break;
- default:
- unallocated_encoding(s);
- break;
- }
+ unallocated_encoding(s);
return;
}
handle_sys(s, insn, l, op0, op1, op2, crn, crm, rt);
--
2.34.1
* [PATCH v2 09/23] target/arm: Convert MSR (reg), MRS, SYS, SYSL to decodetree
From: Peter Maydell @ 2023-06-11 16:00 UTC
To: qemu-arm, qemu-devel
Convert MSR (reg), MRS, SYS, SYSL to decodetree. For QEMU these are
all essentially the same instruction (system register access).
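For reference, the encoding split that lets one SYS pattern cover
all four mnemonics (a summary of the decode lines below, using the
standard A64 field names):

    /*
     *   SYS               -> op0 == 1, l == 0
     *   SYSL              -> op0 == 1, l == 1
     *   MSR <sysreg>, Xt  -> op0 == 2 or 3, l == 0
     *   MRS Xt, <sysreg>  -> op0 == 2 or 3, l == 1
     *
     * trans_SYS() forwards all fields unchanged to handle_sys().
     */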
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-7-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 8 ++++++++
target/arm/tcg/translate-a64.c | 32 +++++---------------------------
2 files changed, 13 insertions(+), 27 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 4f94a08907b..c49215cca8d 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -207,3 +207,11 @@ MSR_i_TCO 1101 0101 0000 0 011 0100 .... 100 11111 @msr_i
MSR_i_DAIFSET 1101 0101 0000 0 011 0100 .... 110 11111 @msr_i
MSR_i_DAIFCLEAR 1101 0101 0000 0 011 0100 .... 111 11111 @msr_i
MSR_i_SVCR 1101 0101 0000 0 011 0100 0 mask:2 imm:1 011 11111
+
+# MRS, MSR (register), SYS, SYSL. These are all essentially the
+# same instruction as far as QEMU is concerned.
+# NB: op0 is bits [20:19], but op0=0b00 is other insns, so we have
+# to hand-decode it.
+SYS 1101 0101 00 l:1 01 op1:3 crn:4 crm:4 op2:3 rt:5 op0=1
+SYS 1101 0101 00 l:1 10 op1:3 crn:4 crm:4 op2:3 rt:5 op0=2
+SYS 1101 0101 00 l:1 11 op1:3 crn:4 crm:4 op2:3 rt:5 op0=3
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 8c57b48d81f..74a389da4a7 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -2122,7 +2122,7 @@ static void gen_sysreg_undef(DisasContext *s, bool isread,
* These are all essentially the same insn in 'read' and 'write'
* versions, with varying op0 fields.
*/
-static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
+static void handle_sys(DisasContext *s, bool isread,
unsigned int op0, unsigned int op1, unsigned int op2,
unsigned int crn, unsigned int crm, unsigned int rt)
{
@@ -2307,28 +2307,10 @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
}
}
-/* System
- * 31 22 21 20 19 18 16 15 12 11 8 7 5 4 0
- * +---------------------+---+-----+-----+-------+-------+-----+------+
- * | 1 1 0 1 0 1 0 1 0 0 | L | op0 | op1 | CRn | CRm | op2 | Rt |
- * +---------------------+---+-----+-----+-------+-------+-----+------+
- */
-static void disas_system(DisasContext *s, uint32_t insn)
+static bool trans_SYS(DisasContext *s, arg_SYS *a)
{
- unsigned int l, op0, op1, crn, crm, op2, rt;
- l = extract32(insn, 21, 1);
- op0 = extract32(insn, 19, 2);
- op1 = extract32(insn, 16, 3);
- crn = extract32(insn, 12, 4);
- crm = extract32(insn, 8, 4);
- op2 = extract32(insn, 5, 3);
- rt = extract32(insn, 0, 5);
-
- if (op0 == 0) {
- unallocated_encoding(s);
- return;
- }
- handle_sys(s, insn, l, op0, op1, op2, crn, crm, rt);
+ handle_sys(s, a->l, a->op0, a->op1, a->op2, a->crn, a->crm, a->rt);
+ return true;
}
/* Exception generation
@@ -2435,11 +2417,7 @@ static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
switch (extract32(insn, 25, 7)) {
case 0x6a: /* Exception generation / System */
if (insn & (1 << 24)) {
- if (extract32(insn, 22, 2) == 0) {
- disas_system(s, insn);
- } else {
- unallocated_encoding(s);
- }
+ unallocated_encoding(s);
} else {
disas_exc(s, insn);
}
--
2.34.1
* Re: [PATCH v2 09/23] target/arm: Convert MSR (reg), MRS, SYS, SYSL to decodetree
From: Philippe Mathieu-Daudé @ 2023-06-12 10:37 UTC
On 11/6/23 18:00, Peter Maydell wrote:
> Convert MSR (reg), MRS, SYS, SYSL to decodetree. For QEMU these are
> all essentially the same instruction (system register access).
>
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Message-id: 20230602155223.2040685-7-peter.maydell@linaro.org
> ---
> target/arm/tcg/a64.decode | 8 ++++++++
> target/arm/tcg/translate-a64.c | 32 +++++---------------------------
> 2 files changed, 13 insertions(+), 27 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* [PATCH v2 10/23] target/arm: Convert exception generation instructions to decodetree
From: Peter Maydell @ 2023-06-11 16:00 UTC
To: qemu-arm, qemu-devel
Convert the exception generation instructions SVC, HVC, SMC, BRK and
HLT to decodetree.
The old decoder decoded the halting-debug insns DCPS1, DCPS2 and
DCPS3 only to then make them UNDEF; as with DRPS, we don't
bother to decode them, but document the patterns in a64.decode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-8-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 15 +++
target/arm/tcg/translate-a64.c | 173 ++++++++++++---------------------
2 files changed, 79 insertions(+), 109 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index c49215cca8d..eeaca08ae83 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -215,3 +215,18 @@ MSR_i_SVCR 1101 0101 0000 0 011 0100 0 mask:2 imm:1 011 11111
SYS 1101 0101 00 l:1 01 op1:3 crn:4 crm:4 op2:3 rt:5 op0=1
SYS 1101 0101 00 l:1 10 op1:3 crn:4 crm:4 op2:3 rt:5 op0=2
SYS 1101 0101 00 l:1 11 op1:3 crn:4 crm:4 op2:3 rt:5 op0=3
+
+# Exception generation
+
+@i16 .... .... ... imm:16 ... .. &i
+SVC 1101 0100 000 ................ 000 01 @i16
+HVC 1101 0100 000 ................ 000 10 @i16
+SMC 1101 0100 000 ................ 000 11 @i16
+BRK 1101 0100 001 ................ 000 00 @i16
+HLT 1101 0100 010 ................ 000 00 @i16
+# These insns always UNDEF unless in halting debug state, which
+# we don't implement. So we don't need to decode them. The patterns
+# are listed here as documentation.
+# DCPS1 1101 0100 101 ................ 000 01 @i16
+# DCPS2 1101 0100 101 ................ 000 10 @i16
+# DCPS3 1101 0100 101 ................ 000 11 @i16
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 74a389da4a7..a2a71b4062f 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -2313,119 +2313,77 @@ static bool trans_SYS(DisasContext *s, arg_SYS *a)
return true;
}
-/* Exception generation
- *
- * 31 24 23 21 20 5 4 2 1 0
- * +-----------------+-----+------------------------+-----+----+
- * | 1 1 0 1 0 1 0 0 | opc | imm16 | op2 | LL |
- * +-----------------------+------------------------+----------+
- */
-static void disas_exc(DisasContext *s, uint32_t insn)
+static bool trans_SVC(DisasContext *s, arg_i *a)
{
- int opc = extract32(insn, 21, 3);
- int op2_ll = extract32(insn, 0, 5);
- int imm16 = extract32(insn, 5, 16);
- uint32_t syndrome;
-
- switch (opc) {
- case 0:
- /* For SVC, HVC and SMC we advance the single-step state
- * machine before taking the exception. This is architecturally
- * mandated, to ensure that single-stepping a system call
- * instruction works properly.
- */
- switch (op2_ll) {
- case 1: /* SVC */
- syndrome = syn_aa64_svc(imm16);
- if (s->fgt_svc) {
- gen_exception_insn_el(s, 0, EXCP_UDEF, syndrome, 2);
- break;
- }
- gen_ss_advance(s);
- gen_exception_insn(s, 4, EXCP_SWI, syndrome);
- break;
- case 2: /* HVC */
- if (s->current_el == 0) {
- unallocated_encoding(s);
- break;
- }
- /* The pre HVC helper handles cases when HVC gets trapped
- * as an undefined insn by runtime configuration.
- */
- gen_a64_update_pc(s, 0);
- gen_helper_pre_hvc(cpu_env);
- gen_ss_advance(s);
- gen_exception_insn_el(s, 4, EXCP_HVC, syn_aa64_hvc(imm16), 2);
- break;
- case 3: /* SMC */
- if (s->current_el == 0) {
- unallocated_encoding(s);
- break;
- }
- gen_a64_update_pc(s, 0);
- gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa64_smc(imm16)));
- gen_ss_advance(s);
- gen_exception_insn_el(s, 4, EXCP_SMC, syn_aa64_smc(imm16), 3);
- break;
- default:
- unallocated_encoding(s);
- break;
- }
- break;
- case 1:
- if (op2_ll != 0) {
- unallocated_encoding(s);
- break;
- }
- /* BRK */
- gen_exception_bkpt_insn(s, syn_aa64_bkpt(imm16));
- break;
- case 2:
- if (op2_ll != 0) {
- unallocated_encoding(s);
- break;
- }
- /* HLT. This has two purposes.
- * Architecturally, it is an external halting debug instruction.
- * Since QEMU doesn't implement external debug, we treat this as
- * it is required for halting debug disabled: it will UNDEF.
- * Secondly, "HLT 0xf000" is the A64 semihosting syscall instruction.
- */
- if (semihosting_enabled(s->current_el == 0) && imm16 == 0xf000) {
- gen_exception_internal_insn(s, EXCP_SEMIHOST);
- } else {
- unallocated_encoding(s);
- }
- break;
- case 5:
- if (op2_ll < 1 || op2_ll > 3) {
- unallocated_encoding(s);
- break;
- }
- /* DCPS1, DCPS2, DCPS3 */
- unallocated_encoding(s);
- break;
- default:
- unallocated_encoding(s);
- break;
+ /*
+ * For SVC, HVC and SMC we advance the single-step state
+ * machine before taking the exception. This is architecturally
+ * mandated, to ensure that single-stepping a system call
+ * instruction works properly.
+ */
+ uint32_t syndrome = syn_aa64_svc(a->imm);
+ if (s->fgt_svc) {
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syndrome, 2);
+ return true;
}
+ gen_ss_advance(s);
+ gen_exception_insn(s, 4, EXCP_SWI, syndrome);
+ return true;
}
-/* Branches, exception generating and system instructions */
-static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
+static bool trans_HVC(DisasContext *s, arg_i *a)
{
- switch (extract32(insn, 25, 7)) {
- case 0x6a: /* Exception generation / System */
- if (insn & (1 << 24)) {
- unallocated_encoding(s);
- } else {
- disas_exc(s, insn);
- }
- break;
- default:
+ if (s->current_el == 0) {
unallocated_encoding(s);
- break;
+ return true;
}
+ /*
+ * The pre HVC helper handles cases when HVC gets trapped
+ * as an undefined insn by runtime configuration.
+ */
+ gen_a64_update_pc(s, 0);
+ gen_helper_pre_hvc(cpu_env);
+ /* Architecture requires ss advance before we do the actual work */
+ gen_ss_advance(s);
+ gen_exception_insn_el(s, 4, EXCP_HVC, syn_aa64_hvc(a->imm), 2);
+ return true;
+}
+
+static bool trans_SMC(DisasContext *s, arg_i *a)
+{
+ if (s->current_el == 0) {
+ unallocated_encoding(s);
+ return true;
+ }
+ gen_a64_update_pc(s, 0);
+ gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa64_smc(a->imm)));
+ /* Architecture requires ss advance before we do the actual work */
+ gen_ss_advance(s);
+ gen_exception_insn_el(s, 4, EXCP_SMC, syn_aa64_smc(a->imm), 3);
+ return true;
+}
+
+static bool trans_BRK(DisasContext *s, arg_i *a)
+{
+ gen_exception_bkpt_insn(s, syn_aa64_bkpt(a->imm));
+ return true;
+}
+
+static bool trans_HLT(DisasContext *s, arg_i *a)
+{
+ /*
+ * HLT. This has two purposes.
+ * Architecturally, it is an external halting debug instruction.
+ * Since QEMU doesn't implement external debug, we treat this
+ * as required when halting debug is disabled: it will UNDEF.
+ * Secondly, "HLT 0xf000" is the A64 semihosting syscall instruction.
+ */
+ if (semihosting_enabled(s->current_el == 0) && a->imm == 0xf000) {
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
+ } else {
+ unallocated_encoding(s);
+ }
+ return true;
}
/*
@@ -14188,9 +14146,6 @@ static bool btype_destination_ok(uint32_t insn, bool bt, int btype)
static void disas_a64_legacy(DisasContext *s, uint32_t insn)
{
switch (extract32(insn, 25, 4)) {
- case 0xa: case 0xb: /* Branch, exception generation and system insns */
- disas_b_exc_sys(s, insn);
- break;
case 0x4:
case 0x6:
case 0xc:
--
2.34.1
* [PATCH v2 11/23] target/arm: Convert load/store exclusive and ordered to decodetree
From: Peter Maydell @ 2023-06-11 16:00 UTC
To: qemu-arm, qemu-devel
Convert the instructions in the load/store exclusive (STXR,
STLXR, LDXR, LDAXR) and load/store ordered (STLR, STLLR,
LDAR, LDLAR) to decodetree.
Note that for STLR, STLLR, LDAR, LDLAR this fixes an under-decoding
in the legacy decoder where we were not checking that the RES1 bits
in the Rs and Rt2 fields were set.
The new function ldst_iss_sf() is equivalent to the existing
disas_ldst_compute_iss_sf(), but it takes the pre-decoded 'ext' field
rather than taking an undecoded two-bit opc field and extracting
'ext' from it. Once all the loads and stores have been converted
to decodetree, disas_ldst_compute_iss_sf() will be unused and
can be deleted.
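As a quick sanity check of the new helper's truth table, a
standalone re-statement (for illustration only; MO_64 == 3):

    #include <assert.h>
    #include <stdbool.h>

    /* Mirrors ldst_iss_sf() from the patch below. */
    static bool ldst_iss_sf(int size, bool sign, bool ext)
    {
        return sign ? !ext : size == 3 /* MO_64 */;
    }

    int main(void)
    {
        assert(ldst_iss_sf(3, false, false));  /* LDR  Xt  -> SF=1 */
        assert(!ldst_iss_sf(2, false, false)); /* LDR  Wt  -> SF=0 */
        assert(ldst_iss_sf(0, true, false));   /* LDRSB Xt -> SF=1 */
        assert(!ldst_iss_sf(0, true, true));   /* LDRSB Wt -> SF=0 */
        return 0;
    }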
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-9-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 11 +++
target/arm/tcg/translate-a64.c | 154 ++++++++++++++++++++-------------
2 files changed, 103 insertions(+), 62 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index eeaca08ae83..c5894fc06d2 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -230,3 +230,14 @@ HLT 1101 0100 010 ................ 000 00 @i16
# DCPS1 1101 0100 101 ................ 000 01 @i16
# DCPS2 1101 0100 101 ................ 000 10 @i16
# DCPS3 1101 0100 101 ................ 000 11 @i16
+
+# Loads and stores
+
+&stxr rn rt rt2 rs sz lasr
+&stlr rn rt sz lasr
+@stxr sz:2 ...... ... rs:5 lasr:1 rt2:5 rn:5 rt:5 &stxr
+@stlr sz:2 ...... ... ..... lasr:1 ..... rn:5 rt:5 &stlr
+STXR .. 001000 000 ..... . ..... ..... ..... @stxr # inc STLXR
+LDXR .. 001000 010 ..... . ..... ..... ..... @stxr # inc LDAXR
+STLR .. 001000 100 11111 . 11111 ..... ..... @stlr # inc STLLR
+LDAR .. 001000 110 11111 . 11111 ..... ..... @stlr # inc LDLAR
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index a2a71b4062f..1ba2d6a75e4 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -2652,6 +2652,95 @@ static bool disas_ldst_compute_iss_sf(int size, bool is_signed, int opc)
return regsize == 64;
}
+static bool ldst_iss_sf(int size, bool sign, bool ext)
+{
+ if (sign) {
+ /*
+ * Signed loads are 64 bit results if we are not going to
+ * do a zero-extend from 32 to 64 after the load.
+ * (For a store, sign and ext are always false.)
+ */
+ return !ext;
+ } else {
+ /* Unsigned loads/stores work at the specified size */
+ return size == MO_64;
+ }
+}
+
+static bool trans_STXR(DisasContext *s, arg_stxr *a)
+{
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+ if (a->lasr) {
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
+ }
+ gen_store_exclusive(s, a->rs, a->rt, a->rt2, a->rn, a->sz, false);
+ return true;
+}
+
+static bool trans_LDXR(DisasContext *s, arg_stxr *a)
+{
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+ gen_load_exclusive(s, a->rt, a->rt2, a->rn, a->sz, false);
+ if (a->lasr) {
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
+ }
+ return true;
+}
+
+static bool trans_STLR(DisasContext *s, arg_stlr *a)
+{
+ TCGv_i64 clean_addr;
+ MemOp memop;
+ bool iss_sf = ldst_iss_sf(a->sz, false, false);
+
+ /*
+ * StoreLORelease is the same as Store-Release for QEMU, but
+ * needs the feature-test.
+ */
+ if (!a->lasr && !dc_isar_feature(aa64_lor, s)) {
+ return false;
+ }
+ /* Generate ISS for non-exclusive accesses including LASR. */
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
+ memop = check_ordered_align(s, a->rn, 0, true, a->sz);
+ clean_addr = gen_mte_check1(s, cpu_reg_sp(s, a->rn),
+ true, a->rn != 31, memop);
+ do_gpr_st(s, cpu_reg(s, a->rt), clean_addr, memop, true, a->rt,
+ iss_sf, a->lasr);
+ return true;
+}
+
+static bool trans_LDAR(DisasContext *s, arg_stlr *a)
+{
+ TCGv_i64 clean_addr;
+ MemOp memop;
+ bool iss_sf = ldst_iss_sf(a->sz, false, false);
+
+ /* LoadLOAcquire is the same as Load-Acquire for QEMU. */
+ if (!a->lasr && !dc_isar_feature(aa64_lor, s)) {
+ return false;
+ }
+ /* Generate ISS for non-exclusive accesses including LASR. */
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+ memop = check_ordered_align(s, a->rn, 0, false, a->sz);
+ clean_addr = gen_mte_check1(s, cpu_reg_sp(s, a->rn),
+ false, a->rn != 31, memop);
+ do_gpr_ld(s, cpu_reg(s, a->rt), clean_addr, memop, false, true,
+ a->rt, iss_sf, a->lasr);
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
+ return true;
+}
+
/* Load/store exclusive
*
* 31 30 29 24 23 22 21 20 16 15 14 10 9 5 4 0
@@ -2674,70 +2763,8 @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
int is_lasr = extract32(insn, 15, 1);
int o2_L_o1_o0 = extract32(insn, 21, 3) * 2 | is_lasr;
int size = extract32(insn, 30, 2);
- TCGv_i64 clean_addr;
- MemOp memop;
switch (o2_L_o1_o0) {
- case 0x0: /* STXR */
- case 0x1: /* STLXR */
- if (rn == 31) {
- gen_check_sp_alignment(s);
- }
- if (is_lasr) {
- tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
- }
- gen_store_exclusive(s, rs, rt, rt2, rn, size, false);
- return;
-
- case 0x4: /* LDXR */
- case 0x5: /* LDAXR */
- if (rn == 31) {
- gen_check_sp_alignment(s);
- }
- gen_load_exclusive(s, rt, rt2, rn, size, false);
- if (is_lasr) {
- tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
- }
- return;
-
- case 0x8: /* STLLR */
- if (!dc_isar_feature(aa64_lor, s)) {
- break;
- }
- /* StoreLORelease is the same as Store-Release for QEMU. */
- /* fall through */
- case 0x9: /* STLR */
- /* Generate ISS for non-exclusive accesses including LASR. */
- if (rn == 31) {
- gen_check_sp_alignment(s);
- }
- tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
- memop = check_ordered_align(s, rn, 0, true, size);
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
- true, rn != 31, memop);
- do_gpr_st(s, cpu_reg(s, rt), clean_addr, memop, true, rt,
- disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
- return;
-
- case 0xc: /* LDLAR */
- if (!dc_isar_feature(aa64_lor, s)) {
- break;
- }
- /* LoadLOAcquire is the same as Load-Acquire for QEMU. */
- /* fall through */
- case 0xd: /* LDAR */
- /* Generate ISS for non-exclusive accesses including LASR. */
- if (rn == 31) {
- gen_check_sp_alignment(s);
- }
- memop = check_ordered_align(s, rn, 0, false, size);
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
- false, rn != 31, memop);
- do_gpr_ld(s, cpu_reg(s, rt), clean_addr, memop, false, true,
- rt, disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
- tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
- return;
-
case 0x2: case 0x3: /* CASP / STXP */
if (size & 2) { /* STXP / STLXP */
if (rn == 31) {
@@ -2787,6 +2814,9 @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
return;
}
break;
+ default:
+ /* Handled in decodetree */
+ break;
}
unallocated_encoding(s);
}
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 12/23] target/arm: Convert LDXP, STXP, CASP, CAS to decodetree
2023-06-11 16:00 [PATCH v2 00/23] target/arm: Convert exception, system, loads and stores to decodetree Peter Maydell
` (10 preceding siblings ...)
2023-06-11 16:00 ` [PATCH v2 11/23] target/arm: Convert load/store exclusive and ordered " Peter Maydell
@ 2023-06-11 16:00 ` Peter Maydell
2023-06-11 16:00 ` [PATCH v2 13/23] target/arm: Convert load reg (literal) group " Peter Maydell
` (10 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: Peter Maydell @ 2023-06-11 16:00 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Convert the load/store exclusive pair (LDXP, STXP, LDAXP, STLXP),
compare-and-swap pair (CASP, CASPA, CASPAL, CASPL), and
compare-and-swap (CAS, CASA, CASAL, CASL) instructions to decodetree.
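The decode patterns below derive the pair element size from insn bit 30
via the %imm1_30_p2 field, which relies on a decodetree !function
extract helper. A sketch of what such a helper looks like (assuming
the usual one-liner form these helpers take in translate.h):

    /* Decodetree extract helper assumed by %imm1_30_p2: pair elements
     * are 32- or 64-bit, so sz = bit30 + 2 (i.e. MO_32 or MO_64).
     */
    static int plus_2(DisasContext *s, int x)
    {
        return x + 2;
    }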
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-10-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 11 +++
target/arm/tcg/translate-a64.c | 121 ++++++++++++---------------------
2 files changed, 53 insertions(+), 79 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index c5894fc06d2..6b1079b8bdf 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -237,7 +237,18 @@ HLT 1101 0100 010 ................ 000 00 @i16
&stlr rn rt sz lasr
@stxr sz:2 ...... ... rs:5 lasr:1 rt2:5 rn:5 rt:5 &stxr
@stlr sz:2 ...... ... ..... lasr:1 ..... rn:5 rt:5 &stlr
+%imm1_30_p2 30:1 !function=plus_2
+@stxp .. ...... ... rs:5 lasr:1 rt2:5 rn:5 rt:5 &stxr sz=%imm1_30_p2
STXR .. 001000 000 ..... . ..... ..... ..... @stxr # inc STLXR
LDXR .. 001000 010 ..... . ..... ..... ..... @stxr # inc LDAXR
STLR .. 001000 100 11111 . 11111 ..... ..... @stlr # inc STLLR
LDAR .. 001000 110 11111 . 11111 ..... ..... @stlr # inc LDLAR
+
+STXP 1 . 001000 001 ..... . ..... ..... ..... @stxp # inc STLXP
+LDXP 1 . 001000 011 ..... . ..... ..... ..... @stxp # inc LDAXP
+
+# CASP, CASPA, CASPAL, CASPL (we don't decode the bits that determine
+# acquire/release semantics because QEMU's cmpxchg always has those)
+CASP 0 . 001000 0 - 1 rs:5 - 11111 rn:5 rt:5 sz=%imm1_30_p2
+# CAS, CASA, CASAL, CASL
+CAS sz:2 001000 1 - 1 rs:5 - 11111 rn:5 rt:5
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 1ba2d6a75e4..ff4338ee4df 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -2741,84 +2741,50 @@ static bool trans_LDAR(DisasContext *s, arg_stlr *a)
return true;
}
-/* Load/store exclusive
- *
- * 31 30 29 24 23 22 21 20 16 15 14 10 9 5 4 0
- * +-----+-------------+----+---+----+------+----+-------+------+------+
- * | sz | 0 0 1 0 0 0 | o2 | L | o1 | Rs | o0 | Rt2 | Rn | Rt |
- * +-----+-------------+----+---+----+------+----+-------+------+------+
- *
- * sz: 00 -> 8 bit, 01 -> 16 bit, 10 -> 32 bit, 11 -> 64 bit
- * L: 0 -> store, 1 -> load
- * o2: 0 -> exclusive, 1 -> not
- * o1: 0 -> single register, 1 -> register pair
- * o0: 1 -> load-acquire/store-release, 0 -> not
- */
-static void disas_ldst_excl(DisasContext *s, uint32_t insn)
+static bool trans_STXP(DisasContext *s, arg_stxr *a)
{
- int rt = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int rt2 = extract32(insn, 10, 5);
- int rs = extract32(insn, 16, 5);
- int is_lasr = extract32(insn, 15, 1);
- int o2_L_o1_o0 = extract32(insn, 21, 3) * 2 | is_lasr;
- int size = extract32(insn, 30, 2);
-
- switch (o2_L_o1_o0) {
- case 0x2: case 0x3: /* CASP / STXP */
- if (size & 2) { /* STXP / STLXP */
- if (rn == 31) {
- gen_check_sp_alignment(s);
- }
- if (is_lasr) {
- tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
- }
- gen_store_exclusive(s, rs, rt, rt2, rn, size, true);
- return;
- }
- if (rt2 == 31
- && ((rt | rs) & 1) == 0
- && dc_isar_feature(aa64_atomics, s)) {
- /* CASP / CASPL */
- gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
- return;
- }
- break;
-
- case 0x6: case 0x7: /* CASPA / LDXP */
- if (size & 2) { /* LDXP / LDAXP */
- if (rn == 31) {
- gen_check_sp_alignment(s);
- }
- gen_load_exclusive(s, rt, rt2, rn, size, true);
- if (is_lasr) {
- tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
- }
- return;
- }
- if (rt2 == 31
- && ((rt | rs) & 1) == 0
- && dc_isar_feature(aa64_atomics, s)) {
- /* CASPA / CASPAL */
- gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
- return;
- }
- break;
-
- case 0xa: /* CAS */
- case 0xb: /* CASL */
- case 0xe: /* CASA */
- case 0xf: /* CASAL */
- if (rt2 == 31 && dc_isar_feature(aa64_atomics, s)) {
- gen_compare_and_swap(s, rs, rt, rn, size);
- return;
- }
- break;
- default:
- /* Handled in decodetree */
- break;
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
}
- unallocated_encoding(s);
+ if (a->lasr) {
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
+ }
+ gen_store_exclusive(s, a->rs, a->rt, a->rt2, a->rn, a->sz, true);
+ return true;
+}
+
+static bool trans_LDXP(DisasContext *s, arg_stxr *a)
+{
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+ gen_load_exclusive(s, a->rt, a->rt2, a->rn, a->sz, true);
+ if (a->lasr) {
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
+ }
+ return true;
+}
+
+static bool trans_CASP(DisasContext *s, arg_CASP *a)
+{
+ if (!dc_isar_feature(aa64_atomics, s)) {
+ return false;
+ }
+ if (((a->rt | a->rs) & 1) != 0) {
+ return false;
+ }
+
+ gen_compare_and_swap_pair(s, a->rs, a->rt, a->rn, a->sz);
+ return true;
+}
+
+static bool trans_CAS(DisasContext *s, arg_CAS *a)
+{
+ if (!dc_isar_feature(aa64_atomics, s)) {
+ return false;
+ }
+ gen_compare_and_swap(s, a->rs, a->rt, a->rn, a->sz);
+ return true;
}
/*
@@ -4247,9 +4213,6 @@ static void disas_ldst_tag(DisasContext *s, uint32_t insn)
static void disas_ldst(DisasContext *s, uint32_t insn)
{
switch (extract32(insn, 24, 6)) {
- case 0x08: /* Load/store exclusive */
- disas_ldst_excl(s, insn);
- break;
case 0x18: case 0x1c: /* Load register (literal) */
disas_ld_lit(s, insn);
break;
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 13/23] target/arm: Convert load reg (literal) group to decodetree
2023-06-11 16:00 [PATCH v2 00/23] target/arm: Convert exception, system, loads and stores to decodetree Peter Maydell
` (11 preceding siblings ...)
2023-06-11 16:00 ` [PATCH v2 12/23] target/arm: Convert LDXP, STXP, CASP, CAS " Peter Maydell
@ 2023-06-11 16:00 ` Peter Maydell
2023-06-11 16:00 ` [PATCH v2 14/23] target/arm: Convert load/store-pair " Peter Maydell
` (9 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: Peter Maydell @ 2023-06-11 16:00 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Convert the "Load register (literal)" instruction class to
decodetree.
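The literal forms compute a PC-relative address from a sign-extended
19-bit word offset. A minimal sketch of that computation (the helper
name is hypothetical; this is what the decode's imm field hands to
gen_pc_plus_diff() below):

    #include <stdint.h>

    /* Sketch: byte address accessed by LDR (literal).  imm19 is the raw
     * 19-bit field from insn bits [23:5]; it is sign-extended and scaled
     * by 4 to give a byte offset from the instruction's own PC.
     */
    static uint64_t ld_lit_addr(uint64_t pc, uint32_t imm19)
    {
        int32_t simm = (int32_t)(imm19 << 13) >> 13;  /* sign-extend 19 bits */
        return pc + ((int64_t)simm << 2);             /* scale words to bytes */
    }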
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-11-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 13 ++++++
target/arm/tcg/translate-a64.c | 76 ++++++++++------------------------
2 files changed, 35 insertions(+), 54 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 6b1079b8bdf..c2c6ac0196d 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -252,3 +252,16 @@ LDXP 1 . 001000 011 ..... . ..... ..... ..... @stxp # inc LDAXP
CASP 0 . 001000 0 - 1 rs:5 - 11111 rn:5 rt:5 sz=%imm1_30_p2
# CAS, CASA, CASAL, CASL
CAS sz:2 001000 1 - 1 rs:5 - 11111 rn:5 rt:5
+
+&ldlit rt imm sz sign
+@ldlit .. ... . .. ................... rt:5 &ldlit imm=%imm19
+
+LD_lit 00 011 0 00 ................... ..... @ldlit sz=2 sign=0
+LD_lit 01 011 0 00 ................... ..... @ldlit sz=3 sign=0
+LD_lit 10 011 0 00 ................... ..... @ldlit sz=2 sign=1
+LD_lit_v 00 011 1 00 ................... ..... @ldlit sz=2 sign=0
+LD_lit_v 01 011 1 00 ................... ..... @ldlit sz=3 sign=0
+LD_lit_v 10 011 1 00 ................... ..... @ldlit sz=4 sign=0
+
+# PRFM
+NOP 11 011 0 00 ------------------- -----
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index ff4338ee4df..d1df41f2e5e 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -2787,62 +2787,33 @@ static bool trans_CAS(DisasContext *s, arg_CAS *a)
return true;
}
-/*
- * Load register (literal)
- *
- * 31 30 29 27 26 25 24 23 5 4 0
- * +-----+-------+---+-----+-------------------+-------+
- * | opc | 0 1 1 | V | 0 0 | imm19 | Rt |
- * +-----+-------+---+-----+-------------------+-------+
- *
- * V: 1 -> vector (simd/fp)
- * opc (non-vector): 00 -> 32 bit, 01 -> 64 bit,
- * 10-> 32 bit signed, 11 -> prefetch
- * opc (vector): 00 -> 32 bit, 01 -> 64 bit, 10 -> 128 bit (11 unallocated)
- */
-static void disas_ld_lit(DisasContext *s, uint32_t insn)
+static bool trans_LD_lit(DisasContext *s, arg_ldlit *a)
{
- int rt = extract32(insn, 0, 5);
- int64_t imm = sextract32(insn, 5, 19) << 2;
- bool is_vector = extract32(insn, 26, 1);
- int opc = extract32(insn, 30, 2);
- bool is_signed = false;
- int size = 2;
- TCGv_i64 tcg_rt, clean_addr;
+ bool iss_sf = ldst_iss_sf(a->sz, a->sign, false);
+ TCGv_i64 tcg_rt = cpu_reg(s, a->rt);
+ TCGv_i64 clean_addr = tcg_temp_new_i64();
+ MemOp memop = finalize_memop(s, a->sz + a->sign * MO_SIGN);
+
+ gen_pc_plus_diff(s, clean_addr, a->imm);
+ do_gpr_ld(s, tcg_rt, clean_addr, memop,
+ false, true, a->rt, iss_sf, false);
+ return true;
+}
+
+static bool trans_LD_lit_v(DisasContext *s, arg_ldlit *a)
+{
+ /* Load register (literal), vector version */
+ TCGv_i64 clean_addr;
MemOp memop;
- if (is_vector) {
- if (opc == 3) {
- unallocated_encoding(s);
- return;
- }
- size = 2 + opc;
- if (!fp_access_check(s)) {
- return;
- }
- memop = finalize_memop_asimd(s, size);
- } else {
- if (opc == 3) {
- /* PRFM (literal) : prefetch */
- return;
- }
- size = 2 + extract32(opc, 0, 1);
- is_signed = extract32(opc, 1, 1);
- memop = finalize_memop(s, size + is_signed * MO_SIGN);
+ if (!fp_access_check(s)) {
+ return true;
}
-
- tcg_rt = cpu_reg(s, rt);
-
+ memop = finalize_memop_asimd(s, a->sz);
clean_addr = tcg_temp_new_i64();
- gen_pc_plus_diff(s, clean_addr, imm);
-
- if (is_vector) {
- do_fp_ld(s, rt, clean_addr, memop);
- } else {
- /* Only unsigned 32bit loads target 32bit registers. */
- bool iss_sf = opc != 0;
- do_gpr_ld(s, tcg_rt, clean_addr, memop, false, true, rt, iss_sf, false);
- }
+ gen_pc_plus_diff(s, clean_addr, a->imm);
+ do_fp_ld(s, a->rt, clean_addr, memop);
+ return true;
}
/*
@@ -4213,9 +4184,6 @@ static void disas_ldst_tag(DisasContext *s, uint32_t insn)
static void disas_ldst(DisasContext *s, uint32_t insn)
{
switch (extract32(insn, 24, 6)) {
- case 0x18: case 0x1c: /* Load register (literal) */
- disas_ld_lit(s, insn);
- break;
case 0x28: case 0x29:
case 0x2c: case 0x2d: /* Load/store pair (all forms) */
disas_ldst_pair(s, insn);
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 14/23] target/arm: Convert load/store-pair to decodetree
2023-06-11 16:00 [PATCH v2 00/23] target/arm: Convert exception, system, loads and stores to decodetree Peter Maydell
` (12 preceding siblings ...)
2023-06-11 16:00 ` [PATCH v2 13/23] target/arm: Convert load reg (literal) group " Peter Maydell
@ 2023-06-11 16:00 ` Peter Maydell
2023-06-14 5:30 ` Richard Henderson
2023-06-11 16:00 ` [PATCH v2 15/23] target/arm: Convert ld/st reg+imm9 insns " Peter Maydell
` (8 subsequent siblings)
22 siblings, 1 reply; 31+ messages in thread
From: Peter Maydell @ 2023-06-11 16:00 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Convert the load/store register pair insns (LDP, STP,
LDNP, STNP, LDPSW, STGP) to decodetree.
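For the 32-bit GPR forms the conversion keeps the single-access pairing
scheme: both registers travel through one 64-bit memory operation,
which is what lets LSE2-style atomicity apply to the pair as a whole.
A standalone sketch of the little-endian split performed by trans_LDP()
below (illustrative only):

    #include <assert.h>
    #include <stdint.h>

    /* Sketch: how a 32-bit LDP pulls Rt and Rt2 out of a single
     * 64-bit load, little-endian data.
     */
    static void ldp32_split(uint64_t raw, uint32_t *rt, uint32_t *rt2)
    {
        *rt  = (uint32_t)raw;          /* low  half -> Rt  */
        *rt2 = (uint32_t)(raw >> 32);  /* high half -> Rt2 */
    }

    int main(void)
    {
        uint32_t rt, rt2;
        ldp32_split(0x1122334455667788ull, &rt, &rt2);
        assert(rt == 0x55667788 && rt2 == 0x11223344);
        return 0;
    }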
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230602155223.2040685-12-peter.maydell@linaro.org
---
This was reviewed in v1, but the underlying code
changed enough in the atomic-ops work that I've dropped
the R-by tag.
---
target/arm/tcg/a64.decode | 61 +++++
target/arm/tcg/translate-a64.c | 425 ++++++++++++++++-----------------
2 files changed, 271 insertions(+), 215 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index c2c6ac0196d..f5787919931 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -265,3 +265,64 @@ LD_lit_v 10 011 1 00 ................... ..... @ldlit sz=4 sign=0
# PRFM
NOP 11 011 0 00 ------------------- -----
+
+&ldstpair rt2 rt rn imm sz sign w p
+@ldstpair .. ... . ... . imm:s7 rt2:5 rn:5 rt:5 &ldstpair
+
+# STNP, LDNP: Signed offset, non-temporal hint. We don't emulate caches
+# so we ignore hints about data access patterns, and handle these like
+# plain signed offset.
+STP 00 101 0 000 0 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=0 w=0
+LDP 00 101 0 000 1 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=0 w=0
+STP 10 101 0 000 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=0
+LDP 10 101 0 000 1 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=0
+STP_v 00 101 1 000 0 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=0 w=0
+LDP_v 00 101 1 000 1 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=0 w=0
+STP_v 01 101 1 000 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=0
+LDP_v 01 101 1 000 1 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=0
+STP_v 10 101 1 000 0 ....... ..... ..... ..... @ldstpair sz=4 sign=0 p=0 w=0
+LDP_v 10 101 1 000 1 ....... ..... ..... ..... @ldstpair sz=4 sign=0 p=0 w=0
+
+# STP and LDP: post-indexed
+STP 00 101 0 001 0 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=1 w=1
+LDP 00 101 0 001 1 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=1 w=1
+LDP 01 101 0 001 1 ....... ..... ..... ..... @ldstpair sz=2 sign=1 p=1 w=1
+STP 10 101 0 001 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=1 w=1
+LDP 10 101 0 001 1 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=1 w=1
+STP_v 00 101 1 001 0 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=1 w=1
+LDP_v 00 101 1 001 1 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=1 w=1
+STP_v 01 101 1 001 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=1 w=1
+LDP_v 01 101 1 001 1 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=1 w=1
+STP_v 10 101 1 001 0 ....... ..... ..... ..... @ldstpair sz=4 sign=0 p=1 w=1
+LDP_v 10 101 1 001 1 ....... ..... ..... ..... @ldstpair sz=4 sign=0 p=1 w=1
+
+# STP and LDP: offset
+STP 00 101 0 010 0 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=0 w=0
+LDP 00 101 0 010 1 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=0 w=0
+LDP 01 101 0 010 1 ....... ..... ..... ..... @ldstpair sz=2 sign=1 p=0 w=0
+STP 10 101 0 010 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=0
+LDP 10 101 0 010 1 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=0
+STP_v 00 101 1 010 0 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=0 w=0
+LDP_v 00 101 1 010 1 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=0 w=0
+STP_v 01 101 1 010 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=0
+LDP_v 01 101 1 010 1 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=0
+STP_v 10 101 1 010 0 ....... ..... ..... ..... @ldstpair sz=4 sign=0 p=0 w=0
+LDP_v 10 101 1 010 1 ....... ..... ..... ..... @ldstpair sz=4 sign=0 p=0 w=0
+
+# STP and LDP: pre-indexed
+STP 00 101 0 011 0 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=0 w=1
+LDP 00 101 0 011 1 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=0 w=1
+LDP 01 101 0 011 1 ....... ..... ..... ..... @ldstpair sz=2 sign=1 p=0 w=1
+STP 10 101 0 011 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=1
+LDP 10 101 0 011 1 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=1
+STP_v 00 101 1 011 0 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=0 w=1
+LDP_v 00 101 1 011 1 ....... ..... ..... ..... @ldstpair sz=2 sign=0 p=0 w=1
+STP_v 01 101 1 011 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=1
+LDP_v 01 101 1 011 1 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=1
+STP_v 10 101 1 011 0 ....... ..... ..... ..... @ldstpair sz=4 sign=0 p=0 w=1
+LDP_v 10 101 1 011 1 ....... ..... ..... ..... @ldstpair sz=4 sign=0 p=0 w=1
+
+# STGP: store tag and pair
+STGP 01 101 0 001 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=1 w=1
+STGP 01 101 0 010 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=0
+STGP 01 101 0 011 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=1
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index d1df41f2e5e..8b8c9939013 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -2816,229 +2816,228 @@ static bool trans_LD_lit_v(DisasContext *s, arg_ldlit *a)
return true;
}
-/*
- * LDNP (Load Pair - non-temporal hint)
- * LDP (Load Pair - non vector)
- * LDPSW (Load Pair Signed Word - non vector)
- * STNP (Store Pair - non-temporal hint)
- * STP (Store Pair - non vector)
- * LDNP (Load Pair of SIMD&FP - non-temporal hint)
- * LDP (Load Pair of SIMD&FP)
- * STNP (Store Pair of SIMD&FP - non-temporal hint)
- * STP (Store Pair of SIMD&FP)
- *
- * 31 30 29 27 26 25 24 23 22 21 15 14 10 9 5 4 0
- * +-----+-------+---+---+-------+---+-----------------------------+
- * | opc | 1 0 1 | V | 0 | index | L | imm7 | Rt2 | Rn | Rt |
- * +-----+-------+---+---+-------+---+-------+-------+------+------+
- *
- * opc: LDP/STP/LDNP/STNP 00 -> 32 bit, 10 -> 64 bit
- * LDPSW/STGP 01
- * LDP/STP/LDNP/STNP (SIMD) 00 -> 32 bit, 01 -> 64 bit, 10 -> 128 bit
- * V: 0 -> GPR, 1 -> Vector
- * idx: 00 -> signed offset with non-temporal hint, 01 -> post-index,
- * 10 -> signed offset, 11 -> pre-index
- * L: 0 -> Store 1 -> Load
- *
- * Rt, Rt2 = GPR or SIMD registers to be stored
- * Rn = general purpose register containing address
- * imm7 = signed offset (multiple of 4 or 8 depending on size)
- */
-static void disas_ldst_pair(DisasContext *s, uint32_t insn)
+static void op_addr_ldstpair_pre(DisasContext *s, arg_ldstpair *a,
+ TCGv_i64 *clean_addr, TCGv_i64 *dirty_addr,
+ uint64_t offset, bool is_store, MemOp mop)
{
- int rt = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int rt2 = extract32(insn, 10, 5);
- uint64_t offset = sextract64(insn, 15, 7);
- int index = extract32(insn, 23, 2);
- bool is_vector = extract32(insn, 26, 1);
- bool is_load = extract32(insn, 22, 1);
- int opc = extract32(insn, 30, 2);
- bool is_signed = false;
- bool postindex = false;
- bool wback = false;
- bool set_tag = false;
- TCGv_i64 clean_addr, dirty_addr;
- MemOp mop;
- int size;
-
- if (opc == 3) {
- unallocated_encoding(s);
- return;
- }
-
- if (is_vector) {
- size = 2 + opc;
- } else if (opc == 1 && !is_load) {
- /* STGP */
- if (!dc_isar_feature(aa64_mte_insn_reg, s) || index == 0) {
- unallocated_encoding(s);
- return;
- }
- size = 3;
- set_tag = true;
- } else {
- size = 2 + extract32(opc, 1, 1);
- is_signed = extract32(opc, 0, 1);
- if (!is_load && is_signed) {
- unallocated_encoding(s);
- return;
- }
- }
-
- switch (index) {
- case 1: /* post-index */
- postindex = true;
- wback = true;
- break;
- case 0:
- /* signed offset with "non-temporal" hint. Since we don't emulate
- * caches we don't care about hints to the cache system about
- * data access patterns, and handle this identically to plain
- * signed offset.
- */
- if (is_signed) {
- /* There is no non-temporal-hint version of LDPSW */
- unallocated_encoding(s);
- return;
- }
- postindex = false;
- break;
- case 2: /* signed offset, rn not updated */
- postindex = false;
- break;
- case 3: /* pre-index */
- postindex = false;
- wback = true;
- break;
- }
-
- if (is_vector && !fp_access_check(s)) {
- return;
- }
-
- offset <<= (set_tag ? LOG2_TAG_GRANULE : size);
-
- if (rn == 31) {
+ if (a->rn == 31) {
gen_check_sp_alignment(s);
}
- dirty_addr = read_cpu_reg_sp(s, rn, 1);
- if (!postindex) {
+ *dirty_addr = read_cpu_reg_sp(s, a->rn, 1);
+ if (!a->p) {
+ tcg_gen_addi_i64(*dirty_addr, *dirty_addr, offset);
+ }
+
+ *clean_addr = gen_mte_checkN(s, *dirty_addr, is_store,
+ (a->w || a->rn != 31), 2 << a->sz, mop);
+}
+
+static void op_addr_ldstpair_post(DisasContext *s, arg_ldstpair *a,
+ TCGv_i64 dirty_addr, uint64_t offset)
+{
+ if (a->w) {
+ if (a->p) {
+ tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
+ }
+ tcg_gen_mov_i64(cpu_reg_sp(s, a->rn), dirty_addr);
+ }
+}
+
+static bool trans_STP(DisasContext *s, arg_ldstpair *a)
+{
+ uint64_t offset = a->imm << a->sz;
+ TCGv_i64 clean_addr, dirty_addr, tcg_rt, tcg_rt2;
+ MemOp mop = finalize_memop(s, a->sz);
+
+ op_addr_ldstpair_pre(s, a, &clean_addr, &dirty_addr, offset, true, mop);
+ tcg_rt = cpu_reg(s, a->rt);
+ tcg_rt2 = cpu_reg(s, a->rt2);
+ /*
+ * We built mop above for the single logical access -- rebuild it
+ * now for the paired operation.
+ *
+ * With LSE2, non-sign-extending pairs are treated atomically if
+ * aligned, and if unaligned one of the pair will be completely
+ * within a 16-byte block and that element will be atomic.
+ * Otherwise each element is separately atomic.
+ * In all cases, issue one operation with the correct atomicity.
+ *
+ * This treats sign-extending loads like zero-extending loads,
+ * since that reuses the most code below.
+ */
+ mop = a->sz + 1;
+ if (s->align_mem) {
+ mop |= (a->sz == 2 ? MO_ALIGN_4 : MO_ALIGN_8);
+ }
+ mop = finalize_memop_pair(s, mop);
+ if (a->sz == 2) {
+ TCGv_i64 tmp = tcg_temp_new_i64();
+
+ if (s->be_data == MO_LE) {
+ tcg_gen_concat32_i64(tmp, tcg_rt, tcg_rt2);
+ } else {
+ tcg_gen_concat32_i64(tmp, tcg_rt2, tcg_rt);
+ }
+ tcg_gen_qemu_st_i64(tmp, clean_addr, get_mem_index(s), mop);
+ } else {
+ TCGv_i128 tmp = tcg_temp_new_i128();
+
+ if (s->be_data == MO_LE) {
+ tcg_gen_concat_i64_i128(tmp, tcg_rt, tcg_rt2);
+ } else {
+ tcg_gen_concat_i64_i128(tmp, tcg_rt2, tcg_rt);
+ }
+ tcg_gen_qemu_st_i128(tmp, clean_addr, get_mem_index(s), mop);
+ }
+ op_addr_ldstpair_post(s, a, dirty_addr, offset);
+ return true;
+}
+
+static bool trans_LDP(DisasContext *s, arg_ldstpair *a)
+{
+ uint64_t offset = a->imm << a->sz;
+ TCGv_i64 clean_addr, dirty_addr, tcg_rt, tcg_rt2;
+ MemOp mop = finalize_memop(s, a->sz);
+
+ op_addr_ldstpair_pre(s, a, &clean_addr, &dirty_addr, offset, false, mop);
+ tcg_rt = cpu_reg(s, a->rt);
+ tcg_rt2 = cpu_reg(s, a->rt2);
+
+ /*
+ * We built mop above for the single logical access -- rebuild it
+ * now for the paired operation.
+ *
+ * With LSE2, non-sign-extending pairs are treated atomically if
+ * aligned, and if unaligned one of the pair will be completely
+ * within a 16-byte block and that element will be atomic.
+ * Otherwise each element is separately atomic.
+ * In all cases, issue one operation with the correct atomicity.
+ *
+ * This treats sign-extending loads like zero-extending loads,
+ * since that reuses the most code below.
+ */
+ mop = a->sz + 1;
+ if (s->align_mem) {
+ mop |= (a->sz == 2 ? MO_ALIGN_4 : MO_ALIGN_8);
+ }
+ mop = finalize_memop_pair(s, mop);
+ if (a->sz == 2) {
+ int o2 = s->be_data == MO_LE ? 32 : 0;
+ int o1 = o2 ^ 32;
+
+ tcg_gen_qemu_ld_i64(tcg_rt, clean_addr, get_mem_index(s), mop);
+ if (a->sign) {
+ tcg_gen_sextract_i64(tcg_rt2, tcg_rt, o2, 32);
+ tcg_gen_sextract_i64(tcg_rt, tcg_rt, o1, 32);
+ } else {
+ tcg_gen_extract_i64(tcg_rt2, tcg_rt, o2, 32);
+ tcg_gen_extract_i64(tcg_rt, tcg_rt, o1, 32);
+ }
+ } else {
+ TCGv_i128 tmp = tcg_temp_new_i128();
+
+ tcg_gen_qemu_ld_i128(tmp, clean_addr, get_mem_index(s), mop);
+ if (s->be_data == MO_LE) {
+ tcg_gen_extr_i128_i64(tcg_rt, tcg_rt2, tmp);
+ } else {
+ tcg_gen_extr_i128_i64(tcg_rt2, tcg_rt, tmp);
+ }
+ }
+ op_addr_ldstpair_post(s, a, dirty_addr, offset);
+ return true;
+}
+
+static bool trans_STP_v(DisasContext *s, arg_ldstpair *a)
+{
+ uint64_t offset = a->imm << a->sz;
+ TCGv_i64 clean_addr, dirty_addr;
+ MemOp mop;
+
+ if (!fp_access_check(s)) {
+ return true;
+ }
+
+ /* LSE2 does not merge FP pairs; leave these as separate operations. */
+ mop = finalize_memop_asimd(s, a->sz);
+ op_addr_ldstpair_pre(s, a, &clean_addr, &dirty_addr, offset, true, mop);
+ do_fp_st(s, a->rt, clean_addr, mop);
+ tcg_gen_addi_i64(clean_addr, clean_addr, 1 << a->sz);
+ do_fp_st(s, a->rt2, clean_addr, mop);
+ op_addr_ldstpair_post(s, a, dirty_addr, offset);
+ return true;
+}
+
+static bool trans_LDP_v(DisasContext *s, arg_ldstpair *a)
+{
+ uint64_t offset = a->imm << a->sz;
+ TCGv_i64 clean_addr, dirty_addr;
+ MemOp mop;
+
+ if (!fp_access_check(s)) {
+ return true;
+ }
+
+ /* LSE2 does not merge FP pairs; leave these as separate operations. */
+ mop = finalize_memop_asimd(s, a->sz);
+ op_addr_ldstpair_pre(s, a, &clean_addr, &dirty_addr, offset, false, mop);
+ do_fp_ld(s, a->rt, clean_addr, mop);
+ tcg_gen_addi_i64(clean_addr, clean_addr, 1 << a->sz);
+ do_fp_ld(s, a->rt2, clean_addr, mop);
+ op_addr_ldstpair_post(s, a, dirty_addr, offset);
+ return true;
+}
+
+static bool trans_STGP(DisasContext *s, arg_ldstpair *a)
+{
+ TCGv_i64 clean_addr, dirty_addr, tcg_rt, tcg_rt2;
+ uint64_t offset = a->imm << LOG2_TAG_GRANULE;
+ MemOp mop;
+ TCGv_i128 tmp;
+
+ if (!dc_isar_feature(aa64_mte_insn_reg, s)) {
+ return false;
+ }
+
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+
+ dirty_addr = read_cpu_reg_sp(s, a->rn, 1);
+ if (!a->p) {
tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
}
- if (set_tag) {
- if (!s->ata) {
- /*
- * TODO: We could rely on the stores below, at least for
- * system mode, if we arrange to add MO_ALIGN_16.
- */
- gen_helper_stg_stub(cpu_env, dirty_addr);
- } else if (tb_cflags(s->base.tb) & CF_PARALLEL) {
- gen_helper_stg_parallel(cpu_env, dirty_addr, dirty_addr);
- } else {
- gen_helper_stg(cpu_env, dirty_addr, dirty_addr);
- }
- }
-
- if (is_vector) {
- mop = finalize_memop_asimd(s, size);
- } else {
- mop = finalize_memop(s, size);
- }
- clean_addr = gen_mte_checkN(s, dirty_addr, !is_load,
- (wback || rn != 31) && !set_tag,
- 2 << size, mop);
-
- if (is_vector) {
- /* LSE2 does not merge FP pairs; leave these as separate operations. */
- if (is_load) {
- do_fp_ld(s, rt, clean_addr, mop);
- } else {
- do_fp_st(s, rt, clean_addr, mop);
- }
- tcg_gen_addi_i64(clean_addr, clean_addr, 1 << size);
- if (is_load) {
- do_fp_ld(s, rt2, clean_addr, mop);
- } else {
- do_fp_st(s, rt2, clean_addr, mop);
- }
- } else {
- TCGv_i64 tcg_rt = cpu_reg(s, rt);
- TCGv_i64 tcg_rt2 = cpu_reg(s, rt2);
-
+ if (!s->ata) {
/*
- * We built mop above for the single logical access -- rebuild it
- * now for the paired operation.
- *
- * With LSE2, non-sign-extending pairs are treated atomically if
- * aligned, and if unaligned one of the pair will be completely
- * within a 16-byte block and that element will be atomic.
- * Otherwise each element is separately atomic.
- * In all cases, issue one operation with the correct atomicity.
- *
- * This treats sign-extending loads like zero-extending loads,
- * since that reuses the most code below.
+ * TODO: We could rely on the stores below, at least for
+ * system mode, if we arrange to add MO_ALIGN_16.
*/
- mop = size + 1;
- if (s->align_mem) {
- mop |= (size == 2 ? MO_ALIGN_4 : MO_ALIGN_8);
- }
- mop = finalize_memop_pair(s, mop);
-
- if (is_load) {
- if (size == 2) {
- int o2 = s->be_data == MO_LE ? 32 : 0;
- int o1 = o2 ^ 32;
-
- tcg_gen_qemu_ld_i64(tcg_rt, clean_addr, get_mem_index(s), mop);
- if (is_signed) {
- tcg_gen_sextract_i64(tcg_rt2, tcg_rt, o2, 32);
- tcg_gen_sextract_i64(tcg_rt, tcg_rt, o1, 32);
- } else {
- tcg_gen_extract_i64(tcg_rt2, tcg_rt, o2, 32);
- tcg_gen_extract_i64(tcg_rt, tcg_rt, o1, 32);
- }
- } else {
- TCGv_i128 tmp = tcg_temp_new_i128();
-
- tcg_gen_qemu_ld_i128(tmp, clean_addr, get_mem_index(s), mop);
- if (s->be_data == MO_LE) {
- tcg_gen_extr_i128_i64(tcg_rt, tcg_rt2, tmp);
- } else {
- tcg_gen_extr_i128_i64(tcg_rt2, tcg_rt, tmp);
- }
- }
- } else {
- if (size == 2) {
- TCGv_i64 tmp = tcg_temp_new_i64();
-
- if (s->be_data == MO_LE) {
- tcg_gen_concat32_i64(tmp, tcg_rt, tcg_rt2);
- } else {
- tcg_gen_concat32_i64(tmp, tcg_rt2, tcg_rt);
- }
- tcg_gen_qemu_st_i64(tmp, clean_addr, get_mem_index(s), mop);
- } else {
- TCGv_i128 tmp = tcg_temp_new_i128();
-
- if (s->be_data == MO_LE) {
- tcg_gen_concat_i64_i128(tmp, tcg_rt, tcg_rt2);
- } else {
- tcg_gen_concat_i64_i128(tmp, tcg_rt2, tcg_rt);
- }
- tcg_gen_qemu_st_i128(tmp, clean_addr, get_mem_index(s), mop);
- }
- }
+ gen_helper_stg_stub(cpu_env, dirty_addr);
+ } else if (tb_cflags(s->base.tb) & CF_PARALLEL) {
+ gen_helper_stg_parallel(cpu_env, dirty_addr, dirty_addr);
+ } else {
+ gen_helper_stg(cpu_env, dirty_addr, dirty_addr);
}
- if (wback) {
- if (postindex) {
- tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
- }
- tcg_gen_mov_i64(cpu_reg_sp(s, rn), dirty_addr);
+ mop = finalize_memop(s, a->sz);
+ clean_addr = gen_mte_checkN(s, dirty_addr, true, false, 2 << a->sz, mop);
+
+ tcg_rt = cpu_reg(s, a->rt);
+ tcg_rt2 = cpu_reg(s, a->rt2);
+
+ assert(a->sz == 3);
+
+ tmp = tcg_temp_new_i128();
+ if (s->be_data == MO_LE) {
+ tcg_gen_concat_i64_i128(tmp, tcg_rt, tcg_rt2);
+ } else {
+ tcg_gen_concat_i64_i128(tmp, tcg_rt2, tcg_rt);
}
+ tcg_gen_qemu_st_i128(tmp, clean_addr, get_mem_index(s), mop);
+
+ op_addr_ldstpair_post(s, a, dirty_addr, offset);
+ return true;
}
/*
@@ -4184,10 +4183,6 @@ static void disas_ldst_tag(DisasContext *s, uint32_t insn)
static void disas_ldst(DisasContext *s, uint32_t insn)
{
switch (extract32(insn, 24, 6)) {
- case 0x28: case 0x29:
- case 0x2c: case 0x2d: /* Load/store pair (all forms) */
- disas_ldst_pair(s, insn);
- break;
case 0x38: case 0x39:
case 0x3c: case 0x3d: /* Load/store register (all forms) */
disas_ldst_reg(s, insn);
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread
* Re: [PATCH v2 14/23] target/arm: Convert load/store-pair to decodetree
2023-06-11 16:00 ` [PATCH v2 14/23] target/arm: Convert load/store-pair " Peter Maydell
@ 2023-06-14 5:30 ` Richard Henderson
0 siblings, 0 replies; 31+ messages in thread
From: Richard Henderson @ 2023-06-14 5:30 UTC (permalink / raw)
To: Peter Maydell, qemu-arm, qemu-devel
On 6/11/23 18:00, Peter Maydell wrote:
> Convert the load/store register pair insns (LDP, STP,
> LDNP, STNP, LDPSW, STGP) to decodetree.
>
> Signed-off-by: Peter Maydell<peter.maydell@linaro.org>
> Message-id:20230602155223.2040685-12-peter.maydell@linaro.org
> ---
> This was reviewed in v1, but the underlying code
> changed enough in the atomic-ops work that I've dropped
> the R-by tag.
> ---
> target/arm/tcg/a64.decode | 61 +++++
> target/arm/tcg/translate-a64.c | 425 ++++++++++++++++-----------------
> 2 files changed, 271 insertions(+), 215 deletions(-)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> +static bool trans_STP(DisasContext *s, arg_ldstpair *a)
> +{
> + uint64_t offset = a->imm << a->sz;
> + TCGv_i64 clean_addr, dirty_addr, tcg_rt, tcg_rt2;
> + MemOp mop = finalize_memop(s, a->sz);
> +
> + op_addr_ldstpair_pre(s, a, &clean_addr, &dirty_addr, offset, true, mop);
> + tcg_rt = cpu_reg(s, a->rt);
> + tcg_rt2 = cpu_reg(s, a->rt2);
> + /*
> + * We built mop above for the single logical access -- rebuild it
> + * now for the paired operation.
> + *
> + * With LSE2, non-sign-extending pairs are treated atomically if
> + * aligned, and if unaligned one of the pair will be completely
> + * within a 16-byte block and that element will be atomic.
> + * Otherwise each element is separately atomic.
> + * In all cases, issue one operation with the correct atomicity.
> + *
> + * This treats sign-extending loads like zero-extending loads,
> + * since that reuses the most code below.
> + */
Could lose the bit about loads within the store function.
r~
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH v2 15/23] target/arm: Convert ld/st reg+imm9 insns to decodetree
2023-06-11 16:00 [PATCH v2 00/23] target/arm: Convert exception, system, loads and stores to decodetree Peter Maydell
` (13 preceding siblings ...)
2023-06-11 16:00 ` [PATCH v2 14/23] target/arm: Convert load/store-pair " Peter Maydell
@ 2023-06-11 16:00 ` Peter Maydell
2023-06-11 16:00 ` [PATCH v2 16/23] target/arm: Convert LDR/STR with 12-bit immediate " Peter Maydell
` (7 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: Peter Maydell @ 2023-06-11 16:00 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Convert the load and store instructions which use a 9-bit
immediate offset to decodetree.
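The p/w flags in the @ldst_imm_* formats below encode the three
addressing modes. A standalone model of their semantics (a sketch,
assuming p means "post-index" and w means "writeback", as in the trans
functions):

    #include <assert.h>
    #include <stdint.h>

    /* Sketch of the reg+imm9 addressing modes:
     *   p=0 w=0: unscaled/unprivileged -- access base+imm, no writeback
     *   p=0 w=1: pre-index  -- access base+imm, then Rn = base+imm
     *   p=1 w=1: post-index -- access base,     then Rn = base+imm
     */
    static uint64_t ldst_imm_addr(uint64_t *rn, int64_t imm, int p, int w)
    {
        uint64_t addr = p ? *rn : *rn + imm;  /* post-index uses old base */
        if (w) {
            *rn += imm;
        }
        return addr;
    }

    int main(void)
    {
        uint64_t rn = 0x1000;
        assert(ldst_imm_addr(&rn, 8, 1, 1) == 0x1000 && rn == 0x1008); /* post */
        rn = 0x1000;
        assert(ldst_imm_addr(&rn, 8, 0, 1) == 0x1008 && rn == 0x1008); /* pre  */
        return 0;
    }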
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-13-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 69 +++++++++++
target/arm/tcg/translate-a64.c | 206 ++++++++++++++-------------------
2 files changed, 153 insertions(+), 122 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index f5787919931..d55c09684a7 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -326,3 +326,72 @@ LDP_v 10 101 1 011 1 ....... ..... ..... ..... @ldstpair sz=4 sign=0 p
STGP 01 101 0 001 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=1 w=1
STGP 01 101 0 010 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=0
STGP 01 101 0 011 0 ....... ..... ..... ..... @ldstpair sz=3 sign=0 p=0 w=1
+
+# Load/store register (unscaled immediate)
+&ldst_imm rt rn imm sz sign w p unpriv ext
+@ldst_imm .. ... . .. .. . imm:s9 .. rn:5 rt:5 &ldst_imm unpriv=0 p=0 w=0
+@ldst_imm_pre .. ... . .. .. . imm:s9 .. rn:5 rt:5 &ldst_imm unpriv=0 p=0 w=1
+@ldst_imm_post .. ... . .. .. . imm:s9 .. rn:5 rt:5 &ldst_imm unpriv=0 p=1 w=1
+@ldst_imm_user .. ... . .. .. . imm:s9 .. rn:5 rt:5 &ldst_imm unpriv=1 p=0 w=0
+
+STR_i sz:2 111 0 00 00 0 ......... 00 ..... ..... @ldst_imm sign=0 ext=0
+LDR_i 00 111 0 00 01 0 ......... 00 ..... ..... @ldst_imm sign=0 ext=1 sz=0
+LDR_i 01 111 0 00 01 0 ......... 00 ..... ..... @ldst_imm sign=0 ext=1 sz=1
+LDR_i 10 111 0 00 01 0 ......... 00 ..... ..... @ldst_imm sign=0 ext=1 sz=2
+LDR_i 11 111 0 00 01 0 ......... 00 ..... ..... @ldst_imm sign=0 ext=0 sz=3
+LDR_i 00 111 0 00 10 0 ......... 00 ..... ..... @ldst_imm sign=1 ext=0 sz=0
+LDR_i 01 111 0 00 10 0 ......... 00 ..... ..... @ldst_imm sign=1 ext=0 sz=1
+LDR_i 10 111 0 00 10 0 ......... 00 ..... ..... @ldst_imm sign=1 ext=0 sz=2
+LDR_i 00 111 0 00 11 0 ......... 00 ..... ..... @ldst_imm sign=1 ext=1 sz=0
+LDR_i 01 111 0 00 11 0 ......... 00 ..... ..... @ldst_imm sign=1 ext=1 sz=1
+
+STR_i sz:2 111 0 00 00 0 ......... 01 ..... ..... @ldst_imm_post sign=0 ext=0
+LDR_i 00 111 0 00 01 0 ......... 01 ..... ..... @ldst_imm_post sign=0 ext=1 sz=0
+LDR_i 01 111 0 00 01 0 ......... 01 ..... ..... @ldst_imm_post sign=0 ext=1 sz=1
+LDR_i 10 111 0 00 01 0 ......... 01 ..... ..... @ldst_imm_post sign=0 ext=1 sz=2
+LDR_i 11 111 0 00 01 0 ......... 01 ..... ..... @ldst_imm_post sign=0 ext=0 sz=3
+LDR_i 00 111 0 00 10 0 ......... 01 ..... ..... @ldst_imm_post sign=1 ext=0 sz=0
+LDR_i 01 111 0 00 10 0 ......... 01 ..... ..... @ldst_imm_post sign=1 ext=0 sz=1
+LDR_i 10 111 0 00 10 0 ......... 01 ..... ..... @ldst_imm_post sign=1 ext=0 sz=2
+LDR_i 00 111 0 00 11 0 ......... 01 ..... ..... @ldst_imm_post sign=1 ext=1 sz=0
+LDR_i 01 111 0 00 11 0 ......... 01 ..... ..... @ldst_imm_post sign=1 ext=1 sz=1
+
+STR_i sz:2 111 0 00 00 0 ......... 10 ..... ..... @ldst_imm_user sign=0 ext=0
+LDR_i 00 111 0 00 01 0 ......... 10 ..... ..... @ldst_imm_user sign=0 ext=1 sz=0
+LDR_i 01 111 0 00 01 0 ......... 10 ..... ..... @ldst_imm_user sign=0 ext=1 sz=1
+LDR_i 10 111 0 00 01 0 ......... 10 ..... ..... @ldst_imm_user sign=0 ext=1 sz=2
+LDR_i 11 111 0 00 01 0 ......... 10 ..... ..... @ldst_imm_user sign=0 ext=0 sz=3
+LDR_i 00 111 0 00 10 0 ......... 10 ..... ..... @ldst_imm_user sign=1 ext=0 sz=0
+LDR_i 01 111 0 00 10 0 ......... 10 ..... ..... @ldst_imm_user sign=1 ext=0 sz=1
+LDR_i 10 111 0 00 10 0 ......... 10 ..... ..... @ldst_imm_user sign=1 ext=0 sz=2
+LDR_i 00 111 0 00 11 0 ......... 10 ..... ..... @ldst_imm_user sign=1 ext=1 sz=0
+LDR_i 01 111 0 00 11 0 ......... 10 ..... ..... @ldst_imm_user sign=1 ext=1 sz=1
+
+STR_i sz:2 111 0 00 00 0 ......... 11 ..... ..... @ldst_imm_pre sign=0 ext=0
+LDR_i 00 111 0 00 01 0 ......... 11 ..... ..... @ldst_imm_pre sign=0 ext=1 sz=0
+LDR_i 01 111 0 00 01 0 ......... 11 ..... ..... @ldst_imm_pre sign=0 ext=1 sz=1
+LDR_i 10 111 0 00 01 0 ......... 11 ..... ..... @ldst_imm_pre sign=0 ext=1 sz=2
+LDR_i 11 111 0 00 01 0 ......... 11 ..... ..... @ldst_imm_pre sign=0 ext=0 sz=3
+LDR_i 00 111 0 00 10 0 ......... 11 ..... ..... @ldst_imm_pre sign=1 ext=0 sz=0
+LDR_i 01 111 0 00 10 0 ......... 11 ..... ..... @ldst_imm_pre sign=1 ext=0 sz=1
+LDR_i 10 111 0 00 10 0 ......... 11 ..... ..... @ldst_imm_pre sign=1 ext=0 sz=2
+LDR_i 00 111 0 00 11 0 ......... 11 ..... ..... @ldst_imm_pre sign=1 ext=1 sz=0
+LDR_i 01 111 0 00 11 0 ......... 11 ..... ..... @ldst_imm_pre sign=1 ext=1 sz=1
+
+# PRFM : prefetch memory: a no-op for QEMU
+NOP 11 111 0 00 10 0 --------- 00 ----- -----
+
+STR_v_i sz:2 111 1 00 00 0 ......... 00 ..... ..... @ldst_imm sign=0 ext=0
+STR_v_i 00 111 1 00 10 0 ......... 00 ..... ..... @ldst_imm sign=0 ext=0 sz=4
+LDR_v_i sz:2 111 1 00 01 0 ......... 00 ..... ..... @ldst_imm sign=0 ext=0
+LDR_v_i 00 111 1 00 11 0 ......... 00 ..... ..... @ldst_imm sign=0 ext=0 sz=4
+
+STR_v_i sz:2 111 1 00 00 0 ......... 01 ..... ..... @ldst_imm_post sign=0 ext=0
+STR_v_i 00 111 1 00 10 0 ......... 01 ..... ..... @ldst_imm_post sign=0 ext=0 sz=4
+LDR_v_i sz:2 111 1 00 01 0 ......... 01 ..... ..... @ldst_imm_post sign=0 ext=0
+LDR_v_i 00 111 1 00 11 0 ......... 01 ..... ..... @ldst_imm_post sign=0 ext=0 sz=4
+
+STR_v_i sz:2 111 1 00 00 0 ......... 11 ..... ..... @ldst_imm_pre sign=0 ext=0
+STR_v_i 00 111 1 00 10 0 ......... 11 ..... ..... @ldst_imm_pre sign=0 ext=0 sz=4
+LDR_v_i sz:2 111 1 00 01 0 ......... 11 ..... ..... @ldst_imm_pre sign=0 ext=0
+LDR_v_i 00 111 1 00 11 0 ......... 11 ..... ..... @ldst_imm_pre sign=0 ext=0 sz=4
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 8b8c9939013..4cabdadde41 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -3040,134 +3040,101 @@ static bool trans_STGP(DisasContext *s, arg_ldstpair *a)
return true;
}
-/*
- * Load/store (immediate post-indexed)
- * Load/store (immediate pre-indexed)
- * Load/store (unscaled immediate)
- *
- * 31 30 29 27 26 25 24 23 22 21 20 12 11 10 9 5 4 0
- * +----+-------+---+-----+-----+---+--------+-----+------+------+
- * |size| 1 1 1 | V | 0 0 | opc | 0 | imm9 | idx | Rn | Rt |
- * +----+-------+---+-----+-----+---+--------+-----+------+------+
- *
- * idx = 01 -> post-indexed, 11 pre-indexed, 00 unscaled imm. (no writeback)
- *        10 -> unprivileged
- * V = 0 -> non-vector
- * size: 00 -> 8 bit, 01 -> 16 bit, 10 -> 32 bit, 11 -> 64bit
- * opc: 00 -> store, 01 -> loadu, 10 -> loads 64, 11 -> loads 32
- */
-static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
- int opc,
- int size,
- int rt,
- bool is_vector)
+static void op_addr_ldst_imm_pre(DisasContext *s, arg_ldst_imm *a,
+ TCGv_i64 *clean_addr, TCGv_i64 *dirty_addr,
+ uint64_t offset, bool is_store, MemOp mop)
{
- int rn = extract32(insn, 5, 5);
- int imm9 = sextract32(insn, 12, 9);
- int idx = extract32(insn, 10, 2);
- bool is_signed = false;
- bool is_store = false;
- bool is_extended = false;
- bool is_unpriv = (idx == 2);
- bool iss_valid;
- bool post_index;
- bool writeback;
int memidx;
- MemOp memop;
- TCGv_i64 clean_addr, dirty_addr;
- if (is_vector) {
- size |= (opc & 2) << 1;
- if (size > 4 || is_unpriv) {
- unallocated_encoding(s);
- return;
- }
- is_store = ((opc & 1) == 0);
- if (!fp_access_check(s)) {
- return;
- }
- memop = finalize_memop_asimd(s, size);
- } else {
- if (size == 3 && opc == 2) {
- /* PRFM - prefetch */
- if (idx != 0) {
- unallocated_encoding(s);
- return;
- }
- return;
- }
- if (opc == 3 && size > 1) {
- unallocated_encoding(s);
- return;
- }
- is_store = (opc == 0);
- is_signed = !is_store && extract32(opc, 1, 1);
- is_extended = (size < 3) && extract32(opc, 0, 1);
- memop = finalize_memop(s, size + is_signed * MO_SIGN);
- }
-
- switch (idx) {
- case 0:
- case 2:
- post_index = false;
- writeback = false;
- break;
- case 1:
- post_index = true;
- writeback = true;
- break;
- case 3:
- post_index = false;
- writeback = true;
- break;
- default:
- g_assert_not_reached();
- }
-
- iss_valid = !is_vector && !writeback;
-
- if (rn == 31) {
+ if (a->rn == 31) {
gen_check_sp_alignment(s);
}
- dirty_addr = read_cpu_reg_sp(s, rn, 1);
- if (!post_index) {
- tcg_gen_addi_i64(dirty_addr, dirty_addr, imm9);
+ *dirty_addr = read_cpu_reg_sp(s, a->rn, 1);
+ if (!a->p) {
+ tcg_gen_addi_i64(*dirty_addr, *dirty_addr, offset);
}
+ memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
+ *clean_addr = gen_mte_check1_mmuidx(s, *dirty_addr, is_store,
+ a->w || a->rn != 31,
+ mop, a->unpriv, memidx);
+}
- memidx = is_unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
-
- clean_addr = gen_mte_check1_mmuidx(s, dirty_addr, is_store,
- writeback || rn != 31,
- memop, is_unpriv, memidx);
-
- if (is_vector) {
- if (is_store) {
- do_fp_st(s, rt, clean_addr, memop);
- } else {
- do_fp_ld(s, rt, clean_addr, memop);
- }
- } else {
- TCGv_i64 tcg_rt = cpu_reg(s, rt);
- bool iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
-
- if (is_store) {
- do_gpr_st_memidx(s, tcg_rt, clean_addr, memop, memidx,
- iss_valid, rt, iss_sf, false);
- } else {
- do_gpr_ld_memidx(s, tcg_rt, clean_addr, memop,
- is_extended, memidx,
- iss_valid, rt, iss_sf, false);
+static void op_addr_ldst_imm_post(DisasContext *s, arg_ldst_imm *a,
+ TCGv_i64 dirty_addr, uint64_t offset)
+{
+ if (a->w) {
+ if (a->p) {
+ tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
}
+ tcg_gen_mov_i64(cpu_reg_sp(s, a->rn), dirty_addr);
}
+}
- if (writeback) {
- TCGv_i64 tcg_rn = cpu_reg_sp(s, rn);
- if (post_index) {
- tcg_gen_addi_i64(dirty_addr, dirty_addr, imm9);
- }
- tcg_gen_mov_i64(tcg_rn, dirty_addr);
+static bool trans_STR_i(DisasContext *s, arg_ldst_imm *a)
+{
+ bool iss_sf, iss_valid = !a->w;
+ TCGv_i64 clean_addr, dirty_addr, tcg_rt;
+ int memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
+ MemOp mop = finalize_memop(s, a->sz + a->sign * MO_SIGN);
+
+ op_addr_ldst_imm_pre(s, a, &clean_addr, &dirty_addr, a->imm, true, mop);
+
+ tcg_rt = cpu_reg(s, a->rt);
+ iss_sf = ldst_iss_sf(a->sz, a->sign, a->ext);
+
+ do_gpr_st_memidx(s, tcg_rt, clean_addr, mop, memidx,
+ iss_valid, a->rt, iss_sf, false);
+ op_addr_ldst_imm_post(s, a, dirty_addr, a->imm);
+ return true;
+}
+
+static bool trans_LDR_i(DisasContext *s, arg_ldst_imm *a)
+{
+ bool iss_sf, iss_valid = !a->w;
+ TCGv_i64 clean_addr, dirty_addr, tcg_rt;
+ int memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
+ MemOp mop = finalize_memop(s, a->sz + a->sign * MO_SIGN);
+
+ op_addr_ldst_imm_pre(s, a, &clean_addr, &dirty_addr, a->imm, false, mop);
+
+ tcg_rt = cpu_reg(s, a->rt);
+ iss_sf = ldst_iss_sf(a->sz, a->sign, a->ext);
+
+ do_gpr_ld_memidx(s, tcg_rt, clean_addr, mop,
+ a->ext, memidx, iss_valid, a->rt, iss_sf, false);
+ op_addr_ldst_imm_post(s, a, dirty_addr, a->imm);
+ return true;
+}
+
+static bool trans_STR_v_i(DisasContext *s, arg_ldst_imm *a)
+{
+ TCGv_i64 clean_addr, dirty_addr;
+ MemOp mop;
+
+ if (!fp_access_check(s)) {
+ return true;
}
+ mop = finalize_memop_asimd(s, a->sz);
+ op_addr_ldst_imm_pre(s, a, &clean_addr, &dirty_addr, a->imm, true, mop);
+ do_fp_st(s, a->rt, clean_addr, mop);
+ op_addr_ldst_imm_post(s, a, dirty_addr, a->imm);
+ return true;
+}
+
+static bool trans_LDR_v_i(DisasContext *s, arg_ldst_imm *a)
+{
+ TCGv_i64 clean_addr, dirty_addr;
+ MemOp mop;
+
+ if (!fp_access_check(s)) {
+ return true;
+ }
+ mop = finalize_memop_asimd(s, a->sz);
+ op_addr_ldst_imm_pre(s, a, &clean_addr, &dirty_addr, a->imm, false, mop);
+ do_fp_ld(s, a->rt, clean_addr, mop);
+ op_addr_ldst_imm_post(s, a, dirty_addr, a->imm);
+ return true;
}
/*
@@ -3640,12 +3607,7 @@ static void disas_ldst_reg(DisasContext *s, uint32_t insn)
switch (extract32(insn, 24, 2)) {
case 0:
if (extract32(insn, 21, 1) == 0) {
- /* Load/store register (unscaled immediate)
- * Load/store immediate pre/post-indexed
- * Load/store register unprivileged
- */
- disas_ldst_reg_imm9(s, insn, opc, size, rt, is_vector);
- return;
+ break;
}
switch (extract32(insn, 10, 2)) {
case 0:
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 16/23] target/arm: Convert LDR/STR with 12-bit immediate to decodetree
2023-06-11 16:00 [PATCH v2 00/23] target/arm: Convert exception, system, loads and stores to decodetree Peter Maydell
` (14 preceding siblings ...)
2023-06-11 16:00 ` [PATCH v2 15/23] target/arm: Convert ld/st reg+imm9 insns " Peter Maydell
@ 2023-06-11 16:00 ` Peter Maydell
2023-06-11 16:00 ` [PATCH v2 17/23] target/arm: Convert LDR/STR reg+reg " Peter Maydell
` (6 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: Peter Maydell @ 2023-06-11 16:00 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Convert the LDR and STR instructions which use a 12-bit immediate
offset to decodetree. We can reuse the existing LDR and STR
trans functions for these.
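The only new decode machinery is the %uimm_scaled field, whose
!function helper shifts the 12-bit immediate left by the element size.
A worked example (a sketch mirroring the uimm_scaled() helper added
below, with a hypothetical name to keep it standalone):

    #include <assert.h>

    /* Mirror of uimm_scaled(): the input packs imm12 in bits [14:3]
     * and the element size in bits [2:0]; the result is the byte
     * offset imm12 << size.
     */
    static int uimm_scaled_sketch(int x)
    {
        unsigned imm   = x >> 3;
        unsigned scale = x & 7;
        return imm << scale;
    }

    int main(void)
    {
        /* 64-bit LDR (sz=3) with imm12=4: byte offset is 4 << 3 = 32. */
        assert(uimm_scaled_sketch((4 << 3) | 3) == 32);
        return 0;
    }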
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-14-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 25 ++++++++
target/arm/tcg/translate-a64.c | 104 +++++----------------------------
2 files changed, 41 insertions(+), 88 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index d55c09684a7..d6b31c10838 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -395,3 +395,28 @@ STR_v_i sz:2 111 1 00 00 0 ......... 11 ..... ..... @ldst_imm_pre sign=0
STR_v_i 00 111 1 00 10 0 ......... 11 ..... ..... @ldst_imm_pre sign=0 ext=0 sz=4
LDR_v_i sz:2 111 1 00 01 0 ......... 11 ..... ..... @ldst_imm_pre sign=0 ext=0
LDR_v_i 00 111 1 00 11 0 ......... 11 ..... ..... @ldst_imm_pre sign=0 ext=0 sz=4
+
+# Load/store with an unsigned 12 bit immediate, which is scaled by the
+# element size. The !function helper takes sz:imm and returns the scaled immediate.
+%uimm_scaled 10:12 sz:3 !function=uimm_scaled
+
+@ldst_uimm .. ... . .. .. ............ rn:5 rt:5 &ldst_imm unpriv=0 p=0 w=0 imm=%uimm_scaled
+
+STR_i sz:2 111 0 01 00 ............ ..... ..... @ldst_uimm sign=0 ext=0
+LDR_i 00 111 0 01 01 ............ ..... ..... @ldst_uimm sign=0 ext=1 sz=0
+LDR_i 01 111 0 01 01 ............ ..... ..... @ldst_uimm sign=0 ext=1 sz=1
+LDR_i 10 111 0 01 01 ............ ..... ..... @ldst_uimm sign=0 ext=1 sz=2
+LDR_i 11 111 0 01 01 ............ ..... ..... @ldst_uimm sign=0 ext=0 sz=3
+LDR_i 00 111 0 01 10 ............ ..... ..... @ldst_uimm sign=1 ext=0 sz=0
+LDR_i 01 111 0 01 10 ............ ..... ..... @ldst_uimm sign=1 ext=0 sz=1
+LDR_i 10 111 0 01 10 ............ ..... ..... @ldst_uimm sign=1 ext=0 sz=2
+LDR_i 00 111 0 01 11 ............ ..... ..... @ldst_uimm sign=1 ext=1 sz=0
+LDR_i 01 111 0 01 11 ............ ..... ..... @ldst_uimm sign=1 ext=1 sz=1
+
+# PRFM
+NOP 11 111 0 01 10 ------------ ----- -----
+
+STR_v_i sz:2 111 1 01 00 ............ ..... ..... @ldst_uimm sign=0 ext=0
+STR_v_i 00 111 1 01 10 ............ ..... ..... @ldst_uimm sign=0 ext=0 sz=4
+LDR_v_i sz:2 111 1 01 01 ............ ..... ..... @ldst_uimm sign=0 ext=0
+LDR_v_i 00 111 1 01 11 ............ ..... ..... @ldst_uimm sign=0 ext=0 sz=4
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 4cabdadde41..e1936c7c246 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -46,6 +46,22 @@ enum a64_shift_type {
A64_SHIFT_TYPE_ROR = 3
};
+/*
+ * Helpers for extracting complex instruction fields
+ */
+
+/*
+ * For load/store with an unsigned 12 bit immediate scaled by the element
+ * size. The input has the immediate field in bits [14:3] and the element
+ * size in [2:0].
+ */
+static int uimm_scaled(DisasContext *s, int x)
+{
+ unsigned imm = x >> 3;
+ unsigned scale = extract32(x, 0, 3);
+ return imm << scale;
+}
+
/*
* Include the generated decoders.
*/
@@ -3237,91 +3253,6 @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
}
}
-/*
- * Load/store (unsigned immediate)
- *
- * 31 30 29 27 26 25 24 23 22 21 10 9 5
- * +----+-------+---+-----+-----+------------+-------+------+
- * |size| 1 1 1 | V | 0 1 | opc | imm12 | Rn | Rt |
- * +----+-------+---+-----+-----+------------+-------+------+
- *
- * For non-vector:
- * size: 00-> byte, 01 -> 16 bit, 10 -> 32bit, 11 -> 64bit
- * opc: 00 -> store, 01 -> loadu, 10 -> loads 64, 11 -> loads 32
- * For vector:
- * size is opc<1>:size<1:0> so 100 -> 128 bit; 110 and 111 unallocated
- * opc<0>: 0 -> store, 1 -> load
- * Rn: base address register (inc SP)
- * Rt: target register
- */
-static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
- int opc,
- int size,
- int rt,
- bool is_vector)
-{
- int rn = extract32(insn, 5, 5);
- unsigned int imm12 = extract32(insn, 10, 12);
- unsigned int offset;
- TCGv_i64 clean_addr, dirty_addr;
- bool is_store;
- bool is_signed = false;
- bool is_extended = false;
- MemOp memop;
-
- if (is_vector) {
- size |= (opc & 2) << 1;
- if (size > 4) {
- unallocated_encoding(s);
- return;
- }
- is_store = !extract32(opc, 0, 1);
- if (!fp_access_check(s)) {
- return;
- }
- memop = finalize_memop_asimd(s, size);
- } else {
- if (size == 3 && opc == 2) {
- /* PRFM - prefetch */
- return;
- }
- if (opc == 3 && size > 1) {
- unallocated_encoding(s);
- return;
- }
- is_store = (opc == 0);
- is_signed = !is_store && extract32(opc, 1, 1);
- is_extended = (size < 3) && extract32(opc, 0, 1);
- memop = finalize_memop(s, size + is_signed * MO_SIGN);
- }
-
- if (rn == 31) {
- gen_check_sp_alignment(s);
- }
- dirty_addr = read_cpu_reg_sp(s, rn, 1);
- offset = imm12 << size;
- tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
-
- clean_addr = gen_mte_check1(s, dirty_addr, is_store, rn != 31, memop);
-
- if (is_vector) {
- if (is_store) {
- do_fp_st(s, rt, clean_addr, memop);
- } else {
- do_fp_ld(s, rt, clean_addr, memop);
- }
- } else {
- TCGv_i64 tcg_rt = cpu_reg(s, rt);
- bool iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
- if (is_store) {
- do_gpr_st(s, tcg_rt, clean_addr, memop, true, rt, iss_sf, false);
- } else {
- do_gpr_ld(s, tcg_rt, clean_addr, memop,
- is_extended, true, rt, iss_sf, false);
- }
- }
-}
-
/* Atomic memory operations
*
* 31 30 27 26 24 22 21 16 15 12 10 5 0
@@ -3621,9 +3552,6 @@ static void disas_ldst_reg(DisasContext *s, uint32_t insn)
return;
}
break;
- case 1:
- disas_ldst_reg_unsigned_imm(s, insn, opc, size, rt, is_vector);
- return;
}
unallocated_encoding(s);
}
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 17/23] target/arm: Convert LDR/STR reg+reg to decodetree
2023-06-11 16:00 [PATCH v2 00/23] target/arm: Convert exception, system, loads and stores to decodetree Peter Maydell
` (15 preceding siblings ...)
2023-06-11 16:00 ` [PATCH v2 16/23] target/arm: Convert LDR/STR with 12-bit immediate " Peter Maydell
@ 2023-06-11 16:00 ` Peter Maydell
2023-06-11 16:00 ` [PATCH v2 18/23] target/arm: Convert atomic memory ops " Peter Maydell
` (5 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: Peter Maydell @ 2023-06-11 16:00 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Convert the LDR and STR instructions which take a register
plus register offset to decodetree.
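For these insns the offset register goes through the usual
extend-and-scale step: opt selects the extend per the architecture's
DecodeRegExtend, and S scales the result by the element size. A rough
standalone model (hypothetical helper; only opt values with bit 1 set
are allocated, matching the decoder's opt<1> check):

    #include <stdint.h>

    /* Sketch of the register-offset computation for LDR/STR (reg+reg):
     *   opt=2 UXTW, opt=3 LSL/UXTX, opt=6 SXTW, opt=7 SXTX;
     *   if S is set, the extended offset is scaled by 1 << sz.
     */
    static uint64_t reg_offset(uint64_t rm, int opt, int s, int sz)
    {
        uint64_t off;

        switch (opt) {
        case 2:  off = (uint32_t)rm;          break; /* UXTW */
        case 6:  off = (uint64_t)(int32_t)rm; break; /* SXTW */
        case 3:                                      /* LSL/UXTX */
        case 7:  off = rm;                    break; /* SXTX */
        default: return 0; /* opt<1> == 0 is unallocated (UNDEFs) */
        }
        return s ? off << sz : off;
    }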
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-15-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 22 +++++
target/arm/tcg/translate-a64.c | 173 +++++++++++++++------------------
2 files changed, 103 insertions(+), 92 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index d6b31c10838..5c086d6af6d 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -420,3 +420,25 @@ STR_v_i sz:2 111 1 01 00 ............ ..... ..... @ldst_uimm sign=0 ext=
STR_v_i 00 111 1 01 10 ............ ..... ..... @ldst_uimm sign=0 ext=0 sz=4
LDR_v_i sz:2 111 1 01 01 ............ ..... ..... @ldst_uimm sign=0 ext=0
LDR_v_i 00 111 1 01 11 ............ ..... ..... @ldst_uimm sign=0 ext=0 sz=4
+
+# Load/store with register offset
+&ldst rm rn rt sign ext sz opt s
+@ldst .. ... . .. .. . rm:5 opt:3 s:1 .. rn:5 rt:5 &ldst
+STR sz:2 111 0 00 00 1 ..... ... . 10 ..... ..... @ldst sign=0 ext=0
+LDR 00 111 0 00 01 1 ..... ... . 10 ..... ..... @ldst sign=0 ext=1 sz=0
+LDR 01 111 0 00 01 1 ..... ... . 10 ..... ..... @ldst sign=0 ext=1 sz=1
+LDR 10 111 0 00 01 1 ..... ... . 10 ..... ..... @ldst sign=0 ext=1 sz=2
+LDR 11 111 0 00 01 1 ..... ... . 10 ..... ..... @ldst sign=0 ext=0 sz=3
+LDR 00 111 0 00 10 1 ..... ... . 10 ..... ..... @ldst sign=1 ext=0 sz=0
+LDR 01 111 0 00 10 1 ..... ... . 10 ..... ..... @ldst sign=1 ext=0 sz=1
+LDR 10 111 0 00 10 1 ..... ... . 10 ..... ..... @ldst sign=1 ext=0 sz=2
+LDR 00 111 0 00 11 1 ..... ... . 10 ..... ..... @ldst sign=1 ext=1 sz=0
+LDR 01 111 0 00 11 1 ..... ... . 10 ..... ..... @ldst sign=1 ext=1 sz=1
+
+# PRFM
+NOP 11 111 0 00 10 1 ----- -1- - 10 ----- -----
+
+STR_v sz:2 111 1 00 00 1 ..... ... . 10 ..... ..... @ldst sign=0 ext=0
+STR_v 00 111 1 00 10 1 ..... ... . 10 ..... ..... @ldst sign=0 ext=0 sz=4
+LDR_v sz:2 111 1 00 01 1 ..... ... . 10 ..... ..... @ldst sign=0 ext=0
+LDR_v 00 111 1 00 11 1 ..... ... . 10 ..... ..... @ldst sign=0 ext=0 sz=4
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index e1936c7c246..3d161169411 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -3153,104 +3153,95 @@ static bool trans_LDR_v_i(DisasContext *s, arg_ldst_imm *a)
return true;
}
-/*
- * Load/store (register offset)
- *
- * 31 30 29 27 26 25 24 23 22 21 20 16 15 13 12 11 10 9 5 4 0
- * +----+-------+---+-----+-----+---+------+-----+--+-----+----+----+
- * |size| 1 1 1 | V | 0 0 | opc | 1 | Rm | opt | S| 1 0 | Rn | Rt |
- * +----+-------+---+-----+-----+---+------+-----+--+-----+----+----+
- *
- * For non-vector:
- * size: 00-> byte, 01 -> 16 bit, 10 -> 32bit, 11 -> 64bit
- * opc: 00 -> store, 01 -> loadu, 10 -> loads 64, 11 -> loads 32
- * For vector:
- * size is opc<1>:size<1:0> so 100 -> 128 bit; 110 and 111 unallocated
- * opc<0>: 0 -> store, 1 -> load
- * V: 1 -> vector/simd
- * opt: extend encoding (see DecodeRegExtend)
- * S: if S=1 then scale (essentially index by sizeof(size))
- * Rt: register to transfer into/out of
- * Rn: address register or SP for base
- * Rm: offset register or ZR for offset
- */
-static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
- int opc,
- int size,
- int rt,
- bool is_vector)
+static void op_addr_ldst_pre(DisasContext *s, arg_ldst *a,
+ TCGv_i64 *clean_addr, TCGv_i64 *dirty_addr,
+ bool is_store, MemOp memop)
{
- int rn = extract32(insn, 5, 5);
- int shift = extract32(insn, 12, 1);
- int rm = extract32(insn, 16, 5);
- int opt = extract32(insn, 13, 3);
- bool is_signed = false;
- bool is_store = false;
- bool is_extended = false;
- TCGv_i64 tcg_rm, clean_addr, dirty_addr;
- MemOp memop;
+ TCGv_i64 tcg_rm;
- if (extract32(opt, 1, 1) == 0) {
- unallocated_encoding(s);
- return;
- }
-
- if (is_vector) {
- size |= (opc & 2) << 1;
- if (size > 4) {
- unallocated_encoding(s);
- return;
- }
- is_store = !extract32(opc, 0, 1);
- if (!fp_access_check(s)) {
- return;
- }
- memop = finalize_memop_asimd(s, size);
- } else {
- if (size == 3 && opc == 2) {
- /* PRFM - prefetch */
- return;
- }
- if (opc == 3 && size > 1) {
- unallocated_encoding(s);
- return;
- }
- is_store = (opc == 0);
- is_signed = !is_store && extract32(opc, 1, 1);
- is_extended = (size < 3) && extract32(opc, 0, 1);
- memop = finalize_memop(s, size + is_signed * MO_SIGN);
- }
-
- if (rn == 31) {
+ if (a->rn == 31) {
gen_check_sp_alignment(s);
}
- dirty_addr = read_cpu_reg_sp(s, rn, 1);
+ *dirty_addr = read_cpu_reg_sp(s, a->rn, 1);
- tcg_rm = read_cpu_reg(s, rm, 1);
- ext_and_shift_reg(tcg_rm, tcg_rm, opt, shift ? size : 0);
+ tcg_rm = read_cpu_reg(s, a->rm, 1);
+ ext_and_shift_reg(tcg_rm, tcg_rm, a->opt, a->s ? a->sz : 0);
- tcg_gen_add_i64(dirty_addr, dirty_addr, tcg_rm);
+ tcg_gen_add_i64(*dirty_addr, *dirty_addr, tcg_rm);
+ *clean_addr = gen_mte_check1(s, *dirty_addr, is_store, true, memop);
+}
- clean_addr = gen_mte_check1(s, dirty_addr, is_store, true, memop);
+static bool trans_LDR(DisasContext *s, arg_ldst *a)
+{
+ TCGv_i64 clean_addr, dirty_addr, tcg_rt;
+ bool iss_sf = ldst_iss_sf(a->sz, a->sign, a->ext);
+ MemOp memop;
- if (is_vector) {
- if (is_store) {
- do_fp_st(s, rt, clean_addr, memop);
- } else {
- do_fp_ld(s, rt, clean_addr, memop);
- }
- } else {
- TCGv_i64 tcg_rt = cpu_reg(s, rt);
- bool iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
-
- if (is_store) {
- do_gpr_st(s, tcg_rt, clean_addr, memop,
- true, rt, iss_sf, false);
- } else {
- do_gpr_ld(s, tcg_rt, clean_addr, memop,
- is_extended, true, rt, iss_sf, false);
- }
+ if (extract32(a->opt, 1, 1) == 0) {
+ return false;
}
+
+ memop = finalize_memop(s, a->sz + a->sign * MO_SIGN);
+ op_addr_ldst_pre(s, a, &clean_addr, &dirty_addr, false, memop);
+ tcg_rt = cpu_reg(s, a->rt);
+ do_gpr_ld(s, tcg_rt, clean_addr, memop,
+ a->ext, true, a->rt, iss_sf, false);
+ return true;
+}
+
+static bool trans_STR(DisasContext *s, arg_ldst *a)
+{
+ TCGv_i64 clean_addr, dirty_addr, tcg_rt;
+ bool iss_sf = ldst_iss_sf(a->sz, a->sign, a->ext);
+ MemOp memop;
+
+ if (extract32(a->opt, 1, 1) == 0) {
+ return false;
+ }
+
+ memop = finalize_memop(s, a->sz);
+ op_addr_ldst_pre(s, a, &clean_addr, &dirty_addr, true, memop);
+ tcg_rt = cpu_reg(s, a->rt);
+ do_gpr_st(s, tcg_rt, clean_addr, memop, true, a->rt, iss_sf, false);
+ return true;
+}
+
+static bool trans_LDR_v(DisasContext *s, arg_ldst *a)
+{
+ TCGv_i64 clean_addr, dirty_addr;
+ MemOp memop;
+
+ if (extract32(a->opt, 1, 1) == 0) {
+ return false;
+ }
+
+ if (!fp_access_check(s)) {
+ return true;
+ }
+
+ memop = finalize_memop_asimd(s, a->sz);
+ op_addr_ldst_pre(s, a, &clean_addr, &dirty_addr, false, memop);
+ do_fp_ld(s, a->rt, clean_addr, memop);
+ return true;
+}
+
+static bool trans_STR_v(DisasContext *s, arg_ldst *a)
+{
+ TCGv_i64 clean_addr, dirty_addr;
+ MemOp memop;
+
+ if (extract32(a->opt, 1, 1) == 0) {
+ return false;
+ }
+
+ if (!fp_access_check(s)) {
+ return true;
+ }
+
+ memop = finalize_memop_asimd(s, a->sz);
+ op_addr_ldst_pre(s, a, &clean_addr, &dirty_addr, true, memop);
+ do_fp_st(s, a->rt, clean_addr, memop);
+ return true;
}
/* Atomic memory operations
@@ -3531,7 +3522,6 @@ static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
static void disas_ldst_reg(DisasContext *s, uint32_t insn)
{
int rt = extract32(insn, 0, 5);
- int opc = extract32(insn, 22, 2);
bool is_vector = extract32(insn, 26, 1);
int size = extract32(insn, 30, 2);
@@ -3545,8 +3535,7 @@ static void disas_ldst_reg(DisasContext *s, uint32_t insn)
disas_ldst_atomic(s, insn, size, rt, is_vector);
return;
case 2:
- disas_ldst_reg_roffset(s, insn, opc, size, rt, is_vector);
- return;
+ break;
default:
disas_ldst_pac(s, insn, size, rt, is_vector);
return;
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 18/23] target/arm: Convert atomic memory ops to decodetree
2023-06-11 16:00 [PATCH v2 00/23] target/arm: Convert exception, system, loads and stores to decodetree Peter Maydell
` (16 preceding siblings ...)
2023-06-11 16:00 ` [PATCH v2 17/23] target/arm: Convert LDR/STR reg+reg " Peter Maydell
@ 2023-06-11 16:00 ` Peter Maydell
2023-06-11 16:00 ` [PATCH v2 19/23] target/arm: Convert load (pointer auth) insns " Peter Maydell
` (4 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: Peter Maydell @ 2023-06-11 16:00 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Convert the insns in the atomic memory operations group to
decodetree.
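One wrinkle the shared helper absorbs through its "invert" flag:
LDCLR is a bit-clear operation, but TCG only provides a fetch-and-AND
primitive, so Rs is complemented first. A single-threaded sketch of
that data path for the byte case, assuming nothing from QEMU (the
function name is ours, and the real implementation is an atomic TCG
op, not plain C):

#include <stdint.h>

/* LDCLR model: returns the old memory value, zero-extended. */
static uint64_t ldclr_byte(uint8_t *mem, uint64_t rs)
{
    uint8_t old = *mem;

    *mem = old & (uint8_t)~rs;  /* BIC: clear the bits set in Rs */
    return old;                 /* uint8_t -> uint64_t zero-extends */
}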
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-16-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 15 ++++
target/arm/tcg/translate-a64.c | 153 ++++++++++++---------------------
2 files changed, 70 insertions(+), 98 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 5c086d6af6d..799c5ecb77a 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -442,3 +442,18 @@ STR_v sz:2 111 1 00 00 1 ..... ... . 10 ..... ..... @ldst sign=0 ext=0
STR_v 00 111 1 00 10 1 ..... ... . 10 ..... ..... @ldst sign=0 ext=0 sz=4
LDR_v sz:2 111 1 00 01 1 ..... ... . 10 ..... ..... @ldst sign=0 ext=0
LDR_v 00 111 1 00 11 1 ..... ... . 10 ..... ..... @ldst sign=0 ext=0 sz=4
+
+# Atomic memory operations
+&atomic rs rn rt a r sz
+@atomic sz:2 ... . .. a:1 r:1 . rs:5 . ... .. rn:5 rt:5 &atomic
+LDADD .. 111 0 00 . . 1 ..... 0000 00 ..... ..... @atomic
+LDCLR .. 111 0 00 . . 1 ..... 0001 00 ..... ..... @atomic
+LDEOR .. 111 0 00 . . 1 ..... 0010 00 ..... ..... @atomic
+LDSET .. 111 0 00 . . 1 ..... 0011 00 ..... ..... @atomic
+LDSMAX .. 111 0 00 . . 1 ..... 0100 00 ..... ..... @atomic
+LDSMIN .. 111 0 00 . . 1 ..... 0101 00 ..... ..... @atomic
+LDUMAX .. 111 0 00 . . 1 ..... 0110 00 ..... ..... @atomic
+LDUMIN .. 111 0 00 . . 1 ..... 0111 00 ..... ..... @atomic
+SWP .. 111 0 00 . . 1 ..... 1000 00 ..... ..... @atomic
+
+LDAPR sz:2 111 0 00 1 0 1 11111 1100 00 rn:5 rt:5
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 3d161169411..ba072e557e1 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -3244,113 +3244,32 @@ static bool trans_STR_v(DisasContext *s, arg_ldst *a)
return true;
}
-/* Atomic memory operations
- *
- * 31 30 27 26 24 22 21 16 15 12 10 5 0
- * +------+-------+---+-----+-----+---+----+----+-----+-----+----+-----+
- * | size | 1 1 1 | V | 0 0 | A R | 1 | Rs | o3 | opc | 0 0 | Rn | Rt |
- * +------+-------+---+-----+-----+--------+----+-----+-----+----+-----+
- *
- * Rt: the result register
- * Rn: base address or SP
- * Rs: the source register for the operation
- * V: vector flag (always 0 as of v8.3)
- * A: acquire flag
- * R: release flag
- */
-static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
- int size, int rt, bool is_vector)
+
+static bool do_atomic_ld(DisasContext *s, arg_atomic *a, AtomicThreeOpFn *fn,
+ int sign, bool invert)
{
- int rs = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int o3_opc = extract32(insn, 12, 4);
- bool r = extract32(insn, 22, 1);
- bool a = extract32(insn, 23, 1);
- TCGv_i64 tcg_rs, tcg_rt, clean_addr;
- AtomicThreeOpFn *fn = NULL;
- MemOp mop = size;
+ MemOp mop = a->sz | sign;
+ TCGv_i64 clean_addr, tcg_rs, tcg_rt;
- if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
- unallocated_encoding(s);
- return;
- }
- switch (o3_opc) {
- case 000: /* LDADD */
- fn = tcg_gen_atomic_fetch_add_i64;
- break;
- case 001: /* LDCLR */
- fn = tcg_gen_atomic_fetch_and_i64;
- break;
- case 002: /* LDEOR */
- fn = tcg_gen_atomic_fetch_xor_i64;
- break;
- case 003: /* LDSET */
- fn = tcg_gen_atomic_fetch_or_i64;
- break;
- case 004: /* LDSMAX */
- fn = tcg_gen_atomic_fetch_smax_i64;
- mop |= MO_SIGN;
- break;
- case 005: /* LDSMIN */
- fn = tcg_gen_atomic_fetch_smin_i64;
- mop |= MO_SIGN;
- break;
- case 006: /* LDUMAX */
- fn = tcg_gen_atomic_fetch_umax_i64;
- break;
- case 007: /* LDUMIN */
- fn = tcg_gen_atomic_fetch_umin_i64;
- break;
- case 010: /* SWP */
- fn = tcg_gen_atomic_xchg_i64;
- break;
- case 014: /* LDAPR, LDAPRH, LDAPRB */
- if (!dc_isar_feature(aa64_rcpc_8_3, s) ||
- rs != 31 || a != 1 || r != 0) {
- unallocated_encoding(s);
- return;
- }
- break;
- default:
- unallocated_encoding(s);
- return;
- }
-
- if (rn == 31) {
+ if (a->rn == 31) {
gen_check_sp_alignment(s);
}
-
- mop = check_atomic_align(s, rn, mop);
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), false, rn != 31, mop);
-
- if (o3_opc == 014) {
- /*
- * LDAPR* are a special case because they are a simple load, not a
- * fetch-and-do-something op.
- * The architectural consistency requirements here are weaker than
- * full load-acquire (we only need "load-acquire processor consistent"),
- * but we choose to implement them as full LDAQ.
- */
- do_gpr_ld(s, cpu_reg(s, rt), clean_addr, mop, false,
- true, rt, disas_ldst_compute_iss_sf(size, false, 0), true);
- tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
- return;
- }
-
- tcg_rs = read_cpu_reg(s, rs, true);
- tcg_rt = cpu_reg(s, rt);
-
- if (o3_opc == 1) { /* LDCLR */
+ mop = check_atomic_align(s, a->rn, mop);
+ clean_addr = gen_mte_check1(s, cpu_reg_sp(s, a->rn), false,
+ a->rn != 31, mop);
+ tcg_rs = read_cpu_reg(s, a->rs, true);
+ tcg_rt = cpu_reg(s, a->rt);
+ if (invert) {
tcg_gen_not_i64(tcg_rs, tcg_rs);
}
-
- /* The tcg atomic primitives are all full barriers. Therefore we
+ /*
+ * The tcg atomic primitives are all full barriers. Therefore we
* can ignore the Acquire and Release bits of this instruction.
*/
fn(tcg_rt, clean_addr, tcg_rs, get_mem_index(s), mop);
if (mop & MO_SIGN) {
- switch (size) {
+ switch (a->sz) {
case MO_8:
tcg_gen_ext8u_i64(tcg_rt, tcg_rt);
break;
@@ -3366,6 +3285,46 @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
g_assert_not_reached();
}
}
+ return true;
+}
+
+TRANS_FEAT(LDADD, aa64_atomics, do_atomic_ld, a, tcg_gen_atomic_fetch_add_i64, 0, false)
+TRANS_FEAT(LDCLR, aa64_atomics, do_atomic_ld, a, tcg_gen_atomic_fetch_and_i64, 0, true)
+TRANS_FEAT(LDEOR, aa64_atomics, do_atomic_ld, a, tcg_gen_atomic_fetch_xor_i64, 0, false)
+TRANS_FEAT(LDSET, aa64_atomics, do_atomic_ld, a, tcg_gen_atomic_fetch_or_i64, 0, false)
+TRANS_FEAT(LDSMAX, aa64_atomics, do_atomic_ld, a, tcg_gen_atomic_fetch_smax_i64, MO_SIGN, false)
+TRANS_FEAT(LDSMIN, aa64_atomics, do_atomic_ld, a, tcg_gen_atomic_fetch_smin_i64, MO_SIGN, false)
+TRANS_FEAT(LDUMAX, aa64_atomics, do_atomic_ld, a, tcg_gen_atomic_fetch_umax_i64, 0, false)
+TRANS_FEAT(LDUMIN, aa64_atomics, do_atomic_ld, a, tcg_gen_atomic_fetch_umin_i64, 0, false)
+TRANS_FEAT(SWP, aa64_atomics, do_atomic_ld, a, tcg_gen_atomic_xchg_i64, 0, false)
+
+static bool trans_LDAPR(DisasContext *s, arg_LDAPR *a)
+{
+ bool iss_sf = ldst_iss_sf(a->sz, false, false);
+ TCGv_i64 clean_addr;
+ MemOp mop;
+
+ if (!dc_isar_feature(aa64_atomics, s) ||
+ !dc_isar_feature(aa64_rcpc_8_3, s)) {
+ return false;
+ }
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+ mop = check_atomic_align(s, a->rn, a->sz);
+ clean_addr = gen_mte_check1(s, cpu_reg_sp(s, a->rn), false,
+ a->rn != 31, mop);
+ /*
+ * LDAPR* are a special case because they are a simple load, not a
+ * fetch-and-do-something op.
+ * The architectural consistency requirements here are weaker than
+ * full load-acquire (we only need "load-acquire processor consistent"),
+ * but we choose to implement them as full LDAQ.
+ */
+ do_gpr_ld(s, cpu_reg(s, a->rt), clean_addr, mop, false,
+ true, a->rt, iss_sf, true);
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
+ return true;
}
/*
@@ -3532,8 +3491,6 @@ static void disas_ldst_reg(DisasContext *s, uint32_t insn)
}
switch (extract32(insn, 10, 2)) {
case 0:
- disas_ldst_atomic(s, insn, size, rt, is_vector);
- return;
case 2:
break;
default:
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 19/23] target/arm: Convert load (pointer auth) insns to decodetree
2023-06-11 16:00 [PATCH v2 00/23] target/arm: Convert exception, system, loads and stores to decodetree Peter Maydell
` (17 preceding siblings ...)
2023-06-11 16:00 ` [PATCH v2 18/23] target/arm: Convert atomic memory ops " Peter Maydell
@ 2023-06-11 16:00 ` Peter Maydell
2023-06-11 16:00 ` [PATCH v2 20/23] target/arm: Convert LDAPR/STLR (imm) " Peter Maydell
` (3 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: Peter Maydell @ 2023-06-11 16:00 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Convert the instructions in the load/store register (pointer
authentication) group to decodetree: LDRAA, LDRAB.
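The awkward part of this encoding is the immediate: the sign bit sits
at bit 22 while the other nine bits are at [20:12], and the result is
scaled by the access size (always 3 here, since LDRAA and LDRAB are
64-bit loads). A standalone restatement of the hand-decoded
computation being deleted below, with local stand-ins for the
bitops.h helpers so the sketch compiles on its own:

#include <stdint.h>

static uint32_t extract_bits(uint32_t v, int pos, int len)
{
    return (v >> pos) & ((1u << len) - 1);
}

static int32_t sextract_bits(uint32_t v, int pos, int len)
{
    return (int32_t)(v << (32 - pos - len)) >> (32 - len);
}

/* Assemble S:imm9 from bit 22 and [20:12], then scale and sign-extend. */
static int32_t ldra_offset(uint32_t insn, int size)
{
    uint32_t raw = (extract_bits(insn, 22, 1) << 9)
                 | extract_bits(insn, 12, 9);

    return sextract_bits(raw << size, 0, 10 + size);
}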
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-17-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 7 +++
target/arm/tcg/translate-a64.c | 83 +++++++---------------------------
2 files changed, 23 insertions(+), 67 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 799c5ecb77a..b80a17111e7 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -457,3 +457,10 @@ LDUMIN .. 111 0 00 . . 1 ..... 0111 00 ..... ..... @atomic
SWP .. 111 0 00 . . 1 ..... 1000 00 ..... ..... @atomic
LDAPR sz:2 111 0 00 1 0 1 11111 1100 00 rn:5 rt:5
+
+# Load/store register (pointer authentication)
+
+# LDRA immediate is 10 bits signed and scaled, but the bits aren't all contiguous
+%ldra_imm 22:s1 12:9 !function=times_2
+
+LDRA 11 111 0 00 m:1 . 1 ......... w:1 1 rn:5 rt:5 imm=%ldra_imm
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index ba072e557e1..b4b029d0910 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -3327,43 +3327,23 @@ static bool trans_LDAPR(DisasContext *s, arg_LDAPR *a)
return true;
}
-/*
- * PAC memory operations
- *
- * 31 30 27 26 24 22 21 12 11 10 5 0
- * +------+-------+---+-----+-----+---+--------+---+---+----+-----+
- * | size | 1 1 1 | V | 0 0 | M S | 1 | imm9 | W | 1 | Rn | Rt |
- * +------+-------+---+-----+-----+---+--------+---+---+----+-----+
- *
- * Rt: the result register
- * Rn: base address or SP
- * V: vector flag (always 0 as of v8.3)
- * M: clear for key DA, set for key DB
- * W: pre-indexing flag
- * S: sign for imm9.
- */
-static void disas_ldst_pac(DisasContext *s, uint32_t insn,
- int size, int rt, bool is_vector)
+static bool trans_LDRA(DisasContext *s, arg_LDRA *a)
{
- int rn = extract32(insn, 5, 5);
- bool is_wback = extract32(insn, 11, 1);
- bool use_key_a = !extract32(insn, 23, 1);
- int offset;
TCGv_i64 clean_addr, dirty_addr, tcg_rt;
MemOp memop;
- if (size != 3 || is_vector || !dc_isar_feature(aa64_pauth, s)) {
- unallocated_encoding(s);
- return;
+ /* Load with pointer authentication */
+ if (!dc_isar_feature(aa64_pauth, s)) {
+ return false;
}
- if (rn == 31) {
+ if (a->rn == 31) {
gen_check_sp_alignment(s);
}
- dirty_addr = read_cpu_reg_sp(s, rn, 1);
+ dirty_addr = read_cpu_reg_sp(s, a->rn, 1);
if (s->pauth_active) {
- if (use_key_a) {
+ if (!a->m) {
gen_helper_autda(dirty_addr, cpu_env, dirty_addr,
tcg_constant_i64(0));
} else {
@@ -3372,25 +3352,23 @@ static void disas_ldst_pac(DisasContext *s, uint32_t insn,
}
}
- /* Form the 10-bit signed, scaled offset. */
- offset = (extract32(insn, 22, 1) << 9) | extract32(insn, 12, 9);
- offset = sextract32(offset << size, 0, 10 + size);
- tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
+ tcg_gen_addi_i64(dirty_addr, dirty_addr, a->imm);
- memop = finalize_memop(s, size);
+ memop = finalize_memop(s, MO_64);
/* Note that "clean" and "dirty" here refer to TBI not PAC. */
clean_addr = gen_mte_check1(s, dirty_addr, false,
- is_wback || rn != 31, memop);
+ a->w || a->rn != 31, memop);
- tcg_rt = cpu_reg(s, rt);
+ tcg_rt = cpu_reg(s, a->rt);
do_gpr_ld(s, tcg_rt, clean_addr, memop,
- /* extend */ false, /* iss_valid */ !is_wback,
- /* iss_srt */ rt, /* iss_sf */ true, /* iss_ar */ false);
+ /* extend */ false, /* iss_valid */ !a->w,
+ /* iss_srt */ a->rt, /* iss_sf */ true, /* iss_ar */ false);
- if (is_wback) {
- tcg_gen_mov_i64(cpu_reg_sp(s, rn), dirty_addr);
+ if (a->w) {
+ tcg_gen_mov_i64(cpu_reg_sp(s, a->rn), dirty_addr);
}
+ return true;
}
/*
@@ -3477,31 +3455,6 @@ static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
}
}
-/* Load/store register (all forms) */
-static void disas_ldst_reg(DisasContext *s, uint32_t insn)
-{
- int rt = extract32(insn, 0, 5);
- bool is_vector = extract32(insn, 26, 1);
- int size = extract32(insn, 30, 2);
-
- switch (extract32(insn, 24, 2)) {
- case 0:
- if (extract32(insn, 21, 1) == 0) {
- break;
- }
- switch (extract32(insn, 10, 2)) {
- case 0:
- case 2:
- break;
- default:
- disas_ldst_pac(s, insn, size, rt, is_vector);
- return;
- }
- break;
- }
- unallocated_encoding(s);
-}
-
/* AdvSIMD load/store multiple structures
*
* 31 30 29 23 22 21 16 15 12 11 10 9 5 4 0
@@ -4019,10 +3972,6 @@ static void disas_ldst_tag(DisasContext *s, uint32_t insn)
static void disas_ldst(DisasContext *s, uint32_t insn)
{
switch (extract32(insn, 24, 6)) {
- case 0x38: case 0x39:
- case 0x3c: case 0x3d: /* Load/store register (all forms) */
- disas_ldst_reg(s, insn);
- break;
case 0x0c: /* AdvSIMD load/store multiple structures */
disas_ldst_multiple_struct(s, insn);
break;
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 20/23] target/arm: Convert LDAPR/STLR (imm) to decodetree
2023-06-11 16:00 [PATCH v2 00/23] target/arm: Convert exception, system, loads and stores to decodetree Peter Maydell
` (18 preceding siblings ...)
2023-06-11 16:00 ` [PATCH v2 19/23] target/arm: Convert load (pointer auth) insns " Peter Maydell
@ 2023-06-11 16:00 ` Peter Maydell
2023-06-11 16:00 ` [PATCH v2 21/23] target/arm: Convert load/store (multiple structures) " Peter Maydell
` (2 subsequent siblings)
22 siblings, 0 replies; 31+ messages in thread
From: Peter Maydell @ 2023-06-11 16:00 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Convert the instructions in the LDAPR/STLR (unscaled immediate)
group to decodetree.
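The ISS.SF computation switches from re-deriving the register width
out of opc to using the decodetree sign/ext flags directly. The
decision table, as a standalone sketch (the function name is ours;
MO_64 corresponds to size_log2 == 3):

#include <stdbool.h>

/*
 * ISS.SF: does the access target a 64-bit register? Signed loads
 * target Xt unless they are the 32-bit variant that zero-extends
 * after the load; unsigned accesses simply follow the size.
 */
static bool iss_sf(int size_log2, bool sign, bool ext)
{
    return sign ? !ext : (size_log2 == 3);
}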
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-18-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 10 +++
target/arm/tcg/translate-a64.c | 132 ++++++++++++---------------------
2 files changed, 56 insertions(+), 86 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index b80a17111e7..db4f44c4f40 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -464,3 +464,13 @@ LDAPR sz:2 111 0 00 1 0 1 11111 1100 00 rn:5 rt:5
%ldra_imm 22:s1 12:9 !function=times_2
LDRA 11 111 0 00 m:1 . 1 ......... w:1 1 rn:5 rt:5 imm=%ldra_imm
+
+&ldapr_stlr_i rn rt imm sz sign ext
+@ldapr_stlr_i .. ...... .. . imm:9 .. rn:5 rt:5 &ldapr_stlr_i
+STLR_i sz:2 011001 00 0 ......... 00 ..... ..... @ldapr_stlr_i sign=0 ext=0
+LDAPR_i sz:2 011001 01 0 ......... 00 ..... ..... @ldapr_stlr_i sign=0 ext=0
+LDAPR_i 00 011001 10 0 ......... 00 ..... ..... @ldapr_stlr_i sign=1 ext=0 sz=0
+LDAPR_i 01 011001 10 0 ......... 00 ..... ..... @ldapr_stlr_i sign=1 ext=0 sz=1
+LDAPR_i 10 011001 10 0 ......... 00 ..... ..... @ldapr_stlr_i sign=1 ext=0 sz=2
+LDAPR_i 00 011001 11 0 ......... 00 ..... ..... @ldapr_stlr_i sign=1 ext=1 sz=0
+LDAPR_i 01 011001 11 0 ......... 00 ..... ..... @ldapr_stlr_i sign=1 ext=1 sz=1
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index b4b029d0910..54383211006 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -2652,22 +2652,12 @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
}
}
-/* Update the Sixty-Four bit (SF) registersize. This logic is derived
+/*
+ * Compute the ISS.SF bit for syndrome information if an exception
+ * is taken on a load or store. This indicates whether the instruction
+ * is accessing a 32-bit or 64-bit register. This logic is derived
* from the ARMv8 specs for LDR (Shared decode for all encodings).
*/
-static bool disas_ldst_compute_iss_sf(int size, bool is_signed, int opc)
-{
- int opc0 = extract32(opc, 0, 1);
- int regsize;
-
- if (is_signed) {
- regsize = opc0 ? 32 : 64;
- } else {
- regsize = size == 3 ? 64 : 32;
- }
- return regsize == 64;
-}
-
static bool ldst_iss_sf(int size, bool sign, bool ext)
{
@@ -3371,88 +3361,60 @@ static bool trans_LDRA(DisasContext *s, arg_LDRA *a)
return true;
}
-/*
- * LDAPR/STLR (unscaled immediate)
- *
- * 31 30 24 22 21 12 10 5 0
- * +------+-------------+-----+---+--------+-----+----+-----+
- * | size | 0 1 1 0 0 1 | opc | 0 | imm9 | 0 0 | Rn | Rt |
- * +------+-------------+-----+---+--------+-----+----+-----+
- *
- * Rt: source or destination register
- * Rn: base register
- * imm9: unscaled immediate offset
- * opc: 00: STLUR*, 01/10/11: various LDAPUR*
- * size: size of load/store
- */
-static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
+static bool trans_LDAPR_i(DisasContext *s, arg_ldapr_stlr_i *a)
{
- int rt = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int offset = sextract32(insn, 12, 9);
- int opc = extract32(insn, 22, 2);
- int size = extract32(insn, 30, 2);
TCGv_i64 clean_addr, dirty_addr;
- bool is_store = false;
- bool extend = false;
- bool iss_sf;
- MemOp mop = size;
+ MemOp mop = a->sz | (a->sign ? MO_SIGN : 0);
+ bool iss_sf = ldst_iss_sf(a->sz, a->sign, a->ext);
if (!dc_isar_feature(aa64_rcpc_8_4, s)) {
- unallocated_encoding(s);
- return;
+ return false;
}
- switch (opc) {
- case 0: /* STLURB */
- is_store = true;
- break;
- case 1: /* LDAPUR* */
- break;
- case 2: /* LDAPURS* 64-bit variant */
- if (size == 3) {
- unallocated_encoding(s);
- return;
- }
- mop |= MO_SIGN;
- break;
- case 3: /* LDAPURS* 32-bit variant */
- if (size > 1) {
- unallocated_encoding(s);
- return;
- }
- mop |= MO_SIGN;
- extend = true; /* zero-extend 32->64 after signed load */
- break;
- default:
- g_assert_not_reached();
- }
-
- iss_sf = disas_ldst_compute_iss_sf(size, (mop & MO_SIGN) != 0, opc);
-
- if (rn == 31) {
+ if (a->rn == 31) {
gen_check_sp_alignment(s);
}
- mop = check_ordered_align(s, rn, offset, is_store, mop);
-
- dirty_addr = read_cpu_reg_sp(s, rn, 1);
- tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
+ mop = check_ordered_align(s, a->rn, a->imm, false, mop);
+ dirty_addr = read_cpu_reg_sp(s, a->rn, 1);
+ tcg_gen_addi_i64(dirty_addr, dirty_addr, a->imm);
clean_addr = clean_data_tbi(s, dirty_addr);
- if (is_store) {
- /* Store-Release semantics */
- tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
- do_gpr_st(s, cpu_reg(s, rt), clean_addr, mop, true, rt, iss_sf, true);
- } else {
- /*
- * Load-AcquirePC semantics; we implement as the slightly more
- * restrictive Load-Acquire.
- */
- do_gpr_ld(s, cpu_reg(s, rt), clean_addr, mop,
- extend, true, rt, iss_sf, true);
- tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
+ /*
+ * Load-AcquirePC semantics; we implement as the slightly more
+ * restrictive Load-Acquire.
+ */
+ do_gpr_ld(s, cpu_reg(s, a->rt), clean_addr, mop, a->ext, true,
+ a->rt, iss_sf, true);
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
+ return true;
+}
+
+static bool trans_STLR_i(DisasContext *s, arg_ldapr_stlr_i *a)
+{
+ TCGv_i64 clean_addr, dirty_addr;
+ MemOp mop = a->sz;
+ bool iss_sf = ldst_iss_sf(a->sz, a->sign, a->ext);
+
+ if (!dc_isar_feature(aa64_rcpc_8_4, s)) {
+ return false;
}
+
+ /* TODO: ARMv8.4-LSE SCTLR.nAA */
+
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+
+ mop = check_ordered_align(s, a->rn, a->imm, true, mop);
+ dirty_addr = read_cpu_reg_sp(s, a->rn, 1);
+ tcg_gen_addi_i64(dirty_addr, dirty_addr, a->imm);
+ clean_addr = clean_data_tbi(s, dirty_addr);
+
+ /* Store-Release semantics */
+ tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
+ do_gpr_st(s, cpu_reg(s, a->rt), clean_addr, mop, true, a->rt, iss_sf, true);
+ return true;
}
/* AdvSIMD load/store multiple structures
@@ -3981,8 +3943,6 @@ static void disas_ldst(DisasContext *s, uint32_t insn)
case 0x19:
if (extract32(insn, 21, 1) != 0) {
disas_ldst_tag(s, insn);
- } else if (extract32(insn, 10, 2) == 0) {
- disas_ldst_ldapr_stlr(s, insn);
} else {
unallocated_encoding(s);
}
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 21/23] target/arm: Convert load/store (multiple structures) to decodetree
2023-06-11 16:00 [PATCH v2 00/23] target/arm: Convert exception, system, loads and stores to decodetree Peter Maydell
` (19 preceding siblings ...)
2023-06-11 16:00 ` [PATCH v2 20/23] target/arm: Convert LDAPR/STLR (imm) " Peter Maydell
@ 2023-06-11 16:00 ` Peter Maydell
2023-06-11 16:00 ` [PATCH v2 22/23] target/arm: Convert load/store single structure " Peter Maydell
2023-06-11 16:00 ` [PATCH v2 23/23] target/arm: Convert load/store tags insns " Peter Maydell
22 siblings, 0 replies; 31+ messages in thread
From: Peter Maydell @ 2023-06-11 16:00 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Convert the instructions in the ASIMD load/store multiple structures
instruction classes to decodetree.
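The 4-bit opcode in insn[15:12] packs both the repeat count and the
number of structure elements; the decode patterns below spell out the
seven allocated values one line each. As a cross-check, the same
mapping in table form (a sketch; the array name is ours):

/*
 * insn[15:12] -> {repeat count, elements per structure}.
 * Entries not listed are {0, 0}: unallocated encodings.
 */
static const int mult_rpt_selem[16][2] = {
    [0x0] = {1, 4},  /* LD4/ST4              */
    [0x2] = {4, 1},  /* LD1/ST1, 4 registers */
    [0x4] = {1, 3},  /* LD3/ST3              */
    [0x6] = {3, 1},  /* LD1/ST1, 3 registers */
    [0x7] = {1, 1},  /* LD1/ST1, 1 register  */
    [0x8] = {1, 2},  /* LD2/ST2              */
    [0xa] = {2, 1},  /* LD1/ST1, 2 registers */
};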
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-19-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 20 +++
target/arm/tcg/translate-a64.c | 222 ++++++++++++++++-----------------
2 files changed, 131 insertions(+), 111 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index db4f44c4f40..69bdfa2e73b 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -474,3 +474,23 @@ LDAPR_i 01 011001 10 0 ......... 00 ..... ..... @ldapr_stlr_i sign=1 ext
LDAPR_i 10 011001 10 0 ......... 00 ..... ..... @ldapr_stlr_i sign=1 ext=0 sz=2
LDAPR_i 00 011001 11 0 ......... 00 ..... ..... @ldapr_stlr_i sign=1 ext=1 sz=0
LDAPR_i 01 011001 11 0 ......... 00 ..... ..... @ldapr_stlr_i sign=1 ext=1 sz=1
+
+# Load/store multiple structures
+# The 4-bit opcode in [15:12] encodes repeat count and structure elements
+&ldst_mult rm rn rt sz q p rpt selem
+@ldst_mult . q:1 ...... p:1 . . rm:5 .... sz:2 rn:5 rt:5 &ldst_mult
+ST_mult 0 . 001100 . 0 0 ..... 0000 .. ..... ..... @ldst_mult rpt=1 selem=4
+ST_mult 0 . 001100 . 0 0 ..... 0010 .. ..... ..... @ldst_mult rpt=4 selem=1
+ST_mult 0 . 001100 . 0 0 ..... 0100 .. ..... ..... @ldst_mult rpt=1 selem=3
+ST_mult 0 . 001100 . 0 0 ..... 0110 .. ..... ..... @ldst_mult rpt=3 selem=1
+ST_mult 0 . 001100 . 0 0 ..... 0111 .. ..... ..... @ldst_mult rpt=1 selem=1
+ST_mult 0 . 001100 . 0 0 ..... 1000 .. ..... ..... @ldst_mult rpt=1 selem=2
+ST_mult 0 . 001100 . 0 0 ..... 1010 .. ..... ..... @ldst_mult rpt=2 selem=1
+
+LD_mult 0 . 001100 . 1 0 ..... 0000 .. ..... ..... @ldst_mult rpt=1 selem=4
+LD_mult 0 . 001100 . 1 0 ..... 0010 .. ..... ..... @ldst_mult rpt=4 selem=1
+LD_mult 0 . 001100 . 1 0 ..... 0100 .. ..... ..... @ldst_mult rpt=1 selem=3
+LD_mult 0 . 001100 . 1 0 ..... 0110 .. ..... ..... @ldst_mult rpt=3 selem=1
+LD_mult 0 . 001100 . 1 0 ..... 0111 .. ..... ..... @ldst_mult rpt=1 selem=1
+LD_mult 0 . 001100 . 1 0 ..... 1000 .. ..... ..... @ldst_mult rpt=1 selem=2
+LD_mult 0 . 001100 . 1 0 ..... 1010 .. ..... ..... @ldst_mult rpt=2 selem=1
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 54383211006..1c8a57f7b52 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -3417,99 +3417,28 @@ static bool trans_STLR_i(DisasContext *s, arg_ldapr_stlr_i *a)
return true;
}
-/* AdvSIMD load/store multiple structures
- *
- * 31 30 29 23 22 21 16 15 12 11 10 9 5 4 0
- * +---+---+---------------+---+-------------+--------+------+------+------+
- * | 0 | Q | 0 0 1 1 0 0 0 | L | 0 0 0 0 0 0 | opcode | size | Rn | Rt |
- * +---+---+---------------+---+-------------+--------+------+------+------+
- *
- * AdvSIMD load/store multiple structures (post-indexed)
- *
- * 31 30 29 23 22 21 20 16 15 12 11 10 9 5 4 0
- * +---+---+---------------+---+---+---------+--------+------+------+------+
- * | 0 | Q | 0 0 1 1 0 0 1 | L | 0 | Rm | opcode | size | Rn | Rt |
- * +---+---+---------------+---+---+---------+--------+------+------+------+
- *
- * Rt: first (or only) SIMD&FP register to be transferred
- * Rn: base address or SP
- * Rm (post-index only): post-index register (when !31) or size dependent #imm
- */
-static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
+static bool trans_LD_mult(DisasContext *s, arg_ldst_mult *a)
{
- int rt = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int rm = extract32(insn, 16, 5);
- int size = extract32(insn, 10, 2);
- int opcode = extract32(insn, 12, 4);
- bool is_store = !extract32(insn, 22, 1);
- bool is_postidx = extract32(insn, 23, 1);
- bool is_q = extract32(insn, 30, 1);
TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
MemOp endian, align, mop;
int total; /* total bytes */
int elements; /* elements per vector */
- int rpt; /* num iterations */
- int selem; /* structure elements */
int r;
+ int size = a->sz;
- if (extract32(insn, 31, 1) || extract32(insn, 21, 1)) {
- unallocated_encoding(s);
- return;
+ if (!a->p && a->rm != 0) {
+ /* For non-postindexed accesses the Rm field must be 0 */
+ return false;
}
-
- if (!is_postidx && rm != 0) {
- unallocated_encoding(s);
- return;
+ if (size == 3 && !a->q && a->selem != 1) {
+ return false;
}
-
- /* From the shared decode logic */
- switch (opcode) {
- case 0x0:
- rpt = 1;
- selem = 4;
- break;
- case 0x2:
- rpt = 4;
- selem = 1;
- break;
- case 0x4:
- rpt = 1;
- selem = 3;
- break;
- case 0x6:
- rpt = 3;
- selem = 1;
- break;
- case 0x7:
- rpt = 1;
- selem = 1;
- break;
- case 0x8:
- rpt = 1;
- selem = 2;
- break;
- case 0xa:
- rpt = 2;
- selem = 1;
- break;
- default:
- unallocated_encoding(s);
- return;
- }
-
- if (size == 3 && !is_q && selem != 1) {
- /* reserved */
- unallocated_encoding(s);
- return;
- }
-
if (!fp_access_check(s)) {
- return;
+ return true;
}
- if (rn == 31) {
+ if (a->rn == 31) {
gen_check_sp_alignment(s);
}
@@ -3519,22 +3448,22 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
endian = MO_LE;
}
- total = rpt * selem * (is_q ? 16 : 8);
- tcg_rn = cpu_reg_sp(s, rn);
+ total = a->rpt * a->selem * (a->q ? 16 : 8);
+ tcg_rn = cpu_reg_sp(s, a->rn);
/*
* Issue the MTE check vs the logical repeat count, before we
* promote consecutive little-endian elements below.
*/
- clean_addr = gen_mte_checkN(s, tcg_rn, is_store, is_postidx || rn != 31,
- total, finalize_memop_asimd(s, size));
+ clean_addr = gen_mte_checkN(s, tcg_rn, false, a->p || a->rn != 31, total,
+ finalize_memop_asimd(s, size));
/*
* Consecutive little-endian elements from a single register
* can be promoted to a larger little-endian operation.
*/
align = MO_ALIGN;
- if (selem == 1 && endian == MO_LE) {
+ if (a->selem == 1 && endian == MO_LE) {
align = pow2_align(size);
size = 3;
}
@@ -3543,45 +3472,119 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
}
mop = endian | size | align;
- elements = (is_q ? 16 : 8) >> size;
+ elements = (a->q ? 16 : 8) >> size;
tcg_ebytes = tcg_constant_i64(1 << size);
- for (r = 0; r < rpt; r++) {
+ for (r = 0; r < a->rpt; r++) {
int e;
for (e = 0; e < elements; e++) {
int xs;
- for (xs = 0; xs < selem; xs++) {
- int tt = (rt + r + xs) % 32;
- if (is_store) {
- do_vec_st(s, tt, e, clean_addr, mop);
- } else {
- do_vec_ld(s, tt, e, clean_addr, mop);
- }
+ for (xs = 0; xs < a->selem; xs++) {
+ int tt = (a->rt + r + xs) % 32;
+ do_vec_ld(s, tt, e, clean_addr, mop);
tcg_gen_add_i64(clean_addr, clean_addr, tcg_ebytes);
}
}
}
- if (!is_store) {
- /* For non-quad operations, setting a slice of the low
- * 64 bits of the register clears the high 64 bits (in
- * the ARM ARM pseudocode this is implicit in the fact
- * that 'rval' is a 64 bit wide variable).
- * For quad operations, we might still need to zero the
- * high bits of SVE.
- */
- for (r = 0; r < rpt * selem; r++) {
- int tt = (rt + r) % 32;
- clear_vec_high(s, is_q, tt);
+ /*
+ * For non-quad operations, setting a slice of the low 64 bits of
+ * the register clears the high 64 bits (in the ARM ARM pseudocode
+ * this is implicit in the fact that 'rval' is a 64 bit wide
+ * variable). For quad operations, we might still need to zero
+ * the high bits of SVE.
+ */
+ for (r = 0; r < a->rpt * a->selem; r++) {
+ int tt = (a->rt + r) % 32;
+ clear_vec_high(s, a->q, tt);
+ }
+
+ if (a->p) {
+ if (a->rm == 31) {
+ tcg_gen_addi_i64(tcg_rn, tcg_rn, total);
+ } else {
+ tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, a->rm));
+ }
+ }
+ return true;
+}
+
+static bool trans_ST_mult(DisasContext *s, arg_ldst_mult *a)
+{
+ TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
+ MemOp endian, align, mop;
+
+ int total; /* total bytes */
+ int elements; /* elements per vector */
+ int r;
+ int size = a->sz;
+
+ if (!a->p && a->rm != 0) {
+ /* For non-postindexed accesses the Rm field must be 0 */
+ return false;
+ }
+ if (size == 3 && !a->q && a->selem != 1) {
+ return false;
+ }
+ if (!fp_access_check(s)) {
+ return true;
+ }
+
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+
+ /* For our purposes, bytes are always little-endian. */
+ endian = s->be_data;
+ if (size == 0) {
+ endian = MO_LE;
+ }
+
+ total = a->rpt * a->selem * (a->q ? 16 : 8);
+ tcg_rn = cpu_reg_sp(s, a->rn);
+
+ /*
+ * Issue the MTE check vs the logical repeat count, before we
+ * promote consecutive little-endian elements below.
+ */
+ clean_addr = gen_mte_checkN(s, tcg_rn, true, a->p || a->rn != 31, total,
+ finalize_memop_asimd(s, size));
+
+ /*
+ * Consecutive little-endian elements from a single register
+ * can be promoted to a larger little-endian operation.
+ */
+ align = MO_ALIGN;
+ if (a->selem == 1 && endian == MO_LE) {
+ align = pow2_align(size);
+ size = 3;
+ }
+ if (!s->align_mem) {
+ align = 0;
+ }
+ mop = endian | size | align;
+
+ elements = (a->q ? 16 : 8) >> size;
+ tcg_ebytes = tcg_constant_i64(1 << size);
+ for (r = 0; r < a->rpt; r++) {
+ int e;
+ for (e = 0; e < elements; e++) {
+ int xs;
+ for (xs = 0; xs < a->selem; xs++) {
+ int tt = (a->rt + r + xs) % 32;
+ do_vec_st(s, tt, e, clean_addr, mop);
+ tcg_gen_add_i64(clean_addr, clean_addr, tcg_ebytes);
+ }
}
}
- if (is_postidx) {
- if (rm == 31) {
+ if (a->p) {
+ if (a->rm == 31) {
tcg_gen_addi_i64(tcg_rn, tcg_rn, total);
} else {
- tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
+ tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, a->rm));
}
}
+ return true;
}
/* AdvSIMD load/store single structure
@@ -3934,9 +3937,6 @@ static void disas_ldst_tag(DisasContext *s, uint32_t insn)
static void disas_ldst(DisasContext *s, uint32_t insn)
{
switch (extract32(insn, 24, 6)) {
- case 0x0c: /* AdvSIMD load/store multiple structures */
- disas_ldst_multiple_struct(s, insn);
- break;
case 0x0d: /* AdvSIMD load/store single structure */
disas_ldst_single_struct(s, insn);
break;
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 22/23] target/arm: Convert load/store single structure to decodetree
2023-06-11 16:00 [PATCH v2 00/23] target/arm: Convert exception, system, loads and stores to decodetree Peter Maydell
` (20 preceding siblings ...)
2023-06-11 16:00 ` [PATCH v2 21/23] target/arm: Convert load/store (multiple structures) " Peter Maydell
@ 2023-06-11 16:00 ` Peter Maydell
2023-06-14 5:35 ` Richard Henderson
2023-06-11 16:00 ` [PATCH v2 23/23] target/arm: Convert load/store tags insns " Peter Maydell
22 siblings, 1 reply; 31+ messages in thread
From: Peter Maydell @ 2023-06-11 16:00 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Convert the ASIMD load/store single structure insns to decodetree.
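The fiddly part of this group is the element index, which is spread
across Q:S:size and narrows as the element size grows; the per-size
@ldst_single_* formats below encode exactly that. The shared-decode
rule, as a standalone sketch (the function name is ours):

/*
 * Q:S:size form a 4-bit index for byte elements; each doubling of
 * the element size drops one low bit, which the decode patterns
 * require to be zero.
 */
static int single_index(int q, int s, int size, int scale)
{
    return ((q << 3) | (s << 2) | size) >> scale;
}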
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230602155223.2040685-20-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 34 +++++
target/arm/tcg/translate-a64.c | 219 +++++++++++++++------------------
2 files changed, 136 insertions(+), 117 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 69bdfa2e73b..4ffdc91865f 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -494,3 +494,37 @@ LD_mult 0 . 001100 . 1 0 ..... 0110 .. ..... ..... @ldst_mult rpt=3 sele
LD_mult 0 . 001100 . 1 0 ..... 0111 .. ..... ..... @ldst_mult rpt=1 selem=1
LD_mult 0 . 001100 . 1 0 ..... 1000 .. ..... ..... @ldst_mult rpt=1 selem=2
LD_mult 0 . 001100 . 1 0 ..... 1010 .. ..... ..... @ldst_mult rpt=2 selem=1
+
+# Load/store single structure
+&ldst_single rm rn rt p selem index scale
+
+%ldst_single_selem 13:1 21:1 !function=plus_1
+
+%ldst_single_index_b 30:1 10:3
+%ldst_single_index_h 30:1 11:2
+%ldst_single_index_s 30:1 12:1
+
+@ldst_single_b .. ...... p:1 .. rm:5 ...... rn:5 rt:5 \
+ &ldst_single scale=0 selem=%ldst_single_selem \
+ index=%ldst_single_index_b
+@ldst_single_h .. ...... p:1 .. rm:5 ...... rn:5 rt:5 \
+ &ldst_single scale=1 selem=%ldst_single_selem \
+ index=%ldst_single_index_h
+@ldst_single_s .. ...... p:1 .. rm:5 ...... rn:5 rt:5 \
+ &ldst_single scale=2 selem=%ldst_single_selem \
+ index=%ldst_single_index_s
+@ldst_single_d . index:1 ...... p:1 .. rm:5 ...... rn:5 rt:5 \
+ &ldst_single scale=3 selem=%ldst_single_selem
+
+ST_single 0 . 001101 . 0 . ..... 00 . ... ..... ..... @ldst_single_b
+ST_single 0 . 001101 . 0 . ..... 01 . ..0 ..... ..... @ldst_single_h
+ST_single 0 . 001101 . 0 . ..... 10 . .00 ..... ..... @ldst_single_s
+ST_single 0 . 001101 . 0 . ..... 10 . 001 ..... ..... @ldst_single_d
+
+LD_single 0 . 001101 . 1 . ..... 00 . ... ..... ..... @ldst_single_b
+LD_single 0 . 001101 . 1 . ..... 01 . ..0 ..... ..... @ldst_single_h
+LD_single 0 . 001101 . 1 . ..... 10 . .00 ..... ..... @ldst_single_s
+LD_single 0 . 001101 . 1 . ..... 10 . 001 ..... ..... @ldst_single_d
+
+# Replicating load case
+LD_single_repl 0 q:1 001101 p:1 1 . rm:5 11 . 0 scale:2 rn:5 rt:5 selem=%ldst_single_selem
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 1c8a57f7b52..6e7fe1f35cf 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -3587,141 +3587,129 @@ static bool trans_ST_mult(DisasContext *s, arg_ldst_mult *a)
return true;
}
-/* AdvSIMD load/store single structure
- *
- * 31 30 29 23 22 21 20 16 15 13 12 11 10 9 5 4 0
- * +---+---+---------------+-----+-----------+-----+---+------+------+------+
- * | 0 | Q | 0 0 1 1 0 1 0 | L R | 0 0 0 0 0 | opc | S | size | Rn | Rt |
- * +---+---+---------------+-----+-----------+-----+---+------+------+------+
- *
- * AdvSIMD load/store single structure (post-indexed)
- *
- * 31 30 29 23 22 21 20 16 15 13 12 11 10 9 5 4 0
- * +---+---+---------------+-----+-----------+-----+---+------+------+------+
- * | 0 | Q | 0 0 1 1 0 1 1 | L R | Rm | opc | S | size | Rn | Rt |
- * +---+---+---------------+-----+-----------+-----+---+------+------+------+
- *
- * Rt: first (or only) SIMD&FP register to be transferred
- * Rn: base address or SP
- * Rm (post-index only): post-index register (when !31) or size dependent #imm
- * index = encoded in Q:S:size dependent on size
- *
- * lane_size = encoded in R, opc
- * transfer width = encoded in opc, S, size
- */
-static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
+static bool trans_ST_single(DisasContext *s, arg_ldst_single *a)
{
- int rt = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int rm = extract32(insn, 16, 5);
- int size = extract32(insn, 10, 2);
- int S = extract32(insn, 12, 1);
- int opc = extract32(insn, 13, 3);
- int R = extract32(insn, 21, 1);
- int is_load = extract32(insn, 22, 1);
- int is_postidx = extract32(insn, 23, 1);
- int is_q = extract32(insn, 30, 1);
-
- int scale = extract32(opc, 1, 2);
- int selem = (extract32(opc, 0, 1) << 1 | R) + 1;
- bool replicate = false;
- int index = is_q << 3 | S << 2 | size;
- int xs, total;
+ int xs, total, rt;
TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
MemOp mop;
- if (extract32(insn, 31, 1)) {
- unallocated_encoding(s);
- return;
+ if (!a->p && a->rm != 0) {
+ return false;
}
- if (!is_postidx && rm != 0) {
- unallocated_encoding(s);
- return;
- }
-
- switch (scale) {
- case 3:
- if (!is_load || S) {
- unallocated_encoding(s);
- return;
- }
- scale = size;
- replicate = true;
- break;
- case 0:
- break;
- case 1:
- if (extract32(size, 0, 1)) {
- unallocated_encoding(s);
- return;
- }
- index >>= 1;
- break;
- case 2:
- if (extract32(size, 1, 1)) {
- unallocated_encoding(s);
- return;
- }
- if (!extract32(size, 0, 1)) {
- index >>= 2;
- } else {
- if (S) {
- unallocated_encoding(s);
- return;
- }
- index >>= 3;
- scale = 3;
- }
- break;
- default:
- g_assert_not_reached();
- }
-
if (!fp_access_check(s)) {
- return;
+ return true;
}
- if (rn == 31) {
+ if (a->rn == 31) {
gen_check_sp_alignment(s);
}
- total = selem << scale;
- tcg_rn = cpu_reg_sp(s, rn);
+ total = a->selem << a->scale;
+ tcg_rn = cpu_reg_sp(s, a->rn);
- mop = finalize_memop_asimd(s, scale);
-
- clean_addr = gen_mte_checkN(s, tcg_rn, !is_load, is_postidx || rn != 31,
+ mop = finalize_memop_asimd(s, a->scale);
+ clean_addr = gen_mte_checkN(s, tcg_rn, true, a->p || a->rn != 31,
total, mop);
- tcg_ebytes = tcg_constant_i64(1 << scale);
- for (xs = 0; xs < selem; xs++) {
- if (replicate) {
- /* Load and replicate to all elements */
- TCGv_i64 tcg_tmp = tcg_temp_new_i64();
-
- tcg_gen_qemu_ld_i64(tcg_tmp, clean_addr, get_mem_index(s), mop);
- tcg_gen_gvec_dup_i64(scale, vec_full_reg_offset(s, rt),
- (is_q + 1) * 8, vec_full_reg_size(s),
- tcg_tmp);
- } else {
- /* Load/store one element per register */
- if (is_load) {
- do_vec_ld(s, rt, index, clean_addr, mop);
- } else {
- do_vec_st(s, rt, index, clean_addr, mop);
- }
- }
+ tcg_ebytes = tcg_constant_i64(1 << a->scale);
+ for (xs = 0, rt = a->rt; xs < a->selem; xs++, rt = (rt + 1) % 32) {
+ do_vec_st(s, rt, a->index, clean_addr, mop);
tcg_gen_add_i64(clean_addr, clean_addr, tcg_ebytes);
- rt = (rt + 1) % 32;
}
- if (is_postidx) {
- if (rm == 31) {
+ if (a->p) {
+ if (a->rm == 31) {
tcg_gen_addi_i64(tcg_rn, tcg_rn, total);
} else {
- tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
+ tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, a->rm));
}
}
+ return true;
+}
+
+static bool trans_LD_single(DisasContext *s, arg_ldst_single *a)
+{
+ int xs, total, rt;
+ TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
+ MemOp mop;
+
+ if (!a->p && a->rm != 0) {
+ return false;
+ }
+ if (!fp_access_check(s)) {
+ return true;
+ }
+
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+
+ total = a->selem << a->scale;
+ tcg_rn = cpu_reg_sp(s, a->rn);
+
+ mop = finalize_memop_asimd(s, a->scale);
+ clean_addr = gen_mte_checkN(s, tcg_rn, false, a->p || a->rn != 31,
+ total, mop);
+
+ tcg_ebytes = tcg_constant_i64(1 << a->scale);
+ for (xs = 0, rt = a->rt; xs < a->selem; xs++, rt = (rt + 1) % 32) {
+ do_vec_ld(s, rt, a->index, clean_addr, mop);
+ tcg_gen_add_i64(clean_addr, clean_addr, tcg_ebytes);
+ }
+
+ if (a->p) {
+ if (a->rm == 31) {
+ tcg_gen_addi_i64(tcg_rn, tcg_rn, total);
+ } else {
+ tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, a->rm));
+ }
+ }
+ return true;
+}
+
+static bool trans_LD_single_repl(DisasContext *s, arg_LD_single_repl *a)
+{
+ int xs, total, rt;
+ TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
+ MemOp mop;
+
+ if (!a->p && a->rm != 0) {
+ return false;
+ }
+ if (!fp_access_check(s)) {
+ return true;
+ }
+
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+
+ total = a->selem << a->scale;
+ tcg_rn = cpu_reg_sp(s, a->rn);
+
+ mop = finalize_memop_asimd(s, a->scale);
+ clean_addr = gen_mte_checkN(s, tcg_rn, false, a->p || a->rn != 31,
+ total, mop);
+
+ tcg_ebytes = tcg_constant_i64(1 << a->scale);
+ for (xs = 0, rt = a->rt; xs < a->selem; xs++, rt = (rt + 1) % 32) {
+ /* Load and replicate to all elements */
+ TCGv_i64 tcg_tmp = tcg_temp_new_i64();
+
+ tcg_gen_qemu_ld_i64(tcg_tmp, clean_addr, get_mem_index(s), mop);
+ tcg_gen_gvec_dup_i64(a->scale, vec_full_reg_offset(s, rt),
+ (a->q + 1) * 8, vec_full_reg_size(s), tcg_tmp);
+ tcg_gen_add_i64(clean_addr, clean_addr, tcg_ebytes);
+ }
+
+ if (a->p) {
+ if (a->rm == 31) {
+ tcg_gen_addi_i64(tcg_rn, tcg_rn, total);
+ } else {
+ tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, a->rm));
+ }
+ }
+ return true;
}
/*
@@ -3937,9 +3925,6 @@ static void disas_ldst_tag(DisasContext *s, uint32_t insn)
static void disas_ldst(DisasContext *s, uint32_t insn)
{
switch (extract32(insn, 24, 6)) {
- case 0x0d: /* AdvSIMD load/store single structure */
- disas_ldst_single_struct(s, insn);
- break;
case 0x19:
if (extract32(insn, 21, 1) != 0) {
disas_ldst_tag(s, insn);
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 23/23] target/arm: Convert load/store tags insns to decodetree
2023-06-11 16:00 [PATCH v2 00/23] target/arm: Convert exception, system, loads and stores to decodetree Peter Maydell
` (21 preceding siblings ...)
2023-06-11 16:00 ` [PATCH v2 22/23] target/arm: Convert load/store single structure " Peter Maydell
@ 2023-06-11 16:00 ` Peter Maydell
22 siblings, 0 replies; 31+ messages in thread
From: Peter Maydell @ 2023-06-11 16:00 UTC (permalink / raw)
To: qemu-arm, qemu-devel
Convert the instructions in the load/store memory tags instruction
group to decodetree.
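Two pieces of shared decode worth noting: the 9-bit immediate is in
units of the 16-byte tag granule, and the op2 field in bits [11:10]
selects the addressing mode, which the patterns below express as the
p (post-index) and w (writeback) flags. A standalone sketch of both
rules (the macro and function names are ours):

#include <stdbool.h>
#include <stdint.h>

#define LOG2_TAG_GRANULE_SKETCH 4   /* MTE tag granule: 16 bytes */

static int64_t tag_offset(int64_t simm9)
{
    return simm9 << LOG2_TAG_GRANULE_SKETCH;
}

/* op2: 01 -> post-index, 10 -> signed offset, 11 -> pre-index. */
static void tag_addressing_mode(int op2, bool *p, bool *w)
{
    *p = (op2 == 1);
    *w = (op2 == 1) || (op2 == 3);
}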
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230602155223.2040685-21-peter.maydell@linaro.org
---
target/arm/tcg/a64.decode | 25 +++
target/arm/tcg/translate-a64.c | 360 ++++++++++++++++-----------------
2 files changed, 199 insertions(+), 186 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 4ffdc91865f..ef64a3f9cba 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -528,3 +528,28 @@ LD_single 0 . 001101 . 1 . ..... 10 . 001 ..... ..... @ldst_single_d
# Replicating load case
LD_single_repl 0 q:1 001101 p:1 1 . rm:5 11 . 0 scale:2 rn:5 rt:5 selem=%ldst_single_selem
+
+%tag_offset 12:s9 !function=scale_by_log2_tag_granule
+&ldst_tag rn rt imm p w
+@ldst_tag ........ .. . ......... .. rn:5 rt:5 &ldst_tag imm=%tag_offset
+@ldst_tag_mult ........ .. . 000000000 .. rn:5 rt:5 &ldst_tag imm=0
+
+STZGM 11011001 00 1 ......... 00 ..... ..... @ldst_tag_mult p=0 w=0
+STG 11011001 00 1 ......... 01 ..... ..... @ldst_tag p=1 w=1
+STG 11011001 00 1 ......... 10 ..... ..... @ldst_tag p=0 w=0
+STG 11011001 00 1 ......... 11 ..... ..... @ldst_tag p=0 w=1
+
+LDG 11011001 01 1 ......... 00 ..... ..... @ldst_tag p=0 w=0
+STZG 11011001 01 1 ......... 01 ..... ..... @ldst_tag p=1 w=1
+STZG 11011001 01 1 ......... 10 ..... ..... @ldst_tag p=0 w=0
+STZG 11011001 01 1 ......... 11 ..... ..... @ldst_tag p=0 w=1
+
+STGM 11011001 10 1 ......... 00 ..... ..... @ldst_tag_mult p=0 w=0
+ST2G 11011001 10 1 ......... 01 ..... ..... @ldst_tag p=1 w=1
+ST2G 11011001 10 1 ......... 10 ..... ..... @ldst_tag p=0 w=0
+ST2G 11011001 10 1 ......... 11 ..... ..... @ldst_tag p=0 w=1
+
+LDGM 11011001 11 1 ......... 00 ..... ..... @ldst_tag_mult p=0 w=0
+STZ2G 11011001 11 1 ......... 01 ..... ..... @ldst_tag p=1 w=1
+STZ2G 11011001 11 1 ......... 10 ..... ..... @ldst_tag p=0 w=0
+STZ2G 11011001 11 1 ......... 11 ..... ..... @ldst_tag p=0 w=1
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 6e7fe1f35cf..43963287a8c 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -62,6 +62,12 @@ static int uimm_scaled(DisasContext *s, int x)
return imm << scale;
}
+/* For load/store memory tags: scale offset by LOG2_TAG_GRANULE */
+static int scale_by_log2_tag_granule(DisasContext *s, int x)
+{
+ return x << LOG2_TAG_GRANULE;
+}
+
/*
* Include the generated decoders.
*/
@@ -3712,185 +3718,184 @@ static bool trans_LD_single_repl(DisasContext *s, arg_LD_single_repl *a)
return true;
}
-/*
- * Load/Store memory tags
- *
- * 31 30 29 24 22 21 12 10 5 0
- * +-----+-------------+-----+---+------+-----+------+------+
- * | 1 1 | 0 1 1 0 0 1 | op1 | 1 | imm9 | op2 | Rn | Rt |
- * +-----+-------------+-----+---+------+-----+------+------+
- */
-static void disas_ldst_tag(DisasContext *s, uint32_t insn)
+static bool trans_STZGM(DisasContext *s, arg_ldst_tag *a)
{
- int rt = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- uint64_t offset = sextract64(insn, 12, 9) << LOG2_TAG_GRANULE;
- int op2 = extract32(insn, 10, 2);
- int op1 = extract32(insn, 22, 2);
- bool is_load = false, is_pair = false, is_zero = false, is_mult = false;
- int index = 0;
TCGv_i64 addr, clean_addr, tcg_rt;
+ int size = 4 << s->dcz_blocksize;
- /* We checked insn bits [29:24,21] in the caller. */
- if (extract32(insn, 30, 2) != 3) {
- goto do_unallocated;
+ if (!dc_isar_feature(aa64_mte, s)) {
+ return false;
+ }
+ if (s->current_el == 0) {
+ return false;
}
- /*
- * @index is a tri-state variable which has 3 states:
- * < 0 : post-index, writeback
- * = 0 : signed offset
- * > 0 : pre-index, writeback
- */
- switch (op1) {
- case 0:
- if (op2 != 0) {
- /* STG */
- index = op2 - 2;
- } else {
- /* STZGM */
- if (s->current_el == 0 || offset != 0) {
- goto do_unallocated;
- }
- is_mult = is_zero = true;
- }
- break;
- case 1:
- if (op2 != 0) {
- /* STZG */
- is_zero = true;
- index = op2 - 2;
- } else {
- /* LDG */
- is_load = true;
- }
- break;
- case 2:
- if (op2 != 0) {
- /* ST2G */
- is_pair = true;
- index = op2 - 2;
- } else {
- /* STGM */
- if (s->current_el == 0 || offset != 0) {
- goto do_unallocated;
- }
- is_mult = true;
- }
- break;
- case 3:
- if (op2 != 0) {
- /* STZ2G */
- is_pair = is_zero = true;
- index = op2 - 2;
- } else {
- /* LDGM */
- if (s->current_el == 0 || offset != 0) {
- goto do_unallocated;
- }
- is_mult = is_load = true;
- }
- break;
-
- default:
- do_unallocated:
- unallocated_encoding(s);
- return;
- }
-
- if (is_mult
- ? !dc_isar_feature(aa64_mte, s)
- : !dc_isar_feature(aa64_mte_insn_reg, s)) {
- goto do_unallocated;
- }
-
- if (rn == 31) {
+ if (a->rn == 31) {
gen_check_sp_alignment(s);
}
- addr = read_cpu_reg_sp(s, rn, true);
- if (index >= 0) {
+ addr = read_cpu_reg_sp(s, a->rn, true);
+ tcg_gen_addi_i64(addr, addr, a->imm);
+ tcg_rt = cpu_reg(s, a->rt);
+
+ if (s->ata) {
+ gen_helper_stzgm_tags(cpu_env, addr, tcg_rt);
+ }
+ /*
+ * The non-tags portion of STZGM is mostly like DC_ZVA,
+ * except the alignment happens before the access.
+ */
+ clean_addr = clean_data_tbi(s, addr);
+ tcg_gen_andi_i64(clean_addr, clean_addr, -size);
+ gen_helper_dc_zva(cpu_env, clean_addr);
+ return true;
+}
+
+static bool trans_STGM(DisasContext *s, arg_ldst_tag *a)
+{
+ TCGv_i64 addr, clean_addr, tcg_rt;
+
+ if (!dc_isar_feature(aa64_mte, s)) {
+ return false;
+ }
+ if (s->current_el == 0) {
+ return false;
+ }
+
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+
+ addr = read_cpu_reg_sp(s, a->rn, true);
+ tcg_gen_addi_i64(addr, addr, a->imm);
+ tcg_rt = cpu_reg(s, a->rt);
+
+ if (s->ata) {
+ gen_helper_stgm(cpu_env, addr, tcg_rt);
+ } else {
+ MMUAccessType acc = MMU_DATA_STORE;
+ int size = 4 << GMID_EL1_BS;
+
+ clean_addr = clean_data_tbi(s, addr);
+ tcg_gen_andi_i64(clean_addr, clean_addr, -size);
+ gen_probe_access(s, clean_addr, acc, size);
+ }
+ return true;
+}
+
+static bool trans_LDGM(DisasContext *s, arg_ldst_tag *a)
+{
+ TCGv_i64 addr, clean_addr, tcg_rt;
+
+ if (!dc_isar_feature(aa64_mte, s)) {
+ return false;
+ }
+ if (s->current_el == 0) {
+ return false;
+ }
+
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+
+ addr = read_cpu_reg_sp(s, a->rn, true);
+ tcg_gen_addi_i64(addr, addr, a->imm);
+ tcg_rt = cpu_reg(s, a->rt);
+
+ if (s->ata) {
+ gen_helper_ldgm(tcg_rt, cpu_env, addr);
+ } else {
+ MMUAccessType acc = MMU_DATA_LOAD;
+ int size = 4 << GMID_EL1_BS;
+
+ clean_addr = clean_data_tbi(s, addr);
+ tcg_gen_andi_i64(clean_addr, clean_addr, -size);
+ gen_probe_access(s, clean_addr, acc, size);
+ /* The result tags are zeros. */
+ tcg_gen_movi_i64(tcg_rt, 0);
+ }
+ return true;
+}
+
+static bool trans_LDG(DisasContext *s, arg_ldst_tag *a)
+{
+ TCGv_i64 addr, clean_addr, tcg_rt;
+
+ if (!dc_isar_feature(aa64_mte_insn_reg, s)) {
+ return false;
+ }
+
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+
+ addr = read_cpu_reg_sp(s, a->rn, true);
+ if (!a->p) {
/* pre-index or signed offset */
- tcg_gen_addi_i64(addr, addr, offset);
+ tcg_gen_addi_i64(addr, addr, a->imm);
}
- if (is_mult) {
- tcg_rt = cpu_reg(s, rt);
+ tcg_gen_andi_i64(addr, addr, -TAG_GRANULE);
+ tcg_rt = cpu_reg(s, a->rt);
+ if (s->ata) {
+ gen_helper_ldg(tcg_rt, cpu_env, addr, tcg_rt);
+ } else {
+ /*
+ * Tag access disabled: we must check for aborts on the load
+ * from [rn+offset], and then insert a 0 tag into rt.
+ */
+ clean_addr = clean_data_tbi(s, addr);
+ gen_probe_access(s, clean_addr, MMU_DATA_LOAD, MO_8);
+ gen_address_with_allocation_tag0(tcg_rt, tcg_rt);
+ }
- if (is_zero) {
- int size = 4 << s->dcz_blocksize;
-
- if (s->ata) {
- gen_helper_stzgm_tags(cpu_env, addr, tcg_rt);
- }
- /*
- * The non-tags portion of STZGM is mostly like DC_ZVA,
- * except the alignment happens before the access.
- */
- clean_addr = clean_data_tbi(s, addr);
- tcg_gen_andi_i64(clean_addr, clean_addr, -size);
- gen_helper_dc_zva(cpu_env, clean_addr);
- } else if (s->ata) {
- if (is_load) {
- gen_helper_ldgm(tcg_rt, cpu_env, addr);
- } else {
- gen_helper_stgm(cpu_env, addr, tcg_rt);
- }
- } else {
- MMUAccessType acc = is_load ? MMU_DATA_LOAD : MMU_DATA_STORE;
- int size = 4 << GMID_EL1_BS;
-
- clean_addr = clean_data_tbi(s, addr);
- tcg_gen_andi_i64(clean_addr, clean_addr, -size);
- gen_probe_access(s, clean_addr, acc, size);
-
- if (is_load) {
- /* The result tags are zeros. */
- tcg_gen_movi_i64(tcg_rt, 0);
- }
+ if (a->w) {
+ /* pre-index or post-index */
+ if (a->p) {
+ /* post-index */
+ tcg_gen_addi_i64(addr, addr, a->imm);
}
- return;
+ tcg_gen_mov_i64(cpu_reg_sp(s, a->rn), addr);
+ }
+ return true;
+}
+
+static bool do_STG(DisasContext *s, arg_ldst_tag *a, bool is_zero, bool is_pair)
+{
+ TCGv_i64 addr, tcg_rt;
+
+ if (a->rn == 31) {
+ gen_check_sp_alignment(s);
}
- if (is_load) {
- tcg_gen_andi_i64(addr, addr, -TAG_GRANULE);
- tcg_rt = cpu_reg(s, rt);
- if (s->ata) {
- gen_helper_ldg(tcg_rt, cpu_env, addr, tcg_rt);
+ addr = read_cpu_reg_sp(s, a->rn, true);
+ if (!a->p) {
+ /* pre-index or signed offset */
+ tcg_gen_addi_i64(addr, addr, a->imm);
+ }
+ tcg_rt = cpu_reg_sp(s, a->rt);
+ if (!s->ata) {
+ /*
+ * For STG and ST2G, we need to check alignment and probe memory.
+ * TODO: For STZG and STZ2G, we could rely on the stores below,
+ * at least for system mode; user-only won't enforce alignment.
+ */
+ if (is_pair) {
+ gen_helper_st2g_stub(cpu_env, addr);
} else {
- /*
- * Tag access disabled: we must check for aborts on the load
- * load from [rn+offset], and then insert a 0 tag into rt.
- */
- clean_addr = clean_data_tbi(s, addr);
- gen_probe_access(s, clean_addr, MMU_DATA_LOAD, MO_8);
- gen_address_with_allocation_tag0(tcg_rt, tcg_rt);
+ gen_helper_stg_stub(cpu_env, addr);
+ }
+ } else if (tb_cflags(s->base.tb) & CF_PARALLEL) {
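+ /* Other vCPUs may run concurrently: use the atomic helpers. */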
+ if (is_pair) {
+ gen_helper_st2g_parallel(cpu_env, addr, tcg_rt);
+ } else {
+ gen_helper_stg_parallel(cpu_env, addr, tcg_rt);
}
} else {
- tcg_rt = cpu_reg_sp(s, rt);
- if (!s->ata) {
- /*
- * For STG and ST2G, we need to check alignment and probe memory.
- * TODO: For STZG and STZ2G, we could rely on the stores below,
- * at least for system mode; user-only won't enforce alignment.
- */
- if (is_pair) {
- gen_helper_st2g_stub(cpu_env, addr);
- } else {
- gen_helper_stg_stub(cpu_env, addr);
- }
- } else if (tb_cflags(s->base.tb) & CF_PARALLEL) {
- if (is_pair) {
- gen_helper_st2g_parallel(cpu_env, addr, tcg_rt);
- } else {
- gen_helper_stg_parallel(cpu_env, addr, tcg_rt);
- }
+ if (is_pair) {
+ gen_helper_st2g(cpu_env, addr, tcg_rt);
} else {
- if (is_pair) {
- gen_helper_st2g(cpu_env, addr, tcg_rt);
- } else {
- gen_helper_stg(cpu_env, addr, tcg_rt);
- }
+ gen_helper_stg(cpu_env, addr, tcg_rt);
}
}
@@ -3911,32 +3916,21 @@ static void disas_ldst_tag(DisasContext *s, uint32_t insn)
}
}
- if (index != 0) {
+ if (a->w) {
/* pre-index or post-index */
- if (index < 0) {
+ if (a->p) {
/* post-index */
- tcg_gen_addi_i64(addr, addr, offset);
+ tcg_gen_addi_i64(addr, addr, a->imm);
}
- tcg_gen_mov_i64(cpu_reg_sp(s, rn), addr);
+ tcg_gen_mov_i64(cpu_reg_sp(s, a->rn), addr);
}
+ return true;
}
-/* Loads and stores */
-static void disas_ldst(DisasContext *s, uint32_t insn)
-{
- switch (extract32(insn, 24, 6)) {
- case 0x19:
- if (extract32(insn, 21, 1) != 0) {
- disas_ldst_tag(s, insn);
- } else {
- unallocated_encoding(s);
- }
- break;
- default:
- unallocated_encoding(s);
- break;
- }
-}
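+/* The two bool arguments select the zeroing (STZ*) and pair (*2G) variants. */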
+TRANS_FEAT(STG, aa64_mte_insn_reg, do_STG, a, false, false)
+TRANS_FEAT(STZG, aa64_mte_insn_reg, do_STG, a, true, false)
+TRANS_FEAT(ST2G, aa64_mte_insn_reg, do_STG, a, false, true)
+TRANS_FEAT(STZ2G, aa64_mte_insn_reg, do_STG, a, true, true)
typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);
@@ -13832,12 +13826,6 @@ static bool btype_destination_ok(uint32_t insn, bool bt, int btype)
static void disas_a64_legacy(DisasContext *s, uint32_t insn)
{
switch (extract32(insn, 25, 4)) {
- case 0x4:
- case 0x6:
- case 0xc:
- case 0xe: /* Loads and stores */
- disas_ldst(s, insn);
- break;
case 0x5:
case 0xd: /* Data processing - register */
disas_data_proc_reg(s, insn);
--
2.34.1