* [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1)
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm
In the process, convert more code to gvec as well -- I will need
the gvec code for implementing SME2. I guess this is about 1/3
of the job done, but there's no reason to wait until the patch
set is completely unwieldy.
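For anyone not following along: each conversion replaces hand-written
field extraction in the disas_simd_* functions with a pattern in
a64.decode plus a small trans_* hook, which for vector ops usually
reduces to a gvec expander. A rough sketch of the shape (illustrative
names only, not lifted from this series):

    static bool trans_ADD_v(DisasContext *s, arg_qrrr_e *a)
    {
        if (fp_access_check(s)) {
            /* q selects 64 vs 128 bits, esz the element size. */
            gen_gvec_fn3(s, a->q, a->rd, a->rn, a->rm,
                         tcg_gen_gvec_add, a->esz);
        }
        return true;
    }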
Changes for v2:
* Fix existing RISU failures vs neoverse-n1.
* Introduce vfp_load_reg16, fixing a regression wrt VNEG (scalar, hp).
* Fix typo in SUQADD vectorization.
* Two more conversions.
r~
Richard Henderson (67):
target/arm: Add neoverse-n1 to qemu-arm (DO NOT MERGE)
target/arm: Use PLD, PLDW, PLI not NOP for t32
target/arm: Reject incorrect operands to PLD, PLDW, PLI
target/arm: Zero-extend writeback for fp16 FCVTZS (scalar, integer)
target/arm: Fix decode of FMOV (hp) vs MOVI
target/arm: Verify sz=0 for Advanced SIMD scalar pairwise (fp16)
target/arm: Split out gengvec.c
target/arm: Split out gengvec64.c
target/arm: Convert Cryptographic AES to decodetree
target/arm: Convert Cryptographic 3-register SHA to decodetree
target/arm: Convert Cryptographic 2-register SHA to decodetree
target/arm: Convert Cryptographic 3-register SHA512 to decodetree
target/arm: Convert Cryptographic 2-register SHA512 to decodetree
target/arm: Convert Cryptographic 4-register to decodetree
target/arm: Convert Cryptographic 3-register, imm2 to decodetree
target/arm: Convert XAR to decodetree
target/arm: Convert Advanced SIMD copy to decodetree
target/arm: Convert FMULX to decodetree
target/arm: Convert FADD, FSUB, FDIV, FMUL to decodetree
target/arm: Convert FMAX, FMIN, FMAXNM, FMINNM to decodetree
target/arm: Introduce vfp_load_reg16
target/arm: Expand vfp neg and abs inline
target/arm: Convert FNMUL to decodetree
target/arm: Convert FMLA, FMLS to decodetree
target/arm: Convert FCMEQ, FCMGE, FCMGT, FACGE, FACGT to decodetree
target/arm: Convert FABD to decodetree
target/arm: Convert FRECPS, FRSQRTS to decodetree
target/arm: Convert FADDP to decodetree
target/arm: Convert FMAXP, FMINP, FMAXNMP, FMINNMP to decodetree
target/arm: Use gvec for neon faddp, fmaxp, fminp
target/arm: Convert ADDP to decodetree
target/arm: Use gvec for neon padd
target/arm: Convert SMAXP, SMINP, UMAXP, UMINP to decodetree
target/arm: Use gvec for neon pmax, pmin
target/arm: Convert FMLAL, FMLSL to decodetree
target/arm: Convert disas_simd_3same_logic to decodetree
target/arm: Improve vector UQADD, UQSUB, SQADD, SQSUB
target/arm: Convert SUQADD and USQADD to gvec
target/arm: Inline scalar SUQADD and USQADD
target/arm: Inline scalar SQADD, UQADD, SQSUB, UQSUB
target/arm: Convert SQADD, SQSUB, UQADD, UQSUB to decodetree
target/arm: Convert SUQADD, USQADD to decodetree
target/arm: Convert SSHL, USHL to decodetree
target/arm: Convert SRSHL and URSHL (register) to gvec
target/arm: Convert SRSHL, URSHL to decodetree
target/arm: Convert SQSHL and UQSHL (register) to gvec
target/arm: Convert SQSHL, UQSHL to decodetree
target/arm: Convert SQRSHL and UQRSHL (register) to gvec
target/arm: Convert SQRSHL, UQRSHL to decodetree
target/arm: Convert ADD, SUB (vector) to decodetree
target/arm: Convert CMGT, CMHI, CMGE, CMHS, CMTST, CMEQ to decodetree
target/arm: Use TCG_COND_TSTNE in gen_cmtst_{i32,i64}
target/arm: Use TCG_COND_TSTNE in gen_cmtst_vec
target/arm: Convert SHADD, UHADD to gvec
target/arm: Convert SHADD, UHADD to decodetree
target/arm: Convert SHSUB, UHSUB to gvec
target/arm: Convert SHSUB, UHSUB to decodetree
target/arm: Convert SRHADD, URHADD to gvec
target/arm: Convert SRHADD, URHADD to decodetree
target/arm: Convert SMAX, SMIN, UMAX, UMIN to decodetree
target/arm: Convert SABA, SABD, UABA, UABD to decodetree
target/arm: Convert MUL, PMUL to decodetree
target/arm: Convert MLA, MLS to decodetree
target/arm: Tidy SQDMULH, SQRDMULH (vector)
target/arm: Convert SQDMULH, SQRDMULH to decodetree
target/arm: Convert FMADD, FMSUB, FNMADD, FNMSUB to decodetree
target/arm: Convert FCSEL to decodetree
target/arm/helper.h | 164 +-
target/arm/tcg/helper-a64.h | 12 +
target/arm/tcg/translate-a64.h | 18 +
target/arm/tcg/translate.h | 95 +
target/arm/tcg/a32-uncond.decode | 8 +-
target/arm/tcg/a64.decode | 430 ++-
target/arm/tcg/neon-dp.decode | 37 +-
target/arm/tcg/t32.decode | 26 +-
target/arm/tcg/cpu32.c | 73 +
target/arm/tcg/gengvec.c | 2306 ++++++++++++++++
target/arm/tcg/gengvec64.c | 367 +++
target/arm/tcg/neon_helper.c | 511 +---
target/arm/tcg/translate-a64.c | 4440 ++++++++++--------------------
target/arm/tcg/translate-neon.c | 254 +-
target/arm/tcg/translate-sve.c | 145 +-
target/arm/tcg/translate-vfp.c | 93 +-
target/arm/tcg/translate.c | 1649 +----------
target/arm/tcg/vec_helper.c | 349 ++-
target/arm/vfp_helper.c | 30 -
target/arm/tcg/meson.build | 2 +
20 files changed, 5446 insertions(+), 5563 deletions(-)
create mode 100644 target/arm/tcg/gengvec.c
create mode 100644 target/arm/tcg/gengvec64.c
--
2.34.1
* [PATCH v2 01/67] target/arm: Add neoverse-n1 to qemu-arm (DO NOT MERGE)
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm
Hack, because there should be a better way to do this without
duplicating code between cpu32.c and cpu64.c. Hack, because
qemu-arm crashes unless ARM_FEATURE_AARCH64 is left disabled.
Needed in order to compare RISU results with aarch64.ci.qemu.org.
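(With this applied the model can be selected in the usual way for a
user-mode run, e.g. "qemu-arm -cpu neoverse-n1 ./test" -- invocation
shown only as an illustration.)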
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/cpu32.c | 73 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 73 insertions(+)
diff --git a/target/arm/tcg/cpu32.c b/target/arm/tcg/cpu32.c
index bdd82d912a..6ee055c78b 100644
--- a/target/arm/tcg/cpu32.c
+++ b/target/arm/tcg/cpu32.c
@@ -978,6 +978,78 @@ static void arm_max_initfn(Object *obj)
}
#endif /* !TARGET_AARCH64 */
+#ifdef CONFIG_USER_ONLY
+static void aarch64_neoverse_n1_initfn(Object *obj)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+
+ cpu->dtb_compatible = "arm,neoverse-n1";
+ set_feature(&cpu->env, ARM_FEATURE_V8);
+ set_feature(&cpu->env, ARM_FEATURE_NEON);
+ set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+ set_feature(&cpu->env, ARM_FEATURE_BACKCOMPAT_CNTFRQ);
+ // set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+ set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
+ set_feature(&cpu->env, ARM_FEATURE_EL3);
+ set_feature(&cpu->env, ARM_FEATURE_PMU);
+
+ /* Ordered by B2.4 AArch64 registers by functional group */
+ cpu->clidr = 0x82000023;
+ cpu->ctr = 0x8444c004;
+ cpu->dcz_blocksize = 4;
+ cpu->isar.id_aa64dfr0 = 0x0000000110305408ull;
+ cpu->isar.id_aa64isar0 = 0x0000100010211120ull;
+ cpu->isar.id_aa64isar1 = 0x0000000000100001ull;
+ cpu->isar.id_aa64mmfr0 = 0x0000000000101125ull;
+ cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
+ cpu->isar.id_aa64mmfr2 = 0x0000000000001011ull;
+ cpu->isar.id_aa64pfr0 = 0x1100000010111112ull; /* GIC filled in later */
+ cpu->isar.id_aa64pfr1 = 0x0000000000000020ull;
+ cpu->id_afr0 = 0x00000000;
+ cpu->isar.id_dfr0 = 0x04010088;
+ cpu->isar.id_isar0 = 0x02101110;
+ cpu->isar.id_isar1 = 0x13112111;
+ cpu->isar.id_isar2 = 0x21232042;
+ cpu->isar.id_isar3 = 0x01112131;
+ cpu->isar.id_isar4 = 0x00010142;
+ cpu->isar.id_isar5 = 0x01011121;
+ cpu->isar.id_isar6 = 0x00000010;
+ cpu->isar.id_mmfr0 = 0x10201105;
+ cpu->isar.id_mmfr1 = 0x40000000;
+ cpu->isar.id_mmfr2 = 0x01260000;
+ cpu->isar.id_mmfr3 = 0x02122211;
+ cpu->isar.id_mmfr4 = 0x00021110;
+ cpu->isar.id_pfr0 = 0x10010131;
+ cpu->isar.id_pfr1 = 0x00010000; /* GIC filled in later */
+ cpu->isar.id_pfr2 = 0x00000011;
+ cpu->midr = 0x414fd0c1; /* r4p1 */
+ cpu->revidr = 0;
+
+ /* From B2.23 CCSIDR_EL1 */
+ cpu->ccsidr[0] = 0x701fe01a; /* 64KB L1 dcache */
+ cpu->ccsidr[1] = 0x201fe01a; /* 64KB L1 icache */
+ cpu->ccsidr[2] = 0x70ffe03a; /* 1MB L2 cache */
+
+ /* From B2.98 SCTLR_EL3 */
+ cpu->reset_sctlr = 0x30c50838;
+
+ /* From B4.23 ICH_VTR_EL2 */
+ cpu->gic_num_lrs = 4;
+ cpu->gic_vpribits = 5;
+ cpu->gic_vprebits = 5;
+ cpu->gic_pribits = 5;
+
+ /* From B5.1 AdvSIMD AArch64 register summary */
+ cpu->isar.mvfr0 = 0x10110222;
+ cpu->isar.mvfr1 = 0x13211111;
+ cpu->isar.mvfr2 = 0x00000043;
+
+ /* From D5.1 AArch64 PMU register summary */
+ cpu->isar.reset_pmcr_el0 = 0x410c3000;
+}
+#endif /* CONFIG_USER_ONLY */
+
static const ARMCPUInfo arm_tcg_cpus[] = {
{ .name = "arm926", .initfn = arm926_initfn },
{ .name = "arm946", .initfn = arm946_initfn },
@@ -1018,6 +1090,7 @@ static const ARMCPUInfo arm_tcg_cpus[] = {
{ .name = "max", .initfn = arm_max_initfn },
#endif
#ifdef CONFIG_USER_ONLY
+ { .name = "neoverse-n1", .initfn = aarch64_neoverse_n1_initfn },
{ .name = "any", .initfn = arm_max_initfn },
#endif
};
--
2.34.1
* [PATCH v2 02/67] target/arm: Use PLD, PLDW, PLI not NOP for t32
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm
This fixes a bug: neither PLI nor PLDW is present in ARMv6T2;
they are introduced with ARMv7 and ARMv7MP respectively.
For clarity, do not use NOP for PLD either.
Note that there is no PLDW (literal) -- bit 5 of the first word
is not decoded, and the encoding is PLD (literal) regardless.
Confirmed on a neoverse-n1 host, which does *not* trap on the (0)
bit in the decode.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/t32.decode | 25 ++++++++++++-------------
target/arm/tcg/translate.c | 4 ++--
2 files changed, 14 insertions(+), 15 deletions(-)
diff --git a/target/arm/tcg/t32.decode b/target/arm/tcg/t32.decode
index f21ad0167a..d327178829 100644
--- a/target/arm/tcg/t32.decode
+++ b/target/arm/tcg/t32.decode
@@ -458,41 +458,41 @@ STR_ri 1111 1000 1100 .... .... ............ @ldst_ri_pos
# Note that Load, unsigned (literal) overlaps all other load encodings.
{
{
- NOP 1111 1000 -001 1111 1111 ------------ # PLD
+ PLD 1111 1000 -001 1111 1111 ------------ # (literal)
LDRB_ri 1111 1000 .001 1111 .... ............ @ldst_ri_lit
}
{
- NOP 1111 1000 1001 ---- 1111 ------------ # PLD
+ PLD 1111 1000 1001 ---- 1111 ------------ # (immediate T1)
LDRB_ri 1111 1000 1001 .... .... ............ @ldst_ri_pos
}
LDRB_ri 1111 1000 0001 .... .... 1..1 ........ @ldst_ri_idx
{
- NOP 1111 1000 0001 ---- 1111 1100 -------- # PLD
+ PLD 1111 1000 0001 ---- 1111 1100 -------- # (immediate T2)
LDRB_ri 1111 1000 0001 .... .... 1100 ........ @ldst_ri_neg
}
LDRBT_ri 1111 1000 0001 .... .... 1110 ........ @ldst_ri_unp
{
- NOP 1111 1000 0001 ---- 1111 000000 -- ---- # PLD
+ PLD 1111 1000 0001 ---- 1111 000000 -- ---- # (register)
LDRB_rr 1111 1000 0001 .... .... 000000 .. .... @ldst_rr
}
}
{
{
- NOP 1111 1000 -011 1111 1111 ------------ # PLD
+ PLD 1111 1000 -011 1111 1111 ------------ # (literal)
LDRH_ri 1111 1000 .011 1111 .... ............ @ldst_ri_lit
}
{
- NOP 1111 1000 1011 ---- 1111 ------------ # PLDW
+ PLDW 1111 1000 1011 ---- 1111 ------------ # (immediate T1)
LDRH_ri 1111 1000 1011 .... .... ............ @ldst_ri_pos
}
LDRH_ri 1111 1000 0011 .... .... 1..1 ........ @ldst_ri_idx
{
- NOP 1111 1000 0011 ---- 1111 1100 -------- # PLDW
+ PLDW 1111 1000 0011 ---- 1111 1100 -------- # (immediate T2)
LDRH_ri 1111 1000 0011 .... .... 1100 ........ @ldst_ri_neg
}
LDRHT_ri 1111 1000 0011 .... .... 1110 ........ @ldst_ri_unp
{
- NOP 1111 1000 0011 ---- 1111 000000 -- ---- # PLDW
+ PLDW 1111 1000 0011 ---- 1111 000000 -- ---- # (register)
LDRH_rr 1111 1000 0011 .... .... 000000 .. .... @ldst_rr
}
}
@@ -504,24 +504,23 @@ STR_ri 1111 1000 1100 .... .... ............ @ldst_ri_pos
LDRT_ri 1111 1000 0101 .... .... 1110 ........ @ldst_ri_unp
LDR_rr 1111 1000 0101 .... .... 000000 .. .... @ldst_rr
}
-# NOPs here are PLI.
{
{
- NOP 1111 1001 -001 1111 1111 ------------
+ PLI 1111 1001 -001 1111 1111 ------------ # (literal T3)
LDRSB_ri 1111 1001 .001 1111 .... ............ @ldst_ri_lit
}
{
- NOP 1111 1001 1001 ---- 1111 ------------
+ PLI 1111 1001 1001 ---- 1111 ------------ # (immediate T1)
LDRSB_ri 1111 1001 1001 .... .... ............ @ldst_ri_pos
}
LDRSB_ri 1111 1001 0001 .... .... 1..1 ........ @ldst_ri_idx
{
- NOP 1111 1001 0001 ---- 1111 1100 --------
+ PLI 1111 1001 0001 ---- 1111 1100 -------- # (immediate T2)
LDRSB_ri 1111 1001 0001 .... .... 1100 ........ @ldst_ri_neg
}
LDRSBT_ri 1111 1001 0001 .... .... 1110 ........ @ldst_ri_unp
{
- NOP 1111 1001 0001 ---- 1111 000000 -- ----
+ PLI 1111 1001 0001 ---- 1111 000000 -- ---- # (register)
LDRSB_rr 1111 1001 0001 .... .... 000000 .. .... @ldst_rr
}
}
diff --git a/target/arm/tcg/translate.c b/target/arm/tcg/translate.c
index d605e10f11..187eacffd9 100644
--- a/target/arm/tcg/translate.c
+++ b/target/arm/tcg/translate.c
@@ -8765,12 +8765,12 @@ static bool trans_PLD(DisasContext *s, arg_PLD *a)
return ENABLE_ARCH_5TE;
}
-static bool trans_PLDW(DisasContext *s, arg_PLD *a)
+static bool trans_PLDW(DisasContext *s, arg_PLDW *a)
{
return arm_dc_feature(s, ARM_FEATURE_V7MP);
}
-static bool trans_PLI(DisasContext *s, arg_PLD *a)
+static bool trans_PLI(DisasContext *s, arg_PLI *a)
{
return ENABLE_ARCH_7;
}
--
2.34.1
* [PATCH v2 03/67] target/arm: Reject incorrect operands to PLD, PLDW, PLI
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm
For all three insns, rm == 15 is invalid.
Prior to v8, the thumb encodings with rm == 13 are invalid.
For PLDW, rn == 15 is invalid.
Fixes a RISU mismatch for the HINTSPACE pattern in t32.risu
compared to a neoverse-n1 host.
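As a reminder of why these call unallocated_encoding() instead of
returning false (see the comments in the new trans_* functions):
decodetree treats a false return as "pattern did not match" and falls
through to the other member of the overlap group. Roughly the shape
of the generated dispatch (illustrative, not the actual generated
code):

    /* Overlap group: try PLD_rr first; on false, decode as LDRB_rr. */
    if (trans_PLD_rr(s, &u.f_pld_rr)) {
        return true;    /* handled, possibly by raising UNDEF */
    }
    return trans_LDRB_rr(s, &u.f_ldst_rr);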
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a32-uncond.decode | 8 +++--
target/arm/tcg/t32.decode | 7 ++--
target/arm/tcg/translate.c | 57 ++++++++++++++++++++++++++++++++
3 files changed, 66 insertions(+), 6 deletions(-)
diff --git a/target/arm/tcg/a32-uncond.decode b/target/arm/tcg/a32-uncond.decode
index 2339de2e94..e1b1780d37 100644
--- a/target/arm/tcg/a32-uncond.decode
+++ b/target/arm/tcg/a32-uncond.decode
@@ -24,7 +24,9 @@
&empty !extern
&i !extern imm
+&r !extern rm
&setend E
+&nm rn rm
# Branch with Link and Exchange
@@ -61,9 +63,9 @@ PLD 1111 0101 -101 ---- 1111 ---- ---- ---- # (imm, lit) 5te
PLDW 1111 0101 -001 ---- 1111 ---- ---- ---- # (imm, lit) 7mp
PLI 1111 0100 -101 ---- 1111 ---- ---- ---- # (imm, lit) 7
-PLD 1111 0111 -101 ---- 1111 ----- -- 0 ---- # (register) 5te
-PLDW 1111 0111 -001 ---- 1111 ----- -- 0 ---- # (register) 7mp
-PLI 1111 0110 -101 ---- 1111 ----- -- 0 ---- # (register) 7
+PLD_rr 1111 0111 -101 ---- 1111 ----- -- 0 rm:4 &r
+PLDW_rr 1111 0111 -001 rn:4 1111 ----- -- 0 rm:4 &nm
+PLI_rr 1111 0110 -101 ---- 1111 ----- -- 0 rm:4 &r
# Unallocated memory hints
#
diff --git a/target/arm/tcg/t32.decode b/target/arm/tcg/t32.decode
index d327178829..1ec12442a4 100644
--- a/target/arm/tcg/t32.decode
+++ b/target/arm/tcg/t32.decode
@@ -28,6 +28,7 @@
&rrr_rot !extern rd rn rm rot
&rrr !extern rd rn rm
&rr !extern rd rm
+&nm !extern rn rm
&ri !extern rd imm
&r !extern rm
&i !extern imm
@@ -472,7 +473,7 @@ STR_ri 1111 1000 1100 .... .... ............ @ldst_ri_pos
}
LDRBT_ri 1111 1000 0001 .... .... 1110 ........ @ldst_ri_unp
{
- PLD 1111 1000 0001 ---- 1111 000000 -- ---- # (register)
+ PLD_rr 1111 1000 0001 ---- 1111 000000 -- rm:4 &r
LDRB_rr 1111 1000 0001 .... .... 000000 .. .... @ldst_rr
}
}
@@ -492,7 +493,7 @@ STR_ri 1111 1000 1100 .... .... ............ @ldst_ri_pos
}
LDRHT_ri 1111 1000 0011 .... .... 1110 ........ @ldst_ri_unp
{
- PLDW 1111 1000 0011 ---- 1111 000000 -- ---- # (register)
+ PLDW_rr 1111 1000 0011 rn:4 1111 000000 -- rm:4 &nm
LDRH_rr 1111 1000 0011 .... .... 000000 .. .... @ldst_rr
}
}
@@ -520,7 +521,7 @@ STR_ri 1111 1000 1100 .... .... ............ @ldst_ri_pos
}
LDRSBT_ri 1111 1001 0001 .... .... 1110 ........ @ldst_ri_unp
{
- PLI 1111 1001 0001 ---- 1111 000000 -- ---- # (register)
+ PLI_rr 1111 1001 0001 ---- 1111 000000 -- rm:4 &r
LDRSB_rr 1111 1001 0001 .... .... 000000 .. .... @ldst_rr
}
}
diff --git a/target/arm/tcg/translate.c b/target/arm/tcg/translate.c
index 187eacffd9..7c09153b6e 100644
--- a/target/arm/tcg/translate.c
+++ b/target/arm/tcg/translate.c
@@ -8775,6 +8775,63 @@ static bool trans_PLI(DisasContext *s, arg_PLI *a)
return ENABLE_ARCH_7;
}
+static bool prefetch_check_m(DisasContext *s, int rm)
+{
+ switch (rm) {
+ case 13:
+ /* SP allowed in v8 or with A1 encoding; rejected with T1. */
+ return ENABLE_ARCH_8 || !s->thumb;
+ case 15:
+ /* PC always rejected. */
+ return false;
+ default:
+ return true;
+ }
+}
+
+static bool trans_PLD_rr(DisasContext *s, arg_PLD_rr *a)
+{
+ if (!ENABLE_ARCH_5TE) {
+ return false;
+ }
+ /* We cannot return false, because that leads to LDRB for thumb. */
+ if (!prefetch_check_m(s, a->rm)) {
+ unallocated_encoding(s);
+ }
+ return true;
+}
+
+static bool trans_PLDW_rr(DisasContext *s, arg_PLDW_rr *a)
+{
+ if (!arm_dc_feature(s, ARM_FEATURE_V7MP)) {
+ return false;
+ }
+ /*
+ * For A1, rn == 15 is UNPREDICTABLE.
+ * For T1, rn == 15 is PLD (literal).
+ */
+ if (a->rn == 15) {
+ return false;
+ }
+ /* We cannot return false, because that leads to LDRH for thumb. */
+ if (!prefetch_check_m(s, a->rm)) {
+ unallocated_encoding(s);
+ }
+ return true;
+}
+
+static bool trans_PLI_rr(DisasContext *s, arg_PLI_rr *a)
+{
+ if (!ENABLE_ARCH_7) {
+ return false;
+ }
+ /* We cannot return false, because that leads to LDRSB for thumb. */
+ if (!prefetch_check_m(s, a->rm)) {
+ unallocated_encoding(s);
+ }
+ return true;
+}
+
/*
* If-then
*/
--
2.34.1
* [PATCH v2 04/67] target/arm: Zero-extend writeback for fp16 FCVTZS (scalar, integer)
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm
Fixes RISU mismatch for "fcvtzs h31, h0, #14": the signed conversion
helper leaves a sign-extended 16-bit result in the 32-bit temporary,
which must be zero-extended before write_fp_sreg stores all 32 bits.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/translate-a64.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 4126aaa27e..d97acdbaf9 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -8707,6 +8707,9 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
read_vec_element_i32(s, tcg_op, rn, pass, size);
fn(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
if (is_scalar) {
+ if (size == MO_16 && !is_u) {
+ tcg_gen_ext16u_i32(tcg_op, tcg_op);
+ }
write_fp_sreg(s, rd, tcg_op);
} else {
write_vec_element_i32(s, tcg_op, rd, pass, size);
--
2.34.1
* [PATCH v2 05/67] target/arm: Fix decode of FMOV (hp) vs MOVI
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm
The decode of FMOV (vector, immediate, half-precision) vs the
invalid cases of MOVI is incorrect.
Fixes RISU mismatch for invalid insn 0x2f01fd31.
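For reference, the carve-up implemented below (summarised from the
new code, for ease of review):

    o2 cmode is_neg q : insn
    1  1111  0      - : FMOV (vector, imm, fp16); UNDEF without FEAT_FP16
    1  1111  1      - : UNDEF
    1  other -      - : UNDEF
    0  1111  1      0 : UNDEF
    0  all remaining  : MOVI/MVNI/ORR/BIC, or FMOV (single/double)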
Fixes: 70b4e6a4457 ("arm/translate-a64: add FP16 FMOV to simd_mod_imm")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/translate-a64.c | 24 ++++++++++++++----------
1 file changed, 14 insertions(+), 10 deletions(-)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index d97acdbaf9..5455ae3685 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -7904,27 +7904,31 @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
bool is_q = extract32(insn, 30, 1);
uint64_t imm = 0;
- if (o2 != 0 || ((cmode == 0xf) && is_neg && !is_q)) {
- /* Check for FMOV (vector, immediate) - half-precision */
- if (!(dc_isar_feature(aa64_fp16, s) && o2 && cmode == 0xf)) {
+ if (o2) {
+ if (cmode != 0xf || is_neg) {
unallocated_encoding(s);
return;
}
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- if (cmode == 15 && o2 && !is_neg) {
/* FMOV (vector, immediate) - half-precision */
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ unallocated_encoding(s);
+ return;
+ }
imm = vfp_expand_imm(MO_16, abcdefgh);
/* now duplicate across the lanes */
imm = dup_const(MO_16, imm);
} else {
+ if (cmode == 0xf && is_neg && !is_q) {
+ unallocated_encoding(s);
+ return;
+ }
imm = asimd_imm_const(abcdefgh, cmode, is_neg);
}
+ if (!fp_access_check(s)) {
+ return;
+ }
+
if (!((cmode & 0x9) == 0x1 || (cmode & 0xd) == 0x9)) {
/* MOVI or MVNI, with MVNI negation handled above. */
tcg_gen_gvec_dup_imm(MO_64, vec_full_reg_offset(s, rd), is_q ? 16 : 8,
--
2.34.1
* [PATCH v2 06/67] target/arm: Verify sz=0 for Advanced SIMD scalar pairwise (fp16)
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm
All of these insns have "if sz == '1' then UNDEFINED" in their pseudocode.
Fixes a RISU miscompare for invalid insn 0x5ef0c87a.
Fixes: 5c36d89567c ("arm/translate-a64: add all FP16 ops in simd_scalar_pairwise")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/translate-a64.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 5455ae3685..0bdddb8517 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -8006,7 +8006,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
case 0x2f: /* FMINP */
/* FP op, size[0] is 32 or 64 bit*/
if (!u) {
- if (!dc_isar_feature(aa64_fp16, s)) {
+ if ((size & 1) || !dc_isar_feature(aa64_fp16, s)) {
unallocated_encoding(s);
return;
} else {
--
2.34.1
* [PATCH v2 07/67] target/arm: Split out gengvec.c
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm, Peter Maydell, Philippe Mathieu-Daudé
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/translate.h | 5 +
target/arm/tcg/gengvec.c | 1612 ++++++++++++++++++++++++++++++++++++
target/arm/tcg/translate.c | 1588 -----------------------------------
target/arm/tcg/meson.build | 1 +
4 files changed, 1618 insertions(+), 1588 deletions(-)
create mode 100644 target/arm/tcg/gengvec.c
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index dc66ff2190..80e85096a8 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -445,6 +445,11 @@ void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
int64_t shift, uint32_t opr_sz, uint32_t max_sz);
+void gen_srshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh);
+void gen_srshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh);
+void gen_urshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh);
+void gen_urshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh);
+
void gen_gvec_srshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
int64_t shift, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_urshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
new file mode 100644
index 0000000000..7a1856253f
--- /dev/null
+++ b/target/arm/tcg/gengvec.c
@@ -0,0 +1,1612 @@
+/*
+ * ARM generic vector expansion
+ *
+ * Copyright (c) 2003 Fabrice Bellard
+ * Copyright (c) 2005-2007 CodeSourcery
+ * Copyright (c) 2007 OpenedHand, Ltd.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "translate.h"
+
+
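+/*
+ * Expand a three-operand operation via a helper that also takes a
+ * pointer to vfp.qc, so that it can update the saturation flag.
+ */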
+static void gen_gvec_fn3_qc(uint32_t rd_ofs, uint32_t rn_ofs, uint32_t rm_ofs,
+ uint32_t opr_sz, uint32_t max_sz,
+ gen_helper_gvec_3_ptr *fn)
+{
+ TCGv_ptr qc_ptr = tcg_temp_new_ptr();
+
+ tcg_gen_addi_ptr(qc_ptr, tcg_env, offsetof(CPUARMState, vfp.qc));
+ tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, qc_ptr,
+ opr_sz, max_sz, 0, fn);
+}
+
+void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3_ptr * const fns[2] = {
+ gen_helper_gvec_qrdmlah_s16, gen_helper_gvec_qrdmlah_s32
+ };
+ tcg_debug_assert(vece >= 1 && vece <= 2);
+ gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
+}
+
+void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3_ptr * const fns[2] = {
+ gen_helper_gvec_qrdmlsh_s16, gen_helper_gvec_qrdmlsh_s32
+ };
+ tcg_debug_assert(vece >= 1 && vece <= 2);
+ gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
+}
+
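+/*
+ * Compare-against-zero expanders: each element of D is set to all-ones
+ * if (M <cond> 0) is true, else to all-zeros.
+ */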
+#define GEN_CMP0(NAME, COND) \
+ void NAME(unsigned vece, uint32_t d, uint32_t m, \
+ uint32_t opr_sz, uint32_t max_sz) \
+ { tcg_gen_gvec_cmpi(COND, vece, d, m, 0, opr_sz, max_sz); }
+
+GEN_CMP0(gen_gvec_ceq0, TCG_COND_EQ)
+GEN_CMP0(gen_gvec_cle0, TCG_COND_LE)
+GEN_CMP0(gen_gvec_cge0, TCG_COND_GE)
+GEN_CMP0(gen_gvec_clt0, TCG_COND_LT)
+GEN_CMP0(gen_gvec_cgt0, TCG_COND_GT)
+
+#undef GEN_CMP0
+
+static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_vec_sar8i_i64(a, a, shift);
+ tcg_gen_vec_add8_i64(d, d, a);
+}
+
+static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_vec_sar16i_i64(a, a, shift);
+ tcg_gen_vec_add16_i64(d, d, a);
+}
+
+static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+ tcg_gen_sari_i32(a, a, shift);
+ tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_sari_i64(a, a, shift);
+ tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ tcg_gen_sari_vec(vece, a, a, sh);
+ tcg_gen_add_vec(vece, d, d, a);
+}
+
+void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sari_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen2i ops[4] = {
+ { .fni8 = gen_ssra8_i64,
+ .fniv = gen_ssra_vec,
+ .fno = gen_helper_gvec_ssra_b,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_ssra16_i64,
+ .fniv = gen_ssra_vec,
+ .fno = gen_helper_gvec_ssra_h,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_ssra32_i32,
+ .fniv = gen_ssra_vec,
+ .fno = gen_helper_gvec_ssra_s,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_ssra64_i64,
+ .fniv = gen_ssra_vec,
+ .fno = gen_helper_gvec_ssra_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize]. */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ /*
+ * Shifts larger than the element size are architecturally valid.
+ * Signed results in all sign bits.
+ */
+ shift = MIN(shift, (8 << vece) - 1);
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+}
+
+static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_vec_shr8i_i64(a, a, shift);
+ tcg_gen_vec_add8_i64(d, d, a);
+}
+
+static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_vec_shr16i_i64(a, a, shift);
+ tcg_gen_vec_add16_i64(d, d, a);
+}
+
+static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+ tcg_gen_shri_i32(a, a, shift);
+ tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_shri_i64(a, a, shift);
+ tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ tcg_gen_shri_vec(vece, a, a, sh);
+ tcg_gen_add_vec(vece, d, d, a);
+}
+
+void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen2i ops[4] = {
+ { .fni8 = gen_usra8_i64,
+ .fniv = gen_usra_vec,
+ .fno = gen_helper_gvec_usra_b,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8, },
+ { .fni8 = gen_usra16_i64,
+ .fniv = gen_usra_vec,
+ .fno = gen_helper_gvec_usra_h,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16, },
+ { .fni4 = gen_usra32_i32,
+ .fniv = gen_usra_vec,
+ .fno = gen_helper_gvec_usra_s,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32, },
+ { .fni8 = gen_usra64_i64,
+ .fniv = gen_usra_vec,
+ .fno = gen_helper_gvec_usra_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_64, },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize]. */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ /*
+ * Shifts larger than the element size are architecturally valid.
+ * Unsigned results in all zeros as input to accumulate: nop.
+ */
+ if (shift < (8 << vece)) {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+ } else {
+ /* Nop, but we do need to clear the tail. */
+ tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
+ }
+}
+
+/*
+ * Shift one less than the requested amount, and the low bit is
+ * the rounding bit. For the 8 and 16-bit operations, because we
+ * mask the low bit, we can perform a normal integer shift instead
+ * of a vector shift.
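+ *
+ * For example, SRSHR by 2 of 6 (0b0110): the rounding bit is
+ * (6 >> 1) & 1 = 1, and (6 >> 2) + 1 = 2, i.e. 1.5 rounds up to 2.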
+ */
+static void gen_srshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shri_i64(t, a, sh - 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
+ tcg_gen_vec_sar8i_i64(d, a, sh);
+ tcg_gen_vec_add8_i64(d, d, t);
+}
+
+static void gen_srshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shri_i64(t, a, sh - 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
+ tcg_gen_vec_sar16i_i64(d, a, sh);
+ tcg_gen_vec_add16_i64(d, d, t);
+}
+
+void gen_srshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
+{
+ TCGv_i32 t;
+
+ /* Handle shift by the input size for the benefit of trans_SRSHR_ri */
+ if (sh == 32) {
+ tcg_gen_movi_i32(d, 0);
+ return;
+ }
+ t = tcg_temp_new_i32();
+ tcg_gen_extract_i32(t, a, sh - 1, 1);
+ tcg_gen_sari_i32(d, a, sh);
+ tcg_gen_add_i32(d, d, t);
+}
+
+void gen_srshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_extract_i64(t, a, sh - 1, 1);
+ tcg_gen_sari_i64(d, a, sh);
+ tcg_gen_add_i64(d, d, t);
+}
+
+static void gen_srshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+ TCGv_vec ones = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_shri_vec(vece, t, a, sh - 1);
+ tcg_gen_dupi_vec(vece, ones, 1);
+ tcg_gen_and_vec(vece, t, t, ones);
+ tcg_gen_sari_vec(vece, d, a, sh);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_srshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen2i ops[4] = {
+ { .fni8 = gen_srshr8_i64,
+ .fniv = gen_srshr_vec,
+ .fno = gen_helper_gvec_srshr_b,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_srshr16_i64,
+ .fniv = gen_srshr_vec,
+ .fno = gen_helper_gvec_srshr_h,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_srshr32_i32,
+ .fniv = gen_srshr_vec,
+ .fno = gen_helper_gvec_srshr_s,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_srshr64_i64,
+ .fniv = gen_srshr_vec,
+ .fno = gen_helper_gvec_srshr_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize] */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ if (shift == (8 << vece)) {
+ /*
+ * Shifts larger than the element size are architecturally valid.
+ * Signed results in all sign bits. With rounding, this produces
+ * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
+ * I.e. always zero.
+ */
+ tcg_gen_gvec_dup_imm(vece, rd_ofs, opr_sz, max_sz, 0);
+ } else {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+ }
+}
+
+static void gen_srsra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ gen_srshr8_i64(t, a, sh);
+ tcg_gen_vec_add8_i64(d, d, t);
+}
+
+static void gen_srsra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ gen_srshr16_i64(t, a, sh);
+ tcg_gen_vec_add16_i64(d, d, t);
+}
+
+static void gen_srsra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ gen_srshr32_i32(t, a, sh);
+ tcg_gen_add_i32(d, d, t);
+}
+
+static void gen_srsra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ gen_srshr64_i64(t, a, sh);
+ tcg_gen_add_i64(d, d, t);
+}
+
+static void gen_srsra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ gen_srshr_vec(vece, t, a, sh);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_srsra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen2i ops[4] = {
+ { .fni8 = gen_srsra8_i64,
+ .fniv = gen_srsra_vec,
+ .fno = gen_helper_gvec_srsra_b,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fni8 = gen_srsra16_i64,
+ .fniv = gen_srsra_vec,
+ .fno = gen_helper_gvec_srsra_h,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_srsra32_i32,
+ .fniv = gen_srsra_vec,
+ .fno = gen_helper_gvec_srsra_s,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_srsra64_i64,
+ .fniv = gen_srsra_vec,
+ .fno = gen_helper_gvec_srsra_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize] */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ /*
+ * Shifts larger than the element size are architecturally valid.
+ * Signed results in all sign bits. With rounding, this produces
+ * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
+ * I.e. always zero. With accumulation, this leaves D unchanged.
+ */
+ if (shift == (8 << vece)) {
+ /* Nop, but we do need to clear the tail. */
+ tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
+ } else {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+ }
+}
+
+static void gen_urshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shri_i64(t, a, sh - 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
+ tcg_gen_vec_shr8i_i64(d, a, sh);
+ tcg_gen_vec_add8_i64(d, d, t);
+}
+
+static void gen_urshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shri_i64(t, a, sh - 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
+ tcg_gen_vec_shr16i_i64(d, a, sh);
+ tcg_gen_vec_add16_i64(d, d, t);
+}
+
+void gen_urshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
+{
+ TCGv_i32 t;
+
+ /* Handle shift by the input size for the benefit of trans_URSHR_ri */
+ if (sh == 32) {
+ tcg_gen_extract_i32(d, a, sh - 1, 1);
+ return;
+ }
+ t = tcg_temp_new_i32();
+ tcg_gen_extract_i32(t, a, sh - 1, 1);
+ tcg_gen_shri_i32(d, a, sh);
+ tcg_gen_add_i32(d, d, t);
+}
+
+void gen_urshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_extract_i64(t, a, sh - 1, 1);
+ tcg_gen_shri_i64(d, a, sh);
+ tcg_gen_add_i64(d, d, t);
+}
+
+static void gen_urshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t shift)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+ TCGv_vec ones = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_shri_vec(vece, t, a, shift - 1);
+ tcg_gen_dupi_vec(vece, ones, 1);
+ tcg_gen_and_vec(vece, t, t, ones);
+ tcg_gen_shri_vec(vece, d, a, shift);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_urshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen2i ops[4] = {
+ { .fni8 = gen_urshr8_i64,
+ .fniv = gen_urshr_vec,
+ .fno = gen_helper_gvec_urshr_b,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_urshr16_i64,
+ .fniv = gen_urshr_vec,
+ .fno = gen_helper_gvec_urshr_h,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_urshr32_i32,
+ .fniv = gen_urshr_vec,
+ .fno = gen_helper_gvec_urshr_s,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_urshr64_i64,
+ .fniv = gen_urshr_vec,
+ .fno = gen_helper_gvec_urshr_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize] */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ if (shift == (8 << vece)) {
+ /*
+ * Shifts larger than the element size are architecturally valid.
+ * Unsigned results in zero. With rounding, this produces a
+ * copy of the most significant bit.
+ */
+ tcg_gen_gvec_shri(vece, rd_ofs, rm_ofs, shift - 1, opr_sz, max_sz);
+ } else {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+ }
+}
+
+static void gen_ursra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ if (sh == 8) {
+ tcg_gen_vec_shr8i_i64(t, a, 7);
+ } else {
+ gen_urshr8_i64(t, a, sh);
+ }
+ tcg_gen_vec_add8_i64(d, d, t);
+}
+
+static void gen_ursra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ if (sh == 16) {
+ tcg_gen_vec_shr16i_i64(t, a, 15);
+ } else {
+ gen_urshr16_i64(t, a, sh);
+ }
+ tcg_gen_vec_add16_i64(d, d, t);
+}
+
+static void gen_ursra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ if (sh == 32) {
+ tcg_gen_shri_i32(t, a, 31);
+ } else {
+ gen_urshr32_i32(t, a, sh);
+ }
+ tcg_gen_add_i32(d, d, t);
+}
+
+static void gen_ursra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ if (sh == 64) {
+ tcg_gen_shri_i64(t, a, 63);
+ } else {
+ gen_urshr64_i64(t, a, sh);
+ }
+ tcg_gen_add_i64(d, d, t);
+}
+
+static void gen_ursra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ if (sh == (8 << vece)) {
+ tcg_gen_shri_vec(vece, t, a, sh - 1);
+ } else {
+ gen_urshr_vec(vece, t, a, sh);
+ }
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_ursra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen2i ops[4] = {
+ { .fni8 = gen_ursra8_i64,
+ .fniv = gen_ursra_vec,
+ .fno = gen_helper_gvec_ursra_b,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fni8 = gen_ursra16_i64,
+ .fniv = gen_ursra_vec,
+ .fno = gen_helper_gvec_ursra_h,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_ursra32_i32,
+ .fniv = gen_ursra_vec,
+ .fno = gen_helper_gvec_ursra_s,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_ursra64_i64,
+ .fniv = gen_ursra_vec,
+ .fno = gen_helper_gvec_ursra_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize] */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+}
+
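+/*
+ * Shift-right-and-insert: shift each element of A right by SHIFT and
+ * insert the result into D, leaving the top SHIFT bits of each D
+ * element unchanged.
+ */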
+static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ uint64_t mask = dup_const(MO_8, 0xff >> shift);
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shri_i64(t, a, shift);
+ tcg_gen_andi_i64(t, t, mask);
+ tcg_gen_andi_i64(d, d, ~mask);
+ tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ uint64_t mask = dup_const(MO_16, 0xffff >> shift);
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shri_i64(t, a, shift);
+ tcg_gen_andi_i64(t, t, mask);
+ tcg_gen_andi_i64(d, d, ~mask);
+ tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+ tcg_gen_shri_i32(a, a, shift);
+ tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
+}
+
+static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_shri_i64(a, a, shift);
+ tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
+}
+
+static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
+ tcg_gen_shri_vec(vece, t, a, sh);
+ tcg_gen_and_vec(vece, d, d, m);
+ tcg_gen_or_vec(vece, d, d, t);
+}
+
+void gen_gvec_sri(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = { INDEX_op_shri_vec, 0 };
+ const GVecGen2i ops[4] = {
+ { .fni8 = gen_shr8_ins_i64,
+ .fniv = gen_shr_ins_vec,
+ .fno = gen_helper_gvec_sri_b,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_shr16_ins_i64,
+ .fniv = gen_shr_ins_vec,
+ .fno = gen_helper_gvec_sri_h,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_shr32_ins_i32,
+ .fniv = gen_shr_ins_vec,
+ .fno = gen_helper_gvec_sri_s,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_shr64_ins_i64,
+ .fniv = gen_shr_ins_vec,
+ .fno = gen_helper_gvec_sri_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize]. */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ /* Shift of esize leaves destination unchanged. */
+ if (shift < (8 << vece)) {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+ } else {
+ /* Nop, but we do need to clear the tail. */
+ tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
+ }
+}
+
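+/*
+ * Shift-left-and-insert: shift each element of A left by SHIFT and
+ * insert the result into D, leaving the low SHIFT bits of each D
+ * element unchanged.
+ */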
+static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ uint64_t mask = dup_const(MO_8, 0xff << shift);
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shli_i64(t, a, shift);
+ tcg_gen_andi_i64(t, t, mask);
+ tcg_gen_andi_i64(d, d, ~mask);
+ tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ uint64_t mask = dup_const(MO_16, 0xffff << shift);
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shli_i64(t, a, shift);
+ tcg_gen_andi_i64(t, t, mask);
+ tcg_gen_andi_i64(d, d, ~mask);
+ tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+ tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
+}
+
+static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
+}
+
+static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_shli_vec(vece, t, a, sh);
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
+ tcg_gen_and_vec(vece, d, d, m);
+ tcg_gen_or_vec(vece, d, d, t);
+}
+
+void gen_gvec_sli(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = { INDEX_op_shli_vec, 0 };
+ const GVecGen2i ops[4] = {
+ { .fni8 = gen_shl8_ins_i64,
+ .fniv = gen_shl_ins_vec,
+ .fno = gen_helper_gvec_sli_b,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_shl16_ins_i64,
+ .fniv = gen_shl_ins_vec,
+ .fno = gen_helper_gvec_sli_h,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_shl32_ins_i32,
+ .fniv = gen_shl_ins_vec,
+ .fno = gen_helper_gvec_sli_s,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_shl64_ins_i64,
+ .fniv = gen_shl_ins_vec,
+ .fno = gen_helper_gvec_sli_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [0..esize-1]. */
+ tcg_debug_assert(shift >= 0);
+ tcg_debug_assert(shift < (8 << vece));
+
+ if (shift == 0) {
+ tcg_gen_gvec_mov(vece, rd_ofs, rm_ofs, opr_sz, max_sz);
+ } else {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+ }
+}
+
+static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u8(a, a, b);
+ gen_helper_neon_add_u8(d, d, a);
+}
+
+static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u8(a, a, b);
+ gen_helper_neon_sub_u8(d, d, a);
+}
+
+static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u16(a, a, b);
+ gen_helper_neon_add_u16(d, d, a);
+}
+
+static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u16(a, a, b);
+ gen_helper_neon_sub_u16(d, d, a);
+}
+
+static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ tcg_gen_mul_i32(a, a, b);
+ tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ tcg_gen_mul_i32(a, a, b);
+ tcg_gen_sub_i32(d, d, a);
+}
+
+static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ tcg_gen_mul_i64(a, a, b);
+ tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ tcg_gen_mul_i64(a, a, b);
+ tcg_gen_sub_i64(d, d, a);
+}
+
+static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ tcg_gen_mul_vec(vece, a, a, b);
+ tcg_gen_add_vec(vece, d, d, a);
+}
+
+static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ tcg_gen_mul_vec(vece, a, a, b);
+ tcg_gen_sub_vec(vece, d, d, a);
+}
+
+/* Note that while NEON does not support VMLA and VMLS as 64-bit ops,
+ * these tables are shared with AArch64 which does support them.
+ */
+void gen_gvec_mla(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_mul_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fni4 = gen_mla8_i32,
+ .fniv = gen_mla_vec,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni4 = gen_mla16_i32,
+ .fniv = gen_mla_vec,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_mla32_i32,
+ .fniv = gen_mla_vec,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_mla64_i64,
+ .fniv = gen_mla_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+void gen_gvec_mls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_mul_vec, INDEX_op_sub_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fni4 = gen_mls8_i32,
+ .fniv = gen_mls_vec,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni4 = gen_mls16_i32,
+ .fniv = gen_mls_vec,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_mls32_i32,
+ .fniv = gen_mls_vec,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_mls64_i64,
+ .fniv = gen_mls_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+/* CMTST : test is "if (X & Y != 0)". */
+static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ tcg_gen_and_i32(d, a, b);
+ tcg_gen_negsetcond_i32(TCG_COND_NE, d, d, tcg_constant_i32(0));
+}
+
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ tcg_gen_and_i64(d, a, b);
+ tcg_gen_negsetcond_i64(TCG_COND_NE, d, d, tcg_constant_i64(0));
+}
+
+static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ tcg_gen_and_vec(vece, d, a, b);
+ tcg_gen_dupi_vec(vece, a, 0);
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
+}
+
+void gen_gvec_cmtst(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = { INDEX_op_cmp_vec, 0 };
+ static const GVecGen3 ops[4] = {
+ { .fni4 = gen_helper_neon_tst_u8,
+ .fniv = gen_cmtst_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni4 = gen_helper_neon_tst_u16,
+ .fniv = gen_cmtst_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_cmtst_i32,
+ .fniv = gen_cmtst_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_cmtst_i64,
+ .fniv = gen_cmtst_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+void gen_ushl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
+{
+ TCGv_i32 lval = tcg_temp_new_i32();
+ TCGv_i32 rval = tcg_temp_new_i32();
+ TCGv_i32 lsh = tcg_temp_new_i32();
+ TCGv_i32 rsh = tcg_temp_new_i32();
+ TCGv_i32 zero = tcg_constant_i32(0);
+ TCGv_i32 max = tcg_constant_i32(32);
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_ext8s_i32(lsh, shift);
+ tcg_gen_neg_i32(rsh, lsh);
+ tcg_gen_shl_i32(lval, src, lsh);
+ tcg_gen_shr_i32(rval, src, rsh);
+ tcg_gen_movcond_i32(TCG_COND_LTU, dst, lsh, max, lval, zero);
+ tcg_gen_movcond_i32(TCG_COND_LTU, dst, rsh, max, rval, dst);
+}
+
+void gen_ushl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
+{
+ TCGv_i64 lval = tcg_temp_new_i64();
+ TCGv_i64 rval = tcg_temp_new_i64();
+ TCGv_i64 lsh = tcg_temp_new_i64();
+ TCGv_i64 rsh = tcg_temp_new_i64();
+ TCGv_i64 zero = tcg_constant_i64(0);
+ TCGv_i64 max = tcg_constant_i64(64);
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_ext8s_i64(lsh, shift);
+ tcg_gen_neg_i64(rsh, lsh);
+ tcg_gen_shl_i64(lval, src, lsh);
+ tcg_gen_shr_i64(rval, src, rsh);
+ tcg_gen_movcond_i64(TCG_COND_LTU, dst, lsh, max, lval, zero);
+ tcg_gen_movcond_i64(TCG_COND_LTU, dst, rsh, max, rval, dst);
+}
+
+static void gen_ushl_vec(unsigned vece, TCGv_vec dst,
+ TCGv_vec src, TCGv_vec shift)
+{
+ TCGv_vec lval = tcg_temp_new_vec_matching(dst);
+ TCGv_vec rval = tcg_temp_new_vec_matching(dst);
+ TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
+ TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
+ TCGv_vec msk, max;
+
+ tcg_gen_neg_vec(vece, rsh, shift);
+ if (vece == MO_8) {
+ tcg_gen_mov_vec(lsh, shift);
+ } else {
+ msk = tcg_temp_new_vec_matching(dst);
+ tcg_gen_dupi_vec(vece, msk, 0xff);
+ tcg_gen_and_vec(vece, lsh, shift, msk);
+ tcg_gen_and_vec(vece, rsh, rsh, msk);
+ }
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_shlv_vec(vece, lval, src, lsh);
+ tcg_gen_shrv_vec(vece, rval, src, rsh);
+
+ max = tcg_temp_new_vec_matching(dst);
+ tcg_gen_dupi_vec(vece, max, 8 << vece);
+
+ /*
+ * The choice of LT (signed) and GEU (unsigned) are biased toward
+ * the instructions of the x86_64 host. For MO_8, the whole byte
+ * is significant so we must use an unsigned compare; otherwise we
+ * have already masked to a byte and so a signed compare works.
+ * Other tcg hosts have a full set of comparisons and do not care.
+ */
+ if (vece == MO_8) {
+ tcg_gen_cmp_vec(TCG_COND_GEU, vece, lsh, lsh, max);
+ tcg_gen_cmp_vec(TCG_COND_GEU, vece, rsh, rsh, max);
+ tcg_gen_andc_vec(vece, lval, lval, lsh);
+ tcg_gen_andc_vec(vece, rval, rval, rsh);
+ } else {
+ tcg_gen_cmp_vec(TCG_COND_LT, vece, lsh, lsh, max);
+ tcg_gen_cmp_vec(TCG_COND_LT, vece, rsh, rsh, max);
+ tcg_gen_and_vec(vece, lval, lval, lsh);
+ tcg_gen_and_vec(vece, rval, rval, rsh);
+ }
+ tcg_gen_or_vec(vece, dst, lval, rval);
+}
+
+void gen_gvec_ushl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_neg_vec, INDEX_op_shlv_vec,
+ INDEX_op_shrv_vec, INDEX_op_cmp_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fniv = gen_ushl_vec,
+ .fno = gen_helper_gvec_ushl_b,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fniv = gen_ushl_vec,
+ .fno = gen_helper_gvec_ushl_h,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_ushl_i32,
+ .fniv = gen_ushl_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_ushl_i64,
+ .fniv = gen_ushl_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
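+/*
+ * Signed variant: a negative count shifts right arithmetically.
+ * The right-shift count is clamped to esize - 1, so an out-of-range
+ * right shift yields all sign bits, while an out-of-range left
+ * shift yields zero.
+ */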
+void gen_sshl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
+{
+ TCGv_i32 lval = tcg_temp_new_i32();
+ TCGv_i32 rval = tcg_temp_new_i32();
+ TCGv_i32 lsh = tcg_temp_new_i32();
+ TCGv_i32 rsh = tcg_temp_new_i32();
+ TCGv_i32 zero = tcg_constant_i32(0);
+ TCGv_i32 max = tcg_constant_i32(31);
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_ext8s_i32(lsh, shift);
+ tcg_gen_neg_i32(rsh, lsh);
+ tcg_gen_shl_i32(lval, src, lsh);
+ tcg_gen_umin_i32(rsh, rsh, max);
+ tcg_gen_sar_i32(rval, src, rsh);
+ tcg_gen_movcond_i32(TCG_COND_LEU, lval, lsh, max, lval, zero);
+ tcg_gen_movcond_i32(TCG_COND_LT, dst, lsh, zero, rval, lval);
+}
+
+void gen_sshl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
+{
+ TCGv_i64 lval = tcg_temp_new_i64();
+ TCGv_i64 rval = tcg_temp_new_i64();
+ TCGv_i64 lsh = tcg_temp_new_i64();
+ TCGv_i64 rsh = tcg_temp_new_i64();
+ TCGv_i64 zero = tcg_constant_i64(0);
+ TCGv_i64 max = tcg_constant_i64(63);
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_ext8s_i64(lsh, shift);
+ tcg_gen_neg_i64(rsh, lsh);
+ tcg_gen_shl_i64(lval, src, lsh);
+ tcg_gen_umin_i64(rsh, rsh, max);
+ tcg_gen_sar_i64(rval, src, rsh);
+ tcg_gen_movcond_i64(TCG_COND_LEU, lval, lsh, max, lval, zero);
+ tcg_gen_movcond_i64(TCG_COND_LT, dst, lsh, zero, rval, lval);
+}
+
+static void gen_sshl_vec(unsigned vece, TCGv_vec dst,
+ TCGv_vec src, TCGv_vec shift)
+{
+ TCGv_vec lval = tcg_temp_new_vec_matching(dst);
+ TCGv_vec rval = tcg_temp_new_vec_matching(dst);
+ TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
+ TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
+ TCGv_vec tmp = tcg_temp_new_vec_matching(dst);
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_neg_vec(vece, rsh, shift);
+ if (vece == MO_8) {
+ tcg_gen_mov_vec(lsh, shift);
+ } else {
+ tcg_gen_dupi_vec(vece, tmp, 0xff);
+ tcg_gen_and_vec(vece, lsh, shift, tmp);
+ tcg_gen_and_vec(vece, rsh, rsh, tmp);
+ }
+
+ /* Bound rsh so out of bound right shift gets -1. */
+ tcg_gen_dupi_vec(vece, tmp, (8 << vece) - 1);
+ tcg_gen_umin_vec(vece, rsh, rsh, tmp);
+ tcg_gen_cmp_vec(TCG_COND_GT, vece, tmp, lsh, tmp);
+
+ tcg_gen_shlv_vec(vece, lval, src, lsh);
+ tcg_gen_sarv_vec(vece, rval, src, rsh);
+
+ /* Select in-bound left shift. */
+ tcg_gen_andc_vec(vece, lval, lval, tmp);
+
+ /* Select between left and right shift. */
+ if (vece == MO_8) {
+ tcg_gen_dupi_vec(vece, tmp, 0);
+ tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, rval, lval);
+ } else {
+ tcg_gen_dupi_vec(vece, tmp, 0x80);
+ tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, lval, rval);
+ }
+}
+
+void gen_gvec_sshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_neg_vec, INDEX_op_umin_vec, INDEX_op_shlv_vec,
+ INDEX_op_sarv_vec, INDEX_op_cmp_vec, INDEX_op_cmpsel_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fniv = gen_sshl_vec,
+ .fno = gen_helper_gvec_sshl_b,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fniv = gen_sshl_vec,
+ .fno = gen_helper_gvec_sshl_h,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_sshl_i32,
+ .fniv = gen_sshl_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_sshl_i64,
+ .fniv = gen_sshl_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
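+/*
+ * The QC-setting saturating ops below compute both the wrapping and
+ * the saturating result, then OR the per-element "differs" mask into
+ * the sticky QC flag (vfp.qc).
+ */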
+static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
+ TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
+ tcg_gen_add_vec(vece, x, a, b);
+ tcg_gen_usadd_vec(vece, t, a, b);
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
+ tcg_gen_or_vec(vece, sat, sat, x);
+}
+
+void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_usadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen4 ops[4] = {
+ { .fniv = gen_uqadd_vec,
+ .fno = gen_helper_gvec_uqadd_b,
+ .write_aofs = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fniv = gen_uqadd_vec,
+ .fno = gen_helper_gvec_uqadd_h,
+ .write_aofs = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fniv = gen_uqadd_vec,
+ .fno = gen_helper_gvec_uqadd_s,
+ .write_aofs = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fniv = gen_uqadd_vec,
+ .fno = gen_helper_gvec_uqadd_d,
+ .write_aofs = true,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
+ TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
+ tcg_gen_add_vec(vece, x, a, b);
+ tcg_gen_ssadd_vec(vece, t, a, b);
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
+ tcg_gen_or_vec(vece, sat, sat, x);
+}
+
+void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_ssadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen4 ops[4] = {
+ { .fniv = gen_sqadd_vec,
+ .fno = gen_helper_gvec_sqadd_b,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_8 },
+ { .fniv = gen_sqadd_vec,
+ .fno = gen_helper_gvec_sqadd_h,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_16 },
+ { .fniv = gen_sqadd_vec,
+ .fno = gen_helper_gvec_sqadd_s,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_32 },
+ { .fniv = gen_sqadd_vec,
+ .fno = gen_helper_gvec_sqadd_d,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
+ TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
+ tcg_gen_sub_vec(vece, x, a, b);
+ tcg_gen_ussub_vec(vece, t, a, b);
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
+ tcg_gen_or_vec(vece, sat, sat, x);
+}
+
+void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_ussub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
+ };
+ static const GVecGen4 ops[4] = {
+ { .fniv = gen_uqsub_vec,
+ .fno = gen_helper_gvec_uqsub_b,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_8 },
+ { .fniv = gen_uqsub_vec,
+ .fno = gen_helper_gvec_uqsub_h,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_16 },
+ { .fniv = gen_uqsub_vec,
+ .fno = gen_helper_gvec_uqsub_s,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_32 },
+ { .fniv = gen_uqsub_vec,
+ .fno = gen_helper_gvec_uqsub_d,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
+ TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
+ tcg_gen_sub_vec(vece, x, a, b);
+ tcg_gen_sssub_vec(vece, t, a, b);
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
+ tcg_gen_or_vec(vece, sat, sat, x);
+}
+
+void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sssub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
+ };
+ static const GVecGen4 ops[4] = {
+ { .fniv = gen_sqsub_vec,
+ .fno = gen_helper_gvec_sqsub_b,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_8 },
+ { .fniv = gen_sqsub_vec,
+ .fno = gen_helper_gvec_sqsub_h,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_16 },
+ { .fniv = gen_sqsub_vec,
+ .fno = gen_helper_gvec_sqsub_s,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_32 },
+ { .fniv = gen_sqsub_vec,
+ .fno = gen_helper_gvec_sqsub_d,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
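+/*
+ * Absolute difference, d = |a - b|: the scalar paths compute both
+ * a - b and b - a and select the positive one with movcond; the
+ * vector path uses max(a, b) - min(a, b).
+ */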
+static void gen_sabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ tcg_gen_sub_i32(t, a, b);
+ tcg_gen_sub_i32(d, b, a);
+ tcg_gen_movcond_i32(TCG_COND_LT, d, a, b, d, t);
+}
+
+static void gen_sabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_sub_i64(t, a, b);
+ tcg_gen_sub_i64(d, b, a);
+ tcg_gen_movcond_i64(TCG_COND_LT, d, a, b, d, t);
+}
+
+static void gen_sabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_smin_vec(vece, t, a, b);
+ tcg_gen_smax_vec(vece, d, a, b);
+ tcg_gen_sub_vec(vece, d, d, t);
+}
+
+void gen_gvec_sabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sub_vec, INDEX_op_smin_vec, INDEX_op_smax_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fniv = gen_sabd_vec,
+ .fno = gen_helper_gvec_sabd_b,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fniv = gen_sabd_vec,
+ .fno = gen_helper_gvec_sabd_h,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_sabd_i32,
+ .fniv = gen_sabd_vec,
+ .fno = gen_helper_gvec_sabd_s,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_sabd_i64,
+ .fniv = gen_sabd_vec,
+ .fno = gen_helper_gvec_sabd_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+static void gen_uabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ tcg_gen_sub_i32(t, a, b);
+ tcg_gen_sub_i32(d, b, a);
+ tcg_gen_movcond_i32(TCG_COND_LTU, d, a, b, d, t);
+}
+
+static void gen_uabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_sub_i64(t, a, b);
+ tcg_gen_sub_i64(d, b, a);
+ tcg_gen_movcond_i64(TCG_COND_LTU, d, a, b, d, t);
+}
+
+static void gen_uabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_umin_vec(vece, t, a, b);
+ tcg_gen_umax_vec(vece, d, a, b);
+ tcg_gen_sub_vec(vece, d, d, t);
+}
+
+void gen_gvec_uabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sub_vec, INDEX_op_umin_vec, INDEX_op_umax_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fniv = gen_uabd_vec,
+ .fno = gen_helper_gvec_uabd_b,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fniv = gen_uabd_vec,
+ .fno = gen_helper_gvec_uabd_h,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_uabd_i32,
+ .fniv = gen_uabd_vec,
+ .fno = gen_helper_gvec_uabd_s,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_uabd_i64,
+ .fniv = gen_uabd_vec,
+ .fno = gen_helper_gvec_uabd_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
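+/* Signed absolute difference and accumulate: d += |a - b|. */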
+static void gen_saba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+ gen_sabd_i32(t, a, b);
+ tcg_gen_add_i32(d, d, t);
+}
+
+static void gen_saba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+ gen_sabd_i64(t, a, b);
+ tcg_gen_add_i64(d, d, t);
+}
+
+static void gen_saba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+ gen_sabd_vec(vece, t, a, b);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_saba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sub_vec, INDEX_op_add_vec,
+ INDEX_op_smin_vec, INDEX_op_smax_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fniv = gen_saba_vec,
+ .fno = gen_helper_gvec_saba_b,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fniv = gen_saba_vec,
+ .fno = gen_helper_gvec_saba_h,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_saba_i32,
+ .fniv = gen_saba_vec,
+ .fno = gen_helper_gvec_saba_s,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_saba_i64,
+ .fniv = gen_saba_vec,
+ .fno = gen_helper_gvec_saba_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
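+/* Unsigned absolute difference and accumulate: d += |a - b|. */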
+static void gen_uaba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+ gen_uabd_i32(t, a, b);
+ tcg_gen_add_i32(d, d, t);
+}
+
+static void gen_uaba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+ gen_uabd_i64(t, a, b);
+ tcg_gen_add_i64(d, d, t);
+}
+
+static void gen_uaba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+ gen_uabd_vec(vece, t, a, b);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sub_vec, INDEX_op_add_vec,
+ INDEX_op_umin_vec, INDEX_op_umax_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fniv = gen_uaba_vec,
+ .fno = gen_helper_gvec_uaba_b,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fniv = gen_uaba_vec,
+ .fno = gen_helper_gvec_uaba_h,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_uaba_i32,
+ .fniv = gen_uaba_vec,
+ .fno = gen_helper_gvec_uaba_s,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_uaba_i64,
+ .fniv = gen_uaba_vec,
+ .fno = gen_helper_gvec_uaba_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
diff --git a/target/arm/tcg/translate.c b/target/arm/tcg/translate.c
index 7c09153b6e..52cd5daf0f 100644
--- a/target/arm/tcg/translate.c
+++ b/target/arm/tcg/translate.c
@@ -2912,1594 +2912,6 @@ static void gen_exception_return(DisasContext *s, TCGv_i32 pc)
gen_rfe(s, pc, load_cpu_field(spsr));
}
-static void gen_gvec_fn3_qc(uint32_t rd_ofs, uint32_t rn_ofs, uint32_t rm_ofs,
- uint32_t opr_sz, uint32_t max_sz,
- gen_helper_gvec_3_ptr *fn)
-{
- TCGv_ptr qc_ptr = tcg_temp_new_ptr();
-
- tcg_gen_addi_ptr(qc_ptr, tcg_env, offsetof(CPUARMState, vfp.qc));
- tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, qc_ptr,
- opr_sz, max_sz, 0, fn);
-}
-
-void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static gen_helper_gvec_3_ptr * const fns[2] = {
- gen_helper_gvec_qrdmlah_s16, gen_helper_gvec_qrdmlah_s32
- };
- tcg_debug_assert(vece >= 1 && vece <= 2);
- gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
-}
-
-void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static gen_helper_gvec_3_ptr * const fns[2] = {
- gen_helper_gvec_qrdmlsh_s16, gen_helper_gvec_qrdmlsh_s32
- };
- tcg_debug_assert(vece >= 1 && vece <= 2);
- gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
-}
-
-#define GEN_CMP0(NAME, COND) \
- void NAME(unsigned vece, uint32_t d, uint32_t m, \
- uint32_t opr_sz, uint32_t max_sz) \
- { tcg_gen_gvec_cmpi(COND, vece, d, m, 0, opr_sz, max_sz); }
-
-GEN_CMP0(gen_gvec_ceq0, TCG_COND_EQ)
-GEN_CMP0(gen_gvec_cle0, TCG_COND_LE)
-GEN_CMP0(gen_gvec_cge0, TCG_COND_GE)
-GEN_CMP0(gen_gvec_clt0, TCG_COND_LT)
-GEN_CMP0(gen_gvec_cgt0, TCG_COND_GT)
-
-#undef GEN_CMP0
-
-static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_sar8i_i64(a, a, shift);
- tcg_gen_vec_add8_i64(d, d, a);
-}
-
-static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_sar16i_i64(a, a, shift);
- tcg_gen_vec_add16_i64(d, d, a);
-}
-
-static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
- tcg_gen_sari_i32(a, a, shift);
- tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_sari_i64(a, a, shift);
- tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- tcg_gen_sari_vec(vece, a, a, sh);
- tcg_gen_add_vec(vece, d, d, a);
-}
-
-void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_sari_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen2i ops[4] = {
- { .fni8 = gen_ssra8_i64,
- .fniv = gen_ssra_vec,
- .fno = gen_helper_gvec_ssra_b,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni8 = gen_ssra16_i64,
- .fniv = gen_ssra_vec,
- .fno = gen_helper_gvec_ssra_h,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_ssra32_i32,
- .fniv = gen_ssra_vec,
- .fno = gen_helper_gvec_ssra_s,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_ssra64_i64,
- .fniv = gen_ssra_vec,
- .fno = gen_helper_gvec_ssra_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize]. */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- /*
- * Shifts larger than the element size are architecturally valid.
- * Signed results in all sign bits.
- */
- shift = MIN(shift, (8 << vece) - 1);
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
-}
-
-static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_shr8i_i64(a, a, shift);
- tcg_gen_vec_add8_i64(d, d, a);
-}
-
-static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_shr16i_i64(a, a, shift);
- tcg_gen_vec_add16_i64(d, d, a);
-}
-
-static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
- tcg_gen_shri_i32(a, a, shift);
- tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_shri_i64(a, a, shift);
- tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- tcg_gen_shri_vec(vece, a, a, sh);
- tcg_gen_add_vec(vece, d, d, a);
-}
-
-void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_shri_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen2i ops[4] = {
- { .fni8 = gen_usra8_i64,
- .fniv = gen_usra_vec,
- .fno = gen_helper_gvec_usra_b,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_8, },
- { .fni8 = gen_usra16_i64,
- .fniv = gen_usra_vec,
- .fno = gen_helper_gvec_usra_h,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_16, },
- { .fni4 = gen_usra32_i32,
- .fniv = gen_usra_vec,
- .fno = gen_helper_gvec_usra_s,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_32, },
- { .fni8 = gen_usra64_i64,
- .fniv = gen_usra_vec,
- .fno = gen_helper_gvec_usra_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_64, },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize]. */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- /*
- * Shifts larger than the element size are architecturally valid.
- * Unsigned results in all zeros as input to accumulate: nop.
- */
- if (shift < (8 << vece)) {
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
- } else {
- /* Nop, but we do need to clear the tail. */
- tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
- }
-}
-
-/*
- * Shift one less than the requested amount, and the low bit is
- * the rounding bit. For the 8 and 16-bit operations, because we
- * mask the low bit, we can perform a normal integer shift instead
- * of a vector shift.
- */
-static void gen_srshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shri_i64(t, a, sh - 1);
- tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
- tcg_gen_vec_sar8i_i64(d, a, sh);
- tcg_gen_vec_add8_i64(d, d, t);
-}
-
-static void gen_srshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shri_i64(t, a, sh - 1);
- tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
- tcg_gen_vec_sar16i_i64(d, a, sh);
- tcg_gen_vec_add16_i64(d, d, t);
-}
-
-static void gen_srshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
-{
- TCGv_i32 t;
-
- /* Handle shift by the input size for the benefit of trans_SRSHR_ri */
- if (sh == 32) {
- tcg_gen_movi_i32(d, 0);
- return;
- }
- t = tcg_temp_new_i32();
- tcg_gen_extract_i32(t, a, sh - 1, 1);
- tcg_gen_sari_i32(d, a, sh);
- tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_srshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_extract_i64(t, a, sh - 1, 1);
- tcg_gen_sari_i64(d, a, sh);
- tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_srshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
- TCGv_vec ones = tcg_temp_new_vec_matching(d);
-
- tcg_gen_shri_vec(vece, t, a, sh - 1);
- tcg_gen_dupi_vec(vece, ones, 1);
- tcg_gen_and_vec(vece, t, t, ones);
- tcg_gen_sari_vec(vece, d, a, sh);
- tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_srshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen2i ops[4] = {
- { .fni8 = gen_srshr8_i64,
- .fniv = gen_srshr_vec,
- .fno = gen_helper_gvec_srshr_b,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni8 = gen_srshr16_i64,
- .fniv = gen_srshr_vec,
- .fno = gen_helper_gvec_srshr_h,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_srshr32_i32,
- .fniv = gen_srshr_vec,
- .fno = gen_helper_gvec_srshr_s,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_srshr64_i64,
- .fniv = gen_srshr_vec,
- .fno = gen_helper_gvec_srshr_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize] */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- if (shift == (8 << vece)) {
- /*
- * Shifts larger than the element size are architecturally valid.
- * Signed results in all sign bits. With rounding, this produces
- * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
- * I.e. always zero.
- */
- tcg_gen_gvec_dup_imm(vece, rd_ofs, opr_sz, max_sz, 0);
- } else {
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
- }
-}
-
-static void gen_srsra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- gen_srshr8_i64(t, a, sh);
- tcg_gen_vec_add8_i64(d, d, t);
-}
-
-static void gen_srsra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- gen_srshr16_i64(t, a, sh);
- tcg_gen_vec_add16_i64(d, d, t);
-}
-
-static void gen_srsra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
-{
- TCGv_i32 t = tcg_temp_new_i32();
-
- gen_srshr32_i32(t, a, sh);
- tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_srsra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- gen_srshr64_i64(t, a, sh);
- tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_srsra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
-
- gen_srshr_vec(vece, t, a, sh);
- tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_srsra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen2i ops[4] = {
- { .fni8 = gen_srsra8_i64,
- .fniv = gen_srsra_vec,
- .fno = gen_helper_gvec_srsra_b,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_8 },
- { .fni8 = gen_srsra16_i64,
- .fniv = gen_srsra_vec,
- .fno = gen_helper_gvec_srsra_h,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_16 },
- { .fni4 = gen_srsra32_i32,
- .fniv = gen_srsra_vec,
- .fno = gen_helper_gvec_srsra_s,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_32 },
- { .fni8 = gen_srsra64_i64,
- .fniv = gen_srsra_vec,
- .fno = gen_helper_gvec_srsra_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize] */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- /*
- * Shifts larger than the element size are architecturally valid.
- * Signed results in all sign bits. With rounding, this produces
- * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
- * I.e. always zero. With accumulation, this leaves D unchanged.
- */
- if (shift == (8 << vece)) {
- /* Nop, but we do need to clear the tail. */
- tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
- } else {
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
- }
-}
-
-static void gen_urshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shri_i64(t, a, sh - 1);
- tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
- tcg_gen_vec_shr8i_i64(d, a, sh);
- tcg_gen_vec_add8_i64(d, d, t);
-}
-
-static void gen_urshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shri_i64(t, a, sh - 1);
- tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
- tcg_gen_vec_shr16i_i64(d, a, sh);
- tcg_gen_vec_add16_i64(d, d, t);
-}
-
-static void gen_urshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
-{
- TCGv_i32 t;
-
- /* Handle shift by the input size for the benefit of trans_URSHR_ri */
- if (sh == 32) {
- tcg_gen_extract_i32(d, a, sh - 1, 1);
- return;
- }
- t = tcg_temp_new_i32();
- tcg_gen_extract_i32(t, a, sh - 1, 1);
- tcg_gen_shri_i32(d, a, sh);
- tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_urshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_extract_i64(t, a, sh - 1, 1);
- tcg_gen_shri_i64(d, a, sh);
- tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_urshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t shift)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
- TCGv_vec ones = tcg_temp_new_vec_matching(d);
-
- tcg_gen_shri_vec(vece, t, a, shift - 1);
- tcg_gen_dupi_vec(vece, ones, 1);
- tcg_gen_and_vec(vece, t, t, ones);
- tcg_gen_shri_vec(vece, d, a, shift);
- tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_urshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_shri_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen2i ops[4] = {
- { .fni8 = gen_urshr8_i64,
- .fniv = gen_urshr_vec,
- .fno = gen_helper_gvec_urshr_b,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni8 = gen_urshr16_i64,
- .fniv = gen_urshr_vec,
- .fno = gen_helper_gvec_urshr_h,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_urshr32_i32,
- .fniv = gen_urshr_vec,
- .fno = gen_helper_gvec_urshr_s,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_urshr64_i64,
- .fniv = gen_urshr_vec,
- .fno = gen_helper_gvec_urshr_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize] */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- if (shift == (8 << vece)) {
- /*
- * Shifts larger than the element size are architecturally valid.
- * Unsigned results in zero. With rounding, this produces a
- * copy of the most significant bit.
- */
- tcg_gen_gvec_shri(vece, rd_ofs, rm_ofs, shift - 1, opr_sz, max_sz);
- } else {
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
- }
-}
-
-static void gen_ursra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- if (sh == 8) {
- tcg_gen_vec_shr8i_i64(t, a, 7);
- } else {
- gen_urshr8_i64(t, a, sh);
- }
- tcg_gen_vec_add8_i64(d, d, t);
-}
-
-static void gen_ursra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- if (sh == 16) {
- tcg_gen_vec_shr16i_i64(t, a, 15);
- } else {
- gen_urshr16_i64(t, a, sh);
- }
- tcg_gen_vec_add16_i64(d, d, t);
-}
-
-static void gen_ursra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
-{
- TCGv_i32 t = tcg_temp_new_i32();
-
- if (sh == 32) {
- tcg_gen_shri_i32(t, a, 31);
- } else {
- gen_urshr32_i32(t, a, sh);
- }
- tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_ursra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- if (sh == 64) {
- tcg_gen_shri_i64(t, a, 63);
- } else {
- gen_urshr64_i64(t, a, sh);
- }
- tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_ursra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
-
- if (sh == (8 << vece)) {
- tcg_gen_shri_vec(vece, t, a, sh - 1);
- } else {
- gen_urshr_vec(vece, t, a, sh);
- }
- tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_ursra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_shri_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen2i ops[4] = {
- { .fni8 = gen_ursra8_i64,
- .fniv = gen_ursra_vec,
- .fno = gen_helper_gvec_ursra_b,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_8 },
- { .fni8 = gen_ursra16_i64,
- .fniv = gen_ursra_vec,
- .fno = gen_helper_gvec_ursra_h,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_16 },
- { .fni4 = gen_ursra32_i32,
- .fniv = gen_ursra_vec,
- .fno = gen_helper_gvec_ursra_s,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_32 },
- { .fni8 = gen_ursra64_i64,
- .fniv = gen_ursra_vec,
- .fno = gen_helper_gvec_ursra_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize] */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
-}
-
-static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- uint64_t mask = dup_const(MO_8, 0xff >> shift);
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shri_i64(t, a, shift);
- tcg_gen_andi_i64(t, t, mask);
- tcg_gen_andi_i64(d, d, ~mask);
- tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- uint64_t mask = dup_const(MO_16, 0xffff >> shift);
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shri_i64(t, a, shift);
- tcg_gen_andi_i64(t, t, mask);
- tcg_gen_andi_i64(d, d, ~mask);
- tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
- tcg_gen_shri_i32(a, a, shift);
- tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
-}
-
-static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_shri_i64(a, a, shift);
- tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
-}
-
-static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
- TCGv_vec m = tcg_temp_new_vec_matching(d);
-
- tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
- tcg_gen_shri_vec(vece, t, a, sh);
- tcg_gen_and_vec(vece, d, d, m);
- tcg_gen_or_vec(vece, d, d, t);
-}
-
-void gen_gvec_sri(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = { INDEX_op_shri_vec, 0 };
- const GVecGen2i ops[4] = {
- { .fni8 = gen_shr8_ins_i64,
- .fniv = gen_shr_ins_vec,
- .fno = gen_helper_gvec_sri_b,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni8 = gen_shr16_ins_i64,
- .fniv = gen_shr_ins_vec,
- .fno = gen_helper_gvec_sri_h,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_shr32_ins_i32,
- .fniv = gen_shr_ins_vec,
- .fno = gen_helper_gvec_sri_s,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_shr64_ins_i64,
- .fniv = gen_shr_ins_vec,
- .fno = gen_helper_gvec_sri_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize]. */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- /* Shift of esize leaves destination unchanged. */
- if (shift < (8 << vece)) {
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
- } else {
- /* Nop, but we do need to clear the tail. */
- tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
- }
-}
-
-static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- uint64_t mask = dup_const(MO_8, 0xff << shift);
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shli_i64(t, a, shift);
- tcg_gen_andi_i64(t, t, mask);
- tcg_gen_andi_i64(d, d, ~mask);
- tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- uint64_t mask = dup_const(MO_16, 0xffff << shift);
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shli_i64(t, a, shift);
- tcg_gen_andi_i64(t, t, mask);
- tcg_gen_andi_i64(d, d, ~mask);
- tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
- tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
-}
-
-static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
-}
-
-static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
- TCGv_vec m = tcg_temp_new_vec_matching(d);
-
- tcg_gen_shli_vec(vece, t, a, sh);
- tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
- tcg_gen_and_vec(vece, d, d, m);
- tcg_gen_or_vec(vece, d, d, t);
-}
-
-void gen_gvec_sli(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = { INDEX_op_shli_vec, 0 };
- const GVecGen2i ops[4] = {
- { .fni8 = gen_shl8_ins_i64,
- .fniv = gen_shl_ins_vec,
- .fno = gen_helper_gvec_sli_b,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni8 = gen_shl16_ins_i64,
- .fniv = gen_shl_ins_vec,
- .fno = gen_helper_gvec_sli_h,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_shl32_ins_i32,
- .fniv = gen_shl_ins_vec,
- .fno = gen_helper_gvec_sli_s,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_shl64_ins_i64,
- .fniv = gen_shl_ins_vec,
- .fno = gen_helper_gvec_sli_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [0..esize-1]. */
- tcg_debug_assert(shift >= 0);
- tcg_debug_assert(shift < (8 << vece));
-
- if (shift == 0) {
- tcg_gen_gvec_mov(vece, rd_ofs, rm_ofs, opr_sz, max_sz);
- } else {
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
- }
-}
-
-static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u8(a, a, b);
- gen_helper_neon_add_u8(d, d, a);
-}
-
-static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u8(a, a, b);
- gen_helper_neon_sub_u8(d, d, a);
-}
-
-static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u16(a, a, b);
- gen_helper_neon_add_u16(d, d, a);
-}
-
-static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u16(a, a, b);
- gen_helper_neon_sub_u16(d, d, a);
-}
-
-static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- tcg_gen_mul_i32(a, a, b);
- tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- tcg_gen_mul_i32(a, a, b);
- tcg_gen_sub_i32(d, d, a);
-}
-
-static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- tcg_gen_mul_i64(a, a, b);
- tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- tcg_gen_mul_i64(a, a, b);
- tcg_gen_sub_i64(d, d, a);
-}
-
-static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- tcg_gen_mul_vec(vece, a, a, b);
- tcg_gen_add_vec(vece, d, d, a);
-}
-
-static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- tcg_gen_mul_vec(vece, a, a, b);
- tcg_gen_sub_vec(vece, d, d, a);
-}
-
-/* Note that while NEON does not support VMLA and VMLS as 64-bit ops,
- * these tables are shared with AArch64 which does support them.
- */
-void gen_gvec_mla(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_mul_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fni4 = gen_mla8_i32,
- .fniv = gen_mla_vec,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni4 = gen_mla16_i32,
- .fniv = gen_mla_vec,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_mla32_i32,
- .fniv = gen_mla_vec,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_mla64_i64,
- .fniv = gen_mla_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-void gen_gvec_mls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_mul_vec, INDEX_op_sub_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fni4 = gen_mls8_i32,
- .fniv = gen_mls_vec,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni4 = gen_mls16_i32,
- .fniv = gen_mls_vec,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_mls32_i32,
- .fniv = gen_mls_vec,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_mls64_i64,
- .fniv = gen_mls_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-/* CMTST : test is "if (X & Y != 0)". */
-static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- tcg_gen_and_i32(d, a, b);
- tcg_gen_negsetcond_i32(TCG_COND_NE, d, d, tcg_constant_i32(0));
-}
-
-void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- tcg_gen_and_i64(d, a, b);
- tcg_gen_negsetcond_i64(TCG_COND_NE, d, d, tcg_constant_i64(0));
-}
-
-static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- tcg_gen_and_vec(vece, d, a, b);
- tcg_gen_dupi_vec(vece, a, 0);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
-}
-
-void gen_gvec_cmtst(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = { INDEX_op_cmp_vec, 0 };
- static const GVecGen3 ops[4] = {
- { .fni4 = gen_helper_neon_tst_u8,
- .fniv = gen_cmtst_vec,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni4 = gen_helper_neon_tst_u16,
- .fniv = gen_cmtst_vec,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_cmtst_i32,
- .fniv = gen_cmtst_vec,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_cmtst_i64,
- .fniv = gen_cmtst_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-void gen_ushl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
-{
- TCGv_i32 lval = tcg_temp_new_i32();
- TCGv_i32 rval = tcg_temp_new_i32();
- TCGv_i32 lsh = tcg_temp_new_i32();
- TCGv_i32 rsh = tcg_temp_new_i32();
- TCGv_i32 zero = tcg_constant_i32(0);
- TCGv_i32 max = tcg_constant_i32(32);
-
- /*
- * Rely on the TCG guarantee that out of range shifts produce
- * unspecified results, not undefined behaviour (i.e. no trap).
- * Discard out-of-range results after the fact.
- */
- tcg_gen_ext8s_i32(lsh, shift);
- tcg_gen_neg_i32(rsh, lsh);
- tcg_gen_shl_i32(lval, src, lsh);
- tcg_gen_shr_i32(rval, src, rsh);
- tcg_gen_movcond_i32(TCG_COND_LTU, dst, lsh, max, lval, zero);
- tcg_gen_movcond_i32(TCG_COND_LTU, dst, rsh, max, rval, dst);
-}
-
-void gen_ushl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
-{
- TCGv_i64 lval = tcg_temp_new_i64();
- TCGv_i64 rval = tcg_temp_new_i64();
- TCGv_i64 lsh = tcg_temp_new_i64();
- TCGv_i64 rsh = tcg_temp_new_i64();
- TCGv_i64 zero = tcg_constant_i64(0);
- TCGv_i64 max = tcg_constant_i64(64);
-
- /*
- * Rely on the TCG guarantee that out of range shifts produce
- * unspecified results, not undefined behaviour (i.e. no trap).
- * Discard out-of-range results after the fact.
- */
- tcg_gen_ext8s_i64(lsh, shift);
- tcg_gen_neg_i64(rsh, lsh);
- tcg_gen_shl_i64(lval, src, lsh);
- tcg_gen_shr_i64(rval, src, rsh);
- tcg_gen_movcond_i64(TCG_COND_LTU, dst, lsh, max, lval, zero);
- tcg_gen_movcond_i64(TCG_COND_LTU, dst, rsh, max, rval, dst);
-}
-
-static void gen_ushl_vec(unsigned vece, TCGv_vec dst,
- TCGv_vec src, TCGv_vec shift)
-{
- TCGv_vec lval = tcg_temp_new_vec_matching(dst);
- TCGv_vec rval = tcg_temp_new_vec_matching(dst);
- TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
- TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
- TCGv_vec msk, max;
-
- tcg_gen_neg_vec(vece, rsh, shift);
- if (vece == MO_8) {
- tcg_gen_mov_vec(lsh, shift);
- } else {
- msk = tcg_temp_new_vec_matching(dst);
- tcg_gen_dupi_vec(vece, msk, 0xff);
- tcg_gen_and_vec(vece, lsh, shift, msk);
- tcg_gen_and_vec(vece, rsh, rsh, msk);
- }
-
- /*
- * Rely on the TCG guarantee that out of range shifts produce
- * unspecified results, not undefined behaviour (i.e. no trap).
- * Discard out-of-range results after the fact.
- */
- tcg_gen_shlv_vec(vece, lval, src, lsh);
- tcg_gen_shrv_vec(vece, rval, src, rsh);
-
- max = tcg_temp_new_vec_matching(dst);
- tcg_gen_dupi_vec(vece, max, 8 << vece);
-
- /*
- * The choice of LT (signed) and GEU (unsigned) are biased toward
- * the instructions of the x86_64 host. For MO_8, the whole byte
- * is significant so we must use an unsigned compare; otherwise we
- * have already masked to a byte and so a signed compare works.
- * Other tcg hosts have a full set of comparisons and do not care.
- */
- if (vece == MO_8) {
- tcg_gen_cmp_vec(TCG_COND_GEU, vece, lsh, lsh, max);
- tcg_gen_cmp_vec(TCG_COND_GEU, vece, rsh, rsh, max);
- tcg_gen_andc_vec(vece, lval, lval, lsh);
- tcg_gen_andc_vec(vece, rval, rval, rsh);
- } else {
- tcg_gen_cmp_vec(TCG_COND_LT, vece, lsh, lsh, max);
- tcg_gen_cmp_vec(TCG_COND_LT, vece, rsh, rsh, max);
- tcg_gen_and_vec(vece, lval, lval, lsh);
- tcg_gen_and_vec(vece, rval, rval, rsh);
- }
- tcg_gen_or_vec(vece, dst, lval, rval);
-}
-
-void gen_gvec_ushl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_neg_vec, INDEX_op_shlv_vec,
- INDEX_op_shrv_vec, INDEX_op_cmp_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fniv = gen_ushl_vec,
- .fno = gen_helper_gvec_ushl_b,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fniv = gen_ushl_vec,
- .fno = gen_helper_gvec_ushl_h,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_ushl_i32,
- .fniv = gen_ushl_vec,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_ushl_i64,
- .fniv = gen_ushl_vec,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-void gen_sshl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
-{
- TCGv_i32 lval = tcg_temp_new_i32();
- TCGv_i32 rval = tcg_temp_new_i32();
- TCGv_i32 lsh = tcg_temp_new_i32();
- TCGv_i32 rsh = tcg_temp_new_i32();
- TCGv_i32 zero = tcg_constant_i32(0);
- TCGv_i32 max = tcg_constant_i32(31);
-
- /*
- * Rely on the TCG guarantee that out of range shifts produce
- * unspecified results, not undefined behaviour (i.e. no trap).
- * Discard out-of-range results after the fact.
- */
- tcg_gen_ext8s_i32(lsh, shift);
- tcg_gen_neg_i32(rsh, lsh);
- tcg_gen_shl_i32(lval, src, lsh);
- tcg_gen_umin_i32(rsh, rsh, max);
- tcg_gen_sar_i32(rval, src, rsh);
- tcg_gen_movcond_i32(TCG_COND_LEU, lval, lsh, max, lval, zero);
- tcg_gen_movcond_i32(TCG_COND_LT, dst, lsh, zero, rval, lval);
-}
-
-void gen_sshl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
-{
- TCGv_i64 lval = tcg_temp_new_i64();
- TCGv_i64 rval = tcg_temp_new_i64();
- TCGv_i64 lsh = tcg_temp_new_i64();
- TCGv_i64 rsh = tcg_temp_new_i64();
- TCGv_i64 zero = tcg_constant_i64(0);
- TCGv_i64 max = tcg_constant_i64(63);
-
- /*
- * Rely on the TCG guarantee that out of range shifts produce
- * unspecified results, not undefined behaviour (i.e. no trap).
- * Discard out-of-range results after the fact.
- */
- tcg_gen_ext8s_i64(lsh, shift);
- tcg_gen_neg_i64(rsh, lsh);
- tcg_gen_shl_i64(lval, src, lsh);
- tcg_gen_umin_i64(rsh, rsh, max);
- tcg_gen_sar_i64(rval, src, rsh);
- tcg_gen_movcond_i64(TCG_COND_LEU, lval, lsh, max, lval, zero);
- tcg_gen_movcond_i64(TCG_COND_LT, dst, lsh, zero, rval, lval);
-}
-
-static void gen_sshl_vec(unsigned vece, TCGv_vec dst,
- TCGv_vec src, TCGv_vec shift)
-{
- TCGv_vec lval = tcg_temp_new_vec_matching(dst);
- TCGv_vec rval = tcg_temp_new_vec_matching(dst);
- TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
- TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
- TCGv_vec tmp = tcg_temp_new_vec_matching(dst);
-
- /*
- * Rely on the TCG guarantee that out of range shifts produce
- * unspecified results, not undefined behaviour (i.e. no trap).
- * Discard out-of-range results after the fact.
- */
- tcg_gen_neg_vec(vece, rsh, shift);
- if (vece == MO_8) {
- tcg_gen_mov_vec(lsh, shift);
- } else {
- tcg_gen_dupi_vec(vece, tmp, 0xff);
- tcg_gen_and_vec(vece, lsh, shift, tmp);
- tcg_gen_and_vec(vece, rsh, rsh, tmp);
- }
-
- /* Bound rsh so out of bound right shift gets -1. */
- tcg_gen_dupi_vec(vece, tmp, (8 << vece) - 1);
- tcg_gen_umin_vec(vece, rsh, rsh, tmp);
- tcg_gen_cmp_vec(TCG_COND_GT, vece, tmp, lsh, tmp);
-
- tcg_gen_shlv_vec(vece, lval, src, lsh);
- tcg_gen_sarv_vec(vece, rval, src, rsh);
-
- /* Select in-bound left shift. */
- tcg_gen_andc_vec(vece, lval, lval, tmp);
-
- /* Select between left and right shift. */
- if (vece == MO_8) {
- tcg_gen_dupi_vec(vece, tmp, 0);
- tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, rval, lval);
- } else {
- tcg_gen_dupi_vec(vece, tmp, 0x80);
- tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, lval, rval);
- }
-}
-
-void gen_gvec_sshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_neg_vec, INDEX_op_umin_vec, INDEX_op_shlv_vec,
- INDEX_op_sarv_vec, INDEX_op_cmp_vec, INDEX_op_cmpsel_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fniv = gen_sshl_vec,
- .fno = gen_helper_gvec_sshl_b,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fniv = gen_sshl_vec,
- .fno = gen_helper_gvec_sshl_h,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_sshl_i32,
- .fniv = gen_sshl_vec,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_sshl_i64,
- .fniv = gen_sshl_vec,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
- TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec x = tcg_temp_new_vec_matching(t);
- tcg_gen_add_vec(vece, x, a, b);
- tcg_gen_usadd_vec(vece, t, a, b);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
- tcg_gen_or_vec(vece, sat, sat, x);
-}
-
-void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_usadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen4 ops[4] = {
- { .fniv = gen_uqadd_vec,
- .fno = gen_helper_gvec_uqadd_b,
- .write_aofs = true,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fniv = gen_uqadd_vec,
- .fno = gen_helper_gvec_uqadd_h,
- .write_aofs = true,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fniv = gen_uqadd_vec,
- .fno = gen_helper_gvec_uqadd_s,
- .write_aofs = true,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fniv = gen_uqadd_vec,
- .fno = gen_helper_gvec_uqadd_d,
- .write_aofs = true,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
- rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
- TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec x = tcg_temp_new_vec_matching(t);
- tcg_gen_add_vec(vece, x, a, b);
- tcg_gen_ssadd_vec(vece, t, a, b);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
- tcg_gen_or_vec(vece, sat, sat, x);
-}
-
-void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_ssadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen4 ops[4] = {
- { .fniv = gen_sqadd_vec,
- .fno = gen_helper_gvec_sqadd_b,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_8 },
- { .fniv = gen_sqadd_vec,
- .fno = gen_helper_gvec_sqadd_h,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_16 },
- { .fniv = gen_sqadd_vec,
- .fno = gen_helper_gvec_sqadd_s,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_32 },
- { .fniv = gen_sqadd_vec,
- .fno = gen_helper_gvec_sqadd_d,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_64 },
- };
- tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
- rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
- TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec x = tcg_temp_new_vec_matching(t);
- tcg_gen_sub_vec(vece, x, a, b);
- tcg_gen_ussub_vec(vece, t, a, b);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
- tcg_gen_or_vec(vece, sat, sat, x);
-}
-
-void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_ussub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
- };
- static const GVecGen4 ops[4] = {
- { .fniv = gen_uqsub_vec,
- .fno = gen_helper_gvec_uqsub_b,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_8 },
- { .fniv = gen_uqsub_vec,
- .fno = gen_helper_gvec_uqsub_h,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_16 },
- { .fniv = gen_uqsub_vec,
- .fno = gen_helper_gvec_uqsub_s,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_32 },
- { .fniv = gen_uqsub_vec,
- .fno = gen_helper_gvec_uqsub_d,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_64 },
- };
- tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
- rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
- TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec x = tcg_temp_new_vec_matching(t);
- tcg_gen_sub_vec(vece, x, a, b);
- tcg_gen_sssub_vec(vece, t, a, b);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
- tcg_gen_or_vec(vece, sat, sat, x);
-}
-
-void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_sssub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
- };
- static const GVecGen4 ops[4] = {
- { .fniv = gen_sqsub_vec,
- .fno = gen_helper_gvec_sqsub_b,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_8 },
- { .fniv = gen_sqsub_vec,
- .fno = gen_helper_gvec_sqsub_h,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_16 },
- { .fniv = gen_sqsub_vec,
- .fno = gen_helper_gvec_sqsub_s,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_32 },
- { .fniv = gen_sqsub_vec,
- .fno = gen_helper_gvec_sqsub_d,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_64 },
- };
- tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
- rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_sabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- TCGv_i32 t = tcg_temp_new_i32();
-
- tcg_gen_sub_i32(t, a, b);
- tcg_gen_sub_i32(d, b, a);
- tcg_gen_movcond_i32(TCG_COND_LT, d, a, b, d, t);
-}
-
-static void gen_sabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_sub_i64(t, a, b);
- tcg_gen_sub_i64(d, b, a);
- tcg_gen_movcond_i64(TCG_COND_LT, d, a, b, d, t);
-}
-
-static void gen_sabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
-
- tcg_gen_smin_vec(vece, t, a, b);
- tcg_gen_smax_vec(vece, d, a, b);
- tcg_gen_sub_vec(vece, d, d, t);
-}
-
-void gen_gvec_sabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_sub_vec, INDEX_op_smin_vec, INDEX_op_smax_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fniv = gen_sabd_vec,
- .fno = gen_helper_gvec_sabd_b,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fniv = gen_sabd_vec,
- .fno = gen_helper_gvec_sabd_h,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_sabd_i32,
- .fniv = gen_sabd_vec,
- .fno = gen_helper_gvec_sabd_s,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_sabd_i64,
- .fniv = gen_sabd_vec,
- .fno = gen_helper_gvec_sabd_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_uabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- TCGv_i32 t = tcg_temp_new_i32();
-
- tcg_gen_sub_i32(t, a, b);
- tcg_gen_sub_i32(d, b, a);
- tcg_gen_movcond_i32(TCG_COND_LTU, d, a, b, d, t);
-}
-
-static void gen_uabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_sub_i64(t, a, b);
- tcg_gen_sub_i64(d, b, a);
- tcg_gen_movcond_i64(TCG_COND_LTU, d, a, b, d, t);
-}
-
-static void gen_uabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
-
- tcg_gen_umin_vec(vece, t, a, b);
- tcg_gen_umax_vec(vece, d, a, b);
- tcg_gen_sub_vec(vece, d, d, t);
-}
-
-void gen_gvec_uabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_sub_vec, INDEX_op_umin_vec, INDEX_op_umax_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fniv = gen_uabd_vec,
- .fno = gen_helper_gvec_uabd_b,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fniv = gen_uabd_vec,
- .fno = gen_helper_gvec_uabd_h,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_uabd_i32,
- .fniv = gen_uabd_vec,
- .fno = gen_helper_gvec_uabd_s,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_uabd_i64,
- .fniv = gen_uabd_vec,
- .fno = gen_helper_gvec_uabd_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_saba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- TCGv_i32 t = tcg_temp_new_i32();
- gen_sabd_i32(t, a, b);
- tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_saba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- TCGv_i64 t = tcg_temp_new_i64();
- gen_sabd_i64(t, a, b);
- tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_saba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
- gen_sabd_vec(vece, t, a, b);
- tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_saba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_sub_vec, INDEX_op_add_vec,
- INDEX_op_smin_vec, INDEX_op_smax_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fniv = gen_saba_vec,
- .fno = gen_helper_gvec_saba_b,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_8 },
- { .fniv = gen_saba_vec,
- .fno = gen_helper_gvec_saba_h,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_16 },
- { .fni4 = gen_saba_i32,
- .fniv = gen_saba_vec,
- .fno = gen_helper_gvec_saba_s,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_32 },
- { .fni8 = gen_saba_i64,
- .fniv = gen_saba_vec,
- .fno = gen_helper_gvec_saba_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_uaba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- TCGv_i32 t = tcg_temp_new_i32();
- gen_uabd_i32(t, a, b);
- tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_uaba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- TCGv_i64 t = tcg_temp_new_i64();
- gen_uabd_i64(t, a, b);
- tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_uaba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
- gen_uabd_vec(vece, t, a, b);
- tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_sub_vec, INDEX_op_add_vec,
- INDEX_op_umin_vec, INDEX_op_umax_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fniv = gen_uaba_vec,
- .fno = gen_helper_gvec_uaba_b,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_8 },
- { .fniv = gen_uaba_vec,
- .fno = gen_helper_gvec_uaba_h,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_16 },
- { .fni4 = gen_uaba_i32,
- .fniv = gen_uaba_vec,
- .fno = gen_helper_gvec_uaba_s,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_32 },
- { .fni8 = gen_uaba_i64,
- .fniv = gen_uaba_vec,
- .fno = gen_helper_gvec_uaba_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
static bool aa32_cpreg_encoding_in_impdef_space(uint8_t crn, uint8_t crm)
{
static const uint16_t mask[3] = {
diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build
index 3b1a9f0fc5..bdb5c7352f 100644
--- a/target/arm/tcg/meson.build
+++ b/target/arm/tcg/meson.build
@@ -24,6 +24,7 @@ arm_ss.add(when: 'TARGET_AARCH64', if_true: gen_a64)
arm_ss.add(files(
'cpu32.c',
+ 'gengvec.c',
'translate.c',
'translate-m-nocp.c',
'translate-mve.c',
--
2.34.1
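
For reference, the SABD/UABD expanders moved out above reduce to |a - b| per element: the scalar paths compute both a-b and b-a and movcond-select the non-negative one, while the vector paths use max(a,b) - min(a,b); SABA/UABA then accumulate that into the destination. A minimal standalone C model of the per-element semantics (illustrative only, not part of the patch):

#include <assert.h>
#include <stdint.h>

/* Non-negative difference, as selected by the movcond above. */
static uint32_t sabd32(int32_t a, int32_t b)
{
    return a < b ? (uint32_t)b - (uint32_t)a : (uint32_t)a - (uint32_t)b;
}

static uint32_t uabd32(uint32_t a, uint32_t b)
{
    return a < b ? b - a : a - b;
}

int main(void)
{
    uint32_t d = 5;

    /* SABA/UABA accumulate the difference into the destination. */
    d += sabd32(-3, 4);                            /* |-3 - 4| == 7 */
    assert(d == 12);
    assert(uabd32(1, 0xffffffffu) == 0xfffffffeu);
    return 0;
}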
* [PATCH v2 08/67] target/arm: Split out gengvec64.c
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm, Peter Maydell, Philippe Mathieu-Daudé
Split out into the new file gengvec64.c the routines from
translate-a64.c and translate-sve.c that are used by both.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/translate-a64.h | 4 +
target/arm/tcg/gengvec64.c | 190 +++++++++++++++++++++++++++++++++
target/arm/tcg/translate-a64.c | 26 -----
target/arm/tcg/translate-sve.c | 145 +------------------------
target/arm/tcg/meson.build | 1 +
5 files changed, 197 insertions(+), 169 deletions(-)
create mode 100644 target/arm/tcg/gengvec64.c
diff --git a/target/arm/tcg/translate-a64.h b/target/arm/tcg/translate-a64.h
index 7b811b8ac5..91750f0ca9 100644
--- a/target/arm/tcg/translate-a64.h
+++ b/target/arm/tcg/translate-a64.h
@@ -193,6 +193,10 @@ void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, int64_t shift,
uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+ uint32_t a, uint32_t oprsz, uint32_t maxsz);
+void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+ uint32_t a, uint32_t oprsz, uint32_t maxsz);
void gen_sve_ldr(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int imm);
void gen_sve_str(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int imm);
diff --git a/target/arm/tcg/gengvec64.c b/target/arm/tcg/gengvec64.c
new file mode 100644
index 0000000000..093b498b13
--- /dev/null
+++ b/target/arm/tcg/gengvec64.c
@@ -0,0 +1,190 @@
+/*
+ * AArch64 generic vector expansion
+ *
+ * Copyright (c) 2013 Alexander Graf <agraf@suse.de>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "translate.h"
+#include "translate-a64.h"
+
+
+static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
+{
+ tcg_gen_rotli_i64(d, m, 1);
+ tcg_gen_xor_i64(d, d, n);
+}
+
+static void gen_rax1_vec(unsigned vece, TCGv_vec d, TCGv_vec n, TCGv_vec m)
+{
+ tcg_gen_rotli_vec(vece, d, m, 1);
+ tcg_gen_xor_vec(vece, d, d, n);
+}
+
+void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = { INDEX_op_rotli_vec, 0 };
+ static const GVecGen3 op = {
+ .fni8 = gen_rax1_i64,
+ .fniv = gen_rax1_vec,
+ .opt_opc = vecop_list,
+ .fno = gen_helper_crypto_rax1,
+ .vece = MO_64,
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &op);
+}
+
+static void gen_xar8_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+ uint64_t mask = dup_const(MO_8, 0xff >> sh);
+
+ tcg_gen_xor_i64(t, n, m);
+ tcg_gen_shri_i64(d, t, sh);
+ tcg_gen_shli_i64(t, t, 8 - sh);
+ tcg_gen_andi_i64(d, d, mask);
+ tcg_gen_andi_i64(t, t, ~mask);
+ tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_xar16_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+ uint64_t mask = dup_const(MO_16, 0xffff >> sh);
+
+ tcg_gen_xor_i64(t, n, m);
+ tcg_gen_shri_i64(d, t, sh);
+ tcg_gen_shli_i64(t, t, 16 - sh);
+ tcg_gen_andi_i64(d, d, mask);
+ tcg_gen_andi_i64(t, t, ~mask);
+ tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_xar_i32(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, int32_t sh)
+{
+ tcg_gen_xor_i32(d, n, m);
+ tcg_gen_rotri_i32(d, d, sh);
+}
+
+static void gen_xar_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
+{
+ tcg_gen_xor_i64(d, n, m);
+ tcg_gen_rotri_i64(d, d, sh);
+}
+
+static void gen_xar_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
+ TCGv_vec m, int64_t sh)
+{
+ tcg_gen_xor_vec(vece, d, n, m);
+ tcg_gen_rotri_vec(vece, d, d, sh);
+}
+
+void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, int64_t shift,
+ uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop[] = { INDEX_op_rotli_vec, 0 };
+ static const GVecGen3i ops[4] = {
+ { .fni8 = gen_xar8_i64,
+ .fniv = gen_xar_vec,
+ .fno = gen_helper_sve2_xar_b,
+ .opt_opc = vecop,
+ .vece = MO_8 },
+ { .fni8 = gen_xar16_i64,
+ .fniv = gen_xar_vec,
+ .fno = gen_helper_sve2_xar_h,
+ .opt_opc = vecop,
+ .vece = MO_16 },
+ { .fni4 = gen_xar_i32,
+ .fniv = gen_xar_vec,
+ .fno = gen_helper_sve2_xar_s,
+ .opt_opc = vecop,
+ .vece = MO_32 },
+ { .fni8 = gen_xar_i64,
+ .fniv = gen_xar_vec,
+ .fno = gen_helper_gvec_xar_d,
+ .opt_opc = vecop,
+ .vece = MO_64 }
+ };
+ int esize = 8 << vece;
+
+ /* The SVE2 range is 1 .. esize; the AdvSIMD range is 0 .. esize-1. */
+ tcg_debug_assert(shift >= 0);
+ tcg_debug_assert(shift <= esize);
+ shift &= esize - 1;
+
+ if (shift == 0) {
+ /* xar with no rotate devolves to xor. */
+ tcg_gen_gvec_xor(vece, rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz);
+ } else {
+ tcg_gen_gvec_3i(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz,
+ shift, &ops[vece]);
+ }
+}
+
+static void gen_eor3_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
+{
+ tcg_gen_xor_i64(d, n, m);
+ tcg_gen_xor_i64(d, d, k);
+}
+
+static void gen_eor3_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
+ TCGv_vec m, TCGv_vec k)
+{
+ tcg_gen_xor_vec(vece, d, n, m);
+ tcg_gen_xor_vec(vece, d, d, k);
+}
+
+void gen_gvec_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+ uint32_t a, uint32_t oprsz, uint32_t maxsz)
+{
+ static const GVecGen4 op = {
+ .fni8 = gen_eor3_i64,
+ .fniv = gen_eor3_vec,
+ .fno = gen_helper_sve2_eor3,
+ .vece = MO_64,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ };
+ tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
+}
+
+static void gen_bcax_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
+{
+ tcg_gen_andc_i64(d, m, k);
+ tcg_gen_xor_i64(d, d, n);
+}
+
+static void gen_bcax_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
+ TCGv_vec m, TCGv_vec k)
+{
+ tcg_gen_andc_vec(vece, d, m, k);
+ tcg_gen_xor_vec(vece, d, d, n);
+}
+
+void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+ uint32_t a, uint32_t oprsz, uint32_t maxsz)
+{
+ static const GVecGen4 op = {
+ .fni8 = gen_bcax_i64,
+ .fniv = gen_bcax_vec,
+ .fno = gen_helper_sve2_bcax,
+ .vece = MO_64,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ };
+ tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
+}
+
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 0bdddb8517..8842ff634d 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -13623,32 +13623,6 @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
gen_gvec_op2_ool(s, true, rd, rn, 0, genfn);
}
-static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
-{
- tcg_gen_rotli_i64(d, m, 1);
- tcg_gen_xor_i64(d, d, n);
-}
-
-static void gen_rax1_vec(unsigned vece, TCGv_vec d, TCGv_vec n, TCGv_vec m)
-{
- tcg_gen_rotli_vec(vece, d, m, 1);
- tcg_gen_xor_vec(vece, d, d, n);
-}
-
-void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = { INDEX_op_rotli_vec, 0 };
- static const GVecGen3 op = {
- .fni8 = gen_rax1_i64,
- .fniv = gen_rax1_vec,
- .opt_opc = vecop_list,
- .fno = gen_helper_crypto_rax1,
- .vece = MO_64,
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &op);
-}
-
/* Crypto three-reg SHA512
* 31 21 20 16 15 14 13 12 11 10 9 5 4 0
* +-----------------------+------+---+---+-----+--------+------+------+
diff --git a/target/arm/tcg/translate-sve.c b/target/arm/tcg/translate-sve.c
index ada05aa530..798ab2bfb1 100644
--- a/target/arm/tcg/translate-sve.c
+++ b/target/arm/tcg/translate-sve.c
@@ -527,94 +527,6 @@ TRANS_FEAT(ORR_zzz, aa64_sve, gen_gvec_fn_arg_zzz, tcg_gen_gvec_or, a)
TRANS_FEAT(EOR_zzz, aa64_sve, gen_gvec_fn_arg_zzz, tcg_gen_gvec_xor, a)
TRANS_FEAT(BIC_zzz, aa64_sve, gen_gvec_fn_arg_zzz, tcg_gen_gvec_andc, a)
-static void gen_xar8_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
- uint64_t mask = dup_const(MO_8, 0xff >> sh);
-
- tcg_gen_xor_i64(t, n, m);
- tcg_gen_shri_i64(d, t, sh);
- tcg_gen_shli_i64(t, t, 8 - sh);
- tcg_gen_andi_i64(d, d, mask);
- tcg_gen_andi_i64(t, t, ~mask);
- tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_xar16_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
- uint64_t mask = dup_const(MO_16, 0xffff >> sh);
-
- tcg_gen_xor_i64(t, n, m);
- tcg_gen_shri_i64(d, t, sh);
- tcg_gen_shli_i64(t, t, 16 - sh);
- tcg_gen_andi_i64(d, d, mask);
- tcg_gen_andi_i64(t, t, ~mask);
- tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_xar_i32(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, int32_t sh)
-{
- tcg_gen_xor_i32(d, n, m);
- tcg_gen_rotri_i32(d, d, sh);
-}
-
-static void gen_xar_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
-{
- tcg_gen_xor_i64(d, n, m);
- tcg_gen_rotri_i64(d, d, sh);
-}
-
-static void gen_xar_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
- TCGv_vec m, int64_t sh)
-{
- tcg_gen_xor_vec(vece, d, n, m);
- tcg_gen_rotri_vec(vece, d, d, sh);
-}
-
-void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, int64_t shift,
- uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop[] = { INDEX_op_rotli_vec, 0 };
- static const GVecGen3i ops[4] = {
- { .fni8 = gen_xar8_i64,
- .fniv = gen_xar_vec,
- .fno = gen_helper_sve2_xar_b,
- .opt_opc = vecop,
- .vece = MO_8 },
- { .fni8 = gen_xar16_i64,
- .fniv = gen_xar_vec,
- .fno = gen_helper_sve2_xar_h,
- .opt_opc = vecop,
- .vece = MO_16 },
- { .fni4 = gen_xar_i32,
- .fniv = gen_xar_vec,
- .fno = gen_helper_sve2_xar_s,
- .opt_opc = vecop,
- .vece = MO_32 },
- { .fni8 = gen_xar_i64,
- .fniv = gen_xar_vec,
- .fno = gen_helper_gvec_xar_d,
- .opt_opc = vecop,
- .vece = MO_64 }
- };
- int esize = 8 << vece;
-
- /* The SVE2 range is 1 .. esize; the AdvSIMD range is 0 .. esize-1. */
- tcg_debug_assert(shift >= 0);
- tcg_debug_assert(shift <= esize);
- shift &= esize - 1;
-
- if (shift == 0) {
- /* xar with no rotate devolves to xor. */
- tcg_gen_gvec_xor(vece, rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz);
- } else {
- tcg_gen_gvec_3i(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz,
- shift, &ops[vece]);
- }
-}
-
static bool trans_XAR(DisasContext *s, arg_rrri_esz *a)
{
if (a->esz < 0 || !dc_isar_feature(aa64_sve2, s)) {
@@ -629,61 +541,8 @@ static bool trans_XAR(DisasContext *s, arg_rrri_esz *a)
return true;
}
-static void gen_eor3_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
-{
- tcg_gen_xor_i64(d, n, m);
- tcg_gen_xor_i64(d, d, k);
-}
-
-static void gen_eor3_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
- TCGv_vec m, TCGv_vec k)
-{
- tcg_gen_xor_vec(vece, d, n, m);
- tcg_gen_xor_vec(vece, d, d, k);
-}
-
-static void gen_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
- uint32_t a, uint32_t oprsz, uint32_t maxsz)
-{
- static const GVecGen4 op = {
- .fni8 = gen_eor3_i64,
- .fniv = gen_eor3_vec,
- .fno = gen_helper_sve2_eor3,
- .vece = MO_64,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- };
- tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
-}
-
-TRANS_FEAT(EOR3, aa64_sve2, gen_gvec_fn_arg_zzzz, gen_eor3, a)
-
-static void gen_bcax_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
-{
- tcg_gen_andc_i64(d, m, k);
- tcg_gen_xor_i64(d, d, n);
-}
-
-static void gen_bcax_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
- TCGv_vec m, TCGv_vec k)
-{
- tcg_gen_andc_vec(vece, d, m, k);
- tcg_gen_xor_vec(vece, d, d, n);
-}
-
-static void gen_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
- uint32_t a, uint32_t oprsz, uint32_t maxsz)
-{
- static const GVecGen4 op = {
- .fni8 = gen_bcax_i64,
- .fniv = gen_bcax_vec,
- .fno = gen_helper_sve2_bcax,
- .vece = MO_64,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- };
- tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
-}
-
-TRANS_FEAT(BCAX, aa64_sve2, gen_gvec_fn_arg_zzzz, gen_bcax, a)
+TRANS_FEAT(EOR3, aa64_sve2, gen_gvec_fn_arg_zzzz, gen_gvec_eor3, a)
+TRANS_FEAT(BCAX, aa64_sve2, gen_gvec_fn_arg_zzzz, gen_gvec_bcax, a)
static void gen_bsl(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
uint32_t a, uint32_t oprsz, uint32_t maxsz)
diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build
index bdb5c7352f..508932a249 100644
--- a/target/arm/tcg/meson.build
+++ b/target/arm/tcg/meson.build
@@ -43,6 +43,7 @@ arm_ss.add(files(
arm_ss.add(when: 'TARGET_AARCH64', if_true: files(
'cpu64.c',
+ 'gengvec64.c',
'translate-a64.c',
'translate-sve.c',
'translate-sme.c',
--
2.34.1
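
The two expanders that gain public gen_gvec_* names here are plain bitwise identities per 64-bit element. A standalone C sketch of what gen_eor3_i64 and gen_bcax_i64 above compute (not patch code):

#include <assert.h>
#include <stdint.h>

static uint64_t eor3(uint64_t n, uint64_t m, uint64_t k)
{
    return n ^ m ^ k;                  /* three-way XOR */
}

static uint64_t bcax(uint64_t n, uint64_t m, uint64_t k)
{
    return n ^ (m & ~k);               /* XOR with bit clear */
}

int main(void)
{
    assert(eor3(0xff00, 0x0ff0, 0x00ff) == 0xf00f);
    assert(bcax(0xff00, 0x0ff0, 0x00ff) == 0xf000);
    return 0;
}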
* [PATCH v2 09/67] target/arm: Convert Cryptographic AES to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 21 +++++++--
target/arm/tcg/translate-a64.c | 86 +++++++++++++++-------------------
2 files changed, 54 insertions(+), 53 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 0e7656fd15..1de09903dc 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -19,11 +19,17 @@
# This file is processed by scripts/decodetree.py
#
-&r rn
-&ri rd imm
-&rri_sf rd rn imm sf
-&i imm
+%rd 0:5
+&r rn
+&ri rd imm
+&rri_sf rd rn imm sf
+&i imm
+&qrr_e q rd rn esz
+&qrrr_e q rd rn rm esz
+
+@rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
+@r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
### Data Processing - Immediate
@@ -590,3 +596,10 @@ CPYFE 00 011 0 01100 ..... .... 01 ..... ..... @cpy
CPYP 00 011 1 01000 ..... .... 01 ..... ..... @cpy
CPYM 00 011 1 01010 ..... .... 01 ..... ..... @cpy
CPYE 00 011 1 01100 ..... .... 01 ..... ..... @cpy
+
+### Cryptographic AES
+
+AESE 01001110 00 10100 00100 10 ..... ..... @r2r_q1e0
+AESD 01001110 00 10100 00101 10 ..... ..... @r2r_q1e0
+AESMC 01001110 00 10100 00110 10 ..... ..... @rr_q1e0
+AESIMC 01001110 00 10100 00111 10 ..... ..... @rr_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 8842ff634d..3894db4bee 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -1313,6 +1313,34 @@ bool sme_enabled_check_with_svcr(DisasContext *s, unsigned req)
return true;
}
+/*
+ * Expanders for AdvSIMD translation functions.
+ */
+
+static bool do_gvec_op2_ool(DisasContext *s, arg_qrr_e *a, int data,
+ gen_helper_gvec_2 *fn)
+{
+ if (!a->q && a->esz == MO_64) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_op2_ool(s, a->q, a->rd, a->rn, data, fn);
+ }
+ return true;
+}
+
+static bool do_gvec_op3_ool(DisasContext *s, arg_qrrr_e *a, int data,
+ gen_helper_gvec_3 *fn)
+{
+ if (!a->q && a->esz == MO_64) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_op3_ool(s, a->q, a->rd, a->rn, a->rm, data, fn);
+ }
+ return true;
+}
+
/*
* This utility function is for doing register extension with an
* optional shift. You will likely want to pass a temporary for the
@@ -4560,6 +4588,15 @@ static bool trans_EXTR(DisasContext *s, arg_extract *a)
return true;
}
+/*
+ * Cryptographic AES
+ */
+
+TRANS_FEAT(AESE, aa64_aes, do_gvec_op3_ool, a, 0, gen_helper_crypto_aese)
+TRANS_FEAT(AESD, aa64_aes, do_gvec_op3_ool, a, 0, gen_helper_crypto_aesd)
+TRANS_FEAT(AESMC, aa64_aes, do_gvec_op2_ool, a, 0, gen_helper_crypto_aesmc)
+TRANS_FEAT(AESIMC, aa64_aes, do_gvec_op2_ool, a, 0, gen_helper_crypto_aesimc)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -13460,54 +13497,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto AES
- * 31 24 23 22 21 17 16 12 11 10 9 5 4 0
- * +-----------------+------+-----------+--------+-----+------+------+
- * | 0 1 0 0 1 1 1 0 | size | 1 0 1 0 0 | opcode | 1 0 | Rn | Rd |
- * +-----------------+------+-----------+--------+-----+------+------+
- */
-static void disas_crypto_aes(DisasContext *s, uint32_t insn)
-{
- int size = extract32(insn, 22, 2);
- int opcode = extract32(insn, 12, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- gen_helper_gvec_2 *genfn2 = NULL;
- gen_helper_gvec_3 *genfn3 = NULL;
-
- if (!dc_isar_feature(aa64_aes, s) || size != 0) {
- unallocated_encoding(s);
- return;
- }
-
- switch (opcode) {
- case 0x4: /* AESE */
- genfn3 = gen_helper_crypto_aese;
- break;
- case 0x6: /* AESMC */
- genfn2 = gen_helper_crypto_aesmc;
- break;
- case 0x5: /* AESD */
- genfn3 = gen_helper_crypto_aesd;
- break;
- case 0x7: /* AESIMC */
- genfn2 = gen_helper_crypto_aesimc;
- break;
- default:
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
- if (genfn2) {
- gen_gvec_op2_ool(s, true, rd, rn, 0, genfn2);
- } else {
- gen_gvec_op3_ool(s, true, rd, rd, rn, 0, genfn3);
- }
-}
-
/* Crypto three-reg SHA
* 31 24 23 22 21 20 16 15 14 12 11 10 9 5 4 0
* +-----------------+------+---+------+---+--------+-----+------+------+
@@ -13917,7 +13906,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0x4e280800, 0xff3e0c00, disas_crypto_aes },
{ 0x5e000000, 0xff208c00, disas_crypto_three_reg_sha },
{ 0x5e280800, 0xff3e0c00, disas_crypto_two_reg_sha },
{ 0xce608000, 0xffe0b000, disas_crypto_three_reg_sha512 },
--
2.34.1
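
For readers new to decodetree: from each pattern line, scripts/decodetree.py generates an argument struct and a matcher that extracts the fields and calls a trans_* function supplied by the translator. A hand-written approximation for the AESMC pattern above (illustrative only; the mask and value are derived from the pattern, but the real generated code lands in the build tree):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { int dummy; } DisasContext;      /* stand-in for the real type */
typedef struct { int q, rd, rn, esz; } arg_qrr_e;

static bool trans_AESMC(DisasContext *s, arg_qrr_e *a)
{
    (void)s;
    printf("AESMC v%d, v%d\n", a->rd, a->rn);
    return true;
}

/* Approximate shape of the generated matcher for
 * "AESMC 01001110 00 10100 00110 10 ..... ....." with @rr_q1e0. */
static bool decode_aesmc(DisasContext *s, uint32_t insn)
{
    if ((insn & 0xfffffc00) == 0x4e286800) {
        arg_qrr_e a = {
            .q = 1, .esz = 0,            /* fixed by the @rr_q1e0 format */
            .rn = (insn >> 5) & 0x1f,
            .rd = insn & 0x1f,
        };
        return trans_AESMC(s, &a);
    }
    return false;
}

int main(void)
{
    DisasContext s = { 0 };
    return !decode_aesmc(&s, 0x4e286822);        /* AESMC v2, v1 */
}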
* [PATCH v2 10/67] target/arm: Convert Cryptographic 3-register SHA to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 11 +++++
target/arm/tcg/translate-a64.c | 78 +++++-----------------------------
2 files changed, 21 insertions(+), 68 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 1de09903dc..7590659ee6 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -30,6 +30,7 @@
@rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
@r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
+@rrr_q1e0 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=0
### Data Processing - Immediate
@@ -603,3 +604,13 @@ AESE 01001110 00 10100 00100 10 ..... ..... @r2r_q1e0
AESD 01001110 00 10100 00101 10 ..... ..... @r2r_q1e0
AESMC 01001110 00 10100 00110 10 ..... ..... @rr_q1e0
AESIMC 01001110 00 10100 00111 10 ..... ..... @rr_q1e0
+
+### Cryptographic three-register SHA
+
+SHA1C 0101 1110 000 ..... 000000 ..... ..... @rrr_q1e0
+SHA1P 0101 1110 000 ..... 000100 ..... ..... @rrr_q1e0
+SHA1M 0101 1110 000 ..... 001000 ..... ..... @rrr_q1e0
+SHA1SU0 0101 1110 000 ..... 001100 ..... ..... @rrr_q1e0
+SHA256H 0101 1110 000 ..... 010000 ..... ..... @rrr_q1e0
+SHA256H2 0101 1110 000 ..... 010100 ..... ..... @rrr_q1e0
+SHA256SU1 0101 1110 000 ..... 011000 ..... ..... @rrr_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 3894db4bee..5bef39d4e7 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4589,7 +4589,7 @@ static bool trans_EXTR(DisasContext *s, arg_extract *a)
}
/*
- * Cryptographic AES
+ * Cryptographic AES, SHA
*/
TRANS_FEAT(AESE, aa64_aes, do_gvec_op3_ool, a, 0, gen_helper_crypto_aese)
@@ -4597,6 +4597,15 @@ TRANS_FEAT(AESD, aa64_aes, do_gvec_op3_ool, a, 0, gen_helper_crypto_aesd)
TRANS_FEAT(AESMC, aa64_aes, do_gvec_op2_ool, a, 0, gen_helper_crypto_aesmc)
TRANS_FEAT(AESIMC, aa64_aes, do_gvec_op2_ool, a, 0, gen_helper_crypto_aesimc)
+TRANS_FEAT(SHA1C, aa64_sha1, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha1c)
+TRANS_FEAT(SHA1P, aa64_sha1, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha1p)
+TRANS_FEAT(SHA1M, aa64_sha1, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha1m)
+TRANS_FEAT(SHA1SU0, aa64_sha1, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha1su0)
+
+TRANS_FEAT(SHA256H, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256h)
+TRANS_FEAT(SHA256H2, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256h2)
+TRANS_FEAT(SHA256SU1, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256su1)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -13497,72 +13506,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto three-reg SHA
- * 31 24 23 22 21 20 16 15 14 12 11 10 9 5 4 0
- * +-----------------+------+---+------+---+--------+-----+------+------+
- * | 0 1 0 1 1 1 1 0 | size | 0 | Rm | 0 | opcode | 0 0 | Rn | Rd |
- * +-----------------+------+---+------+---+--------+-----+------+------+
- */
-static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
-{
- int size = extract32(insn, 22, 2);
- int opcode = extract32(insn, 12, 3);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- gen_helper_gvec_3 *genfn;
- bool feature;
-
- if (size != 0) {
- unallocated_encoding(s);
- return;
- }
-
- switch (opcode) {
- case 0: /* SHA1C */
- genfn = gen_helper_crypto_sha1c;
- feature = dc_isar_feature(aa64_sha1, s);
- break;
- case 1: /* SHA1P */
- genfn = gen_helper_crypto_sha1p;
- feature = dc_isar_feature(aa64_sha1, s);
- break;
- case 2: /* SHA1M */
- genfn = gen_helper_crypto_sha1m;
- feature = dc_isar_feature(aa64_sha1, s);
- break;
- case 3: /* SHA1SU0 */
- genfn = gen_helper_crypto_sha1su0;
- feature = dc_isar_feature(aa64_sha1, s);
- break;
- case 4: /* SHA256H */
- genfn = gen_helper_crypto_sha256h;
- feature = dc_isar_feature(aa64_sha256, s);
- break;
- case 5: /* SHA256H2 */
- genfn = gen_helper_crypto_sha256h2;
- feature = dc_isar_feature(aa64_sha256, s);
- break;
- case 6: /* SHA256SU1 */
- genfn = gen_helper_crypto_sha256su1;
- feature = dc_isar_feature(aa64_sha256, s);
- break;
- default:
- unallocated_encoding(s);
- return;
- }
-
- if (!feature) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
- gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
-}
-
/* Crypto two-reg SHA
* 31 24 23 22 21 17 16 12 11 10 9 5 4 0
* +-----------------+------+-----------+--------+-----+------+------+
@@ -13906,7 +13849,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0x5e000000, 0xff208c00, disas_crypto_three_reg_sha },
{ 0x5e280800, 0xff3e0c00, disas_crypto_two_reg_sha },
{ 0xce608000, 0xffe0b000, disas_crypto_three_reg_sha512 },
{ 0xcec08000, 0xfffff000, disas_crypto_two_reg_sha512 },
--
2.34.1
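
TRANS_FEAT, used heavily from here on, folds the per-instruction feature test into the generated trans_* function. Its definition in translate.h is approximately:

#define TRANS_FEAT(NAME, FEAT, FUNC, ...)                         \
    static bool trans_##NAME(DisasContext *s, arg_##NAME *a)      \
    { return dc_isar_feature(FEAT, s) && FUNC(s, __VA_ARGS__); }

/* so TRANS_FEAT(SHA1C, aa64_sha1, do_gvec_op3_ool, a, 0,
 *               gen_helper_crypto_sha1c)
 * expands to roughly:
 *
 *   static bool trans_SHA1C(DisasContext *s, arg_SHA1C *a)
 *   {
 *       return dc_isar_feature(aa64_sha1, s)
 *           && do_gvec_op3_ool(s, a, 0, gen_helper_crypto_sha1c);
 *   }
 *
 * i.e. the feature gate the old switch computed by hand becomes
 * one macro argument per instruction.
 */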
* [PATCH v2 11/67] target/arm: Convert Cryptographic 2-register SHA to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 6 ++++
target/arm/tcg/translate-a64.c | 54 +++-------------------------------
2 files changed, 10 insertions(+), 50 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 7590659ee6..350afabc77 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -614,3 +614,9 @@ SHA1SU0 0101 1110 000 ..... 001100 ..... ..... @rrr_q1e0
SHA256H 0101 1110 000 ..... 010000 ..... ..... @rrr_q1e0
SHA256H2 0101 1110 000 ..... 010100 ..... ..... @rrr_q1e0
SHA256SU1 0101 1110 000 ..... 011000 ..... ..... @rrr_q1e0
+
+### Cryptographic two-register SHA
+
+SHA1H 0101 1110 0010 1000 0000 10 ..... ..... @rr_q1e0
+SHA1SU1 0101 1110 0010 1000 0001 10 ..... ..... @rr_q1e0
+SHA256SU0 0101 1110 0010 1000 0010 10 ..... ..... @rr_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 5bef39d4e7..1d20bf0c35 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4606,6 +4606,10 @@ TRANS_FEAT(SHA256H, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256
TRANS_FEAT(SHA256H2, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256h2)
TRANS_FEAT(SHA256SU1, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256su1)
+TRANS_FEAT(SHA1H, aa64_sha1, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha1h)
+TRANS_FEAT(SHA1SU1, aa64_sha1, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha1su1)
+TRANS_FEAT(SHA256SU0, aa64_sha256, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha256su0)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -13506,55 +13510,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto two-reg SHA
- * 31 24 23 22 21 17 16 12 11 10 9 5 4 0
- * +-----------------+------+-----------+--------+-----+------+------+
- * | 0 1 0 1 1 1 1 0 | size | 1 0 1 0 0 | opcode | 1 0 | Rn | Rd |
- * +-----------------+------+-----------+--------+-----+------+------+
- */
-static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
-{
- int size = extract32(insn, 22, 2);
- int opcode = extract32(insn, 12, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- gen_helper_gvec_2 *genfn;
- bool feature;
-
- if (size != 0) {
- unallocated_encoding(s);
- return;
- }
-
- switch (opcode) {
- case 0: /* SHA1H */
- feature = dc_isar_feature(aa64_sha1, s);
- genfn = gen_helper_crypto_sha1h;
- break;
- case 1: /* SHA1SU1 */
- feature = dc_isar_feature(aa64_sha1, s);
- genfn = gen_helper_crypto_sha1su1;
- break;
- case 2: /* SHA256SU0 */
- feature = dc_isar_feature(aa64_sha256, s);
- genfn = gen_helper_crypto_sha256su0;
- break;
- default:
- unallocated_encoding(s);
- return;
- }
-
- if (!feature) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
- gen_gvec_op2_ool(s, true, rd, rn, 0, genfn);
-}
-
/* Crypto three-reg SHA512
* 31 21 20 16 15 14 13 12 11 10 9 5 4 0
* +-----------------------+------+---+---+-----+--------+------+------+
@@ -13849,7 +13804,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0x5e280800, 0xff3e0c00, disas_crypto_two_reg_sha },
{ 0xce608000, 0xffe0b000, disas_crypto_three_reg_sha512 },
{ 0xcec08000, 0xfffff000, disas_crypto_two_reg_sha512 },
{ 0xce000000, 0xff808000, disas_crypto_four_reg },
--
2.34.1
* [PATCH v2 12/67] target/arm: Convert Cryptographic 3-register SHA512 to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 11 ++++
target/arm/tcg/translate-a64.c | 97 ++++++++--------------------------
2 files changed, 32 insertions(+), 76 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 350afabc77..c342c27608 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -31,6 +31,7 @@
@rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
@r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
@rrr_q1e0 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=0
+@rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3
### Data Processing - Immediate
@@ -620,3 +621,13 @@ SHA256SU1 0101 1110 000 ..... 011000 ..... ..... @rrr_q1e0
SHA1H 0101 1110 0010 1000 0000 10 ..... ..... @rr_q1e0
SHA1SU1 0101 1110 0010 1000 0001 10 ..... ..... @rr_q1e0
SHA256SU0 0101 1110 0010 1000 0010 10 ..... ..... @rr_q1e0
+
+### Cryptographic three-register SHA512
+
+SHA512H 1100 1110 011 ..... 100000 ..... ..... @rrr_q1e0
+SHA512H2 1100 1110 011 ..... 100001 ..... ..... @rrr_q1e0
+SHA512SU1 1100 1110 011 ..... 100010 ..... ..... @rrr_q1e0
+RAX1 1100 1110 011 ..... 100011 ..... ..... @rrr_q1e3
+SM3PARTW1 1100 1110 011 ..... 110000 ..... ..... @rrr_q1e0
+SM3PARTW2 1100 1110 011 ..... 110001 ..... ..... @rrr_q1e0
+SM4EKEY 1100 1110 011 ..... 110010 ..... ..... @rrr_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 1d20bf0c35..77b24cd52e 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -1341,6 +1341,17 @@ static bool do_gvec_op3_ool(DisasContext *s, arg_qrrr_e *a, int data,
return true;
}
+static bool do_gvec_fn3(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
+{
+ if (!a->q && a->esz == MO_64) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_fn3(s, a->q, a->rd, a->rn, a->rm, fn, a->esz);
+ }
+ return true;
+}
+
/*
* This utility function is for doing register extension with an
* optional shift. You will likely want to pass a temporary for the
@@ -4589,7 +4600,7 @@ static bool trans_EXTR(DisasContext *s, arg_extract *a)
}
/*
- * Cryptographic AES, SHA
+ * Cryptographic AES, SHA, SHA512
*/
TRANS_FEAT(AESE, aa64_aes, do_gvec_op3_ool, a, 0, gen_helper_crypto_aese)
@@ -4610,6 +4621,15 @@ TRANS_FEAT(SHA1H, aa64_sha1, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha1h)
TRANS_FEAT(SHA1SU1, aa64_sha1, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha1su1)
TRANS_FEAT(SHA256SU0, aa64_sha256, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha256su0)
+TRANS_FEAT(SHA512H, aa64_sha512, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha512h)
+TRANS_FEAT(SHA512H2, aa64_sha512, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha512h2)
+TRANS_FEAT(SHA512SU1, aa64_sha512, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha512su1)
+TRANS_FEAT(RAX1, aa64_sha3, do_gvec_fn3, a, gen_gvec_rax1)
+TRANS_FEAT(SM3PARTW1, aa64_sm3, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm3partw1)
+TRANS_FEAT(SM3PARTW2, aa64_sm3, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm3partw2)
+TRANS_FEAT(SM4EKEY, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4ekey)
+
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -13510,80 +13530,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto three-reg SHA512
- * 31 21 20 16 15 14 13 12 11 10 9 5 4 0
- * +-----------------------+------+---+---+-----+--------+------+------+
- * | 1 1 0 0 1 1 1 0 0 1 1 | Rm | 1 | O | 0 0 | opcode | Rn | Rd |
- * +-----------------------+------+---+---+-----+--------+------+------+
- */
-static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
-{
- int opcode = extract32(insn, 10, 2);
- int o = extract32(insn, 14, 1);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- bool feature;
- gen_helper_gvec_3 *oolfn = NULL;
- GVecGen3Fn *gvecfn = NULL;
-
- if (o == 0) {
- switch (opcode) {
- case 0: /* SHA512H */
- feature = dc_isar_feature(aa64_sha512, s);
- oolfn = gen_helper_crypto_sha512h;
- break;
- case 1: /* SHA512H2 */
- feature = dc_isar_feature(aa64_sha512, s);
- oolfn = gen_helper_crypto_sha512h2;
- break;
- case 2: /* SHA512SU1 */
- feature = dc_isar_feature(aa64_sha512, s);
- oolfn = gen_helper_crypto_sha512su1;
- break;
- case 3: /* RAX1 */
- feature = dc_isar_feature(aa64_sha3, s);
- gvecfn = gen_gvec_rax1;
- break;
- default:
- g_assert_not_reached();
- }
- } else {
- switch (opcode) {
- case 0: /* SM3PARTW1 */
- feature = dc_isar_feature(aa64_sm3, s);
- oolfn = gen_helper_crypto_sm3partw1;
- break;
- case 1: /* SM3PARTW2 */
- feature = dc_isar_feature(aa64_sm3, s);
- oolfn = gen_helper_crypto_sm3partw2;
- break;
- case 2: /* SM4EKEY */
- feature = dc_isar_feature(aa64_sm4, s);
- oolfn = gen_helper_crypto_sm4ekey;
- break;
- default:
- unallocated_encoding(s);
- return;
- }
- }
-
- if (!feature) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- if (oolfn) {
- gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
- } else {
- gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
- }
-}
-
/* Crypto two-reg SHA512
* 31 12 11 10 9 5 4 0
* +-----------------------------------------+--------+------+------+
@@ -13804,7 +13750,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0xce608000, 0xffe0b000, disas_crypto_three_reg_sha512 },
{ 0xcec08000, 0xfffff000, disas_crypto_two_reg_sha512 },
{ 0xce000000, 0xff808000, disas_crypto_four_reg },
{ 0xce800000, 0xffe00000, disas_crypto_xar },
--
2.34.1
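
Of the seven instructions converted here, only RAX1 goes through an inline gvec expander rather than an out-of-line helper; per 64-bit element it is n ^ rol(m, 1), exactly as gen_rax1_i64 shows. A scalar C sketch (not patch code):

#include <assert.h>
#include <stdint.h>

static uint64_t rol64(uint64_t x, unsigned r)    /* valid for 1 <= r <= 63 */
{
    return (x << r) | (x >> (64 - r));
}

static uint64_t rax1(uint64_t n, uint64_t m)
{
    return n ^ rol64(m, 1);
}

int main(void)
{
    assert(rax1(0, 0x8000000000000000ull) == 1); /* MSB wraps to bit 0 */
    assert(rax1(0xff, 0) == 0xff);
    return 0;
}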
* [PATCH v2 13/67] target/arm: Convert Cryptographic 2-register SHA512 to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 5 ++++
target/arm/tcg/translate-a64.c | 50 ++--------------------------------
2 files changed, 8 insertions(+), 47 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index c342c27608..5a46205751 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -631,3 +631,8 @@ RAX1 1100 1110 011 ..... 100011 ..... ..... @rrr_q1e3
SM3PARTW1 1100 1110 011 ..... 110000 ..... ..... @rrr_q1e0
SM3PARTW2 1100 1110 011 ..... 110001 ..... ..... @rrr_q1e0
SM4EKEY 1100 1110 011 ..... 110010 ..... ..... @rrr_q1e0
+
+### Cryptographic two-register SHA512
+
+SHA512SU0 1100 1110 110 00000 100000 ..... ..... @rr_q1e0
+SM4E 1100 1110 110 00000 100001 ..... ..... @r2r_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 77b24cd52e..eed0abe912 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4629,6 +4629,9 @@ TRANS_FEAT(SM3PARTW1, aa64_sm3, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm3part
TRANS_FEAT(SM3PARTW2, aa64_sm3, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm3partw2)
TRANS_FEAT(SM4EKEY, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4ekey)
+TRANS_FEAT(SHA512SU0, aa64_sha512, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha512su0)
+TRANS_FEAT(SM4E, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4e)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
@@ -13530,52 +13533,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto two-reg SHA512
- * 31 12 11 10 9 5 4 0
- * +-----------------------------------------+--------+------+------+
- * | 1 1 0 0 1 1 1 0 1 1 0 0 0 0 0 0 1 0 0 0 | opcode | Rn | Rd |
- * +-----------------------------------------+--------+------+------+
- */
-static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
-{
- int opcode = extract32(insn, 10, 2);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- bool feature;
-
- switch (opcode) {
- case 0: /* SHA512SU0 */
- feature = dc_isar_feature(aa64_sha512, s);
- break;
- case 1: /* SM4E */
- feature = dc_isar_feature(aa64_sm4, s);
- break;
- default:
- unallocated_encoding(s);
- return;
- }
-
- if (!feature) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- switch (opcode) {
- case 0: /* SHA512SU0 */
- gen_gvec_op2_ool(s, true, rd, rn, 0, gen_helper_crypto_sha512su0);
- break;
- case 1: /* SM4E */
- gen_gvec_op3_ool(s, true, rd, rd, rn, 0, gen_helper_crypto_sm4e);
- break;
- default:
- g_assert_not_reached();
- }
-}
-
/* Crypto four-register
* 31 23 22 21 20 16 15 14 10 9 5 4 0
* +-------------------+-----+------+---+------+------+------+
@@ -13750,7 +13707,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0xcec08000, 0xfffff000, disas_crypto_two_reg_sha512 },
{ 0xce000000, 0xff808000, disas_crypto_four_reg },
{ 0xce800000, 0xffe00000, disas_crypto_xar },
{ 0xce408000, 0xffe0c000, disas_crypto_three_reg_imm2 },
--
2.34.1
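
SM4E (like AESE/AESD earlier) is destructive: it combines the destination with the source. The @r2r_q1e0 format expresses that in the decoder by aliasing rn=%rd, so do_gvec_op3_ool sees the destination as its first source, matching the old gen_gvec_op3_ool(s, true, rd, rd, rn, ...) call. A sketch of the resulting field assignment (illustrative; the real code is decodetree-generated):

#include <assert.h>

typedef struct { int q, rd, rn, rm, esz; } arg_qrrr_e;

/* What the @r2r_q1e0 format does with the two encoded register fields. */
static arg_qrrr_e decode_r2r_q1e0(unsigned insn)
{
    arg_qrrr_e a = { .q = 1, .esz = 0 };

    a.rm = (insn >> 5) & 0x1f;   /* the encoded source field */
    a.rd = insn & 0x1f;
    a.rn = a.rd;                 /* rn=%rd: destination is also an input */
    return a;
}

int main(void)
{
    arg_qrrr_e a = decode_r2r_q1e0(0xcec084e3);  /* SM4E v3, v7 */

    assert(a.rd == 3 && a.rn == 3 && a.rm == 7);
    return 0;
}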
* [PATCH v2 14/67] target/arm: Convert Cryptographic 4-register to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 8 ++
target/arm/tcg/translate-a64.c | 132 +++++++++++----------------------
2 files changed, 51 insertions(+), 89 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 5a46205751..ef6902e86a 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -27,11 +27,13 @@
&i imm
&qrr_e q rd rn esz
&qrrr_e q rd rn rm esz
+&qrrrr_e q rd rn rm ra esz
@rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
@r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
@rrr_q1e0 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=0
@rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3
+@rrrr_q1e3 ........ ... rm:5 . ra:5 rn:5 rd:5 &qrrrr_e q=1 esz=3
### Data Processing - Immediate
@@ -636,3 +638,9 @@ SM4EKEY 1100 1110 011 ..... 110010 ..... ..... @rrr_q1e0
SHA512SU0 1100 1110 110 00000 100000 ..... ..... @rr_q1e0
SM4E 1100 1110 110 00000 100001 ..... ..... @r2r_q1e0
+
+### Cryptographic four-register
+
+EOR3 1100 1110 000 ..... 0 ..... ..... ..... @rrrr_q1e3
+BCAX 1100 1110 001 ..... 0 ..... ..... ..... @rrrr_q1e3
+SM3SS1 1100 1110 010 ..... 0 ..... ..... ..... @rrrr_q1e3
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index eed0abe912..2951e7eb59 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -1352,6 +1352,17 @@ static bool do_gvec_fn3(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
return true;
}
+static bool do_gvec_fn4(DisasContext *s, arg_qrrrr_e *a, GVecGen4Fn *fn)
+{
+ if (!a->q && a->esz == MO_64) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_fn4(s, a->q, a->rd, a->rn, a->rm, a->ra, fn, a->esz);
+ }
+ return true;
+}
+
/*
* This utility function is for doing register extension with an
* optional shift. You will likely want to pass a temporary for the
@@ -4632,6 +4643,38 @@ TRANS_FEAT(SM4EKEY, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4ekey)
TRANS_FEAT(SHA512SU0, aa64_sha512, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha512su0)
TRANS_FEAT(SM4E, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4e)
+TRANS_FEAT(EOR3, aa64_sha3, do_gvec_fn4, a, gen_gvec_eor3)
+TRANS_FEAT(BCAX, aa64_sha3, do_gvec_fn4, a, gen_gvec_bcax)
+
+static bool trans_SM3SS1(DisasContext *s, arg_SM3SS1 *a)
+{
+ if (!dc_isar_feature(aa64_sm3, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ TCGv_i32 tcg_op1 = tcg_temp_new_i32();
+ TCGv_i32 tcg_op2 = tcg_temp_new_i32();
+ TCGv_i32 tcg_op3 = tcg_temp_new_i32();
+ TCGv_i32 tcg_res = tcg_temp_new_i32();
+ unsigned vsz, dofs;
+
+ read_vec_element_i32(s, tcg_op1, a->rn, 3, MO_32);
+ read_vec_element_i32(s, tcg_op2, a->rm, 3, MO_32);
+ read_vec_element_i32(s, tcg_op3, a->ra, 3, MO_32);
+
+ tcg_gen_rotri_i32(tcg_res, tcg_op1, 20);
+ tcg_gen_add_i32(tcg_res, tcg_res, tcg_op2);
+ tcg_gen_add_i32(tcg_res, tcg_res, tcg_op3);
+ tcg_gen_rotri_i32(tcg_res, tcg_res, 25);
+
+ /* Clear the whole register first, then store bits [127:96]. */
+ vsz = vec_full_reg_size(s);
+ dofs = vec_full_reg_offset(s, a->rd);
+ tcg_gen_gvec_dup_imm(MO_64, dofs, vsz, vsz, 0);
+ write_vec_element_i32(s, tcg_res, a->rd, 3, MO_32);
+ }
+ return true;
+}
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
@@ -13533,94 +13576,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto four-register
- * 31 23 22 21 20 16 15 14 10 9 5 4 0
- * +-------------------+-----+------+---+------+------+------+
- * | 1 1 0 0 1 1 1 0 0 | Op0 | Rm | 0 | Ra | Rn | Rd |
- * +-------------------+-----+------+---+------+------+------+
- */
-static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
-{
- int op0 = extract32(insn, 21, 2);
- int rm = extract32(insn, 16, 5);
- int ra = extract32(insn, 10, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- bool feature;
-
- switch (op0) {
- case 0: /* EOR3 */
- case 1: /* BCAX */
- feature = dc_isar_feature(aa64_sha3, s);
- break;
- case 2: /* SM3SS1 */
- feature = dc_isar_feature(aa64_sm3, s);
- break;
- default:
- unallocated_encoding(s);
- return;
- }
-
- if (!feature) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- if (op0 < 2) {
- TCGv_i64 tcg_op1, tcg_op2, tcg_op3, tcg_res[2];
- int pass;
-
- tcg_op1 = tcg_temp_new_i64();
- tcg_op2 = tcg_temp_new_i64();
- tcg_op3 = tcg_temp_new_i64();
- tcg_res[0] = tcg_temp_new_i64();
- tcg_res[1] = tcg_temp_new_i64();
-
- for (pass = 0; pass < 2; pass++) {
- read_vec_element(s, tcg_op1, rn, pass, MO_64);
- read_vec_element(s, tcg_op2, rm, pass, MO_64);
- read_vec_element(s, tcg_op3, ra, pass, MO_64);
-
- if (op0 == 0) {
- /* EOR3 */
- tcg_gen_xor_i64(tcg_res[pass], tcg_op2, tcg_op3);
- } else {
- /* BCAX */
- tcg_gen_andc_i64(tcg_res[pass], tcg_op2, tcg_op3);
- }
- tcg_gen_xor_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
- }
- write_vec_element(s, tcg_res[0], rd, 0, MO_64);
- write_vec_element(s, tcg_res[1], rd, 1, MO_64);
- } else {
- TCGv_i32 tcg_op1, tcg_op2, tcg_op3, tcg_res, tcg_zero;
-
- tcg_op1 = tcg_temp_new_i32();
- tcg_op2 = tcg_temp_new_i32();
- tcg_op3 = tcg_temp_new_i32();
- tcg_res = tcg_temp_new_i32();
- tcg_zero = tcg_constant_i32(0);
-
- read_vec_element_i32(s, tcg_op1, rn, 3, MO_32);
- read_vec_element_i32(s, tcg_op2, rm, 3, MO_32);
- read_vec_element_i32(s, tcg_op3, ra, 3, MO_32);
-
- tcg_gen_rotri_i32(tcg_res, tcg_op1, 20);
- tcg_gen_add_i32(tcg_res, tcg_res, tcg_op2);
- tcg_gen_add_i32(tcg_res, tcg_res, tcg_op3);
- tcg_gen_rotri_i32(tcg_res, tcg_res, 25);
-
- write_vec_element_i32(s, tcg_zero, rd, 0, MO_32);
- write_vec_element_i32(s, tcg_zero, rd, 1, MO_32);
- write_vec_element_i32(s, tcg_zero, rd, 2, MO_32);
- write_vec_element_i32(s, tcg_res, rd, 3, MO_32);
- }
-}
-
/* Crypto XAR
* 31 21 20 16 15 10 9 5 4 0
* +-----------------------+------+--------+------+------+
@@ -13707,7 +13662,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0xce000000, 0xff808000, disas_crypto_four_reg },
{ 0xce800000, 0xffe00000, disas_crypto_xar },
{ 0xce408000, 0xffe0c000, disas_crypto_three_reg_imm2 },
{ 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
--
2.34.1
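
The interesting case here is SM3SS1, which stays inline rather than table-driven: it computes a rotate-add-rotate over element 3 of each source and zeroes the rest of the destination (the ror amounts 20 and 25 appear to correspond to the architectural rol 12 and rol 7). A scalar model of the lane computation from trans_SM3SS1 above (a sketch, not patch code):

#include <stdint.h>
#include <stdio.h>

static uint32_t ror32(uint32_t x, unsigned r)    /* valid for 1 <= r <= 31 */
{
    return (x >> r) | (x << (32 - r));
}

/* Lane-3 computation; lanes 0-2 of Vd are zeroed by the dup_imm. */
static uint32_t sm3ss1(uint32_t n3, uint32_t m3, uint32_t a3)
{
    return ror32(ror32(n3, 20) + m3 + a3, 25);
}

int main(void)
{
    printf("0x%08x\n", sm3ss1(0x12345678, 0x9abcdef0, 0x0f0f0f0f));
    return 0;
}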
* [PATCH v2 15/67] target/arm: Convert Cryptographic 3-register, imm2 to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 10 ++++++++
target/arm/tcg/translate-a64.c | 43 ++++++++++------------------------
2 files changed, 22 insertions(+), 31 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index ef6902e86a..1292312a7f 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -644,3 +644,13 @@ SM4E 1100 1110 110 00000 100001 ..... ..... @r2r_q1e0
EOR3 1100 1110 000 ..... 0 ..... ..... ..... @rrrr_q1e3
BCAX 1100 1110 001 ..... 0 ..... ..... ..... @rrrr_q1e3
SM3SS1 1100 1110 010 ..... 0 ..... ..... ..... @rrrr_q1e3
+
+### Cryptographic three-register, imm2
+
+&crypto3i rd rn rm imm
+@crypto3i ........ ... rm:5 .. imm:2 .. rn:5 rd:5 &crypto3i
+
+SM3TT1A 11001110 010 ..... 10 .. 00 ..... ..... @crypto3i
+SM3TT1B 11001110 010 ..... 10 .. 01 ..... ..... @crypto3i
+SM3TT2A 11001110 010 ..... 10 .. 10 ..... ..... @crypto3i
+SM3TT2B 11001110 010 ..... 10 .. 11 ..... ..... @crypto3i
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 2951e7eb59..cf3a7dfa99 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4676,6 +4676,18 @@ static bool trans_SM3SS1(DisasContext *s, arg_SM3SS1 *a)
return true;
}
+static bool do_crypto3i(DisasContext *s, arg_crypto3i *a, gen_helper_gvec_3 *fn)
+{
+ if (fp_access_check(s)) {
+ gen_gvec_op3_ool(s, true, a->rd, a->rn, a->rm, a->imm, fn);
+ }
+ return true;
+}
+TRANS_FEAT(SM3TT1A, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt1a)
+TRANS_FEAT(SM3TT1B, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt1b)
+TRANS_FEAT(SM3TT2A, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt2a)
+TRANS_FEAT(SM3TT2B, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt2b)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -13604,36 +13616,6 @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
vec_full_reg_size(s));
}
-/* Crypto three-reg imm2
- * 31 21 20 16 15 14 13 12 11 10 9 5 4 0
- * +-----------------------+------+-----+------+--------+------+------+
- * | 1 1 0 0 1 1 1 0 0 1 0 | Rm | 1 0 | imm2 | opcode | Rn | Rd |
- * +-----------------------+------+-----+------+--------+------+------+
- */
-static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
-{
- static gen_helper_gvec_3 * const fns[4] = {
- gen_helper_crypto_sm3tt1a, gen_helper_crypto_sm3tt1b,
- gen_helper_crypto_sm3tt2a, gen_helper_crypto_sm3tt2b,
- };
- int opcode = extract32(insn, 10, 2);
- int imm2 = extract32(insn, 12, 2);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
-
- if (!dc_isar_feature(aa64_sm3, s)) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- gen_gvec_op3_ool(s, true, rd, rn, rm, imm2, fns[opcode]);
-}
-
/* C3.6 Data processing - SIMD, inc Crypto
*
* As the decode gets a little complex we are using a table based
@@ -13663,7 +13645,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
{ 0xce800000, 0xffe00000, disas_crypto_xar },
- { 0xce408000, 0xffe0c000, disas_crypto_three_reg_imm2 },
{ 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
{ 0x5e400400, 0xdf60c400, disas_simd_scalar_three_reg_same_fp16 },
--
2.34.1
* [PATCH v2 16/67] target/arm: Convert XAR to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 4 ++++
target/arm/tcg/translate-a64.c | 43 +++++++++++-----------------------
2 files changed, 18 insertions(+), 29 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 1292312a7f..7f354af25d 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -654,3 +654,7 @@ SM3TT1A 11001110 010 ..... 10 .. 00 ..... ..... @crypto3i
SM3TT1B 11001110 010 ..... 10 .. 01 ..... ..... @crypto3i
SM3TT2A 11001110 010 ..... 10 .. 10 ..... ..... @crypto3i
SM3TT2B 11001110 010 ..... 10 .. 11 ..... ..... @crypto3i
+
+### Cryptographic XAR
+
+XAR 1100 1110 100 rm:5 imm:6 rn:5 rd:5
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index cf3a7dfa99..75f1e6a7b9 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4688,6 +4688,20 @@ TRANS_FEAT(SM3TT1B, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt1b)
TRANS_FEAT(SM3TT2A, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt2a)
TRANS_FEAT(SM3TT2B, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt2b)
+static bool trans_XAR(DisasContext *s, arg_XAR *a)
+{
+ if (!dc_isar_feature(aa64_sha3, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_xar(MO_64, vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm), a->imm, 16,
+ vec_full_reg_size(s));
+ }
+ return true;
+}
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -13588,34 +13602,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto XAR
- * 31 21 20 16 15 10 9 5 4 0
- * +-----------------------+------+--------+------+------+
- * | 1 1 0 0 1 1 1 0 1 0 0 | Rm | imm6 | Rn | Rd |
- * +-----------------------+------+--------+------+------+
- */
-static void disas_crypto_xar(DisasContext *s, uint32_t insn)
-{
- int rm = extract32(insn, 16, 5);
- int imm6 = extract32(insn, 10, 6);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
-
- if (!dc_isar_feature(aa64_sha3, s)) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- gen_gvec_xar(MO_64, vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm), imm6, 16,
- vec_full_reg_size(s));
-}
-
/* C3.6 Data processing - SIMD, inc Crypto
*
* As the decode gets a little complex we are using a table based
@@ -13644,7 +13630,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0xce800000, 0xffe00000, disas_crypto_xar },
{ 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
{ 0x5e400400, 0xdf60c400, disas_simd_scalar_three_reg_same_fp16 },
--
2.34.1
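As a cross-check against the removed table entry { 0xce800000, 0xffe00000,
disas_crypto_xar }, the decodetree pattern above extracts exactly the same
fields. A standalone sketch of that decode (extract32() is a local
reimplementation of QEMU's helper, and the instruction word is hand-assembled
for this example):

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t extract32(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0u >> (32 - length));
    }

    int main(void)
    {
        /* XAR v2.2d, v7.2d, v20.2d, #42, assembled per the pattern
         * 1100 1110 100 rm:5 imm:6 rn:5 rd:5 */
        uint32_t insn = 0xce94a8e2;

        /* The fixed bits match the old pattern/mask pair. */
        if ((insn & 0xffe00000) == 0xce800000) {
            printf("rd=%u rn=%u imm=%u rm=%u\n",
                   extract32(insn, 0, 5), extract32(insn, 5, 5),
                   extract32(insn, 10, 6), extract32(insn, 16, 5));
            /* prints: rd=2 rn=7 imm=42 rm=20 */
        }
        return 0;
    }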
* [PATCH v2 17/67] target/arm: Convert Advanced SIMD copy to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 13 +
target/arm/tcg/translate-a64.c | 426 +++++++++++----------------------
2 files changed, 152 insertions(+), 287 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 7f354af25d..d5bfeae7a8 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -658,3 +658,16 @@ SM3TT2B 11001110 010 ..... 10 .. 11 ..... ..... @crypto3i
### Cryptographic XAR
XAR 1100 1110 100 rm:5 imm:6 rn:5 rd:5
+
+### Advanced SIMD scalar copy
+
+DUP_element_s 0101 1110 000 imm:5 0 0000 1 rn:5 rd:5
+
+### Advanced SIMD copy
+
+DUP_element_v 0 q:1 00 1110 000 imm:5 0 0000 1 rn:5 rd:5
+DUP_general 0 q:1 00 1110 000 imm:5 0 0001 1 rn:5 rd:5
+INS_general 0 1 00 1110 000 imm:5 0 0011 1 rn:5 rd:5
+SMOV 0 q:1 00 1110 000 imm:5 0 0101 1 rn:5 rd:5
+UMOV 0 q:1 00 1110 000 imm:5 0 0111 1 rn:5 rd:5
+INS_element 0 1 10 1110 000 di:5 0 si:4 1 rn:5 rd:5
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 75f1e6a7b9..1a12bf22fd 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4702,6 +4702,145 @@ static bool trans_XAR(DisasContext *s, arg_XAR *a)
return true;
}
+/*
+ * Advanced SIMD copy
+ */
+
+static bool decode_esz_idx(int imm, MemOp *pesz, unsigned *pidx)
+{
+ unsigned esz = ctz32(imm);
+ if (esz <= MO_64) {
+ *pesz = esz;
+ *pidx = imm >> (esz + 1);
+ return true;
+ }
+ return false;
+}
+
+static bool trans_DUP_element_s(DisasContext *s, arg_DUP_element_s *a)
+{
+ MemOp esz;
+ unsigned idx;
+
+ if (!decode_esz_idx(a->imm, &esz, &idx)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ /*
+ * This instruction just extracts the specified element and
+ * zero-extends it into the bottom of the destination register.
+ */
+ TCGv_i64 tmp = tcg_temp_new_i64();
+ read_vec_element(s, tmp, a->rn, idx, esz);
+ write_fp_dreg(s, a->rd, tmp);
+ }
+ return true;
+}
+
+static bool trans_DUP_element_v(DisasContext *s, arg_DUP_element_v *a)
+{
+ MemOp esz;
+ unsigned idx;
+
+ if (!decode_esz_idx(a->imm, &esz, &idx)) {
+ return false;
+ }
+ if (esz == MO_64 && !a->q) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ tcg_gen_gvec_dup_mem(esz, vec_full_reg_offset(s, a->rd),
+ vec_reg_offset(s, a->rn, idx, esz),
+ a->q ? 16 : 8, vec_full_reg_size(s));
+ }
+ return true;
+}
+
+static bool trans_DUP_general(DisasContext *s, arg_DUP_general *a)
+{
+ MemOp esz;
+ unsigned idx;
+
+ if (!decode_esz_idx(a->imm, &esz, &idx)) {
+ return false;
+ }
+ if (esz == MO_64 && !a->q) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ tcg_gen_gvec_dup_i64(esz, vec_full_reg_offset(s, a->rd),
+ a->q ? 16 : 8, vec_full_reg_size(s),
+ cpu_reg(s, a->rn));
+ }
+ return true;
+}
+
+static bool do_smov_umov(DisasContext *s, arg_SMOV *a, MemOp is_signed)
+{
+ MemOp esz;
+ unsigned idx;
+
+ if (!decode_esz_idx(a->imm, &esz, &idx)) {
+ return false;
+ }
+ if (is_signed) {
+ if (esz == MO_64 || (esz == MO_32 && !a->q)) {
+ return false;
+ }
+ } else {
+ if (esz == MO_64 ? !a->q : a->q) {
+ return false;
+ }
+ }
+ if (fp_access_check(s)) {
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
+ read_vec_element(s, tcg_rd, a->rn, idx, esz | is_signed);
+ if (is_signed && !a->q) {
+ tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
+ }
+ }
+ return true;
+}
+
+TRANS(SMOV, do_smov_umov, a, MO_SIGN)
+TRANS(UMOV, do_smov_umov, a, 0)
+
+static bool trans_INS_general(DisasContext *s, arg_INS_general *a)
+{
+ MemOp esz;
+ unsigned idx;
+
+ if (!decode_esz_idx(a->imm, &esz, &idx)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ write_vec_element(s, cpu_reg(s, a->rn), a->rd, idx, esz);
+ clear_vec_high(s, true, a->rd);
+ }
+ return true;
+}
+
+static bool trans_INS_element(DisasContext *s, arg_INS_element *a)
+{
+ MemOp esz;
+ unsigned didx, sidx;
+
+ if (!decode_esz_idx(a->di, &esz, &didx)) {
+ return false;
+ }
+ sidx = a->si >> esz;
+ if (fp_access_check(s)) {
+ TCGv_i64 tmp = tcg_temp_new_i64();
+
+ read_vec_element(s, tmp, a->rn, sidx, esz);
+ write_vec_element(s, tmp, a->rd, didx, esz);
+
+ /* INS is considered a 128-bit write for SVE. */
+ clear_vec_high(s, true, a->rd);
+ }
+ return true;
+}
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -7760,268 +7899,6 @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
write_fp_dreg(s, rd, tcg_res);
}
-/* DUP (Element, Vector)
- *
- * 31 30 29 21 20 16 15 10 9 5 4 0
- * +---+---+-------------------+--------+-------------+------+------+
- * | 0 | Q | 0 0 1 1 1 0 0 0 0 | imm5 | 0 0 0 0 0 1 | Rn | Rd |
- * +---+---+-------------------+--------+-------------+------+------+
- *
- * size: encoded in imm5 (see ARM ARM LowestSetBit())
- */
-static void handle_simd_dupe(DisasContext *s, int is_q, int rd, int rn,
- int imm5)
-{
- int size = ctz32(imm5);
- int index;
-
- if (size > 3 || (size == 3 && !is_q)) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- index = imm5 >> (size + 1);
- tcg_gen_gvec_dup_mem(size, vec_full_reg_offset(s, rd),
- vec_reg_offset(s, rn, index, size),
- is_q ? 16 : 8, vec_full_reg_size(s));
-}
-
-/* DUP (element, scalar)
- * 31 21 20 16 15 10 9 5 4 0
- * +-----------------------+--------+-------------+------+------+
- * | 0 1 0 1 1 1 1 0 0 0 0 | imm5 | 0 0 0 0 0 1 | Rn | Rd |
- * +-----------------------+--------+-------------+------+------+
- */
-static void handle_simd_dupes(DisasContext *s, int rd, int rn,
- int imm5)
-{
- int size = ctz32(imm5);
- int index;
- TCGv_i64 tmp;
-
- if (size > 3) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- index = imm5 >> (size + 1);
-
- /* This instruction just extracts the specified element and
- * zero-extends it into the bottom of the destination register.
- */
- tmp = tcg_temp_new_i64();
- read_vec_element(s, tmp, rn, index, size);
- write_fp_dreg(s, rd, tmp);
-}
-
-/* DUP (General)
- *
- * 31 30 29 21 20 16 15 10 9 5 4 0
- * +---+---+-------------------+--------+-------------+------+------+
- * | 0 | Q | 0 0 1 1 1 0 0 0 0 | imm5 | 0 0 0 0 1 1 | Rn | Rd |
- * +---+---+-------------------+--------+-------------+------+------+
- *
- * size: encoded in imm5 (see ARM ARM LowestSetBit())
- */
-static void handle_simd_dupg(DisasContext *s, int is_q, int rd, int rn,
- int imm5)
-{
- int size = ctz32(imm5);
- uint32_t dofs, oprsz, maxsz;
-
- if (size > 3 || ((size == 3) && !is_q)) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- dofs = vec_full_reg_offset(s, rd);
- oprsz = is_q ? 16 : 8;
- maxsz = vec_full_reg_size(s);
-
- tcg_gen_gvec_dup_i64(size, dofs, oprsz, maxsz, cpu_reg(s, rn));
-}
-
-/* INS (Element)
- *
- * 31 21 20 16 15 14 11 10 9 5 4 0
- * +-----------------------+--------+------------+---+------+------+
- * | 0 1 1 0 1 1 1 0 0 0 0 | imm5 | 0 | imm4 | 1 | Rn | Rd |
- * +-----------------------+--------+------------+---+------+------+
- *
- * size: encoded in imm5 (see ARM ARM LowestSetBit())
- * index: encoded in imm5<4:size+1>
- */
-static void handle_simd_inse(DisasContext *s, int rd, int rn,
- int imm4, int imm5)
-{
- int size = ctz32(imm5);
- int src_index, dst_index;
- TCGv_i64 tmp;
-
- if (size > 3) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- dst_index = extract32(imm5, 1+size, 5);
- src_index = extract32(imm4, size, 4);
-
- tmp = tcg_temp_new_i64();
-
- read_vec_element(s, tmp, rn, src_index, size);
- write_vec_element(s, tmp, rd, dst_index, size);
-
- /* INS is considered a 128-bit write for SVE. */
- clear_vec_high(s, true, rd);
-}
-
-
-/* INS (General)
- *
- * 31 21 20 16 15 10 9 5 4 0
- * +-----------------------+--------+-------------+------+------+
- * | 0 1 0 0 1 1 1 0 0 0 0 | imm5 | 0 0 0 1 1 1 | Rn | Rd |
- * +-----------------------+--------+-------------+------+------+
- *
- * size: encoded in imm5 (see ARM ARM LowestSetBit())
- * index: encoded in imm5<4:size+1>
- */
-static void handle_simd_insg(DisasContext *s, int rd, int rn, int imm5)
-{
- int size = ctz32(imm5);
- int idx;
-
- if (size > 3) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- idx = extract32(imm5, 1 + size, 4 - size);
- write_vec_element(s, cpu_reg(s, rn), rd, idx, size);
-
- /* INS is considered a 128-bit write for SVE. */
- clear_vec_high(s, true, rd);
-}
-
-/*
- * UMOV (General)
- * SMOV (General)
- *
- * 31 30 29 21 20 16 15 12 10 9 5 4 0
- * +---+---+-------------------+--------+-------------+------+------+
- * | 0 | Q | 0 0 1 1 1 0 0 0 0 | imm5 | 0 0 1 U 1 1 | Rn | Rd |
- * +---+---+-------------------+--------+-------------+------+------+
- *
- * U: unsigned when set
- * size: encoded in imm5 (see ARM ARM LowestSetBit())
- */
-static void handle_simd_umov_smov(DisasContext *s, int is_q, int is_signed,
- int rn, int rd, int imm5)
-{
- int size = ctz32(imm5);
- int element;
- TCGv_i64 tcg_rd;
-
- /* Check for UnallocatedEncodings */
- if (is_signed) {
- if (size > 2 || (size == 2 && !is_q)) {
- unallocated_encoding(s);
- return;
- }
- } else {
- if (size > 3
- || (size < 3 && is_q)
- || (size == 3 && !is_q)) {
- unallocated_encoding(s);
- return;
- }
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- element = extract32(imm5, 1+size, 4);
-
- tcg_rd = cpu_reg(s, rd);
- read_vec_element(s, tcg_rd, rn, element, size | (is_signed ? MO_SIGN : 0));
- if (is_signed && !is_q) {
- tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
- }
-}
-
-/* AdvSIMD copy
- * 31 30 29 28 21 20 16 15 14 11 10 9 5 4 0
- * +---+---+----+-----------------+------+---+------+---+------+------+
- * | 0 | Q | op | 0 1 1 1 0 0 0 0 | imm5 | 0 | imm4 | 1 | Rn | Rd |
- * +---+---+----+-----------------+------+---+------+---+------+------+
- */
-static void disas_simd_copy(DisasContext *s, uint32_t insn)
-{
- int rd = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int imm4 = extract32(insn, 11, 4);
- int op = extract32(insn, 29, 1);
- int is_q = extract32(insn, 30, 1);
- int imm5 = extract32(insn, 16, 5);
-
- if (op) {
- if (is_q) {
- /* INS (element) */
- handle_simd_inse(s, rd, rn, imm4, imm5);
- } else {
- unallocated_encoding(s);
- }
- } else {
- switch (imm4) {
- case 0:
- /* DUP (element - vector) */
- handle_simd_dupe(s, is_q, rd, rn, imm5);
- break;
- case 1:
- /* DUP (general) */
- handle_simd_dupg(s, is_q, rd, rn, imm5);
- break;
- case 3:
- if (is_q) {
- /* INS (general) */
- handle_simd_insg(s, rd, rn, imm5);
- } else {
- unallocated_encoding(s);
- }
- break;
- case 5:
- case 7:
- /* UMOV/SMOV (is_q indicates 32/64; imm4 indicates signedness) */
- handle_simd_umov_smov(s, is_q, (imm4 == 5), rn, rd, imm5);
- break;
- default:
- unallocated_encoding(s);
- break;
- }
- }
-}
-
/* AdvSIMD modified immediate
* 31 30 29 28 19 18 16 15 12 11 10 9 5 4 0
* +---+---+----+---------------------+-----+-------+----+---+-------+------+
@@ -8085,29 +7962,6 @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
}
}
-/* AdvSIMD scalar copy
- * 31 30 29 28 21 20 16 15 14 11 10 9 5 4 0
- * +-----+----+-----------------+------+---+------+---+------+------+
- * | 0 1 | op | 1 1 1 1 0 0 0 0 | imm5 | 0 | imm4 | 1 | Rn | Rd |
- * +-----+----+-----------------+------+---+------+---+------+------+
- */
-static void disas_simd_scalar_copy(DisasContext *s, uint32_t insn)
-{
- int rd = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int imm4 = extract32(insn, 11, 4);
- int imm5 = extract32(insn, 16, 5);
- int op = extract32(insn, 29, 1);
-
- if (op != 0 || imm4 != 0) {
- unallocated_encoding(s);
- return;
- }
-
- /* DUP (element, scalar) */
- handle_simd_dupes(s, rd, rn, imm5);
-}
-
/* AdvSIMD scalar pairwise
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
* +-----+---+-----------+------+-----------+--------+-----+------+------+
@@ -13614,7 +13468,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x0e200000, 0x9f200c00, disas_simd_three_reg_diff },
{ 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
{ 0x0e300800, 0x9f3e0c00, disas_simd_across_lanes },
- { 0x0e000400, 0x9fe08400, disas_simd_copy },
{ 0x0f000000, 0x9f000400, disas_simd_indexed }, /* vector indexed */
/* simd_mod_imm decode is a subset of simd_shift_imm, so must precede it */
{ 0x0f000400, 0x9ff80400, disas_simd_mod_imm },
@@ -13627,7 +13480,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e200000, 0xdf200c00, disas_simd_scalar_three_reg_diff },
{ 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
{ 0x5e300800, 0xdf3e0c00, disas_simd_scalar_pairwise },
- { 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
{ 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
--
2.34.1
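The imm5 decode in decode_esz_idx() above follows the ARM ARM LowestSetBit()
convention: the lowest set bit of imm5 selects the element size, and the bits
above it give the element index. A standalone sketch of the arithmetic, with
__builtin_ctz standing in for QEMU's ctz32():

    #include <stdio.h>

    int main(void)
    {
        unsigned imm5 = 0x0a;                 /* 0b01010 */
        unsigned esz = __builtin_ctz(imm5);   /* 1, i.e. MO_16: halfwords */
        unsigned idx = imm5 >> (esz + 1);     /* 0b010, i.e. element 2 */

        printf("esz=%u idx=%u\n", esz, idx);  /* prints: esz=1 idx=2 */
        return 0;
    }

An imm5 whose lowest set bit is bit 4 yields esz > MO_64, which
decode_esz_idx() rejects, falling through to an unallocated encoding.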
* [PATCH v2 18/67] target/arm: Convert FMULX to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Convert all forms (scalar, vector, scalar indexed, vector indexed),
which allows us to remove the corresponding switch table entries
from the legacy decoders.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/helper-a64.h | 8 ++
target/arm/tcg/a64.decode | 45 +++++++
target/arm/tcg/translate-a64.c | 221 +++++++++++++++++++++++++++------
target/arm/tcg/vec_helper.c | 39 +++---
4 files changed, 259 insertions(+), 54 deletions(-)
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index 0518165399..b79751a717 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -132,3 +132,11 @@ DEF_HELPER_4(cpye, void, env, i32, i32, i32)
DEF_HELPER_4(cpyfp, void, env, i32, i32, i32)
DEF_HELPER_4(cpyfm, void, env, i32, i32, i32)
DEF_HELPER_4(cpyfe, void, env, i32, i32, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fmulx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fmulx_idx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_idx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_idx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index d5bfeae7a8..2e0e01be01 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -20,21 +20,44 @@
#
%rd 0:5
+%esz_sd 22:1 !function=plus_2
+%hl 11:1 21:1
+%hlm 11:1 20:2
&r rn
&ri rd imm
&rri_sf rd rn imm sf
&i imm
+&rrr_e rd rn rm esz
+&rrx_e rd rn rm idx esz
&qrr_e q rd rn esz
&qrrr_e q rd rn rm esz
+&qrrx_e q rd rn rm idx esz
&qrrrr_e q rd rn rm ra esz
+@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
+@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
+
+@rrx_h ........ .. .. rm:4 .... . . rn:5 rd:5 &rrx_e esz=1 idx=%hlm
+@rrx_s ........ .. . rm:5 .... . . rn:5 rd:5 &rrx_e esz=2 idx=%hl
+@rrx_d ........ .. . rm:5 .... idx:1 . rn:5 rd:5 &rrx_e esz=3
+
@rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
@r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
@rrr_q1e0 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=0
@rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3
@rrrr_q1e3 ........ ... rm:5 . ra:5 rn:5 rd:5 &qrrrr_e q=1 esz=3
+@qrrr_h . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=1
+@qrrr_sd . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=%esz_sd
+
+@qrrx_h . q:1 .. .... .. .. rm:4 .... . . rn:5 rd:5 \
+ &qrrx_e esz=1 idx=%hlm
+@qrrx_s . q:1 .. .... .. . rm:5 .... . . rn:5 rd:5 \
+ &qrrx_e esz=2 idx=%hl
+@qrrx_d . q:1 .. .... .. . rm:5 .... idx:1 . rn:5 rd:5 \
+ &qrrx_e esz=3
+
### Data Processing - Immediate
# PC-rel addressing
@@ -671,3 +694,25 @@ INS_general 0 1 00 1110 000 imm:5 0 0011 1 rn:5 rd:5
SMOV 0 q:1 00 1110 000 imm:5 0 0101 1 rn:5 rd:5
UMOV 0 q:1 00 1110 000 imm:5 0 0111 1 rn:5 rd:5
INS_element 0 1 10 1110 000 di:5 0 si:4 1 rn:5 rd:5
+
+### Advanced SIMD scalar three same
+
+FMULX_s 0101 1110 010 ..... 00011 1 ..... ..... @rrr_h
+FMULX_s 0101 1110 0.1 ..... 11011 1 ..... ..... @rrr_sd
+
+### Advanced SIMD three same
+
+FMULX_v 0.00 1110 010 ..... 00011 1 ..... ..... @qrrr_h
+FMULX_v 0.00 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
+
+### Advanced SIMD scalar x indexed element
+
+FMULX_si 0111 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
+FMULX_si 0111 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
+FMULX_si 0111 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
+
+### Advanced SIMD vector x indexed element
+
+FMULX_vi 0.10 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
+FMULX_vi 0.10 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
+FMULX_vi 0.10 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 1a12bf22fd..8cbe6cd70f 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4841,6 +4841,178 @@ static bool trans_INS_element(DisasContext *s, arg_INS_element *a)
return true;
}
+/*
+ * Advanced SIMD three same
+ */
+
+typedef struct FPScalar {
+ void (*gen_h)(TCGv_i32, TCGv_i32, TCGv_i32, TCGv_ptr);
+ void (*gen_s)(TCGv_i32, TCGv_i32, TCGv_i32, TCGv_ptr);
+ void (*gen_d)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_ptr);
+} FPScalar;
+
+static bool do_fp3_scalar(DisasContext *s, arg_rrr_e *a, const FPScalar *f)
+{
+ switch (a->esz) {
+ case MO_64:
+ if (fp_access_check(s)) {
+ TCGv_i64 t0 = read_fp_dreg(s, a->rn);
+ TCGv_i64 t1 = read_fp_dreg(s, a->rm);
+ f->gen_d(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
+ write_fp_dreg(s, a->rd, t0);
+ }
+ break;
+ case MO_32:
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = read_fp_sreg(s, a->rn);
+ TCGv_i32 t1 = read_fp_sreg(s, a->rm);
+ f->gen_s(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = read_fp_hreg(s, a->rn);
+ TCGv_i32 t1 = read_fp_hreg(s, a->rm);
+ f->gen_h(t0, t0, t1, fpstatus_ptr(FPST_FPCR_F16));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ default:
+ return false;
+ }
+ return true;
+}
+
+static const FPScalar f_scalar_fmulx = {
+ gen_helper_advsimd_mulxh,
+ gen_helper_vfp_mulxs,
+ gen_helper_vfp_mulxd,
+};
+TRANS(FMULX_s, do_fp3_scalar, a, &f_scalar_fmulx)
+
+static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
+ gen_helper_gvec_3_ptr * const fns[3])
+{
+ MemOp esz = a->esz;
+
+ switch (esz) {
+ case MO_64:
+ if (!a->q) {
+ return false;
+ }
+ break;
+ case MO_32:
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ break;
+ default:
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
+ esz == MO_16, 0, fns[esz - 1]);
+ }
+ return true;
+}
+
+static gen_helper_gvec_3_ptr * const f_vector_fmulx[3] = {
+ gen_helper_gvec_fmulx_h,
+ gen_helper_gvec_fmulx_s,
+ gen_helper_gvec_fmulx_d,
+};
+TRANS(FMULX_v, do_fp3_vector, a, f_vector_fmulx)
+
+/*
+ * Advanced SIMD scalar/vector x indexed element
+ */
+
+static bool do_fp3_scalar_idx(DisasContext *s, arg_rrx_e *a, const FPScalar *f)
+{
+ switch (a->esz) {
+ case MO_64:
+ if (fp_access_check(s)) {
+ TCGv_i64 t0 = read_fp_dreg(s, a->rn);
+ TCGv_i64 t1 = tcg_temp_new_i64();
+
+ read_vec_element(s, t1, a->rm, a->idx, MO_64);
+ f->gen_d(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
+ write_fp_dreg(s, a->rd, t0);
+ }
+ break;
+ case MO_32:
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = read_fp_sreg(s, a->rn);
+ TCGv_i32 t1 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t1, a->rm, a->idx, MO_32);
+ f->gen_s(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = read_fp_hreg(s, a->rn);
+ TCGv_i32 t1 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t1, a->rm, a->idx, MO_16);
+ f->gen_h(t0, t0, t1, fpstatus_ptr(FPST_FPCR_F16));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ return true;
+}
+
+TRANS(FMULX_si, do_fp3_scalar_idx, a, &f_scalar_fmulx)
+
+static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
+ gen_helper_gvec_3_ptr * const fns[3])
+{
+ MemOp esz = a->esz;
+
+ switch (esz) {
+ case MO_64:
+ if (!a->q) {
+ return false;
+ }
+ break;
+ case MO_32:
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
+ esz == MO_16, a->idx, fns[esz - 1]);
+ }
+ return true;
+}
+
+static gen_helper_gvec_3_ptr * const f_vector_idx_fmulx[3] = {
+ gen_helper_gvec_fmulx_idx_h,
+ gen_helper_gvec_fmulx_idx_s,
+ gen_helper_gvec_fmulx_idx_d,
+};
+TRANS(FMULX_vi, do_fp3_vector_idx, a, f_vector_idx_fmulx)
+
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -9011,9 +9183,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x1a: /* FADD */
gen_helper_vfp_addd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1b: /* FMULX */
- gen_helper_vfp_mulxd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9058,6 +9227,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_neon_acgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x1b: /* FMULX */
g_assert_not_reached();
}
@@ -9084,9 +9254,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x1a: /* FADD */
gen_helper_vfp_adds(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1b: /* FMULX */
- gen_helper_vfp_mulxs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9134,6 +9301,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_neon_acgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x1b: /* FMULX */
g_assert_not_reached();
}
@@ -9172,7 +9340,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
/* Floating point: U, size[1] and opcode indicate operation */
int fpopcode = opcode | (extract32(size, 1, 1) << 5) | (u << 6);
switch (fpopcode) {
- case 0x1b: /* FMULX */
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
case 0x5d: /* FACGE */
@@ -9183,6 +9350,7 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
case 0x7a: /* FABD */
break;
default:
+ case 0x1b: /* FMULX */
unallocated_encoding(s);
return;
}
@@ -9335,7 +9503,6 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
TCGv_i32 tcg_res;
switch (fpopcode) {
- case 0x03: /* FMULX */
case 0x04: /* FCMEQ (reg) */
case 0x07: /* FRECPS */
case 0x0f: /* FRSQRTS */
@@ -9346,6 +9513,7 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
case 0x1d: /* FACGT */
break;
default:
+ case 0x03: /* FMULX */
unallocated_encoding(s);
return;
}
@@ -9365,9 +9533,6 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
tcg_res = tcg_temp_new_i32();
switch (fpopcode) {
- case 0x03: /* FMULX */
- gen_helper_advsimd_mulxh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x04: /* FCMEQ (reg) */
gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9394,6 +9559,7 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x03: /* FMULX */
g_assert_not_reached();
}
@@ -11051,7 +11217,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_32,
rn, rm, rd);
return;
- case 0x1b: /* FMULX */
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
case 0x5d: /* FACGE */
@@ -11097,6 +11262,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
return;
default:
+ case 0x1b: /* FMULX */
unallocated_encoding(s);
return;
}
@@ -11441,7 +11607,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x0: /* FMAXNM */
case 0x1: /* FMLA */
case 0x2: /* FADD */
- case 0x3: /* FMULX */
case 0x4: /* FCMEQ */
case 0x6: /* FMAX */
case 0x7: /* FRECPS */
@@ -11467,6 +11632,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
pairwise = true;
break;
default:
+ case 0x3: /* FMULX */
unallocated_encoding(s);
return;
}
@@ -11543,9 +11709,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x2: /* FADD */
gen_helper_advsimd_addh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x3: /* FMULX */
- gen_helper_advsimd_mulxh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x4: /* FCMEQ */
gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -11597,6 +11760,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x3: /* FMULX */
g_assert_not_reached();
}
@@ -12816,7 +12980,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
case 0x01: /* FMLA */
case 0x05: /* FMLS */
case 0x09: /* FMUL */
- case 0x19: /* FMULX */
is_fp = 1;
break;
case 0x1d: /* SQRDMLAH */
@@ -12885,6 +13048,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
/* is_fp, but we pass tcg_env not fp_status. */
break;
default:
+ case 0x19: /* FMULX */
unallocated_encoding(s);
return;
}
@@ -13108,10 +13272,8 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
case 0x09: /* FMUL */
gen_helper_vfp_muld(tcg_res, tcg_op, tcg_idx, fpst);
break;
- case 0x19: /* FMULX */
- gen_helper_vfp_mulxd(tcg_res, tcg_op, tcg_idx, fpst);
- break;
default:
+ case 0x19: /* FMULX */
g_assert_not_reached();
}
@@ -13224,24 +13386,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
g_assert_not_reached();
}
break;
- case 0x19: /* FMULX */
- switch (size) {
- case 1:
- if (is_scalar) {
- gen_helper_advsimd_mulxh(tcg_res, tcg_op,
- tcg_idx, fpst);
- } else {
- gen_helper_advsimd_mulx2h(tcg_res, tcg_op,
- tcg_idx, fpst);
- }
- break;
- case 2:
- gen_helper_vfp_mulxs(tcg_res, tcg_op, tcg_idx, fpst);
- break;
- default:
- g_assert_not_reached();
- }
- break;
case 0x0c: /* SQDMULH */
if (size == 1) {
gen_helper_neon_qdmulh_s16(tcg_res, tcg_env,
@@ -13283,6 +13427,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
break;
default:
+ case 0x19: /* FMULX */
g_assert_not_reached();
}
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 1f93510b85..8684581923 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -1248,6 +1248,9 @@ DO_3OP(gvec_rsqrts_nf_h, float16_rsqrts_nf, float16)
DO_3OP(gvec_rsqrts_nf_s, float32_rsqrts_nf, float32)
#ifdef TARGET_AARCH64
+DO_3OP(gvec_fmulx_h, helper_advsimd_mulxh, float16)
+DO_3OP(gvec_fmulx_s, helper_vfp_mulxs, float32)
+DO_3OP(gvec_fmulx_d, helper_vfp_mulxd, float64)
DO_3OP(gvec_recps_h, helper_recpsf_f16, float16)
DO_3OP(gvec_recps_s, helper_recpsf_f32, float32)
@@ -1385,7 +1388,7 @@ DO_MLA_IDX(gvec_mls_idx_d, uint64_t, -, H8)
#undef DO_MLA_IDX
-#define DO_FMUL_IDX(NAME, ADD, TYPE, H) \
+#define DO_FMUL_IDX(NAME, ADD, MUL, TYPE, H) \
void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
{ \
intptr_t i, j, oprsz = simd_oprsz(desc); \
@@ -1395,33 +1398,37 @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
for (i = 0; i < oprsz / sizeof(TYPE); i += segment) { \
TYPE mm = m[H(i + idx)]; \
for (j = 0; j < segment; j++) { \
- d[i + j] = TYPE##_##ADD(d[i + j], \
- TYPE##_mul(n[i + j], mm, stat), stat); \
+ d[i + j] = ADD(d[i + j], MUL(n[i + j], mm, stat), stat); \
} \
} \
clear_tail(d, oprsz, simd_maxsz(desc)); \
}
-#define float16_nop(N, M, S) (M)
-#define float32_nop(N, M, S) (M)
-#define float64_nop(N, M, S) (M)
+#define nop(N, M, S) (M)
-DO_FMUL_IDX(gvec_fmul_idx_h, nop, float16, H2)
-DO_FMUL_IDX(gvec_fmul_idx_s, nop, float32, H4)
-DO_FMUL_IDX(gvec_fmul_idx_d, nop, float64, H8)
+DO_FMUL_IDX(gvec_fmul_idx_h, nop, float16_mul, float16, H2)
+DO_FMUL_IDX(gvec_fmul_idx_s, nop, float32_mul, float32, H4)
+DO_FMUL_IDX(gvec_fmul_idx_d, nop, float64_mul, float64, H8)
+
+#ifdef TARGET_AARCH64
+
+DO_FMUL_IDX(gvec_fmulx_idx_h, nop, helper_advsimd_mulxh, float16, H2)
+DO_FMUL_IDX(gvec_fmulx_idx_s, nop, helper_vfp_mulxs, float32, H4)
+DO_FMUL_IDX(gvec_fmulx_idx_d, nop, helper_vfp_mulxd, float64, H8)
+
+#endif
+
+#undef nop
/*
* Non-fused multiply-accumulate operations, for Neon. NB that unlike
* the fused ops below they assume accumulate both from and into Vd.
*/
-DO_FMUL_IDX(gvec_fmla_nf_idx_h, add, float16, H2)
-DO_FMUL_IDX(gvec_fmla_nf_idx_s, add, float32, H4)
-DO_FMUL_IDX(gvec_fmls_nf_idx_h, sub, float16, H2)
-DO_FMUL_IDX(gvec_fmls_nf_idx_s, sub, float32, H4)
+DO_FMUL_IDX(gvec_fmla_nf_idx_h, float16_add, float16_mul, float16, H2)
+DO_FMUL_IDX(gvec_fmla_nf_idx_s, float32_add, float32_mul, float32, H4)
+DO_FMUL_IDX(gvec_fmls_nf_idx_h, float16_sub, float16_mul, float16, H2)
+DO_FMUL_IDX(gvec_fmls_nf_idx_s, float32_sub, float32_mul, float32, H4)
-#undef float16_nop
-#undef float32_nop
-#undef float64_nop
#undef DO_FMUL_IDX
#define DO_FMLA_IDX(NAME, TYPE, H) \
--
2.34.1
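The vec_helper.c change above generalizes DO_FMUL_IDX to take both an ADD
and a MUL operation, so one loop body serves the plain multiplies (ADD = nop)
and the non-fused multiply-accumulates alike. The same trick in a
self-contained toy; fadd/fmul are stand-ins for the softfloat helpers, not
QEMU code:

    #include <stdio.h>

    #define nop(N, M)  (M)
    #define fadd(N, M) ((N) + (M))
    #define fmul(N, M) ((N) * (M))

    /* One loop body, parameterized on both operations. */
    #define DO_OP_IDX(NAME, ADD, MUL)                               \
    static void NAME(float *d, const float *n, float mm, int len)   \
    {                                                               \
        for (int j = 0; j < len; j++) {                             \
            d[j] = ADD(d[j], MUL(n[j], mm));                        \
        }                                                           \
    }

    DO_OP_IDX(mul_idx, nop, fmul)   /* d[j] = n[j] * mm  */
    DO_OP_IDX(mla_idx, fadd, fmul)  /* d[j] += n[j] * mm */

    int main(void)
    {
        float d[4] = { 1, 1, 1, 1 }, n[4] = { 1, 2, 3, 4 };

        mul_idx(d, n, 2.0f, 4);
        printf("%g %g %g %g\n", d[0], d[1], d[2], d[3]);  /* 2 4 6 8 */
        mla_idx(d, n, 1.0f, 4);
        printf("%g %g %g %g\n", d[0], d[1], d[2], d[3]);  /* 3 6 9 12 */
        return 0;
    }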
* [PATCH v2 19/67] target/arm: Convert FADD, FSUB, FDIV, FMUL to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/helper-a64.h | 4 +
target/arm/tcg/translate.h | 5 +
target/arm/tcg/a64.decode | 27 +++++
target/arm/tcg/translate-a64.c | 205 +++++++++++++++++----------------
target/arm/tcg/vec_helper.c | 4 +
5 files changed, 143 insertions(+), 102 deletions(-)
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index b79751a717..371388f61b 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -133,6 +133,10 @@ DEF_HELPER_4(cpyfp, void, env, i32, i32, i32)
DEF_HELPER_4(cpyfm, void, env, i32, i32, i32)
DEF_HELPER_4(cpyfe, void, env, i32, i32, i32)
+DEF_HELPER_FLAGS_5(gvec_fdiv_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fdiv_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fdiv_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_5(gvec_fmulx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmulx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmulx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index 80e85096a8..ecfa242eef 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -252,6 +252,11 @@ static inline int shl_12(DisasContext *s, int x)
return x << 12;
}
+static inline int xor_2(DisasContext *s, int x)
+{
+ return x ^ 2;
+}
+
static inline int neon_3same_fp_size(DisasContext *s, int x)
{
/* Convert 0==fp32, 1==fp16 into a MO_* value */
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 2e0e01be01..82daafbef5 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -21,6 +21,7 @@
%rd 0:5
%esz_sd 22:1 !function=plus_2
+%esz_hsd 22:2 !function=xor_2
%hl 11:1 21:1
%hlm 11:1 20:2
@@ -37,6 +38,7 @@
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
+@rrr_hsd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_hsd
@rrx_h ........ .. .. rm:4 .... . . rn:5 rd:5 &rrx_e esz=1 idx=%hlm
@rrx_s ........ .. . rm:5 .... . . rn:5 rd:5 &rrx_e esz=2 idx=%hl
@@ -697,22 +699,47 @@ INS_element 0 1 10 1110 000 di:5 0 si:4 1 rn:5 rd:5
### Advanced SIMD scalar three same
+FADD_s 0001 1110 ..1 ..... 0010 10 ..... ..... @rrr_hsd
+FSUB_s 0001 1110 ..1 ..... 0011 10 ..... ..... @rrr_hsd
+FDIV_s 0001 1110 ..1 ..... 0001 10 ..... ..... @rrr_hsd
+FMUL_s 0001 1110 ..1 ..... 0000 10 ..... ..... @rrr_hsd
+
FMULX_s 0101 1110 010 ..... 00011 1 ..... ..... @rrr_h
FMULX_s 0101 1110 0.1 ..... 11011 1 ..... ..... @rrr_sd
### Advanced SIMD three same
+FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
+FADD_v 0.00 1110 0.1 ..... 11010 1 ..... ..... @qrrr_sd
+
+FSUB_v 0.00 1110 110 ..... 00010 1 ..... ..... @qrrr_h
+FSUB_v 0.00 1110 1.1 ..... 11010 1 ..... ..... @qrrr_sd
+
+FDIV_v 0.10 1110 010 ..... 00111 1 ..... ..... @qrrr_h
+FDIV_v 0.10 1110 0.1 ..... 11111 1 ..... ..... @qrrr_sd
+
+FMUL_v 0.10 1110 010 ..... 00011 1 ..... ..... @qrrr_h
+FMUL_v 0.10 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
+
FMULX_v 0.00 1110 010 ..... 00011 1 ..... ..... @qrrr_h
FMULX_v 0.00 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
### Advanced SIMD scalar x indexed element
+FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
+FMUL_si 0101 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
+FMUL_si 0101 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
+
FMULX_si 0111 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
FMULX_si 0111 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
FMULX_si 0111 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
### Advanced SIMD vector x indexed element
+FMUL_vi 0.00 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
+FMUL_vi 0.00 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
+FMUL_vi 0.00 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
+
FMULX_vi 0.10 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
FMULX_vi 0.10 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
FMULX_vi 0.10 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 8cbe6cd70f..97c3d758d6 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4887,6 +4887,34 @@ static bool do_fp3_scalar(DisasContext *s, arg_rrr_e *a, const FPScalar *f)
return true;
}
+static const FPScalar f_scalar_fadd = {
+ gen_helper_vfp_addh,
+ gen_helper_vfp_adds,
+ gen_helper_vfp_addd,
+};
+TRANS(FADD_s, do_fp3_scalar, a, &f_scalar_fadd)
+
+static const FPScalar f_scalar_fsub = {
+ gen_helper_vfp_subh,
+ gen_helper_vfp_subs,
+ gen_helper_vfp_subd,
+};
+TRANS(FSUB_s, do_fp3_scalar, a, &f_scalar_fsub)
+
+static const FPScalar f_scalar_fdiv = {
+ gen_helper_vfp_divh,
+ gen_helper_vfp_divs,
+ gen_helper_vfp_divd,
+};
+TRANS(FDIV_s, do_fp3_scalar, a, &f_scalar_fdiv)
+
+static const FPScalar f_scalar_fmul = {
+ gen_helper_vfp_mulh,
+ gen_helper_vfp_muls,
+ gen_helper_vfp_muld,
+};
+TRANS(FMUL_s, do_fp3_scalar, a, &f_scalar_fmul)
+
static const FPScalar f_scalar_fmulx = {
gen_helper_advsimd_mulxh,
gen_helper_vfp_mulxs,
@@ -4922,6 +4950,34 @@ static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
return true;
}
+static gen_helper_gvec_3_ptr * const f_vector_fadd[3] = {
+ gen_helper_gvec_fadd_h,
+ gen_helper_gvec_fadd_s,
+ gen_helper_gvec_fadd_d,
+};
+TRANS(FADD_v, do_fp3_vector, a, f_vector_fadd)
+
+static gen_helper_gvec_3_ptr * const f_vector_fsub[3] = {
+ gen_helper_gvec_fsub_h,
+ gen_helper_gvec_fsub_s,
+ gen_helper_gvec_fsub_d,
+};
+TRANS(FSUB_v, do_fp3_vector, a, f_vector_fsub)
+
+static gen_helper_gvec_3_ptr * const f_vector_fdiv[3] = {
+ gen_helper_gvec_fdiv_h,
+ gen_helper_gvec_fdiv_s,
+ gen_helper_gvec_fdiv_d,
+};
+TRANS(FDIV_v, do_fp3_vector, a, f_vector_fdiv)
+
+static gen_helper_gvec_3_ptr * const f_vector_fmul[3] = {
+ gen_helper_gvec_fmul_h,
+ gen_helper_gvec_fmul_s,
+ gen_helper_gvec_fmul_d,
+};
+TRANS(FMUL_v, do_fp3_vector, a, f_vector_fmul)
+
static gen_helper_gvec_3_ptr * const f_vector_fmulx[3] = {
gen_helper_gvec_fmulx_h,
gen_helper_gvec_fmulx_s,
@@ -4975,6 +5031,7 @@ static bool do_fp3_scalar_idx(DisasContext *s, arg_rrx_e *a, const FPScalar *f)
return true;
}
+TRANS(FMUL_si, do_fp3_scalar_idx, a, &f_scalar_fmul)
TRANS(FMULX_si, do_fp3_scalar_idx, a, &f_scalar_fmulx)
static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
@@ -5005,6 +5062,13 @@ static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
return true;
}
+static gen_helper_gvec_3_ptr * const f_vector_idx_fmul[3] = {
+ gen_helper_gvec_fmul_idx_h,
+ gen_helper_gvec_fmul_idx_s,
+ gen_helper_gvec_fmul_idx_d,
+};
+TRANS(FMUL_vi, do_fp3_vector_idx, a, f_vector_idx_fmul)
+
static gen_helper_gvec_3_ptr * const f_vector_idx_fmulx[3] = {
gen_helper_gvec_fmulx_idx_h,
gen_helper_gvec_fmulx_idx_s,
@@ -6827,18 +6891,6 @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
tcg_op2 = read_fp_sreg(s, rm);
switch (opcode) {
- case 0x0: /* FMUL */
- gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x1: /* FDIV */
- gen_helper_vfp_divs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2: /* FADD */
- gen_helper_vfp_adds(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3: /* FSUB */
- gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x4: /* FMAX */
gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -6855,6 +6907,12 @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
gen_helper_vfp_negs(tcg_res, tcg_res);
break;
+ default:
+ case 0x0: /* FMUL */
+ case 0x1: /* FDIV */
+ case 0x2: /* FADD */
+ case 0x3: /* FSUB */
+ g_assert_not_reached();
}
write_fp_sreg(s, rd, tcg_res);
@@ -6875,18 +6933,6 @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
tcg_op2 = read_fp_dreg(s, rm);
switch (opcode) {
- case 0x0: /* FMUL */
- gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x1: /* FDIV */
- gen_helper_vfp_divd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2: /* FADD */
- gen_helper_vfp_addd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3: /* FSUB */
- gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x4: /* FMAX */
gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -6903,6 +6949,12 @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
gen_helper_vfp_negd(tcg_res, tcg_res);
break;
+ default:
+ case 0x0: /* FMUL */
+ case 0x1: /* FDIV */
+ case 0x2: /* FADD */
+ case 0x3: /* FSUB */
+ g_assert_not_reached();
}
write_fp_dreg(s, rd, tcg_res);
@@ -6923,18 +6975,6 @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
tcg_op2 = read_fp_hreg(s, rm);
switch (opcode) {
- case 0x0: /* FMUL */
- gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x1: /* FDIV */
- gen_helper_advsimd_divh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2: /* FADD */
- gen_helper_advsimd_addh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3: /* FSUB */
- gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x4: /* FMAX */
gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -6952,6 +6992,10 @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
tcg_gen_xori_i32(tcg_res, tcg_res, 0x8000);
break;
default:
+ case 0x0: /* FMUL */
+ case 0x1: /* FDIV */
+ case 0x2: /* FADD */
+ case 0x3: /* FSUB */
g_assert_not_reached();
}
@@ -9180,9 +9224,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x18: /* FMAXNM */
gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1a: /* FADD */
- gen_helper_vfp_addd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9195,27 +9236,18 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x38: /* FMINNM */
gen_helper_vfp_minnumd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x3a: /* FSUB */
- gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x3e: /* FMIN */
gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5b: /* FMUL */
- gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x5c: /* FCMGE */
gen_helper_neon_cge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x5d: /* FACGE */
gen_helper_neon_acge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5f: /* FDIV */
- gen_helper_vfp_divd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7a: /* FABD */
gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
gen_helper_vfp_absd(tcg_res, tcg_res);
@@ -9227,7 +9259,11 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_neon_acgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x3a: /* FSUB */
+ case 0x5b: /* FMUL */
+ case 0x5f: /* FDIV */
g_assert_not_reached();
}
@@ -9251,9 +9287,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_vfp_muladds(tcg_res, tcg_op1, tcg_op2,
tcg_res, fpst);
break;
- case 0x1a: /* FADD */
- gen_helper_vfp_adds(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9269,27 +9302,18 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x38: /* FMINNM */
gen_helper_vfp_minnums(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x3a: /* FSUB */
- gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x3e: /* FMIN */
gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5b: /* FMUL */
- gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x5c: /* FCMGE */
gen_helper_neon_cge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x5d: /* FACGE */
gen_helper_neon_acge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5f: /* FDIV */
- gen_helper_vfp_divs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7a: /* FABD */
gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
gen_helper_vfp_abss(tcg_res, tcg_res);
@@ -9301,7 +9325,11 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_neon_acgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x3a: /* FSUB */
+ case 0x5b: /* FMUL */
+ case 0x5f: /* FDIV */
g_assert_not_reached();
}
@@ -11224,15 +11252,11 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x19: /* FMLA */
case 0x39: /* FMLS */
case 0x18: /* FMAXNM */
- case 0x1a: /* FADD */
case 0x1c: /* FCMEQ */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
- case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
- case 0x5b: /* FMUL */
case 0x5c: /* FCMGE */
- case 0x5f: /* FDIV */
case 0x7a: /* FABD */
case 0x7c: /* FCMGT */
if (!fp_access_check(s)) {
@@ -11262,7 +11286,11 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
return;
default:
+ case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x3a: /* FSUB */
+ case 0x5b: /* FMUL */
+ case 0x5f: /* FDIV */
unallocated_encoding(s);
return;
}
@@ -11606,19 +11634,15 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
switch (fpopcode) {
case 0x0: /* FMAXNM */
case 0x1: /* FMLA */
- case 0x2: /* FADD */
case 0x4: /* FCMEQ */
case 0x6: /* FMAX */
case 0x7: /* FRECPS */
case 0x8: /* FMINNM */
case 0x9: /* FMLS */
- case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0xf: /* FRSQRTS */
- case 0x13: /* FMUL */
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
- case 0x17: /* FDIV */
case 0x1a: /* FABD */
case 0x1c: /* FCMGT */
case 0x1d: /* FACGT */
@@ -11632,7 +11656,11 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
pairwise = true;
break;
default:
+ case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0xa: /* FSUB */
+ case 0x13: /* FMUL */
+ case 0x17: /* FDIV */
unallocated_encoding(s);
return;
}
@@ -11706,9 +11734,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
fpst);
break;
- case 0x2: /* FADD */
- gen_helper_advsimd_addh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x4: /* FCMEQ */
gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -11728,27 +11753,18 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
fpst);
break;
- case 0xa: /* FSUB */
- gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0xe: /* FMIN */
gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0xf: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x13: /* FMUL */
- gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x14: /* FCMGE */
gen_helper_advsimd_cge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x15: /* FACGE */
gen_helper_advsimd_acge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x17: /* FDIV */
- gen_helper_advsimd_divh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1a: /* FABD */
gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
@@ -11760,7 +11776,11 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0xa: /* FSUB */
+ case 0x13: /* FMUL */
+ case 0x17: /* FDIV */
g_assert_not_reached();
}
@@ -12979,7 +12999,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
break;
case 0x01: /* FMLA */
case 0x05: /* FMLS */
- case 0x09: /* FMUL */
is_fp = 1;
break;
case 0x1d: /* SQRDMLAH */
@@ -13048,6 +13067,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
/* is_fp, but we pass tcg_env not fp_status. */
break;
default:
+ case 0x09: /* FMUL */
case 0x19: /* FMULX */
unallocated_encoding(s);
return;
@@ -13269,10 +13289,8 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
read_vec_element(s, tcg_res, rd, pass, MO_64);
gen_helper_vfp_muladdd(tcg_res, tcg_op, tcg_idx, tcg_res, fpst);
break;
- case 0x09: /* FMUL */
- gen_helper_vfp_muld(tcg_res, tcg_op, tcg_idx, fpst);
- break;
default:
+ case 0x09: /* FMUL */
case 0x19: /* FMULX */
g_assert_not_reached();
}
@@ -13368,24 +13386,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
g_assert_not_reached();
}
break;
- case 0x09: /* FMUL */
- switch (size) {
- case 1:
- if (is_scalar) {
- gen_helper_advsimd_mulh(tcg_res, tcg_op,
- tcg_idx, fpst);
- } else {
- gen_helper_advsimd_mul2h(tcg_res, tcg_op,
- tcg_idx, fpst);
- }
- break;
- case 2:
- gen_helper_vfp_muls(tcg_res, tcg_op, tcg_idx, fpst);
- break;
- default:
- g_assert_not_reached();
- }
- break;
case 0x0c: /* SQDMULH */
if (size == 1) {
gen_helper_neon_qdmulh_s16(tcg_res, tcg_env,
@@ -13427,6 +13427,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
break;
default:
+ case 0x09: /* FMUL */
case 0x19: /* FMULX */
g_assert_not_reached();
}
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 8684581923..4106536371 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -1248,6 +1248,10 @@ DO_3OP(gvec_rsqrts_nf_h, float16_rsqrts_nf, float16)
DO_3OP(gvec_rsqrts_nf_s, float32_rsqrts_nf, float32)
#ifdef TARGET_AARCH64
+DO_3OP(gvec_fdiv_h, float16_div, float16)
+DO_3OP(gvec_fdiv_s, float32_div, float32)
+DO_3OP(gvec_fdiv_d, float64_div, float64)
+
DO_3OP(gvec_fmulx_h, helper_advsimd_mulxh, float16)
DO_3OP(gvec_fmulx_s, helper_vfp_mulxs, float32)
DO_3OP(gvec_fmulx_d, helper_vfp_mulxd, float64)
--
2.34.1
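The new %esz_hsd field relies on xor_2() because the scalar FP "type" field
at bits [23:22] encodes 0 = single, 1 = double, 3 = half, and XOR with 2 maps
those values directly onto the MemOp element sizes. A tiny standalone check
of the mapping (the enum mirrors QEMU's MemOp size values):

    #include <assert.h>

    enum { MO_8, MO_16, MO_32, MO_64 };   /* 0, 1, 2, 3 */

    int main(void)
    {
        assert((0 ^ 2) == MO_32);   /* type 00: single precision */
        assert((1 ^ 2) == MO_64);   /* type 01: double precision */
        assert((3 ^ 2) == MO_16);   /* type 11: half precision   */
        return 0;
    }

Type 10 maps to MO_8, which do_fp3_scalar() rejects through its default
case, preserving the unallocated-encoding behaviour of the old decoder.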
* [PATCH v2 20/67] target/arm: Convert FMAX, FMIN, FMAXNM, FMINNM to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 4 +
target/arm/tcg/a64.decode | 17 ++++
target/arm/tcg/translate-a64.c | 168 +++++++++++++++++----------------
target/arm/tcg/vec_helper.c | 4 +
4 files changed, 113 insertions(+), 80 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 2b02733305..7ee15b9651 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -748,15 +748,19 @@ DEF_HELPER_FLAGS_5(gvec_facgt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmax_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmax_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmax_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmin_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmin_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmin_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmaxnum_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmaxnum_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxnum_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fminnum_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fminnum_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminnum_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_recps_nf_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_recps_nf_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 82daafbef5..e2678d919e 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -704,6 +704,11 @@ FSUB_s 0001 1110 ..1 ..... 0011 10 ..... ..... @rrr_hsd
FDIV_s 0001 1110 ..1 ..... 0001 10 ..... ..... @rrr_hsd
FMUL_s 0001 1110 ..1 ..... 0000 10 ..... ..... @rrr_hsd
+FMAX_s 0001 1110 ..1 ..... 0100 10 ..... ..... @rrr_hsd
+FMIN_s 0001 1110 ..1 ..... 0101 10 ..... ..... @rrr_hsd
+FMAXNM_s 0001 1110 ..1 ..... 0110 10 ..... ..... @rrr_hsd
+FMINNM_s 0001 1110 ..1 ..... 0111 10 ..... ..... @rrr_hsd
+
FMULX_s 0101 1110 010 ..... 00011 1 ..... ..... @rrr_h
FMULX_s 0101 1110 0.1 ..... 11011 1 ..... ..... @rrr_sd
@@ -721,6 +726,18 @@ FDIV_v 0.10 1110 0.1 ..... 11111 1 ..... ..... @qrrr_sd
FMUL_v 0.10 1110 010 ..... 00011 1 ..... ..... @qrrr_h
FMUL_v 0.10 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
+FMAX_v 0.00 1110 010 ..... 00110 1 ..... ..... @qrrr_h
+FMAX_v 0.00 1110 0.1 ..... 11110 1 ..... ..... @qrrr_sd
+
+FMIN_v 0.00 1110 110 ..... 00110 1 ..... ..... @qrrr_h
+FMIN_v 0.00 1110 1.1 ..... 11110 1 ..... ..... @qrrr_sd
+
+FMAXNM_v 0.00 1110 010 ..... 00000 1 ..... ..... @qrrr_h
+FMAXNM_v 0.00 1110 0.1 ..... 11000 1 ..... ..... @qrrr_sd
+
+FMINNM_v 0.00 1110 110 ..... 00000 1 ..... ..... @qrrr_h
+FMINNM_v 0.00 1110 1.1 ..... 11000 1 ..... ..... @qrrr_sd
+
FMULX_v 0.00 1110 010 ..... 00011 1 ..... ..... @qrrr_h
FMULX_v 0.00 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 97c3d758d6..6f8207d842 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4915,6 +4915,34 @@ static const FPScalar f_scalar_fmul = {
};
TRANS(FMUL_s, do_fp3_scalar, a, &f_scalar_fmul)
+static const FPScalar f_scalar_fmax = {
+ gen_helper_advsimd_maxh,
+ gen_helper_vfp_maxs,
+ gen_helper_vfp_maxd,
+};
+TRANS(FMAX_s, do_fp3_scalar, a, &f_scalar_fmax)
+
+static const FPScalar f_scalar_fmin = {
+ gen_helper_advsimd_minh,
+ gen_helper_vfp_mins,
+ gen_helper_vfp_mind,
+};
+TRANS(FMIN_s, do_fp3_scalar, a, &f_scalar_fmin)
+
+static const FPScalar f_scalar_fmaxnm = {
+ gen_helper_advsimd_maxnumh,
+ gen_helper_vfp_maxnums,
+ gen_helper_vfp_maxnumd,
+};
+TRANS(FMAXNM_s, do_fp3_scalar, a, &f_scalar_fmaxnm)
+
+static const FPScalar f_scalar_fminnm = {
+ gen_helper_advsimd_minnumh,
+ gen_helper_vfp_minnums,
+ gen_helper_vfp_minnumd,
+};
+TRANS(FMINNM_s, do_fp3_scalar, a, &f_scalar_fminnm)
+
static const FPScalar f_scalar_fmulx = {
gen_helper_advsimd_mulxh,
gen_helper_vfp_mulxs,
@@ -4978,6 +5006,34 @@ static gen_helper_gvec_3_ptr * const f_vector_fmul[3] = {
};
TRANS(FMUL_v, do_fp3_vector, a, f_vector_fmul)
+static gen_helper_gvec_3_ptr * const f_vector_fmax[3] = {
+ gen_helper_gvec_fmax_h,
+ gen_helper_gvec_fmax_s,
+ gen_helper_gvec_fmax_d,
+};
+TRANS(FMAX_v, do_fp3_vector, a, f_vector_fmax)
+
+static gen_helper_gvec_3_ptr * const f_vector_fmin[3] = {
+ gen_helper_gvec_fmin_h,
+ gen_helper_gvec_fmin_s,
+ gen_helper_gvec_fmin_d,
+};
+TRANS(FMIN_v, do_fp3_vector, a, f_vector_fmin)
+
+static gen_helper_gvec_3_ptr * const f_vector_fmaxnm[3] = {
+ gen_helper_gvec_fmaxnum_h,
+ gen_helper_gvec_fmaxnum_s,
+ gen_helper_gvec_fmaxnum_d,
+};
+TRANS(FMAXNM_v, do_fp3_vector, a, f_vector_fmaxnm)
+
+static gen_helper_gvec_3_ptr * const f_vector_fminnm[3] = {
+ gen_helper_gvec_fminnum_h,
+ gen_helper_gvec_fminnum_s,
+ gen_helper_gvec_fminnum_d,
+};
+TRANS(FMINNM_v, do_fp3_vector, a, f_vector_fminnm)
+
static gen_helper_gvec_3_ptr * const f_vector_fmulx[3] = {
gen_helper_gvec_fmulx_h,
gen_helper_gvec_fmulx_s,
@@ -6891,18 +6947,6 @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
tcg_op2 = read_fp_sreg(s, rm);
switch (opcode) {
- case 0x4: /* FMAX */
- gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x5: /* FMIN */
- gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x6: /* FMAXNM */
- gen_helper_vfp_maxnums(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x7: /* FMINNM */
- gen_helper_vfp_minnums(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x8: /* FNMUL */
gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
gen_helper_vfp_negs(tcg_res, tcg_res);
@@ -6912,6 +6956,10 @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
case 0x1: /* FDIV */
case 0x2: /* FADD */
case 0x3: /* FSUB */
+ case 0x4: /* FMAX */
+ case 0x5: /* FMIN */
+ case 0x6: /* FMAXNM */
+ case 0x7: /* FMINNM */
g_assert_not_reached();
}
@@ -6933,18 +6981,6 @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
tcg_op2 = read_fp_dreg(s, rm);
switch (opcode) {
- case 0x4: /* FMAX */
- gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x5: /* FMIN */
- gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x6: /* FMAXNM */
- gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x7: /* FMINNM */
- gen_helper_vfp_minnumd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x8: /* FNMUL */
gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
gen_helper_vfp_negd(tcg_res, tcg_res);
@@ -6954,6 +6990,10 @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
case 0x1: /* FDIV */
case 0x2: /* FADD */
case 0x3: /* FSUB */
+ case 0x4: /* FMAX */
+ case 0x5: /* FMIN */
+ case 0x6: /* FMAXNM */
+ case 0x7: /* FMINNM */
g_assert_not_reached();
}
@@ -6975,18 +7015,6 @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
tcg_op2 = read_fp_hreg(s, rm);
switch (opcode) {
- case 0x4: /* FMAX */
- gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x5: /* FMIN */
- gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x6: /* FMAXNM */
- gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x7: /* FMINNM */
- gen_helper_advsimd_minnumh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x8: /* FNMUL */
gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
tcg_gen_xori_i32(tcg_res, tcg_res, 0x8000);
@@ -6996,6 +7024,10 @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
case 0x1: /* FDIV */
case 0x2: /* FADD */
case 0x3: /* FSUB */
+ case 0x4: /* FMAX */
+ case 0x5: /* FMIN */
+ case 0x6: /* FMAXNM */
+ case 0x7: /* FMINNM */
g_assert_not_reached();
}
@@ -9221,24 +9253,12 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_vfp_muladdd(tcg_res, tcg_op1, tcg_op2,
tcg_res, fpst);
break;
- case 0x18: /* FMAXNM */
- gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1e: /* FMAX */
- gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1f: /* FRECPS */
gen_helper_recpsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x38: /* FMINNM */
- gen_helper_vfp_minnumd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3e: /* FMIN */
- gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9259,9 +9279,13 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_neon_acgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x18: /* FMAXNM */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1e: /* FMAX */
+ case 0x38: /* FMINNM */
case 0x3a: /* FSUB */
+ case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
case 0x5f: /* FDIV */
g_assert_not_reached();
@@ -9290,21 +9314,9 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1e: /* FMAX */
- gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1f: /* FRECPS */
gen_helper_recpsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x18: /* FMAXNM */
- gen_helper_vfp_maxnums(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x38: /* FMINNM */
- gen_helper_vfp_minnums(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3e: /* FMIN */
- gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9325,9 +9337,13 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_neon_acgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x18: /* FMAXNM */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1e: /* FMAX */
+ case 0x38: /* FMINNM */
case 0x3a: /* FSUB */
+ case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
case 0x5f: /* FDIV */
g_assert_not_reached();
@@ -11251,11 +11267,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x7d: /* FACGT */
case 0x19: /* FMLA */
case 0x39: /* FMLS */
- case 0x18: /* FMAXNM */
case 0x1c: /* FCMEQ */
- case 0x1e: /* FMAX */
- case 0x38: /* FMINNM */
- case 0x3e: /* FMIN */
case 0x5c: /* FCMGE */
case 0x7a: /* FABD */
case 0x7c: /* FCMGT */
@@ -11286,9 +11298,13 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
return;
default:
+ case 0x18: /* FMAXNM */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1e: /* FMAX */
+ case 0x38: /* FMINNM */
case 0x3a: /* FSUB */
+ case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
case 0x5f: /* FDIV */
unallocated_encoding(s);
@@ -11632,14 +11648,10 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
int pass;
switch (fpopcode) {
- case 0x0: /* FMAXNM */
case 0x1: /* FMLA */
case 0x4: /* FCMEQ */
- case 0x6: /* FMAX */
case 0x7: /* FRECPS */
- case 0x8: /* FMINNM */
case 0x9: /* FMLS */
- case 0xe: /* FMIN */
case 0xf: /* FRSQRTS */
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
@@ -11656,9 +11668,13 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
pairwise = true;
break;
default:
+ case 0x0: /* FMAXNM */
case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0x6: /* FMAX */
+ case 0x8: /* FMINNM */
case 0xa: /* FSUB */
+ case 0xe: /* FMIN */
case 0x13: /* FMUL */
case 0x17: /* FDIV */
unallocated_encoding(s);
@@ -11726,9 +11742,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);
switch (fpopcode) {
- case 0x0: /* FMAXNM */
- gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1: /* FMLA */
read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
@@ -11737,15 +11750,9 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x4: /* FCMEQ */
gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x6: /* FMAX */
- gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7: /* FRECPS */
gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x8: /* FMINNM */
- gen_helper_advsimd_minnumh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x9: /* FMLS */
/* As usual for ARM, separate negation for fused multiply-add */
tcg_gen_xori_i32(tcg_op1, tcg_op1, 0x8000);
@@ -11753,9 +11760,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
fpst);
break;
- case 0xe: /* FMIN */
- gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0xf: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -11776,9 +11780,13 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x0: /* FMAXNM */
case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0x6: /* FMAX */
+ case 0x8: /* FMINNM */
case 0xa: /* FSUB */
+ case 0xe: /* FMIN */
case 0x13: /* FMUL */
case 0x17: /* FDIV */
g_assert_not_reached();
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 4106536371..99ef676071 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -1231,15 +1231,19 @@ DO_3OP(gvec_facgt_s, float32_acgt, float32)
DO_3OP(gvec_fmax_h, float16_max, float16)
DO_3OP(gvec_fmax_s, float32_max, float32)
+DO_3OP(gvec_fmax_d, float64_max, float64)
DO_3OP(gvec_fmin_h, float16_min, float16)
DO_3OP(gvec_fmin_s, float32_min, float32)
+DO_3OP(gvec_fmin_d, float64_min, float64)
DO_3OP(gvec_fmaxnum_h, float16_maxnum, float16)
DO_3OP(gvec_fmaxnum_s, float32_maxnum, float32)
+DO_3OP(gvec_fmaxnum_d, float64_maxnum, float64)
DO_3OP(gvec_fminnum_h, float16_minnum, float16)
DO_3OP(gvec_fminnum_s, float32_minnum, float32)
+DO_3OP(gvec_fminnum_d, float64_minnum, float64)
DO_3OP(gvec_recps_nf_h, float16_recps_nf, float16)
DO_3OP(gvec_recps_nf_s, float32_recps_nf, float32)
--
2.34.1
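
A side note on the two families of decode rows added above (an editorial sketch, not part of the patch): FMAX/FMIN propagate NaN operands, while FMAXNM/FMINNM implement IEEE-754 maxNum/minNum, which prefer the numeric operand when exactly one input is a quiet NaN. C's fmax happens to follow the maxNum rule, so the distinction can be demonstrated on the host:

#include <assert.h>
#include <math.h>

int main(void)
{
    /* maxNum (FMAXNM-style): a single quiet NaN is ignored ... */
    assert(fmax(NAN, 1.0) == 1.0);
    assert(fmax(1.0, NAN) == 1.0);
    /* ... but with two NaNs there is no number left to prefer. */
    assert(isnan(fmax(NAN, NAN)));
    return 0;
}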
* [PATCH v2 21/67] target/arm: Introduce vfp_load_reg16
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Load and zero-extend a float16 value into a TCGv_i32 before
all scalar operations, so that the high 16 bits of the
temporary are always known to be zero.
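
As an illustrative sketch (not part of the patch) of what the new load does: a float16 lives in the low 16 bits of a 32-bit single-precision register slot, so a zero-extending 16-bit load must start at byte offset 2 on a big-endian host; that is what the HOST_BIG_ENDIAN * 2 term in vfp_load_reg16 selects:

#include <assert.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint32_t slot = 0x4248;         /* an fp16 value in the low half */
    uint16_t half;
    int off = 0;
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    off = 2;                        /* low 16 bits sit at byte offset 2 */
#endif
    memcpy(&half, (const char *)&slot + off, sizeof(half));
    assert(half == 0x4248);
    assert(((uint32_t)half >> 16) == 0);    /* high bits known zero */
    return 0;
}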
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/translate-vfp.c | 39 +++++++++++++++++++---------------
1 file changed, 22 insertions(+), 17 deletions(-)
diff --git a/target/arm/tcg/translate-vfp.c b/target/arm/tcg/translate-vfp.c
index b9af03b7c3..8e755fcde8 100644
--- a/target/arm/tcg/translate-vfp.c
+++ b/target/arm/tcg/translate-vfp.c
@@ -48,6 +48,12 @@ static inline void vfp_store_reg32(TCGv_i32 var, int reg)
tcg_gen_st_i32(var, tcg_env, vfp_reg_offset(false, reg));
}
+static inline void vfp_load_reg16(TCGv_i32 var, int reg)
+{
+ tcg_gen_ld16u_i32(var, tcg_env,
+ vfp_reg_offset(false, reg) + HOST_BIG_ENDIAN * 2);
+}
+
/*
* The imm8 encodes the sign bit, enough bits to represent an exponent in
* the range 01....1xx to 10....0xx, and the most significant 4 bits of
@@ -902,8 +908,7 @@ static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
if (a->l) {
/* VFP to general purpose register */
tmp = tcg_temp_new_i32();
- vfp_load_reg32(tmp, a->vn);
- tcg_gen_andi_i32(tmp, tmp, 0xffff);
+ vfp_load_reg16(tmp, a->vn);
store_reg(s, a->rt, tmp);
} else {
/* general purpose register to VFP */
@@ -1453,11 +1458,11 @@ static bool do_vfp_3op_hp(DisasContext *s, VFPGen3OpSPFn *fn,
fd = tcg_temp_new_i32();
fpst = fpstatus_ptr(FPST_FPCR_F16);
- vfp_load_reg32(f0, vn);
- vfp_load_reg32(f1, vm);
+ vfp_load_reg16(f0, vn);
+ vfp_load_reg16(f1, vm);
if (reads_vd) {
- vfp_load_reg32(fd, vd);
+ vfp_load_reg16(fd, vd);
}
fn(fd, f0, f1, fpst);
vfp_store_reg32(fd, vd);
@@ -1633,7 +1638,7 @@ static bool do_vfp_2op_hp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
}
f0 = tcg_temp_new_i32();
- vfp_load_reg32(f0, vm);
+ vfp_load_reg16(f0, vm);
fn(f0, f0);
vfp_store_reg32(f0, vd);
@@ -2106,13 +2111,13 @@ static bool do_vfm_hp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
vm = tcg_temp_new_i32();
vd = tcg_temp_new_i32();
- vfp_load_reg32(vn, a->vn);
- vfp_load_reg32(vm, a->vm);
+ vfp_load_reg16(vn, a->vn);
+ vfp_load_reg16(vm, a->vm);
if (neg_n) {
/* VFNMS, VFMS */
gen_helper_vfp_negh(vn, vn);
}
- vfp_load_reg32(vd, a->vd);
+ vfp_load_reg16(vd, a->vd);
if (neg_d) {
/* VFNMA, VFNMS */
gen_helper_vfp_negh(vd, vd);
@@ -2456,11 +2461,11 @@ static bool trans_VCMP_hp(DisasContext *s, arg_VCMP_sp *a)
vd = tcg_temp_new_i32();
vm = tcg_temp_new_i32();
- vfp_load_reg32(vd, a->vd);
+ vfp_load_reg16(vd, a->vd);
if (a->z) {
tcg_gen_movi_i32(vm, 0);
} else {
- vfp_load_reg32(vm, a->vm);
+ vfp_load_reg16(vm, a->vm);
}
if (a->e) {
@@ -2700,7 +2705,7 @@ static bool trans_VRINTR_hp(DisasContext *s, arg_VRINTR_sp *a)
}
tmp = tcg_temp_new_i32();
- vfp_load_reg32(tmp, a->vm);
+ vfp_load_reg16(tmp, a->vm);
fpst = fpstatus_ptr(FPST_FPCR_F16);
gen_helper_rinth(tmp, tmp, fpst);
vfp_store_reg32(tmp, a->vd);
@@ -2773,7 +2778,7 @@ static bool trans_VRINTZ_hp(DisasContext *s, arg_VRINTZ_sp *a)
}
tmp = tcg_temp_new_i32();
- vfp_load_reg32(tmp, a->vm);
+ vfp_load_reg16(tmp, a->vm);
fpst = fpstatus_ptr(FPST_FPCR_F16);
tcg_rmode = gen_set_rmode(FPROUNDING_ZERO, fpst);
gen_helper_rinth(tmp, tmp, fpst);
@@ -2853,7 +2858,7 @@ static bool trans_VRINTX_hp(DisasContext *s, arg_VRINTX_sp *a)
}
tmp = tcg_temp_new_i32();
- vfp_load_reg32(tmp, a->vm);
+ vfp_load_reg16(tmp, a->vm);
fpst = fpstatus_ptr(FPST_FPCR_F16);
gen_helper_rinth_exact(tmp, tmp, fpst);
vfp_store_reg32(tmp, a->vd);
@@ -3270,7 +3275,7 @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a)
fpst = fpstatus_ptr(FPST_FPCR_F16);
vm = tcg_temp_new_i32();
- vfp_load_reg32(vm, a->vm);
+ vfp_load_reg16(vm, a->vm);
if (a->s) {
if (a->rz) {
@@ -3383,8 +3388,8 @@ static bool trans_VINS(DisasContext *s, arg_VINS *a)
/* Insert low half of Vm into high half of Vd */
rm = tcg_temp_new_i32();
rd = tcg_temp_new_i32();
- vfp_load_reg32(rm, a->vm);
- vfp_load_reg32(rd, a->vd);
+ vfp_load_reg16(rm, a->vm);
+ vfp_load_reg16(rd, a->vd);
tcg_gen_deposit_i32(rd, rd, rm, 16, 16);
vfp_store_reg32(rd, a->vd);
return true;
--
2.34.1
* [PATCH v2 22/67] target/arm: Expand vfp neg and abs inline
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 6 ----
target/arm/tcg/translate.h | 30 +++++++++++++++++++
target/arm/tcg/translate-a64.c | 44 +++++++++++++--------------
target/arm/tcg/translate-vfp.c | 54 +++++++++++++++++-----------------
target/arm/vfp_helper.c | 30 -------------------
5 files changed, 79 insertions(+), 85 deletions(-)
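
For reference, a host-side sketch (not part of the patch) of why these operations can be expanded as plain integer ops: IEEE negation is a sign-bit flip and absolute value is a sign-bit clear, with no rounding or FP exceptions involved, so no fp_status pointer is needed:

#include <assert.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = -1.5f;
    uint32_t bits;

    memcpy(&bits, &f, sizeof(bits));
    bits &= INT32_MAX;              /* clear bit 31: abs, cf. gen_vfp_abss */
    memcpy(&f, &bits, sizeof(f));
    assert(f == 1.5f);

    bits ^= 1u << 31;               /* flip bit 31: negate, cf. gen_vfp_negs */
    memcpy(&f, &bits, sizeof(f));
    assert(f == -1.5f);
    return 0;
}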
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 7ee15b9651..0fd01c9c52 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -132,12 +132,6 @@ DEF_HELPER_3(vfp_maxnumd, f64, f64, f64, ptr)
DEF_HELPER_3(vfp_minnumh, f16, f16, f16, ptr)
DEF_HELPER_3(vfp_minnums, f32, f32, f32, ptr)
DEF_HELPER_3(vfp_minnumd, f64, f64, f64, ptr)
-DEF_HELPER_1(vfp_negh, f16, f16)
-DEF_HELPER_1(vfp_negs, f32, f32)
-DEF_HELPER_1(vfp_negd, f64, f64)
-DEF_HELPER_1(vfp_absh, f16, f16)
-DEF_HELPER_1(vfp_abss, f32, f32)
-DEF_HELPER_1(vfp_absd, f64, f64)
DEF_HELPER_2(vfp_sqrth, f16, f16, env)
DEF_HELPER_2(vfp_sqrts, f32, f32, env)
DEF_HELPER_2(vfp_sqrtd, f64, f64, env)
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index ecfa242eef..b05a9eb668 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -406,6 +406,36 @@ static inline void gen_swstep_exception(DisasContext *s, int isv, int ex)
*/
uint64_t vfp_expand_imm(int size, uint8_t imm8);
+static inline void gen_vfp_absh(TCGv_i32 d, TCGv_i32 s)
+{
+ tcg_gen_andi_i32(d, s, INT16_MAX);
+}
+
+static inline void gen_vfp_abss(TCGv_i32 d, TCGv_i32 s)
+{
+ tcg_gen_andi_i32(d, s, INT32_MAX);
+}
+
+static inline void gen_vfp_absd(TCGv_i64 d, TCGv_i64 s)
+{
+ tcg_gen_andi_i64(d, s, INT64_MAX);
+}
+
+static inline void gen_vfp_negh(TCGv_i32 d, TCGv_i32 s)
+{
+ tcg_gen_xori_i32(d, s, 1u << 15);
+}
+
+static inline void gen_vfp_negs(TCGv_i32 d, TCGv_i32 s)
+{
+ tcg_gen_xori_i32(d, s, 1u << 31);
+}
+
+static inline void gen_vfp_negd(TCGv_i64 d, TCGv_i64 s)
+{
+ tcg_gen_xori_i64(d, s, 1ull << 63);
+}
+
/* Vector operations shared between ARM and AArch64. */
void gen_gvec_ceq0(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
uint32_t opr_sz, uint32_t max_sz);
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 6f8207d842..878f83298f 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -6591,10 +6591,10 @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
tcg_gen_mov_i32(tcg_res, tcg_op);
break;
case 0x1: /* FABS */
- tcg_gen_andi_i32(tcg_res, tcg_op, 0x7fff);
+ gen_vfp_absh(tcg_res, tcg_op);
break;
case 0x2: /* FNEG */
- tcg_gen_xori_i32(tcg_res, tcg_op, 0x8000);
+ gen_vfp_negh(tcg_res, tcg_op);
break;
case 0x3: /* FSQRT */
fpst = fpstatus_ptr(FPST_FPCR_F16);
@@ -6645,10 +6645,10 @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
tcg_gen_mov_i32(tcg_res, tcg_op);
goto done;
case 0x1: /* FABS */
- gen_helper_vfp_abss(tcg_res, tcg_op);
+ gen_vfp_abss(tcg_res, tcg_op);
goto done;
case 0x2: /* FNEG */
- gen_helper_vfp_negs(tcg_res, tcg_op);
+ gen_vfp_negs(tcg_res, tcg_op);
goto done;
case 0x3: /* FSQRT */
gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_env);
@@ -6720,10 +6720,10 @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
switch (opcode) {
case 0x1: /* FABS */
- gen_helper_vfp_absd(tcg_res, tcg_op);
+ gen_vfp_absd(tcg_res, tcg_op);
goto done;
case 0x2: /* FNEG */
- gen_helper_vfp_negd(tcg_res, tcg_op);
+ gen_vfp_negd(tcg_res, tcg_op);
goto done;
case 0x3: /* FSQRT */
gen_helper_vfp_sqrtd(tcg_res, tcg_op, tcg_env);
@@ -6949,7 +6949,7 @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
switch (opcode) {
case 0x8: /* FNMUL */
gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_helper_vfp_negs(tcg_res, tcg_res);
+ gen_vfp_negs(tcg_res, tcg_res);
break;
default:
case 0x0: /* FMUL */
@@ -6983,7 +6983,7 @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
switch (opcode) {
case 0x8: /* FNMUL */
gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_helper_vfp_negd(tcg_res, tcg_res);
+ gen_vfp_negd(tcg_res, tcg_res);
break;
default:
case 0x0: /* FMUL */
@@ -7017,7 +7017,7 @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
switch (opcode) {
case 0x8: /* FNMUL */
gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
- tcg_gen_xori_i32(tcg_res, tcg_res, 0x8000);
+ gen_vfp_negh(tcg_res, tcg_res);
break;
default:
case 0x0: /* FMUL */
@@ -7102,11 +7102,11 @@ static void handle_fp_3src_single(DisasContext *s, bool o0, bool o1,
* flipped if it is a negated-input.
*/
if (o1 == true) {
- gen_helper_vfp_negs(tcg_op3, tcg_op3);
+ gen_vfp_negs(tcg_op3, tcg_op3);
}
if (o0 != o1) {
- gen_helper_vfp_negs(tcg_op1, tcg_op1);
+ gen_vfp_negs(tcg_op1, tcg_op1);
}
gen_helper_vfp_muladds(tcg_res, tcg_op1, tcg_op2, tcg_op3, fpst);
@@ -7134,11 +7134,11 @@ static void handle_fp_3src_double(DisasContext *s, bool o0, bool o1,
* flipped if it is a negated-input.
*/
if (o1 == true) {
- gen_helper_vfp_negd(tcg_op3, tcg_op3);
+ gen_vfp_negd(tcg_op3, tcg_op3);
}
if (o0 != o1) {
- gen_helper_vfp_negd(tcg_op1, tcg_op1);
+ gen_vfp_negd(tcg_op1, tcg_op1);
}
gen_helper_vfp_muladdd(tcg_res, tcg_op1, tcg_op2, tcg_op3, fpst);
@@ -9246,7 +9246,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
switch (fpopcode) {
case 0x39: /* FMLS */
/* As usual for ARM, separate negation for fused multiply-add */
- gen_helper_vfp_negd(tcg_op1, tcg_op1);
+ gen_vfp_negd(tcg_op1, tcg_op1);
/* fall through */
case 0x19: /* FMLA */
read_vec_element(s, tcg_res, rd, pass, MO_64);
@@ -9270,7 +9270,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
break;
case 0x7a: /* FABD */
gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_helper_vfp_absd(tcg_res, tcg_res);
+ gen_vfp_absd(tcg_res, tcg_res);
break;
case 0x7c: /* FCMGT */
gen_helper_neon_cgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
@@ -9304,7 +9304,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
switch (fpopcode) {
case 0x39: /* FMLS */
/* As usual for ARM, separate negation for fused multiply-add */
- gen_helper_vfp_negs(tcg_op1, tcg_op1);
+ gen_vfp_negs(tcg_op1, tcg_op1);
/* fall through */
case 0x19: /* FMLA */
read_vec_element_i32(s, tcg_res, rd, pass, MO_32);
@@ -9328,7 +9328,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
break;
case 0x7a: /* FABD */
gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_helper_vfp_abss(tcg_res, tcg_res);
+ gen_vfp_abss(tcg_res, tcg_res);
break;
case 0x7c: /* FCMGT */
gen_helper_neon_cgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
@@ -9741,10 +9741,10 @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
}
break;
case 0x2f: /* FABS */
- gen_helper_vfp_absd(tcg_rd, tcg_rn);
+ gen_vfp_absd(tcg_rd, tcg_rn);
break;
case 0x6f: /* FNEG */
- gen_helper_vfp_negd(tcg_rd, tcg_rn);
+ gen_vfp_negd(tcg_rd, tcg_rn);
break;
case 0x7f: /* FSQRT */
gen_helper_vfp_sqrtd(tcg_rd, tcg_rn, tcg_env);
@@ -12567,10 +12567,10 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
}
break;
case 0x2f: /* FABS */
- gen_helper_vfp_abss(tcg_res, tcg_op);
+ gen_vfp_abss(tcg_res, tcg_op);
break;
case 0x6f: /* FNEG */
- gen_helper_vfp_negs(tcg_res, tcg_op);
+ gen_vfp_negs(tcg_res, tcg_op);
break;
case 0x7f: /* FSQRT */
gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_env);
@@ -13291,7 +13291,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
switch (16 * u + opcode) {
case 0x05: /* FMLS */
/* As usual for ARM, separate negation for fused multiply-add */
- gen_helper_vfp_negd(tcg_op, tcg_op);
+ gen_vfp_negd(tcg_op, tcg_op);
/* fall through */
case 0x01: /* FMLA */
read_vec_element(s, tcg_res, rd, pass, MO_64);
diff --git a/target/arm/tcg/translate-vfp.c b/target/arm/tcg/translate-vfp.c
index 8e755fcde8..39ec971ff7 100644
--- a/target/arm/tcg/translate-vfp.c
+++ b/target/arm/tcg/translate-vfp.c
@@ -1768,7 +1768,7 @@ static void gen_VMLS_hp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
TCGv_i32 tmp = tcg_temp_new_i32();
gen_helper_vfp_mulh(tmp, vn, vm, fpst);
- gen_helper_vfp_negh(tmp, tmp);
+ gen_vfp_negh(tmp, tmp);
gen_helper_vfp_addh(vd, vd, tmp, fpst);
}
@@ -1786,7 +1786,7 @@ static void gen_VMLS_sp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
TCGv_i32 tmp = tcg_temp_new_i32();
gen_helper_vfp_muls(tmp, vn, vm, fpst);
- gen_helper_vfp_negs(tmp, tmp);
+ gen_vfp_negs(tmp, tmp);
gen_helper_vfp_adds(vd, vd, tmp, fpst);
}
@@ -1804,7 +1804,7 @@ static void gen_VMLS_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
TCGv_i64 tmp = tcg_temp_new_i64();
gen_helper_vfp_muld(tmp, vn, vm, fpst);
- gen_helper_vfp_negd(tmp, tmp);
+ gen_vfp_negd(tmp, tmp);
gen_helper_vfp_addd(vd, vd, tmp, fpst);
}
@@ -1824,7 +1824,7 @@ static void gen_VNMLS_hp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
TCGv_i32 tmp = tcg_temp_new_i32();
gen_helper_vfp_mulh(tmp, vn, vm, fpst);
- gen_helper_vfp_negh(vd, vd);
+ gen_vfp_negh(vd, vd);
gen_helper_vfp_addh(vd, vd, tmp, fpst);
}
@@ -1844,7 +1844,7 @@ static void gen_VNMLS_sp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
TCGv_i32 tmp = tcg_temp_new_i32();
gen_helper_vfp_muls(tmp, vn, vm, fpst);
- gen_helper_vfp_negs(vd, vd);
+ gen_vfp_negs(vd, vd);
gen_helper_vfp_adds(vd, vd, tmp, fpst);
}
@@ -1864,7 +1864,7 @@ static void gen_VNMLS_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
TCGv_i64 tmp = tcg_temp_new_i64();
gen_helper_vfp_muld(tmp, vn, vm, fpst);
- gen_helper_vfp_negd(vd, vd);
+ gen_vfp_negd(vd, vd);
gen_helper_vfp_addd(vd, vd, tmp, fpst);
}
@@ -1879,8 +1879,8 @@ static void gen_VNMLA_hp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
TCGv_i32 tmp = tcg_temp_new_i32();
gen_helper_vfp_mulh(tmp, vn, vm, fpst);
- gen_helper_vfp_negh(tmp, tmp);
- gen_helper_vfp_negh(vd, vd);
+ gen_vfp_negh(tmp, tmp);
+ gen_vfp_negh(vd, vd);
gen_helper_vfp_addh(vd, vd, tmp, fpst);
}
@@ -1895,8 +1895,8 @@ static void gen_VNMLA_sp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
TCGv_i32 tmp = tcg_temp_new_i32();
gen_helper_vfp_muls(tmp, vn, vm, fpst);
- gen_helper_vfp_negs(tmp, tmp);
- gen_helper_vfp_negs(vd, vd);
+ gen_vfp_negs(tmp, tmp);
+ gen_vfp_negs(vd, vd);
gen_helper_vfp_adds(vd, vd, tmp, fpst);
}
@@ -1911,8 +1911,8 @@ static void gen_VNMLA_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
TCGv_i64 tmp = tcg_temp_new_i64();
gen_helper_vfp_muld(tmp, vn, vm, fpst);
- gen_helper_vfp_negd(tmp, tmp);
- gen_helper_vfp_negd(vd, vd);
+ gen_vfp_negd(tmp, tmp);
+ gen_vfp_negd(vd, vd);
gen_helper_vfp_addd(vd, vd, tmp, fpst);
}
@@ -1940,7 +1940,7 @@ static void gen_VNMUL_hp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
{
/* VNMUL: -(fn * fm) */
gen_helper_vfp_mulh(vd, vn, vm, fpst);
- gen_helper_vfp_negh(vd, vd);
+ gen_vfp_negh(vd, vd);
}
static bool trans_VNMUL_hp(DisasContext *s, arg_VNMUL_sp *a)
@@ -1952,7 +1952,7 @@ static void gen_VNMUL_sp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
{
/* VNMUL: -(fn * fm) */
gen_helper_vfp_muls(vd, vn, vm, fpst);
- gen_helper_vfp_negs(vd, vd);
+ gen_vfp_negs(vd, vd);
}
static bool trans_VNMUL_sp(DisasContext *s, arg_VNMUL_sp *a)
@@ -1964,7 +1964,7 @@ static void gen_VNMUL_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
{
/* VNMUL: -(fn * fm) */
gen_helper_vfp_muld(vd, vn, vm, fpst);
- gen_helper_vfp_negd(vd, vd);
+ gen_vfp_negd(vd, vd);
}
static bool trans_VNMUL_dp(DisasContext *s, arg_VNMUL_dp *a)
@@ -2115,12 +2115,12 @@ static bool do_vfm_hp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
vfp_load_reg16(vm, a->vm);
if (neg_n) {
/* VFNMS, VFMS */
- gen_helper_vfp_negh(vn, vn);
+ gen_vfp_negh(vn, vn);
}
vfp_load_reg16(vd, a->vd);
if (neg_d) {
/* VFNMA, VFNMS */
- gen_helper_vfp_negh(vd, vd);
+ gen_vfp_negh(vd, vd);
}
fpst = fpstatus_ptr(FPST_FPCR_F16);
gen_helper_vfp_muladdh(vd, vn, vm, vd, fpst);
@@ -2174,12 +2174,12 @@ static bool do_vfm_sp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
vfp_load_reg32(vm, a->vm);
if (neg_n) {
/* VFNMS, VFMS */
- gen_helper_vfp_negs(vn, vn);
+ gen_vfp_negs(vn, vn);
}
vfp_load_reg32(vd, a->vd);
if (neg_d) {
/* VFNMA, VFNMS */
- gen_helper_vfp_negs(vd, vd);
+ gen_vfp_negs(vd, vd);
}
fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_vfp_muladds(vd, vn, vm, vd, fpst);
@@ -2239,12 +2239,12 @@ static bool do_vfm_dp(DisasContext *s, arg_VFMA_dp *a, bool neg_n, bool neg_d)
vfp_load_reg64(vm, a->vm);
if (neg_n) {
/* VFNMS, VFMS */
- gen_helper_vfp_negd(vn, vn);
+ gen_vfp_negd(vn, vn);
}
vfp_load_reg64(vd, a->vd);
if (neg_d) {
/* VFNMA, VFNMS */
- gen_helper_vfp_negd(vd, vd);
+ gen_vfp_negd(vd, vd);
}
fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_vfp_muladdd(vd, vn, vm, vd, fpst);
@@ -2414,13 +2414,13 @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
DO_VFP_VMOV(VMOV_reg, sp, tcg_gen_mov_i32)
DO_VFP_VMOV(VMOV_reg, dp, tcg_gen_mov_i64)
-DO_VFP_2OP(VABS, hp, gen_helper_vfp_absh, aa32_fp16_arith)
-DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss, aa32_fpsp_v2)
-DO_VFP_2OP(VABS, dp, gen_helper_vfp_absd, aa32_fpdp_v2)
+DO_VFP_2OP(VABS, hp, gen_vfp_absh, aa32_fp16_arith)
+DO_VFP_2OP(VABS, sp, gen_vfp_abss, aa32_fpsp_v2)
+DO_VFP_2OP(VABS, dp, gen_vfp_absd, aa32_fpdp_v2)
-DO_VFP_2OP(VNEG, hp, gen_helper_vfp_negh, aa32_fp16_arith)
-DO_VFP_2OP(VNEG, sp, gen_helper_vfp_negs, aa32_fpsp_v2)
-DO_VFP_2OP(VNEG, dp, gen_helper_vfp_negd, aa32_fpdp_v2)
+DO_VFP_2OP(VNEG, hp, gen_vfp_negh, aa32_fp16_arith)
+DO_VFP_2OP(VNEG, sp, gen_vfp_negs, aa32_fpsp_v2)
+DO_VFP_2OP(VNEG, dp, gen_vfp_negd, aa32_fpdp_v2)
static void gen_VSQRT_hp(TCGv_i32 vd, TCGv_i32 vm)
{
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
index 3e5e37abbe..ce26b8a71a 100644
--- a/target/arm/vfp_helper.c
+++ b/target/arm/vfp_helper.c
@@ -281,36 +281,6 @@ VFP_BINOP(minnum)
VFP_BINOP(maxnum)
#undef VFP_BINOP
-dh_ctype_f16 VFP_HELPER(neg, h)(dh_ctype_f16 a)
-{
- return float16_chs(a);
-}
-
-float32 VFP_HELPER(neg, s)(float32 a)
-{
- return float32_chs(a);
-}
-
-float64 VFP_HELPER(neg, d)(float64 a)
-{
- return float64_chs(a);
-}
-
-dh_ctype_f16 VFP_HELPER(abs, h)(dh_ctype_f16 a)
-{
- return float16_abs(a);
-}
-
-float32 VFP_HELPER(abs, s)(float32 a)
-{
- return float32_abs(a);
-}
-
-float64 VFP_HELPER(abs, d)(float64 a)
-{
- return float64_abs(a);
-}
-
dh_ctype_f16 VFP_HELPER(sqrt, h)(dh_ctype_f16 a, CPUARMState *env)
{
return float16_sqrt(a, &env->vfp.fp_status_f16);
--
2.34.1
* [PATCH v2 23/67] target/arm: Convert FNMUL to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
This is the last instruction within disas_fp_2src,
so remove that function and its subroutines.
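
As a minimal host-side model of the instruction being converted (an illustrative sketch; fnmul64 is a made-up name, the patch itself uses the gen_fnmul_* helpers below): FNMUL negates the rounded product, which the new helpers express as a multiply followed by a sign flip:

#include <assert.h>

static double fnmul64(double n, double m)
{
    return -(n * m);    /* multiply, then flip the sign of the result */
}

int main(void)
{
    assert(fnmul64(2.0, 3.0) == -6.0);
    assert(fnmul64(-2.0, 3.0) == 6.0);
    return 0;
}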
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 1 +
target/arm/tcg/translate-a64.c | 177 +++++----------------------------
2 files changed, 27 insertions(+), 151 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index e2678d919e..cde4b86303 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -703,6 +703,7 @@ FADD_s 0001 1110 ..1 ..... 0010 10 ..... ..... @rrr_hsd
FSUB_s 0001 1110 ..1 ..... 0011 10 ..... ..... @rrr_hsd
FDIV_s 0001 1110 ..1 ..... 0001 10 ..... ..... @rrr_hsd
FMUL_s 0001 1110 ..1 ..... 0000 10 ..... ..... @rrr_hsd
+FNMUL_s 0001 1110 ..1 ..... 1000 10 ..... ..... @rrr_hsd
FMAX_s 0001 1110 ..1 ..... 0100 10 ..... ..... @rrr_hsd
FMIN_s 0001 1110 ..1 ..... 0101 10 ..... ..... @rrr_hsd
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 878f83298f..5ba30ba7c8 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4950,6 +4950,31 @@ static const FPScalar f_scalar_fmulx = {
};
TRANS(FMULX_s, do_fp3_scalar, a, &f_scalar_fmulx)
+static void gen_fnmul_h(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_ptr s)
+{
+ gen_helper_vfp_mulh(d, n, m, s);
+ gen_vfp_negh(d, d);
+}
+
+static void gen_fnmul_s(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_ptr s)
+{
+ gen_helper_vfp_muls(d, n, m, s);
+ gen_vfp_negs(d, d);
+}
+
+static void gen_fnmul_d(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_ptr s)
+{
+ gen_helper_vfp_muld(d, n, m, s);
+ gen_vfp_negd(d, d);
+}
+
+static const FPScalar f_scalar_fnmul = {
+ gen_fnmul_h,
+ gen_fnmul_s,
+ gen_fnmul_d,
+};
+TRANS(FNMUL_s, do_fp3_scalar, a, &f_scalar_fnmul)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -6932,156 +6957,6 @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
}
}
-/* Floating-point data-processing (2 source) - single precision */
-static void handle_fp_2src_single(DisasContext *s, int opcode,
- int rd, int rn, int rm)
-{
- TCGv_i32 tcg_op1;
- TCGv_i32 tcg_op2;
- TCGv_i32 tcg_res;
- TCGv_ptr fpst;
-
- tcg_res = tcg_temp_new_i32();
- fpst = fpstatus_ptr(FPST_FPCR);
- tcg_op1 = read_fp_sreg(s, rn);
- tcg_op2 = read_fp_sreg(s, rm);
-
- switch (opcode) {
- case 0x8: /* FNMUL */
- gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_vfp_negs(tcg_res, tcg_res);
- break;
- default:
- case 0x0: /* FMUL */
- case 0x1: /* FDIV */
- case 0x2: /* FADD */
- case 0x3: /* FSUB */
- case 0x4: /* FMAX */
- case 0x5: /* FMIN */
- case 0x6: /* FMAXNM */
- case 0x7: /* FMINNM */
- g_assert_not_reached();
- }
-
- write_fp_sreg(s, rd, tcg_res);
-}
-
-/* Floating-point data-processing (2 source) - double precision */
-static void handle_fp_2src_double(DisasContext *s, int opcode,
- int rd, int rn, int rm)
-{
- TCGv_i64 tcg_op1;
- TCGv_i64 tcg_op2;
- TCGv_i64 tcg_res;
- TCGv_ptr fpst;
-
- tcg_res = tcg_temp_new_i64();
- fpst = fpstatus_ptr(FPST_FPCR);
- tcg_op1 = read_fp_dreg(s, rn);
- tcg_op2 = read_fp_dreg(s, rm);
-
- switch (opcode) {
- case 0x8: /* FNMUL */
- gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_vfp_negd(tcg_res, tcg_res);
- break;
- default:
- case 0x0: /* FMUL */
- case 0x1: /* FDIV */
- case 0x2: /* FADD */
- case 0x3: /* FSUB */
- case 0x4: /* FMAX */
- case 0x5: /* FMIN */
- case 0x6: /* FMAXNM */
- case 0x7: /* FMINNM */
- g_assert_not_reached();
- }
-
- write_fp_dreg(s, rd, tcg_res);
-}
-
-/* Floating-point data-processing (2 source) - half precision */
-static void handle_fp_2src_half(DisasContext *s, int opcode,
- int rd, int rn, int rm)
-{
- TCGv_i32 tcg_op1;
- TCGv_i32 tcg_op2;
- TCGv_i32 tcg_res;
- TCGv_ptr fpst;
-
- tcg_res = tcg_temp_new_i32();
- fpst = fpstatus_ptr(FPST_FPCR_F16);
- tcg_op1 = read_fp_hreg(s, rn);
- tcg_op2 = read_fp_hreg(s, rm);
-
- switch (opcode) {
- case 0x8: /* FNMUL */
- gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_vfp_negh(tcg_res, tcg_res);
- break;
- default:
- case 0x0: /* FMUL */
- case 0x1: /* FDIV */
- case 0x2: /* FADD */
- case 0x3: /* FSUB */
- case 0x4: /* FMAX */
- case 0x5: /* FMIN */
- case 0x6: /* FMAXNM */
- case 0x7: /* FMINNM */
- g_assert_not_reached();
- }
-
- write_fp_sreg(s, rd, tcg_res);
-}
-
-/* Floating point data-processing (2 source)
- * 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 0
- * +---+---+---+-----------+------+---+------+--------+-----+------+------+
- * | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | opcode | 1 0 | Rn | Rd |
- * +---+---+---+-----------+------+---+------+--------+-----+------+------+
- */
-static void disas_fp_2src(DisasContext *s, uint32_t insn)
-{
- int mos = extract32(insn, 29, 3);
- int type = extract32(insn, 22, 2);
- int rd = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int rm = extract32(insn, 16, 5);
- int opcode = extract32(insn, 12, 4);
-
- if (opcode > 8 || mos) {
- unallocated_encoding(s);
- return;
- }
-
- switch (type) {
- case 0:
- if (!fp_access_check(s)) {
- return;
- }
- handle_fp_2src_single(s, opcode, rd, rn, rm);
- break;
- case 1:
- if (!fp_access_check(s)) {
- return;
- }
- handle_fp_2src_double(s, opcode, rd, rn, rm);
- break;
- case 3:
- if (!dc_isar_feature(aa64_fp16, s)) {
- unallocated_encoding(s);
- return;
- }
- if (!fp_access_check(s)) {
- return;
- }
- handle_fp_2src_half(s, opcode, rd, rn, rm);
- break;
- default:
- unallocated_encoding(s);
- }
-}
-
/* Floating-point data-processing (3 source) - single precision */
static void handle_fp_3src_single(DisasContext *s, bool o0, bool o1,
int rd, int rn, int rm, int ra)
@@ -7685,7 +7560,7 @@ static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
break;
case 2:
/* Floating point data-processing (2 source) */
- disas_fp_2src(s, insn);
+ unallocated_encoding(s); /* in decodetree */
break;
case 3:
/* Floating point conditional select */
--
2.34.1
* [PATCH v2 24/67] target/arm: Convert FMLA, FMLS to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 2 +
target/arm/tcg/a64.decode | 22 +++
target/arm/tcg/translate-a64.c | 241 +++++++++++++++++----------------
target/arm/tcg/vec_helper.c | 14 ++
4 files changed, 163 insertions(+), 116 deletions(-)
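
A note on the "separate negation" idiom these conversions preserve (an illustrative sketch; fmls64 is a made-up name): FMLS negates one multiplicand before the fused operation, computing d + (-n) * m with a single rounding, rather than negating a separately rounded product. The new float64_mulsub_f below does the same via float64_chs on op1:

#include <assert.h>
#include <math.h>

static double fmls64(double d, double n, double m)
{
    return fma(-n, m, d);       /* fused: one rounding, as on hardware */
}

int main(void)
{
    assert(fmls64(10.0, 2.0, 3.0) == 4.0);      /* 10 - 2*3 */
    return 0;
}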
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 0fd01c9c52..e021c18517 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -770,9 +770,11 @@ DEF_HELPER_FLAGS_5(gvec_fmls_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_vfma_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_vfma_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_vfma_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_vfms_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_vfms_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_vfms_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_ftsmul_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index cde4b86303..11527bb5e5 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -742,12 +742,26 @@ FMINNM_v 0.00 1110 1.1 ..... 11000 1 ..... ..... @qrrr_sd
FMULX_v 0.00 1110 010 ..... 00011 1 ..... ..... @qrrr_h
FMULX_v 0.00 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
+FMLA_v 0.00 1110 010 ..... 00001 1 ..... ..... @qrrr_h
+FMLA_v 0.00 1110 0.1 ..... 11001 1 ..... ..... @qrrr_sd
+
+FMLS_v 0.00 1110 110 ..... 00001 1 ..... ..... @qrrr_h
+FMLS_v 0.00 1110 1.1 ..... 11001 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
FMUL_si 0101 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
FMUL_si 0101 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
+FMLA_si 0101 1111 00 .. .... 0001 . 0 ..... ..... @rrx_h
+FMLA_si 0101 1111 10 .. .... 0001 . 0 ..... ..... @rrx_s
+FMLA_si 0101 1111 11 0. .... 0001 . 0 ..... ..... @rrx_d
+
+FMLS_si 0101 1111 00 .. .... 0101 . 0 ..... ..... @rrx_h
+FMLS_si 0101 1111 10 .. .... 0101 . 0 ..... ..... @rrx_s
+FMLS_si 0101 1111 11 0. .... 0101 . 0 ..... ..... @rrx_d
+
FMULX_si 0111 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
FMULX_si 0111 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
FMULX_si 0111 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
@@ -758,6 +772,14 @@ FMUL_vi 0.00 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
FMUL_vi 0.00 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
FMUL_vi 0.00 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
+FMLA_vi 0.00 1111 00 .. .... 0001 . 0 ..... ..... @qrrx_h
+FMLA_vi 0.00 1111 10 . ..... 0001 . 0 ..... ..... @qrrx_s
+FMLA_vi 0.00 1111 11 0 ..... 0001 . 0 ..... ..... @qrrx_d
+
+FMLS_vi 0.00 1111 00 .. .... 0101 . 0 ..... ..... @qrrx_h
+FMLS_vi 0.00 1111 10 . ..... 0101 . 0 ..... ..... @qrrx_s
+FMLS_vi 0.00 1111 11 0 ..... 0101 . 0 ..... ..... @qrrx_d
+
FMULX_vi 0.10 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
FMULX_vi 0.10 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
FMULX_vi 0.10 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 5ba30ba7c8..f84c12378d 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5066,6 +5066,20 @@ static gen_helper_gvec_3_ptr * const f_vector_fmulx[3] = {
};
TRANS(FMULX_v, do_fp3_vector, a, f_vector_fmulx)
+static gen_helper_gvec_3_ptr * const f_vector_fmla[3] = {
+ gen_helper_gvec_vfma_h,
+ gen_helper_gvec_vfma_s,
+ gen_helper_gvec_vfma_d,
+};
+TRANS(FMLA_v, do_fp3_vector, a, f_vector_fmla)
+
+static gen_helper_gvec_3_ptr * const f_vector_fmls[3] = {
+ gen_helper_gvec_vfms_h,
+ gen_helper_gvec_vfms_s,
+ gen_helper_gvec_vfms_d,
+};
+TRANS(FMLS_v, do_fp3_vector, a, f_vector_fmls)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -5115,6 +5129,64 @@ static bool do_fp3_scalar_idx(DisasContext *s, arg_rrx_e *a, const FPScalar *f)
TRANS(FMUL_si, do_fp3_scalar_idx, a, &f_scalar_fmul)
TRANS(FMULX_si, do_fp3_scalar_idx, a, &f_scalar_fmulx)
+static bool do_fmla_scalar_idx(DisasContext *s, arg_rrx_e *a, bool neg)
+{
+ switch (a->esz) {
+ case MO_64:
+ if (fp_access_check(s)) {
+ TCGv_i64 t0 = read_fp_dreg(s, a->rd);
+ TCGv_i64 t1 = read_fp_dreg(s, a->rn);
+ TCGv_i64 t2 = tcg_temp_new_i64();
+
+ read_vec_element(s, t2, a->rm, a->idx, MO_64);
+ if (neg) {
+ gen_vfp_negd(t1, t1);
+ }
+ gen_helper_vfp_muladdd(t0, t1, t2, t0, fpstatus_ptr(FPST_FPCR));
+ write_fp_dreg(s, a->rd, t0);
+ }
+ break;
+ case MO_32:
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = read_fp_sreg(s, a->rd);
+ TCGv_i32 t1 = read_fp_sreg(s, a->rn);
+ TCGv_i32 t2 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t2, a->rm, a->idx, MO_32);
+ if (neg) {
+ gen_vfp_negs(t1, t1);
+ }
+ gen_helper_vfp_muladds(t0, t1, t2, t0, fpstatus_ptr(FPST_FPCR));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = read_fp_hreg(s, a->rd);
+ TCGv_i32 t1 = read_fp_hreg(s, a->rn);
+ TCGv_i32 t2 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t2, a->rm, a->idx, MO_16);
+ if (neg) {
+ gen_vfp_negh(t1, t1);
+ }
+ gen_helper_advsimd_muladdh(t0, t1, t2, t0,
+ fpstatus_ptr(FPST_FPCR_F16));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ return true;
+}
+
+TRANS(FMLA_si, do_fmla_scalar_idx, a, false)
+TRANS(FMLS_si, do_fmla_scalar_idx, a, true)
+
static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5157,6 +5229,42 @@ static gen_helper_gvec_3_ptr * const f_vector_idx_fmulx[3] = {
};
TRANS(FMULX_vi, do_fp3_vector_idx, a, f_vector_idx_fmulx)
+static bool do_fmla_vector_idx(DisasContext *s, arg_qrrx_e *a, bool neg)
+{
+ static gen_helper_gvec_4_ptr * const fns[3] = {
+ gen_helper_gvec_fmla_idx_h,
+ gen_helper_gvec_fmla_idx_s,
+ gen_helper_gvec_fmla_idx_d,
+ };
+ MemOp esz = a->esz;
+
+ switch (esz) {
+ case MO_64:
+ if (!a->q) {
+ return false;
+ }
+ break;
+ case MO_32:
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_op4_fpst(s, a->q, a->rd, a->rn, a->rm, a->rd,
+ esz == MO_16, (a->idx << 1) | neg,
+ fns[esz - 1]);
+ }
+ return true;
+}
+
+TRANS(FMLA_vi, do_fmla_vector_idx, a, false)
+TRANS(FMLS_vi, do_fmla_vector_idx, a, true)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
@@ -9119,15 +9227,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
read_vec_element(s, tcg_op2, rm, pass, MO_64);
switch (fpopcode) {
- case 0x39: /* FMLS */
- /* As usual for ARM, separate negation for fused multiply-add */
- gen_vfp_negd(tcg_op1, tcg_op1);
- /* fall through */
- case 0x19: /* FMLA */
- read_vec_element(s, tcg_res, rd, pass, MO_64);
- gen_helper_vfp_muladdd(tcg_res, tcg_op1, tcg_op2,
- tcg_res, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9155,10 +9254,12 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
break;
default:
case 0x18: /* FMAXNM */
+ case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
+ case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
@@ -9177,15 +9278,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
switch (fpopcode) {
- case 0x39: /* FMLS */
- /* As usual for ARM, separate negation for fused multiply-add */
- gen_vfp_negs(tcg_op1, tcg_op1);
- /* fall through */
- case 0x19: /* FMLA */
- read_vec_element_i32(s, tcg_res, rd, pass, MO_32);
- gen_helper_vfp_muladds(tcg_res, tcg_op1, tcg_op2,
- tcg_res, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9213,10 +9305,12 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
break;
default:
case 0x18: /* FMAXNM */
+ case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
+ case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
@@ -11140,8 +11234,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x3f: /* FRSQRTS */
case 0x5d: /* FACGE */
case 0x7d: /* FACGT */
- case 0x19: /* FMLA */
- case 0x39: /* FMLS */
case 0x1c: /* FCMEQ */
case 0x5c: /* FCMGE */
case 0x7a: /* FABD */
@@ -11174,10 +11266,12 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
default:
case 0x18: /* FMAXNM */
+ case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
+ case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
@@ -11523,10 +11617,8 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
int pass;
switch (fpopcode) {
- case 0x1: /* FMLA */
case 0x4: /* FCMEQ */
case 0x7: /* FRECPS */
- case 0x9: /* FMLS */
case 0xf: /* FRSQRTS */
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
@@ -11544,10 +11636,12 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
break;
default:
case 0x0: /* FMAXNM */
+ case 0x1: /* FMLA */
case 0x2: /* FADD */
case 0x3: /* FMULX */
case 0x6: /* FMAX */
case 0x8: /* FMINNM */
+ case 0x9: /* FMLS */
case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0x13: /* FMUL */
@@ -11617,24 +11711,12 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);
switch (fpopcode) {
- case 0x1: /* FMLA */
- read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
- gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
- fpst);
- break;
case 0x4: /* FCMEQ */
gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x7: /* FRECPS */
gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x9: /* FMLS */
- /* As usual for ARM, separate negation for fused multiply-add */
- tcg_gen_xori_i32(tcg_op1, tcg_op1, 0x8000);
- read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
- gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
- fpst);
- break;
case 0xf: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -11656,10 +11738,12 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
break;
default:
case 0x0: /* FMAXNM */
+ case 0x1: /* FMLA */
case 0x2: /* FADD */
case 0x3: /* FMULX */
case 0x6: /* FMAX */
case 0x8: /* FMINNM */
+ case 0x9: /* FMLS */
case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0x13: /* FMUL */
@@ -12880,10 +12964,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
case 0x0c: /* SQDMULH */
case 0x0d: /* SQRDMULH */
break;
- case 0x01: /* FMLA */
- case 0x05: /* FMLS */
- is_fp = 1;
- break;
case 0x1d: /* SQRDMLAH */
case 0x1f: /* SQRDMLSH */
if (!dc_isar_feature(aa64_rdm, s)) {
@@ -12950,6 +13030,8 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
/* is_fp, but we pass tcg_env not fp_status. */
break;
default:
+ case 0x01: /* FMLA */
+ case 0x05: /* FMLS */
case 0x09: /* FMUL */
case 0x19: /* FMULX */
unallocated_encoding(s);
@@ -12958,20 +13040,8 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
switch (is_fp) {
case 1: /* normal fp */
- /* convert insn encoded size to MemOp size */
- switch (size) {
- case 0: /* half-precision */
- size = MO_16;
- is_fp16 = true;
- break;
- case MO_32: /* single precision */
- case MO_64: /* double precision */
- break;
- default:
- unallocated_encoding(s);
- return;
- }
- break;
+ unallocated_encoding(s); /* in decodetree */
+ return;
case 2: /* complex fp */
/* Each indexable element is a complex pair. */
@@ -13150,38 +13220,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
if (size == 3) {
- TCGv_i64 tcg_idx = tcg_temp_new_i64();
- int pass;
-
- assert(is_fp && is_q && !is_long);
-
- read_vec_element(s, tcg_idx, rm, index, MO_64);
-
- for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
- TCGv_i64 tcg_op = tcg_temp_new_i64();
- TCGv_i64 tcg_res = tcg_temp_new_i64();
-
- read_vec_element(s, tcg_op, rn, pass, MO_64);
-
- switch (16 * u + opcode) {
- case 0x05: /* FMLS */
- /* As usual for ARM, separate negation for fused multiply-add */
- gen_vfp_negd(tcg_op, tcg_op);
- /* fall through */
- case 0x01: /* FMLA */
- read_vec_element(s, tcg_res, rd, pass, MO_64);
- gen_helper_vfp_muladdd(tcg_res, tcg_op, tcg_idx, tcg_res, fpst);
- break;
- default:
- case 0x09: /* FMUL */
- case 0x19: /* FMULX */
- g_assert_not_reached();
- }
-
- write_vec_element(s, tcg_res, rd, pass, MO_64);
- }
-
- clear_vec_high(s, !is_scalar, rd);
+ g_assert_not_reached();
} else if (!is_long) {
/* 32 bit floating point, or 16 or 32 bit integer.
* For the 16 bit scalar case we use the usual Neon helpers and
@@ -13237,38 +13276,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
genfn(tcg_res, tcg_op, tcg_res);
break;
}
- case 0x05: /* FMLS */
- case 0x01: /* FMLA */
- read_vec_element_i32(s, tcg_res, rd, pass,
- is_scalar ? size : MO_32);
- switch (size) {
- case 1:
- if (opcode == 0x5) {
- /* As usual for ARM, separate negation for fused
- * multiply-add */
- tcg_gen_xori_i32(tcg_op, tcg_op, 0x80008000);
- }
- if (is_scalar) {
- gen_helper_advsimd_muladdh(tcg_res, tcg_op, tcg_idx,
- tcg_res, fpst);
- } else {
- gen_helper_advsimd_muladd2h(tcg_res, tcg_op, tcg_idx,
- tcg_res, fpst);
- }
- break;
- case 2:
- if (opcode == 0x5) {
- /* As usual for ARM, separate negation for
- * fused multiply-add */
- tcg_gen_xori_i32(tcg_op, tcg_op, 0x80000000);
- }
- gen_helper_vfp_muladds(tcg_res, tcg_op, tcg_idx,
- tcg_res, fpst);
- break;
- default:
- g_assert_not_reached();
- }
- break;
case 0x0c: /* SQDMULH */
if (size == 1) {
gen_helper_neon_qdmulh_s16(tcg_res, tcg_env,
@@ -13310,6 +13317,8 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
break;
default:
+ case 0x01: /* FMLA */
+ case 0x05: /* FMLS */
case 0x09: /* FMUL */
case 0x19: /* FMULX */
g_assert_not_reached();
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 99ef676071..b925b9f21b 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -1309,6 +1309,12 @@ static float32 float32_muladd_f(float32 dest, float32 op1, float32 op2,
return float32_muladd(op1, op2, dest, 0, stat);
}
+static float64 float64_muladd_f(float64 dest, float64 op1, float64 op2,
+ float_status *stat)
+{
+ return float64_muladd(op1, op2, dest, 0, stat);
+}
+
static float16 float16_mulsub_f(float16 dest, float16 op1, float16 op2,
float_status *stat)
{
@@ -1321,6 +1327,12 @@ static float32 float32_mulsub_f(float32 dest, float32 op1, float32 op2,
return float32_muladd(float32_chs(op1), op2, dest, 0, stat);
}
+static float64 float64_mulsub_f(float64 dest, float64 op1, float64 op2,
+ float_status *stat)
+{
+ return float64_muladd(float64_chs(op1), op2, dest, 0, stat);
+}
+
#define DO_MULADD(NAME, FUNC, TYPE) \
void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
{ \
@@ -1340,9 +1352,11 @@ DO_MULADD(gvec_fmls_s, float32_mulsub_nf, float32)
DO_MULADD(gvec_vfma_h, float16_muladd_f, float16)
DO_MULADD(gvec_vfma_s, float32_muladd_f, float32)
+DO_MULADD(gvec_vfma_d, float64_muladd_f, float64)
DO_MULADD(gvec_vfms_h, float16_mulsub_f, float16)
DO_MULADD(gvec_vfms_s, float32_mulsub_f, float32)
+DO_MULADD(gvec_vfms_d, float64_mulsub_f, float64)
/* For the indexed ops, SVE applies the index per 128-bit vector segment.
* For AdvSIMD, there is of course only one such vector segment.
--
2.34.1
* [PATCH v2 25/67] target/arm: Convert FCMEQ, FCMGE, FCMGT, FACGE, FACGT to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 5 +
target/arm/tcg/a64.decode | 30 ++++++
target/arm/tcg/translate-a64.c | 188 +++++++++++++++++++--------------
target/arm/tcg/vec_helper.c | 30 ++++++
4 files changed, 174 insertions(+), 79 deletions(-)
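
For reference (an illustrative sketch; facgt64 is a made-up name): these AdvSIMD compares write an all-ones or all-zeros mask per element, and the FAC* "absolute compare" forms compare the magnitudes of the operands:

#include <assert.h>
#include <math.h>
#include <stdint.h>

static uint64_t facgt64(double n, double m)
{
    return fabs(n) > fabs(m) ? ~UINT64_C(0) : 0;    /* element mask */
}

int main(void)
{
    assert(facgt64(-3.0, 2.0) == ~UINT64_C(0));
    assert(facgt64(1.0, -2.0) == 0);
    return 0;
}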
diff --git a/target/arm/helper.h b/target/arm/helper.h
index e021c18517..8d076011c1 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -727,18 +727,23 @@ DEF_HELPER_FLAGS_5(gvec_fabd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fceq_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fceq_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fceq_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fcge_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fcge_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fcge_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fcgt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fcgt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fcgt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_facge_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_facge_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_facge_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_facgt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_facgt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_facgt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmax_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmax_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 11527bb5e5..7fc3277be6 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -713,6 +713,21 @@ FMINNM_s 0001 1110 ..1 ..... 0111 10 ..... ..... @rrr_hsd
FMULX_s 0101 1110 010 ..... 00011 1 ..... ..... @rrr_h
FMULX_s 0101 1110 0.1 ..... 11011 1 ..... ..... @rrr_sd
+FCMEQ_s 0101 1110 010 ..... 00100 1 ..... ..... @rrr_h
+FCMEQ_s 0101 1110 0.1 ..... 11100 1 ..... ..... @rrr_sd
+
+FCMGE_s 0111 1110 010 ..... 00100 1 ..... ..... @rrr_h
+FCMGE_s 0111 1110 0.1 ..... 11100 1 ..... ..... @rrr_sd
+
+FCMGT_s 0111 1110 110 ..... 00100 1 ..... ..... @rrr_h
+FCMGT_s 0111 1110 1.1 ..... 11100 1 ..... ..... @rrr_sd
+
+FACGE_s 0111 1110 010 ..... 00101 1 ..... ..... @rrr_h
+FACGE_s 0111 1110 0.1 ..... 11101 1 ..... ..... @rrr_sd
+
+FACGT_s 0111 1110 110 ..... 00101 1 ..... ..... @rrr_h
+FACGT_s 0111 1110 1.1 ..... 11101 1 ..... ..... @rrr_sd
+
### Advanced SIMD three same
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -748,6 +763,21 @@ FMLA_v 0.00 1110 0.1 ..... 11001 1 ..... ..... @qrrr_sd
FMLS_v 0.00 1110 110 ..... 00001 1 ..... ..... @qrrr_h
FMLS_v 0.00 1110 1.1 ..... 11001 1 ..... ..... @qrrr_sd
+FCMEQ_v 0.00 1110 010 ..... 00100 1 ..... ..... @qrrr_h
+FCMEQ_v 0.00 1110 0.1 ..... 11100 1 ..... ..... @qrrr_sd
+
+FCMGE_v 0.10 1110 010 ..... 00100 1 ..... ..... @qrrr_h
+FCMGE_v 0.10 1110 0.1 ..... 11100 1 ..... ..... @qrrr_sd
+
+FCMGT_v 0.10 1110 110 ..... 00100 1 ..... ..... @qrrr_h
+FCMGT_v 0.10 1110 1.1 ..... 11100 1 ..... ..... @qrrr_sd
+
+FACGE_v 0.10 1110 010 ..... 00101 1 ..... ..... @qrrr_h
+FACGE_v 0.10 1110 0.1 ..... 11101 1 ..... ..... @qrrr_sd
+
+FACGT_v 0.10 1110 110 ..... 00101 1 ..... ..... @qrrr_h
+FACGT_v 0.10 1110 1.1 ..... 11101 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index f84c12378d..75b0c1a005 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4975,6 +4975,41 @@ static const FPScalar f_scalar_fnmul = {
};
TRANS(FNMUL_s, do_fp3_scalar, a, &f_scalar_fnmul)
+static const FPScalar f_scalar_fcmeq = {
+ gen_helper_advsimd_ceq_f16,
+ gen_helper_neon_ceq_f32,
+ gen_helper_neon_ceq_f64,
+};
+TRANS(FCMEQ_s, do_fp3_scalar, a, &f_scalar_fcmeq)
+
+static const FPScalar f_scalar_fcmge = {
+ gen_helper_advsimd_cge_f16,
+ gen_helper_neon_cge_f32,
+ gen_helper_neon_cge_f64,
+};
+TRANS(FCMGE_s, do_fp3_scalar, a, &f_scalar_fcmge)
+
+static const FPScalar f_scalar_fcmgt = {
+ gen_helper_advsimd_cgt_f16,
+ gen_helper_neon_cgt_f32,
+ gen_helper_neon_cgt_f64,
+};
+TRANS(FCMGT_s, do_fp3_scalar, a, &f_scalar_fcmgt)
+
+static const FPScalar f_scalar_facge = {
+ gen_helper_advsimd_acge_f16,
+ gen_helper_neon_acge_f32,
+ gen_helper_neon_acge_f64,
+};
+TRANS(FACGE_s, do_fp3_scalar, a, &f_scalar_facge)
+
+static const FPScalar f_scalar_facgt = {
+ gen_helper_advsimd_acgt_f16,
+ gen_helper_neon_acgt_f32,
+ gen_helper_neon_acgt_f64,
+};
+TRANS(FACGT_s, do_fp3_scalar, a, &f_scalar_facgt)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5080,6 +5115,41 @@ static gen_helper_gvec_3_ptr * const f_vector_fmls[3] = {
};
TRANS(FMLS_v, do_fp3_vector, a, f_vector_fmls)
+static gen_helper_gvec_3_ptr * const f_vector_fcmeq[3] = {
+ gen_helper_gvec_fceq_h,
+ gen_helper_gvec_fceq_s,
+ gen_helper_gvec_fceq_d,
+};
+TRANS(FCMEQ_v, do_fp3_vector, a, f_vector_fcmeq)
+
+static gen_helper_gvec_3_ptr * const f_vector_fcmge[3] = {
+ gen_helper_gvec_fcge_h,
+ gen_helper_gvec_fcge_s,
+ gen_helper_gvec_fcge_d,
+};
+TRANS(FCMGE_v, do_fp3_vector, a, f_vector_fcmge)
+
+static gen_helper_gvec_3_ptr * const f_vector_fcmgt[3] = {
+ gen_helper_gvec_fcgt_h,
+ gen_helper_gvec_fcgt_s,
+ gen_helper_gvec_fcgt_d,
+};
+TRANS(FCMGT_v, do_fp3_vector, a, f_vector_fcmgt)
+
+static gen_helper_gvec_3_ptr * const f_vector_facge[3] = {
+ gen_helper_gvec_facge_h,
+ gen_helper_gvec_facge_s,
+ gen_helper_gvec_facge_d,
+};
+TRANS(FACGE_v, do_fp3_vector, a, f_vector_facge)
+
+static gen_helper_gvec_3_ptr * const f_vector_facgt[3] = {
+ gen_helper_gvec_facgt_h,
+ gen_helper_gvec_facgt_s,
+ gen_helper_gvec_facgt_d,
+};
+TRANS(FACGT_v, do_fp3_vector, a, f_vector_facgt)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -9227,43 +9297,33 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
read_vec_element(s, tcg_op2, rm, pass, MO_64);
switch (fpopcode) {
- case 0x1c: /* FCMEQ */
- gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1f: /* FRECPS */
gen_helper_recpsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5c: /* FCMGE */
- gen_helper_neon_cge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x5d: /* FACGE */
- gen_helper_neon_acge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7a: /* FABD */
gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
gen_vfp_absd(tcg_res, tcg_res);
break;
- case 0x7c: /* FCMGT */
- gen_helper_neon_cgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x7d: /* FACGT */
- gen_helper_neon_acgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
case 0x18: /* FMAXNM */
case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1c: /* FCMEQ */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
+ case 0x5c: /* FCMGE */
+ case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7c: /* FCMGT */
+ case 0x7d: /* FACGT */
g_assert_not_reached();
}
@@ -9278,43 +9338,33 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
switch (fpopcode) {
- case 0x1c: /* FCMEQ */
- gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1f: /* FRECPS */
gen_helper_recpsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5c: /* FCMGE */
- gen_helper_neon_cge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x5d: /* FACGE */
- gen_helper_neon_acge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7a: /* FABD */
gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
gen_vfp_abss(tcg_res, tcg_res);
break;
- case 0x7c: /* FCMGT */
- gen_helper_neon_cgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x7d: /* FACGT */
- gen_helper_neon_acgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
case 0x18: /* FMAXNM */
case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1c: /* FCMEQ */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
+ case 0x5c: /* FCMGE */
+ case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7c: /* FCMGT */
+ case 0x7d: /* FACGT */
g_assert_not_reached();
}
@@ -9355,15 +9405,15 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
switch (fpopcode) {
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
+ case 0x7a: /* FABD */
+ break;
+ default:
+ case 0x1b: /* FMULX */
case 0x5d: /* FACGE */
case 0x7d: /* FACGT */
case 0x1c: /* FCMEQ */
case 0x5c: /* FCMGE */
case 0x7c: /* FCMGT */
- case 0x7a: /* FABD */
- break;
- default:
- case 0x1b: /* FMULX */
unallocated_encoding(s);
return;
}
@@ -9516,17 +9566,17 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
TCGv_i32 tcg_res;
switch (fpopcode) {
- case 0x04: /* FCMEQ (reg) */
case 0x07: /* FRECPS */
case 0x0f: /* FRSQRTS */
- case 0x14: /* FCMGE (reg) */
- case 0x15: /* FACGE */
case 0x1a: /* FABD */
- case 0x1c: /* FCMGT (reg) */
- case 0x1d: /* FACGT */
break;
default:
case 0x03: /* FMULX */
+ case 0x04: /* FCMEQ (reg) */
+ case 0x14: /* FCMGE (reg) */
+ case 0x15: /* FACGE */
+ case 0x1c: /* FCMGT (reg) */
+ case 0x1d: /* FACGT */
unallocated_encoding(s);
return;
}
@@ -9546,33 +9596,23 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
tcg_res = tcg_temp_new_i32();
switch (fpopcode) {
- case 0x04: /* FCMEQ (reg) */
- gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x07: /* FRECPS */
gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x0f: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x14: /* FCMGE (reg) */
- gen_helper_advsimd_cge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x15: /* FACGE */
- gen_helper_advsimd_acge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1a: /* FABD */
gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
break;
- case 0x1c: /* FCMGT (reg) */
- gen_helper_advsimd_cgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x1d: /* FACGT */
- gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
case 0x03: /* FMULX */
+ case 0x04: /* FCMEQ (reg) */
+ case 0x14: /* FCMGE (reg) */
+ case 0x15: /* FACGE */
+ case 0x1c: /* FCMGT (reg) */
+ case 0x1d: /* FACGT */
g_assert_not_reached();
}
@@ -11232,12 +11272,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
return;
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
- case 0x5d: /* FACGE */
- case 0x7d: /* FACGT */
- case 0x1c: /* FCMEQ */
- case 0x5c: /* FCMGE */
case 0x7a: /* FABD */
- case 0x7c: /* FCMGT */
if (!fp_access_check(s)) {
return;
}
@@ -11269,13 +11304,18 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1c: /* FCMEQ */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
+ case 0x5c: /* FCMGE */
+ case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7d: /* FACGT */
+ case 0x7c: /* FCMGT */
unallocated_encoding(s);
return;
}
@@ -11617,14 +11657,9 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
int pass;
switch (fpopcode) {
- case 0x4: /* FCMEQ */
case 0x7: /* FRECPS */
case 0xf: /* FRSQRTS */
- case 0x14: /* FCMGE */
- case 0x15: /* FACGE */
case 0x1a: /* FABD */
- case 0x1c: /* FCMGT */
- case 0x1d: /* FACGT */
pairwise = false;
break;
case 0x10: /* FMAXNMP */
@@ -11639,13 +11674,18 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x1: /* FMLA */
case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0x4: /* FCMEQ */
case 0x6: /* FMAX */
case 0x8: /* FMINNM */
case 0x9: /* FMLS */
case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0x13: /* FMUL */
+ case 0x14: /* FCMGE */
+ case 0x15: /* FACGE */
case 0x17: /* FDIV */
+ case 0x1c: /* FCMGT */
+ case 0x1d: /* FACGT */
unallocated_encoding(s);
return;
}
@@ -11711,43 +11751,33 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);
switch (fpopcode) {
- case 0x4: /* FCMEQ */
- gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7: /* FRECPS */
gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0xf: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x14: /* FCMGE */
- gen_helper_advsimd_cge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x15: /* FACGE */
- gen_helper_advsimd_acge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1a: /* FABD */
gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
break;
- case 0x1c: /* FCMGT */
- gen_helper_advsimd_cgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x1d: /* FACGT */
- gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
case 0x0: /* FMAXNM */
case 0x1: /* FMLA */
case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0x4: /* FCMEQ */
case 0x6: /* FMAX */
case 0x8: /* FMINNM */
case 0x9: /* FMLS */
case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0x13: /* FMUL */
+ case 0x14: /* FCMGE */
+ case 0x15: /* FACGE */
case 0x17: /* FDIV */
+ case 0x1c: /* FCMGT */
+ case 0x1d: /* FACGT */
g_assert_not_reached();
}
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index b925b9f21b..dabefa3526 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -971,6 +971,11 @@ static uint32_t float32_ceq(float32 op1, float32 op2, float_status *stat)
return -float32_eq_quiet(op1, op2, stat);
}
+static uint64_t float64_ceq(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_eq_quiet(op1, op2, stat);
+}
+
static uint16_t float16_cge(float16 op1, float16 op2, float_status *stat)
{
return -float16_le(op2, op1, stat);
@@ -981,6 +986,11 @@ static uint32_t float32_cge(float32 op1, float32 op2, float_status *stat)
return -float32_le(op2, op1, stat);
}
+static uint64_t float64_cge(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_le(op2, op1, stat);
+}
+
static uint16_t float16_cgt(float16 op1, float16 op2, float_status *stat)
{
return -float16_lt(op2, op1, stat);
@@ -991,6 +1001,11 @@ static uint32_t float32_cgt(float32 op1, float32 op2, float_status *stat)
return -float32_lt(op2, op1, stat);
}
+static uint64_t float64_cgt(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_lt(op2, op1, stat);
+}
+
static uint16_t float16_acge(float16 op1, float16 op2, float_status *stat)
{
return -float16_le(float16_abs(op2), float16_abs(op1), stat);
@@ -1001,6 +1016,11 @@ static uint32_t float32_acge(float32 op1, float32 op2, float_status *stat)
return -float32_le(float32_abs(op2), float32_abs(op1), stat);
}
+static uint64_t float64_acge(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_le(float64_abs(op2), float64_abs(op1), stat);
+}
+
static uint16_t float16_acgt(float16 op1, float16 op2, float_status *stat)
{
return -float16_lt(float16_abs(op2), float16_abs(op1), stat);
@@ -1011,6 +1031,11 @@ static uint32_t float32_acgt(float32 op1, float32 op2, float_status *stat)
return -float32_lt(float32_abs(op2), float32_abs(op1), stat);
}
+static uint64_t float64_acgt(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_lt(float64_abs(op2), float64_abs(op1), stat);
+}
+
static int16_t vfp_tosszh(float16 x, void *fpstp)
{
float_status *fpst = fpstp;
@@ -1216,18 +1241,23 @@ DO_3OP(gvec_fabd_s, float32_abd, float32)
DO_3OP(gvec_fceq_h, float16_ceq, float16)
DO_3OP(gvec_fceq_s, float32_ceq, float32)
+DO_3OP(gvec_fceq_d, float64_ceq, float64)
DO_3OP(gvec_fcge_h, float16_cge, float16)
DO_3OP(gvec_fcge_s, float32_cge, float32)
+DO_3OP(gvec_fcge_d, float64_cge, float64)
DO_3OP(gvec_fcgt_h, float16_cgt, float16)
DO_3OP(gvec_fcgt_s, float32_cgt, float32)
+DO_3OP(gvec_fcgt_d, float64_cgt, float64)
DO_3OP(gvec_facge_h, float16_acge, float16)
DO_3OP(gvec_facge_s, float32_acge, float32)
+DO_3OP(gvec_facge_d, float64_acge, float64)
DO_3OP(gvec_facgt_h, float16_acgt, float16)
DO_3OP(gvec_facgt_s, float32_acgt, float32)
+DO_3OP(gvec_facgt_d, float64_acgt, float64)
DO_3OP(gvec_fmax_h, float16_max, float16)
DO_3OP(gvec_fmax_s, float32_max, float32)
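
A note on the comparison helpers added above: the softfloat predicates
(float64_eq_quiet and friends) return 0 or 1, and the unary minus widens
that into the all-zeros or all-ones element mask that AdvSIMD compares
are architected to produce. A minimal sketch of the idiom in plain C
(the name is illustrative, not a QEMU helper):

#include <stdbool.h>
#include <stdint.h>

/* Negating a 0/1 predicate yields the per-element 0 / ~0 mask
 * that FCMEQ and friends write. */
static uint64_t ceq_mask_sketch(double a, double b)
{
    bool eq = (a == b);    /* stands in for float64_eq_quiet() */
    return -(uint64_t)eq;  /* 0 -> 0, 1 -> UINT64_MAX */
}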
--
2.34.1
^ permalink raw reply related [flat|nested] 114+ messages in thread
* [PATCH v2 26/67] target/arm: Convert FABD to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (24 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 25/67] target/arm: Convert FCMEQ, FCMGE, FCMGT, FACGE, FACGT " Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-24 23:20 ` [PATCH v2 27/67] target/arm: Convert FRECPS, FRSQRTS " Richard Henderson
` (41 subsequent siblings)
67 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 1 +
target/arm/tcg/a64.decode | 6 ++++
target/arm/tcg/translate-a64.c | 60 ++++++++++++++++++++++------------
target/arm/tcg/vec_helper.c | 6 ++++
4 files changed, 53 insertions(+), 20 deletions(-)
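
For reference, the operation being vectorized here is floating-point
absolute difference: a subtraction rounded first, then the sign bit
cleared; it is not a fused operation. A minimal sketch in plain C
(illustrative only; the real helpers below go through softfloat so that
rounding and NaN behaviour follow the guest FPCR):

#include <math.h>

/* FABD element semantics: abs(a - b).  Clearing the sign bit is all
 * "abs" does, which is why the fp16 path can use "& 0x7fff". */
static double fabd_sketch(double a, double b)
{
    return fabs(a - b);
}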
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 8d076011c1..ff6e3094f4 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -724,6 +724,7 @@ DEF_HELPER_FLAGS_5(gvec_fmul_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fabd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fabd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fabd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fceq_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fceq_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 7fc3277be6..a852b5f06f 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -728,6 +728,9 @@ FACGE_s 0111 1110 0.1 ..... 11101 1 ..... ..... @rrr_sd
FACGT_s 0111 1110 110 ..... 00101 1 ..... ..... @rrr_h
FACGT_s 0111 1110 1.1 ..... 11101 1 ..... ..... @rrr_sd
+FABD_s 0111 1110 110 ..... 00010 1 ..... ..... @rrr_h
+FABD_s 0111 1110 1.1 ..... 11010 1 ..... ..... @rrr_sd
+
### Advanced SIMD three same
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -778,6 +781,9 @@ FACGE_v 0.10 1110 0.1 ..... 11101 1 ..... ..... @qrrr_sd
FACGT_v 0.10 1110 110 ..... 00101 1 ..... ..... @qrrr_h
FACGT_v 0.10 1110 1.1 ..... 11101 1 ..... ..... @qrrr_sd
+FABD_v 0.10 1110 110 ..... 00010 1 ..... ..... @qrrr_h
+FABD_v 0.10 1110 1.1 ..... 11010 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 75b0c1a005..633384d2a5 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5010,6 +5010,31 @@ static const FPScalar f_scalar_facgt = {
};
TRANS(FACGT_s, do_fp3_scalar, a, &f_scalar_facgt)
+static void gen_fabd_h(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_ptr s)
+{
+ gen_helper_vfp_subh(d, n, m, s);
+ gen_vfp_absh(d, d);
+}
+
+static void gen_fabd_s(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_ptr s)
+{
+ gen_helper_vfp_subs(d, n, m, s);
+ gen_vfp_abss(d, d);
+}
+
+static void gen_fabd_d(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_ptr s)
+{
+ gen_helper_vfp_subd(d, n, m, s);
+ gen_vfp_absd(d, d);
+}
+
+static const FPScalar f_scalar_fabd = {
+ gen_fabd_h,
+ gen_fabd_s,
+ gen_fabd_d,
+};
+TRANS(FABD_s, do_fp3_scalar, a, &f_scalar_fabd)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5150,6 +5175,13 @@ static gen_helper_gvec_3_ptr * const f_vector_facgt[3] = {
};
TRANS(FACGT_v, do_fp3_vector, a, f_vector_facgt)
+static gen_helper_gvec_3_ptr * const f_vector_fabd[3] = {
+ gen_helper_gvec_fabd_h,
+ gen_helper_gvec_fabd_s,
+ gen_helper_gvec_fabd_d,
+};
+TRANS(FABD_v, do_fp3_vector, a, f_vector_fabd)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -9303,10 +9335,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x7a: /* FABD */
- gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_vfp_absd(tcg_res, tcg_res);
- break;
default:
case 0x18: /* FMAXNM */
case 0x19: /* FMLA */
@@ -9322,6 +9350,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x5c: /* FCMGE */
case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7a: /* FABD */
case 0x7c: /* FCMGT */
case 0x7d: /* FACGT */
g_assert_not_reached();
@@ -9344,10 +9373,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x7a: /* FABD */
- gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_vfp_abss(tcg_res, tcg_res);
- break;
default:
case 0x18: /* FMAXNM */
case 0x19: /* FMLA */
@@ -9363,6 +9388,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x5c: /* FCMGE */
case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7a: /* FABD */
case 0x7c: /* FCMGT */
case 0x7d: /* FACGT */
g_assert_not_reached();
@@ -9405,7 +9431,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
switch (fpopcode) {
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
- case 0x7a: /* FABD */
break;
default:
case 0x1b: /* FMULX */
@@ -9413,6 +9438,7 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
case 0x7d: /* FACGT */
case 0x1c: /* FCMEQ */
case 0x5c: /* FCMGE */
+ case 0x7a: /* FABD */
case 0x7c: /* FCMGT */
unallocated_encoding(s);
return;
@@ -9568,13 +9594,13 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
switch (fpopcode) {
case 0x07: /* FRECPS */
case 0x0f: /* FRSQRTS */
- case 0x1a: /* FABD */
break;
default:
case 0x03: /* FMULX */
case 0x04: /* FCMEQ (reg) */
case 0x14: /* FCMGE (reg) */
case 0x15: /* FACGE */
+ case 0x1a: /* FABD */
case 0x1c: /* FCMGT (reg) */
case 0x1d: /* FACGT */
unallocated_encoding(s);
@@ -9602,15 +9628,12 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
case 0x0f: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1a: /* FABD */
- gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
- tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
- break;
default:
case 0x03: /* FMULX */
case 0x04: /* FCMEQ (reg) */
case 0x14: /* FCMGE (reg) */
case 0x15: /* FACGE */
+ case 0x1a: /* FABD */
case 0x1c: /* FCMGT (reg) */
case 0x1d: /* FACGT */
g_assert_not_reached();
@@ -11272,7 +11295,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
return;
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
- case 0x7a: /* FABD */
if (!fp_access_check(s)) {
return;
}
@@ -11314,6 +11336,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x5c: /* FCMGE */
case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7a: /* FABD */
case 0x7d: /* FACGT */
case 0x7c: /* FCMGT */
unallocated_encoding(s);
@@ -11659,7 +11682,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
switch (fpopcode) {
case 0x7: /* FRECPS */
case 0xf: /* FRSQRTS */
- case 0x1a: /* FABD */
pairwise = false;
break;
case 0x10: /* FMAXNMP */
@@ -11684,6 +11706,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
case 0x17: /* FDIV */
+ case 0x1a: /* FABD */
case 0x1c: /* FCMGT */
case 0x1d: /* FACGT */
unallocated_encoding(s);
@@ -11757,10 +11780,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0xf: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1a: /* FABD */
- gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
- tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
- break;
default:
case 0x0: /* FMAXNM */
case 0x1: /* FMLA */
@@ -11776,6 +11795,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
case 0x17: /* FDIV */
+ case 0x1a: /* FABD */
case 0x1c: /* FCMGT */
case 0x1d: /* FACGT */
g_assert_not_reached();
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index dabefa3526..e9d7922f30 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -1154,6 +1154,11 @@ static float32 float32_abd(float32 op1, float32 op2, float_status *stat)
return float32_abs(float32_sub(op1, op2, stat));
}
+static float64 float64_abd(float64 op1, float64 op2, float_status *stat)
+{
+ return float64_abs(float64_sub(op1, op2, stat));
+}
+
/*
* Reciprocal step. These are the AArch32 version which uses a
* non-fused multiply-and-subtract.
@@ -1238,6 +1243,7 @@ DO_3OP(gvec_ftsmul_d, float64_ftsmul, float64)
DO_3OP(gvec_fabd_h, float16_abd, float16)
DO_3OP(gvec_fabd_s, float32_abd, float32)
+DO_3OP(gvec_fabd_d, float64_abd, float64)
DO_3OP(gvec_fceq_h, float16_ceq, float16)
DO_3OP(gvec_fceq_s, float32_ceq, float32)
--
2.34.1
^ permalink raw reply related [flat|nested] 114+ messages in thread
* [PATCH v2 27/67] target/arm: Convert FRECPS, FRSQRTS to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (25 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 26/67] target/arm: Convert FABD " Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-24 23:20 ` [PATCH v2 28/67] target/arm: Convert FADDP " Richard Henderson
` (40 subsequent siblings)
67 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
These are the last instructions within handle_3same_float
and disas_simd_scalar_three_reg_same_fp16, so remove those functions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 12 ++
target/arm/tcg/translate-a64.c | 293 ++++-----------------------------
2 files changed, 46 insertions(+), 259 deletions(-)
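
For context on what these helpers compute: FRECPS and FRSQRTS are the
Newton-Raphson step instructions that pair with the FRECPE/FRSQRTE
estimates. A sketch of the arithmetic per the Arm ARM (the 0 * Inf
special cases, which the softfloat helpers do handle, are omitted here;
the names are illustrative):

/* FRECPS:  recps(a, x) = 2 - a*x, so x * recps(a, x) is one
 * refinement step of x toward 1/a.
 * FRSQRTS: rsqrts(m, n) = (3 - m*n) / 2, so x * rsqrts(a*x, x)
 * is one refinement step of x toward 1/sqrt(a). */
static double recps_sketch(double a, double x)  { return 2.0 - a * x; }
static double rsqrts_sketch(double m, double n) { return (3.0 - m * n) / 2.0; }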
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index a852b5f06f..84cb38f1dd 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -731,6 +731,12 @@ FACGT_s 0111 1110 1.1 ..... 11101 1 ..... ..... @rrr_sd
FABD_s 0111 1110 110 ..... 00010 1 ..... ..... @rrr_h
FABD_s 0111 1110 1.1 ..... 11010 1 ..... ..... @rrr_sd
+FRECPS_s 0101 1110 010 ..... 00111 1 ..... ..... @rrr_h
+FRECPS_s 0101 1110 0.1 ..... 11111 1 ..... ..... @rrr_sd
+
+FRSQRTS_s 0101 1110 110 ..... 00111 1 ..... ..... @rrr_h
+FRSQRTS_s 0101 1110 1.1 ..... 11111 1 ..... ..... @rrr_sd
+
### Advanced SIMD three same
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -784,6 +790,12 @@ FACGT_v 0.10 1110 1.1 ..... 11101 1 ..... ..... @qrrr_sd
FABD_v 0.10 1110 110 ..... 00010 1 ..... ..... @qrrr_h
FABD_v 0.10 1110 1.1 ..... 11010 1 ..... ..... @qrrr_sd
+FRECPS_v 0.00 1110 010 ..... 00111 1 ..... ..... @qrrr_h
+FRECPS_v 0.00 1110 0.1 ..... 11111 1 ..... ..... @qrrr_sd
+
+FRSQRTS_v 0.00 1110 110 ..... 00111 1 ..... ..... @qrrr_h
+FRSQRTS_v 0.00 1110 1.1 ..... 11111 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 633384d2a5..a7537a5104 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5035,6 +5035,20 @@ static const FPScalar f_scalar_fabd = {
};
TRANS(FABD_s, do_fp3_scalar, a, &f_scalar_fabd)
+static const FPScalar f_scalar_frecps = {
+ gen_helper_recpsf_f16,
+ gen_helper_recpsf_f32,
+ gen_helper_recpsf_f64,
+};
+TRANS(FRECPS_s, do_fp3_scalar, a, &f_scalar_frecps)
+
+static const FPScalar f_scalar_frsqrts = {
+ gen_helper_rsqrtsf_f16,
+ gen_helper_rsqrtsf_f32,
+ gen_helper_rsqrtsf_f64,
+};
+TRANS(FRSQRTS_s, do_fp3_scalar, a, &f_scalar_frsqrts)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5182,6 +5196,20 @@ static gen_helper_gvec_3_ptr * const f_vector_fabd[3] = {
};
TRANS(FABD_v, do_fp3_vector, a, f_vector_fabd)
+static gen_helper_gvec_3_ptr * const f_vector_frecps[3] = {
+ gen_helper_gvec_recps_h,
+ gen_helper_gvec_recps_s,
+ gen_helper_gvec_recps_d,
+};
+TRANS(FRECPS_v, do_fp3_vector, a, f_vector_frecps)
+
+static gen_helper_gvec_3_ptr * const f_vector_frsqrts[3] = {
+ gen_helper_gvec_rsqrts_h,
+ gen_helper_gvec_rsqrts_s,
+ gen_helper_gvec_rsqrts_d,
+};
+TRANS(FRSQRTS_v, do_fp3_vector, a, f_vector_frsqrts)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -9308,107 +9336,6 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
}
}
-/* Handle the 3-same-operands float operations; shared by the scalar
- * and vector encodings. The caller must filter out any encodings
- * not allocated for the encoding it is dealing with.
- */
-static void handle_3same_float(DisasContext *s, int size, int elements,
- int fpopcode, int rd, int rn, int rm)
-{
- int pass;
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
-
- for (pass = 0; pass < elements; pass++) {
- if (size) {
- /* Double */
- TCGv_i64 tcg_op1 = tcg_temp_new_i64();
- TCGv_i64 tcg_op2 = tcg_temp_new_i64();
- TCGv_i64 tcg_res = tcg_temp_new_i64();
-
- read_vec_element(s, tcg_op1, rn, pass, MO_64);
- read_vec_element(s, tcg_op2, rm, pass, MO_64);
-
- switch (fpopcode) {
- case 0x1f: /* FRECPS */
- gen_helper_recpsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3f: /* FRSQRTS */
- gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0x18: /* FMAXNM */
- case 0x19: /* FMLA */
- case 0x1a: /* FADD */
- case 0x1b: /* FMULX */
- case 0x1c: /* FCMEQ */
- case 0x1e: /* FMAX */
- case 0x38: /* FMINNM */
- case 0x39: /* FMLS */
- case 0x3a: /* FSUB */
- case 0x3e: /* FMIN */
- case 0x5b: /* FMUL */
- case 0x5c: /* FCMGE */
- case 0x5d: /* FACGE */
- case 0x5f: /* FDIV */
- case 0x7a: /* FABD */
- case 0x7c: /* FCMGT */
- case 0x7d: /* FACGT */
- g_assert_not_reached();
- }
-
- write_vec_element(s, tcg_res, rd, pass, MO_64);
- } else {
- /* Single */
- TCGv_i32 tcg_op1 = tcg_temp_new_i32();
- TCGv_i32 tcg_op2 = tcg_temp_new_i32();
- TCGv_i32 tcg_res = tcg_temp_new_i32();
-
- read_vec_element_i32(s, tcg_op1, rn, pass, MO_32);
- read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
-
- switch (fpopcode) {
- case 0x1f: /* FRECPS */
- gen_helper_recpsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3f: /* FRSQRTS */
- gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0x18: /* FMAXNM */
- case 0x19: /* FMLA */
- case 0x1a: /* FADD */
- case 0x1b: /* FMULX */
- case 0x1c: /* FCMEQ */
- case 0x1e: /* FMAX */
- case 0x38: /* FMINNM */
- case 0x39: /* FMLS */
- case 0x3a: /* FSUB */
- case 0x3e: /* FMIN */
- case 0x5b: /* FMUL */
- case 0x5c: /* FCMGE */
- case 0x5d: /* FACGE */
- case 0x5f: /* FDIV */
- case 0x7a: /* FABD */
- case 0x7c: /* FCMGT */
- case 0x7d: /* FACGT */
- g_assert_not_reached();
- }
-
- if (elements == 1) {
- /* scalar single so clear high part */
- TCGv_i64 tcg_tmp = tcg_temp_new_i64();
-
- tcg_gen_extu_i32_i64(tcg_tmp, tcg_res);
- write_vec_element(s, tcg_tmp, rd, pass, MO_64);
- } else {
- write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
- }
- }
- }
-
- clear_vec_high(s, elements * (size ? 8 : 4) > 8, rd);
-}
-
/* AdvSIMD scalar three same
* 31 30 29 28 24 23 22 21 20 16 15 11 10 9 5 4 0
* +-----+---+-----------+------+---+------+--------+---+------+------+
@@ -9425,33 +9352,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
bool u = extract32(insn, 29, 1);
TCGv_i64 tcg_rd;
- if (opcode >= 0x18) {
- /* Floating point: U, size[1] and opcode indicate operation */
- int fpopcode = opcode | (extract32(size, 1, 1) << 5) | (u << 6);
- switch (fpopcode) {
- case 0x1f: /* FRECPS */
- case 0x3f: /* FRSQRTS */
- break;
- default:
- case 0x1b: /* FMULX */
- case 0x5d: /* FACGE */
- case 0x7d: /* FACGT */
- case 0x1c: /* FCMEQ */
- case 0x5c: /* FCMGE */
- case 0x7a: /* FABD */
- case 0x7c: /* FCMGT */
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- handle_3same_float(s, extract32(size, 0, 1), 1, fpopcode, rd, rn, rm);
- return;
- }
-
switch (opcode) {
case 0x1: /* SQADD, UQADD */
case 0x5: /* SQSUB, UQSUB */
@@ -9568,80 +9468,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
write_fp_dreg(s, rd, tcg_rd);
}
-/* AdvSIMD scalar three same FP16
- * 31 30 29 28 24 23 22 21 20 16 15 14 13 11 10 9 5 4 0
- * +-----+---+-----------+---+-----+------+-----+--------+---+----+----+
- * | 0 1 | U | 1 1 1 1 0 | a | 1 0 | Rm | 0 0 | opcode | 1 | Rn | Rd |
- * +-----+---+-----------+---+-----+------+-----+--------+---+----+----+
- * v: 0101 1110 0100 0000 0000 0100 0000 0000 => 5e400400
- * m: 1101 1111 0110 0000 1100 0100 0000 0000 => df60c400
- */
-static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
- uint32_t insn)
-{
- int rd = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int opcode = extract32(insn, 11, 3);
- int rm = extract32(insn, 16, 5);
- bool u = extract32(insn, 29, 1);
- bool a = extract32(insn, 23, 1);
- int fpopcode = opcode | (a << 3) | (u << 4);
- TCGv_ptr fpst;
- TCGv_i32 tcg_op1;
- TCGv_i32 tcg_op2;
- TCGv_i32 tcg_res;
-
- switch (fpopcode) {
- case 0x07: /* FRECPS */
- case 0x0f: /* FRSQRTS */
- break;
- default:
- case 0x03: /* FMULX */
- case 0x04: /* FCMEQ (reg) */
- case 0x14: /* FCMGE (reg) */
- case 0x15: /* FACGE */
- case 0x1a: /* FABD */
- case 0x1c: /* FCMGT (reg) */
- case 0x1d: /* FACGT */
- unallocated_encoding(s);
- return;
- }
-
- if (!dc_isar_feature(aa64_fp16, s)) {
- unallocated_encoding(s);
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- fpst = fpstatus_ptr(FPST_FPCR_F16);
-
- tcg_op1 = read_fp_hreg(s, rn);
- tcg_op2 = read_fp_hreg(s, rm);
- tcg_res = tcg_temp_new_i32();
-
- switch (fpopcode) {
- case 0x07: /* FRECPS */
- gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x0f: /* FRSQRTS */
- gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0x03: /* FMULX */
- case 0x04: /* FCMEQ (reg) */
- case 0x14: /* FCMGE (reg) */
- case 0x15: /* FACGE */
- case 0x1a: /* FABD */
- case 0x1c: /* FCMGT (reg) */
- case 0x1d: /* FACGT */
- g_assert_not_reached();
- }
-
- write_fp_sreg(s, rd, tcg_res);
-}
-
/* AdvSIMD scalar three same extra
* 31 30 29 28 24 23 22 21 20 16 15 14 11 10 9 5 4 0
* +-----+---+-----------+------+---+------+---+--------+---+----+----+
@@ -11114,7 +10940,7 @@ static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
/* Pairwise op subgroup of C3.6.16.
*
- * This is called directly or via the handle_3same_float for float pairwise
+ * This is called directly for float pairwise
* operations where the opcode and size are calculated differently.
*/
static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
@@ -11271,10 +11097,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
int rn = extract32(insn, 5, 5);
int rd = extract32(insn, 0, 5);
- int datasize = is_q ? 128 : 64;
- int esize = 32 << size;
- int elements = datasize / esize;
-
if (size == 1 && !is_q) {
unallocated_encoding(s);
return;
@@ -11293,13 +11115,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_32,
rn, rm, rd);
return;
- case 0x1f: /* FRECPS */
- case 0x3f: /* FRSQRTS */
- if (!fp_access_check(s)) {
- return;
- }
- handle_3same_float(s, size, elements, fpopcode, rd, rn, rm);
- return;
case 0x1d: /* FMLAL */
case 0x3d: /* FMLSL */
@@ -11328,10 +11143,12 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x1b: /* FMULX */
case 0x1c: /* FCMEQ */
case 0x1e: /* FMAX */
+ case 0x1f: /* FRECPS */
case 0x38: /* FMINNM */
case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
+ case 0x3f: /* FRSQRTS */
case 0x5b: /* FMUL */
case 0x5c: /* FCMGE */
case 0x5d: /* FACGE */
@@ -11673,17 +11490,11 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
* together indicate the operation.
*/
int fpopcode = opcode | (a << 3) | (u << 4);
- int datasize = is_q ? 128 : 64;
- int elements = datasize / 16;
bool pairwise;
TCGv_ptr fpst;
int pass;
switch (fpopcode) {
- case 0x7: /* FRECPS */
- case 0xf: /* FRSQRTS */
- pairwise = false;
- break;
case 0x10: /* FMAXNMP */
case 0x12: /* FADDP */
case 0x16: /* FMAXP */
@@ -11698,10 +11509,12 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x3: /* FMULX */
case 0x4: /* FCMEQ */
case 0x6: /* FMAX */
+ case 0x7: /* FRECPS */
case 0x8: /* FMINNM */
case 0x9: /* FMLS */
case 0xa: /* FSUB */
case 0xe: /* FMIN */
+ case 0xf: /* FRSQRTS */
case 0x13: /* FMUL */
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
@@ -11765,44 +11578,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_16);
}
} else {
- for (pass = 0; pass < elements; pass++) {
- TCGv_i32 tcg_op1 = tcg_temp_new_i32();
- TCGv_i32 tcg_op2 = tcg_temp_new_i32();
- TCGv_i32 tcg_res = tcg_temp_new_i32();
-
- read_vec_element_i32(s, tcg_op1, rn, pass, MO_16);
- read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);
-
- switch (fpopcode) {
- case 0x7: /* FRECPS */
- gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0xf: /* FRSQRTS */
- gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0x0: /* FMAXNM */
- case 0x1: /* FMLA */
- case 0x2: /* FADD */
- case 0x3: /* FMULX */
- case 0x4: /* FCMEQ */
- case 0x6: /* FMAX */
- case 0x8: /* FMINNM */
- case 0x9: /* FMLS */
- case 0xa: /* FSUB */
- case 0xe: /* FMIN */
- case 0x13: /* FMUL */
- case 0x14: /* FCMGE */
- case 0x15: /* FACGE */
- case 0x17: /* FDIV */
- case 0x1a: /* FABD */
- case 0x1c: /* FCMGT */
- case 0x1d: /* FACGT */
- g_assert_not_reached();
- }
-
- write_vec_element_i32(s, tcg_res, rd, pass, MO_16);
- }
+ g_assert_not_reached();
}
clear_vec_high(s, is_q, rd);
@@ -13572,7 +13348,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
{ 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
- { 0x5e400400, 0xdf60c400, disas_simd_scalar_three_reg_same_fp16 },
{ 0x00000000, 0x00000000, NULL }
};
--
2.34.1
^ permalink raw reply related [flat|nested] 114+ messages in thread
* [PATCH v2 28/67] target/arm: Convert FADDP to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (26 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 27/67] target/arm: Convert FRECPS, FRSQRTS " Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-24 23:20 ` [PATCH v2 29/67] target/arm: Convert FMAXP, FMINP, FMAXNMP, FMINNMP " Richard Henderson
` (39 subsequent siblings)
67 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 4 ++
target/arm/tcg/a64.decode | 12 +++++
target/arm/tcg/translate-a64.c | 87 ++++++++++++++++++++++++++--------
target/arm/tcg/vec_helper.c | 23 +++++++++
4 files changed, 105 insertions(+), 21 deletions(-)
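
Two details worth noting. First, the new @rr_h/@rr_sd formats describe
the scalar pairwise forms, which reduce the two elements of a single
source register (with the MO_16 case gated on FEAT_FP16, as in
do_fp3_scalar_pair below). Second, DO_3OP_PAIR lays the vector result
out with the low half reduced from Vn and the high half from Vm. A
simplified sketch of that layout (plain C, float32 only, fixed scratch
size; the names are illustrative, not the QEMU helpers):

#include <string.h>

/* d[i] reduces a pair of n; d[half + i] reduces a pair of m.
 * Only d == m needs a scratch copy: in the first loop, d[i] is
 * written only after n[2i] and n[2i+1] are read, so d == n is
 * safe, but the second loop would clobber m's still-unread upper
 * elements if d aliased m. */
static void faddp_sketch(float *d, const float *n, const float *m, int elts)
{
    float scratch[64];
    int half = elts / 2;

    if (d == m) {
        memcpy(scratch, m, elts * sizeof(float));
        m = scratch;
    }
    for (int i = 0; i < half; i++) {
        d[i] = n[2 * i] + n[2 * i + 1];
    }
    for (int i = 0; i < half; i++) {
        d[half + i] = m[2 * i] + m[2 * i + 1];
    }
}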
diff --git a/target/arm/helper.h b/target/arm/helper.h
index ff6e3094f4..8441b49d1f 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1048,6 +1048,10 @@ DEF_HELPER_FLAGS_5(gvec_uclamp_s, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_5(gvec_uclamp_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_faddp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_faddp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_faddp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
#ifdef TARGET_AARCH64
#include "tcg/helper-a64.h"
#include "tcg/helper-sve.h"
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 84cb38f1dd..d2a02365e1 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -29,6 +29,7 @@
&ri rd imm
&rri_sf rd rn imm sf
&i imm
+&rr_e rd rn esz
&rrr_e rd rn rm esz
&rrx_e rd rn rm idx esz
&qrr_e q rd rn esz
@@ -36,6 +37,9 @@
&qrrx_e q rd rn rm idx esz
&qrrrr_e q rd rn rm ra esz
+@rr_h ........ ... ..... ...... rn:5 rd:5 &rr_e esz=1
+@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
+
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
@rrr_hsd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_hsd
@@ -737,6 +741,11 @@ FRECPS_s 0101 1110 0.1 ..... 11111 1 ..... ..... @rrr_sd
FRSQRTS_s 0101 1110 110 ..... 00111 1 ..... ..... @rrr_h
FRSQRTS_s 0101 1110 1.1 ..... 11111 1 ..... ..... @rrr_sd
+### Advanced SIMD scalar pairwise
+
+FADDP_s 0101 1110 0011 0000 1101 10 ..... ..... @rr_h
+FADDP_s 0111 1110 0.11 0000 1101 10 ..... ..... @rr_sd
+
### Advanced SIMD three same
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -796,6 +805,9 @@ FRECPS_v 0.00 1110 0.1 ..... 11111 1 ..... ..... @qrrr_sd
FRSQRTS_v 0.00 1110 110 ..... 00111 1 ..... ..... @qrrr_h
FRSQRTS_v 0.00 1110 1.1 ..... 11111 1 ..... ..... @qrrr_sd
+FADDP_v 0.10 1110 010 ..... 00010 1 ..... ..... @qrrr_h
+FADDP_v 0.10 1110 0.1 ..... 11010 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index a7537a5104..78949ab34f 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5210,6 +5210,13 @@ static gen_helper_gvec_3_ptr * const f_vector_frsqrts[3] = {
};
TRANS(FRSQRTS_v, do_fp3_vector, a, f_vector_frsqrts)
+static gen_helper_gvec_3_ptr * const f_vector_faddp[3] = {
+ gen_helper_gvec_faddp_h,
+ gen_helper_gvec_faddp_s,
+ gen_helper_gvec_faddp_d,
+};
+TRANS(FADDP_v, do_fp3_vector, a, f_vector_faddp)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -5395,6 +5402,56 @@ static bool do_fmla_vector_idx(DisasContext *s, arg_qrrx_e *a, bool neg)
TRANS(FMLA_vi, do_fmla_vector_idx, a, false)
TRANS(FMLS_vi, do_fmla_vector_idx, a, true)
+/*
+ * Advanced SIMD scalar pairwise
+ */
+
+static bool do_fp3_scalar_pair(DisasContext *s, arg_rr_e *a, const FPScalar *f)
+{
+ switch (a->esz) {
+ case MO_64:
+ if (fp_access_check(s)) {
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ TCGv_i64 t1 = tcg_temp_new_i64();
+
+ read_vec_element(s, t0, a->rn, 0, MO_64);
+ read_vec_element(s, t1, a->rn, 1, MO_64);
+ f->gen_d(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
+ write_fp_dreg(s, a->rd, t0);
+ }
+ break;
+ case MO_32:
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = tcg_temp_new_i32();
+ TCGv_i32 t1 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t0, a->rn, 0, MO_32);
+ read_vec_element_i32(s, t1, a->rn, 1, MO_32);
+ f->gen_s(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = tcg_temp_new_i32();
+ TCGv_i32 t1 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t0, a->rn, 0, MO_16);
+ read_vec_element_i32(s, t1, a->rn, 1, MO_16);
+ f->gen_h(t0, t0, t1, fpstatus_ptr(FPST_FPCR_F16));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ return true;
+}
+
+TRANS(FADDP_s, do_fp3_scalar_pair, a, &f_scalar_fadd)
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
@@ -8357,7 +8414,6 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
fpst = NULL;
break;
case 0xc: /* FMAXNMP */
- case 0xd: /* FADDP */
case 0xf: /* FMAXP */
case 0x2c: /* FMINNMP */
case 0x2f: /* FMINP */
@@ -8380,6 +8436,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
break;
default:
+ case 0xd: /* FADDP */
unallocated_encoding(s);
return;
}
@@ -8399,9 +8456,6 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
case 0xc: /* FMAXNMP */
gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0xd: /* FADDP */
- gen_helper_vfp_addd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0xf: /* FMAXP */
gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -8412,6 +8466,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0xd: /* FADDP */
g_assert_not_reached();
}
@@ -8429,9 +8484,6 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
case 0xc: /* FMAXNMP */
gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0xd: /* FADDP */
- gen_helper_advsimd_addh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0xf: /* FMAXP */
gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -8442,6 +8494,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0xd: /* FADDP */
g_assert_not_reached();
}
} else {
@@ -8449,9 +8502,6 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
case 0xc: /* FMAXNMP */
gen_helper_vfp_maxnums(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0xd: /* FADDP */
- gen_helper_vfp_adds(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0xf: /* FMAXP */
gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -8462,6 +8512,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0xd: /* FADDP */
g_assert_not_reached();
}
}
@@ -10982,9 +11033,6 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
case 0x58: /* FMAXNMP */
gen_helper_vfp_maxnumd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
- case 0x5a: /* FADDP */
- gen_helper_vfp_addd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
case 0x5e: /* FMAXP */
gen_helper_vfp_maxd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
@@ -10995,6 +11043,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
gen_helper_vfp_mind(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x5a: /* FADDP */
g_assert_not_reached();
}
}
@@ -11052,9 +11101,6 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
case 0x58: /* FMAXNMP */
gen_helper_vfp_maxnums(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
- case 0x5a: /* FADDP */
- gen_helper_vfp_adds(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
case 0x5e: /* FMAXP */
gen_helper_vfp_maxs(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
@@ -11065,6 +11111,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
gen_helper_vfp_mins(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x5a: /* FADDP */
g_assert_not_reached();
}
@@ -11104,7 +11151,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
switch (fpopcode) {
case 0x58: /* FMAXNMP */
- case 0x5a: /* FADDP */
case 0x5e: /* FMAXP */
case 0x78: /* FMINNMP */
case 0x7e: /* FMINP */
@@ -11149,6 +11195,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x3f: /* FRSQRTS */
+ case 0x5a: /* FADDP */
case 0x5b: /* FMUL */
case 0x5c: /* FCMGE */
case 0x5d: /* FACGE */
@@ -11496,7 +11543,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
switch (fpopcode) {
case 0x10: /* FMAXNMP */
- case 0x12: /* FADDP */
case 0x16: /* FMAXP */
case 0x18: /* FMINNMP */
case 0x1e: /* FMINP */
@@ -11515,6 +11561,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0xf: /* FRSQRTS */
+ case 0x12: /* FADDP */
case 0x13: /* FMUL */
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
@@ -11556,9 +11603,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_maxnumh(tcg_res[pass], tcg_op1, tcg_op2,
fpst);
break;
- case 0x12: /* FADDP */
- gen_helper_advsimd_addh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
case 0x16: /* FMAXP */
gen_helper_advsimd_maxh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
@@ -11570,6 +11614,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_minh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x12: /* FADDP */
g_assert_not_reached();
}
}
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index e9d7922f30..28989c7d7a 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -2237,6 +2237,29 @@ DO_NEON_PAIRWISE(neon_pmin, min)
#undef DO_NEON_PAIRWISE
+#define DO_3OP_PAIR(NAME, FUNC, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
+{ \
+ ARMVectorReg scratch; \
+ intptr_t oprsz = simd_oprsz(desc); \
+ intptr_t half = oprsz / sizeof(TYPE) / 2; \
+ TYPE *d = vd, *n = vn, *m = vm; \
+ if (unlikely(d == m)) { \
+ m = memcpy(&scratch, m, oprsz); \
+ } \
+ for (intptr_t i = 0; i < half; ++i) { \
+ d[H(i)] = FUNC(n[H(i * 2)], n[H(i * 2 + 1)], stat); \
+ } \
+ for (intptr_t i = 0; i < half; ++i) { \
+ d[H(i + half)] = FUNC(m[H(i * 2)], m[H(i * 2 + 1)], stat); \
+ } \
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
+}
+
+DO_3OP_PAIR(gvec_faddp_h, float16_add, float16, H2)
+DO_3OP_PAIR(gvec_faddp_s, float32_add, float32, H4)
+DO_3OP_PAIR(gvec_faddp_d, float64_add, float64, )
+
#define DO_VCVT_FIXED(NAME, FUNC, TYPE) \
void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc) \
{ \
--
2.34.1
^ permalink raw reply related [flat|nested] 114+ messages in thread
* [PATCH v2 29/67] target/arm: Convert FMAXP, FMINP, FMAXNMP, FMINNMP to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (27 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 28/67] target/arm: Convert FADDP " Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-24 23:20 ` [PATCH v2 30/67] target/arm: Use gvec for neon faddp, fmaxp, fminp Richard Henderson
` (38 subsequent siblings)
67 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
These are the last instructions within disas_simd_three_reg_same_fp16,
so remove it.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 16 ++
target/arm/tcg/a64.decode | 24 +++
target/arm/tcg/translate-a64.c | 296 ++++++---------------------------
target/arm/tcg/vec_helper.c | 16 ++
4 files changed, 107 insertions(+), 245 deletions(-)
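
The scalar pairwise forms can simply reuse the existing FPScalar tables,
since reducing the two elements of Vn is just the corresponding
two-operand operation. For illustration, semantics per the Arm ARM
(register choices arbitrary):

    faddp   d0, v1.2d    /* d0 = v1.d[0] + v1.d[1] */
    fmaxp   s0, v2.2s    /* s0 = max(v2.s[0], v2.s[1]) */
    fminnmp h0, v3.2h    /* h0 = minNum(v3.h[0], v3.h[1]); needs FEAT_FP16 */

Note that FMAXNMP/FMINNMP follow the IEEE 754 maxNum/minNum rules, so a
quiet NaN in one element loses to a number in the other.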
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 8441b49d1f..3268477329 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1052,6 +1052,22 @@ DEF_HELPER_FLAGS_5(gvec_faddp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_faddp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_faddp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fminp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fmaxnump_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxnump_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxnump_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fminnump_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminnump_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminnump_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
#ifdef TARGET_AARCH64
#include "tcg/helper-a64.h"
#include "tcg/helper-sve.h"
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index d2a02365e1..43557fdccc 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -746,6 +746,18 @@ FRSQRTS_s 0101 1110 1.1 ..... 11111 1 ..... ..... @rrr_sd
FADDP_s 0101 1110 0011 0000 1101 10 ..... ..... @rr_h
FADDP_s 0111 1110 0.11 0000 1101 10 ..... ..... @rr_sd
+FMAXP_s 0101 1110 0011 0000 1111 10 ..... ..... @rr_h
+FMAXP_s 0111 1110 0.11 0000 1111 10 ..... ..... @rr_sd
+
+FMINP_s 0101 1110 1011 0000 1111 10 ..... ..... @rr_h
+FMINP_s 0111 1110 1.11 0000 1111 10 ..... ..... @rr_sd
+
+FMAXNMP_s 0101 1110 0011 0000 1100 10 ..... ..... @rr_h
+FMAXNMP_s 0111 1110 0.11 0000 1100 10 ..... ..... @rr_sd
+
+FMINNMP_s 0101 1110 1011 0000 1100 10 ..... ..... @rr_h
+FMINNMP_s 0111 1110 1.11 0000 1100 10 ..... ..... @rr_sd
+
### Advanced SIMD three same
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -808,6 +820,18 @@ FRSQRTS_v 0.00 1110 1.1 ..... 11111 1 ..... ..... @qrrr_sd
FADDP_v 0.10 1110 010 ..... 00010 1 ..... ..... @qrrr_h
FADDP_v 0.10 1110 0.1 ..... 11010 1 ..... ..... @qrrr_sd
+FMAXP_v 0.10 1110 010 ..... 00110 1 ..... ..... @qrrr_h
+FMAXP_v 0.10 1110 0.1 ..... 11110 1 ..... ..... @qrrr_sd
+
+FMINP_v 0.10 1110 110 ..... 00110 1 ..... ..... @qrrr_h
+FMINP_v 0.10 1110 1.1 ..... 11110 1 ..... ..... @qrrr_sd
+
+FMAXNMP_v 0.10 1110 010 ..... 00000 1 ..... ..... @qrrr_h
+FMAXNMP_v 0.10 1110 0.1 ..... 11000 1 ..... ..... @qrrr_sd
+
+FMINNMP_v 0.10 1110 110 ..... 00000 1 ..... ..... @qrrr_h
+FMINNMP_v 0.10 1110 1.1 ..... 11000 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 78949ab34f..07415bd285 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5217,6 +5217,34 @@ static gen_helper_gvec_3_ptr * const f_vector_faddp[3] = {
};
TRANS(FADDP_v, do_fp3_vector, a, f_vector_faddp)
+static gen_helper_gvec_3_ptr * const f_vector_fmaxp[3] = {
+ gen_helper_gvec_fmaxp_h,
+ gen_helper_gvec_fmaxp_s,
+ gen_helper_gvec_fmaxp_d,
+};
+TRANS(FMAXP_v, do_fp3_vector, a, f_vector_fmaxp)
+
+static gen_helper_gvec_3_ptr * const f_vector_fminp[3] = {
+ gen_helper_gvec_fminp_h,
+ gen_helper_gvec_fminp_s,
+ gen_helper_gvec_fminp_d,
+};
+TRANS(FMINP_v, do_fp3_vector, a, f_vector_fminp)
+
+static gen_helper_gvec_3_ptr * const f_vector_fmaxnmp[3] = {
+ gen_helper_gvec_fmaxnump_h,
+ gen_helper_gvec_fmaxnump_s,
+ gen_helper_gvec_fmaxnump_d,
+};
+TRANS(FMAXNMP_v, do_fp3_vector, a, f_vector_fmaxnmp)
+
+static gen_helper_gvec_3_ptr * const f_vector_fminnmp[3] = {
+ gen_helper_gvec_fminnump_h,
+ gen_helper_gvec_fminnump_s,
+ gen_helper_gvec_fminnump_d,
+};
+TRANS(FMINNMP_v, do_fp3_vector, a, f_vector_fminnmp)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -5452,6 +5480,10 @@ static bool do_fp3_scalar_pair(DisasContext *s, arg_rr_e *a, const FPScalar *f)
}
TRANS(FADDP_s, do_fp3_scalar_pair, a, &f_scalar_fadd)
+TRANS(FMAXP_s, do_fp3_scalar_pair, a, &f_scalar_fmax)
+TRANS(FMINP_s, do_fp3_scalar_pair, a, &f_scalar_fmin)
+TRANS(FMAXNMP_s, do_fp3_scalar_pair, a, &f_scalar_fmaxnm)
+TRANS(FMINNMP_s, do_fp3_scalar_pair, a, &f_scalar_fminnm)
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
@@ -8393,7 +8425,6 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
int opcode = extract32(insn, 12, 5);
int rn = extract32(insn, 5, 5);
int rd = extract32(insn, 0, 5);
- TCGv_ptr fpst;
/* For some ops (the FP ones), size[1] is part of the encoding.
* For ADDP strictly it is not but size[1] is always 1 for valid
@@ -8410,33 +8441,13 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
if (!fp_access_check(s)) {
return;
}
-
- fpst = NULL;
break;
+ default:
case 0xc: /* FMAXNMP */
+ case 0xd: /* FADDP */
case 0xf: /* FMAXP */
case 0x2c: /* FMINNMP */
case 0x2f: /* FMINP */
- /* FP op, size[0] is 32 or 64 bit*/
- if (!u) {
- if ((size & 1) || !dc_isar_feature(aa64_fp16, s)) {
- unallocated_encoding(s);
- return;
- } else {
- size = MO_16;
- }
- } else {
- size = extract32(size, 0, 1) ? MO_64 : MO_32;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
- break;
- default:
- case 0xd: /* FADDP */
unallocated_encoding(s);
return;
}
@@ -8453,71 +8464,18 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
case 0x3b: /* ADDP */
tcg_gen_add_i64(tcg_res, tcg_op1, tcg_op2);
break;
- case 0xc: /* FMAXNMP */
- gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0xf: /* FMAXP */
- gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2c: /* FMINNMP */
- gen_helper_vfp_minnumd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2f: /* FMINP */
- gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
+ case 0xc: /* FMAXNMP */
case 0xd: /* FADDP */
+ case 0xf: /* FMAXP */
+ case 0x2c: /* FMINNMP */
+ case 0x2f: /* FMINP */
g_assert_not_reached();
}
write_fp_dreg(s, rd, tcg_res);
} else {
- TCGv_i32 tcg_op1 = tcg_temp_new_i32();
- TCGv_i32 tcg_op2 = tcg_temp_new_i32();
- TCGv_i32 tcg_res = tcg_temp_new_i32();
-
- read_vec_element_i32(s, tcg_op1, rn, 0, size);
- read_vec_element_i32(s, tcg_op2, rn, 1, size);
-
- if (size == MO_16) {
- switch (opcode) {
- case 0xc: /* FMAXNMP */
- gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0xf: /* FMAXP */
- gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2c: /* FMINNMP */
- gen_helper_advsimd_minnumh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2f: /* FMINP */
- gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0xd: /* FADDP */
- g_assert_not_reached();
- }
- } else {
- switch (opcode) {
- case 0xc: /* FMAXNMP */
- gen_helper_vfp_maxnums(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0xf: /* FMAXP */
- gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2c: /* FMINNMP */
- gen_helper_vfp_minnums(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2f: /* FMINP */
- gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0xd: /* FADDP */
- g_assert_not_reached();
- }
- }
-
- write_fp_sreg(s, rd, tcg_res);
+ g_assert_not_reached();
}
}
@@ -10997,16 +10955,8 @@ static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
int size, int rn, int rm, int rd)
{
- TCGv_ptr fpst;
int pass;
- /* Floating point operations need fpst */
- if (opcode >= 0x58) {
- fpst = fpstatus_ptr(FPST_FPCR);
- } else {
- fpst = NULL;
- }
-
if (!fp_access_check(s)) {
return;
}
@@ -11030,20 +10980,12 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
case 0x17: /* ADDP */
tcg_gen_add_i64(tcg_res[pass], tcg_op1, tcg_op2);
break;
- case 0x58: /* FMAXNMP */
- gen_helper_vfp_maxnumd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x5e: /* FMAXP */
- gen_helper_vfp_maxd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x78: /* FMINNMP */
- gen_helper_vfp_minnumd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x7e: /* FMINP */
- gen_helper_vfp_mind(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
default:
+ case 0x58: /* FMAXNMP */
case 0x5a: /* FADDP */
+ case 0x5e: /* FMAXP */
+ case 0x78: /* FMINNMP */
+ case 0x7e: /* FMINP */
g_assert_not_reached();
}
}
@@ -11097,21 +11039,12 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
genfn = fns[size][u];
break;
}
- /* The FP operations are all on single floats (32 bit) */
- case 0x58: /* FMAXNMP */
- gen_helper_vfp_maxnums(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x5e: /* FMAXP */
- gen_helper_vfp_maxs(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x78: /* FMINNMP */
- gen_helper_vfp_minnums(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x7e: /* FMINP */
- gen_helper_vfp_mins(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
default:
+ case 0x58: /* FMAXNMP */
case 0x5a: /* FADDP */
+ case 0x5e: /* FMAXP */
+ case 0x78: /* FMINNMP */
+ case 0x7e: /* FMINP */
g_assert_not_reached();
}
@@ -11150,18 +11083,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
}
switch (fpopcode) {
- case 0x58: /* FMAXNMP */
- case 0x5e: /* FMAXP */
- case 0x78: /* FMINNMP */
- case 0x7e: /* FMINP */
- if (size && !is_q) {
- unallocated_encoding(s);
- return;
- }
- handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_32,
- rn, rm, rd);
- return;
-
case 0x1d: /* FMLAL */
case 0x3d: /* FMLSL */
case 0x59: /* FMLAL2 */
@@ -11195,14 +11116,18 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x3f: /* FRSQRTS */
+ case 0x58: /* FMAXNMP */
case 0x5a: /* FADDP */
case 0x5b: /* FMUL */
case 0x5c: /* FCMGE */
case 0x5d: /* FACGE */
+ case 0x5e: /* FMAXP */
case 0x5f: /* FDIV */
+ case 0x78: /* FMINNMP */
case 0x7a: /* FABD */
case 0x7d: /* FACGT */
case 0x7c: /* FCMGT */
+ case 0x7e: /* FMINP */
unallocated_encoding(s);
return;
}
@@ -11511,124 +11436,6 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
}
}
-/*
- * Advanced SIMD three same (ARMv8.2 FP16 variants)
- *
- * 31 30 29 28 24 23 22 21 20 16 15 14 13 11 10 9 5 4 0
- * +---+---+---+-----------+---------+------+-----+--------+---+------+------+
- * | 0 | Q | U | 0 1 1 1 0 | a | 1 0 | Rm | 0 0 | opcode | 1 | Rn | Rd |
- * +---+---+---+-----------+---------+------+-----+--------+---+------+------+
- *
- * This includes FMULX, FCMEQ (register), FRECPS, FRSQRTS, FCMGE
- * (register), FACGE, FABD, FCMGT (register) and FACGT.
- *
- */
-static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
-{
- int opcode = extract32(insn, 11, 3);
- int u = extract32(insn, 29, 1);
- int a = extract32(insn, 23, 1);
- int is_q = extract32(insn, 30, 1);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- /*
- * For these floating point ops, the U, a and opcode bits
- * together indicate the operation.
- */
- int fpopcode = opcode | (a << 3) | (u << 4);
- bool pairwise;
- TCGv_ptr fpst;
- int pass;
-
- switch (fpopcode) {
- case 0x10: /* FMAXNMP */
- case 0x16: /* FMAXP */
- case 0x18: /* FMINNMP */
- case 0x1e: /* FMINP */
- pairwise = true;
- break;
- default:
- case 0x0: /* FMAXNM */
- case 0x1: /* FMLA */
- case 0x2: /* FADD */
- case 0x3: /* FMULX */
- case 0x4: /* FCMEQ */
- case 0x6: /* FMAX */
- case 0x7: /* FRECPS */
- case 0x8: /* FMINNM */
- case 0x9: /* FMLS */
- case 0xa: /* FSUB */
- case 0xe: /* FMIN */
- case 0xf: /* FRSQRTS */
- case 0x12: /* FADDP */
- case 0x13: /* FMUL */
- case 0x14: /* FCMGE */
- case 0x15: /* FACGE */
- case 0x17: /* FDIV */
- case 0x1a: /* FABD */
- case 0x1c: /* FCMGT */
- case 0x1d: /* FACGT */
- unallocated_encoding(s);
- return;
- }
-
- if (!dc_isar_feature(aa64_fp16, s)) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- fpst = fpstatus_ptr(FPST_FPCR_F16);
-
- if (pairwise) {
- int maxpass = is_q ? 8 : 4;
- TCGv_i32 tcg_op1 = tcg_temp_new_i32();
- TCGv_i32 tcg_op2 = tcg_temp_new_i32();
- TCGv_i32 tcg_res[8];
-
- for (pass = 0; pass < maxpass; pass++) {
- int passreg = pass < (maxpass / 2) ? rn : rm;
- int passelt = (pass << 1) & (maxpass - 1);
-
- read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_16);
- read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_16);
- tcg_res[pass] = tcg_temp_new_i32();
-
- switch (fpopcode) {
- case 0x10: /* FMAXNMP */
- gen_helper_advsimd_maxnumh(tcg_res[pass], tcg_op1, tcg_op2,
- fpst);
- break;
- case 0x16: /* FMAXP */
- gen_helper_advsimd_maxh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x18: /* FMINNMP */
- gen_helper_advsimd_minnumh(tcg_res[pass], tcg_op1, tcg_op2,
- fpst);
- break;
- case 0x1e: /* FMINP */
- gen_helper_advsimd_minh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0x12: /* FADDP */
- g_assert_not_reached();
- }
- }
-
- for (pass = 0; pass < maxpass; pass++) {
- write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_16);
- }
- } else {
- g_assert_not_reached();
- }
-
- clear_vec_high(s, is_q, rd);
-}
-
/* AdvSIMD three same extra
* 31 30 29 28 24 23 22 21 20 16 15 14 11 10 9 5 4 0
* +---+---+---+-----------+------+---+------+---+--------+---+----+----+
@@ -13391,7 +13198,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e300800, 0xdf3e0c00, disas_simd_scalar_pairwise },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
{ 0x00000000, 0x00000000, NULL }
};
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 28989c7d7a..79e1fdcaa9 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -2260,6 +2260,22 @@ DO_3OP_PAIR(gvec_faddp_h, float16_add, float16, H2)
DO_3OP_PAIR(gvec_faddp_s, float32_add, float32, H4)
DO_3OP_PAIR(gvec_faddp_d, float64_add, float64, )
+DO_3OP_PAIR(gvec_fmaxp_h, float16_max, float16, H2)
+DO_3OP_PAIR(gvec_fmaxp_s, float32_max, float32, H4)
+DO_3OP_PAIR(gvec_fmaxp_d, float64_max, float64, )
+
+DO_3OP_PAIR(gvec_fminp_h, float16_min, float16, H2)
+DO_3OP_PAIR(gvec_fminp_s, float32_min, float32, H4)
+DO_3OP_PAIR(gvec_fminp_d, float64_min, float64, )
+
+DO_3OP_PAIR(gvec_fmaxnump_h, float16_maxnum, float16, H2)
+DO_3OP_PAIR(gvec_fmaxnump_s, float32_maxnum, float32, H4)
+DO_3OP_PAIR(gvec_fmaxnump_d, float64_maxnum, float64, )
+
+DO_3OP_PAIR(gvec_fminnump_h, float16_minnum, float16, H2)
+DO_3OP_PAIR(gvec_fminnump_s, float32_minnum, float32, H4)
+DO_3OP_PAIR(gvec_fminnump_d, float64_minnum, float64, )
+
#define DO_VCVT_FIXED(NAME, FUNC, TYPE) \
void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc) \
{ \
--
2.34.1
* [PATCH v2 30/67] target/arm: Use gvec for neon faddp, fmaxp, fminp
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (28 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 29/67] target/arm: Convert FMAXP, FMINP, FMAXNMP, FMINNMP " Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-24 23:20 ` [PATCH v2 31/67] target/arm: Convert ADDP to decodetree Richard Henderson
` (37 subsequent siblings)
67 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 7 -----
target/arm/tcg/translate-neon.c | 55 ++-------------------------------
target/arm/tcg/vec_helper.c | 45 ---------------------------
3 files changed, 3 insertions(+), 104 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 3268477329..065460ea80 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -650,13 +650,6 @@ DEF_HELPER_FLAGS_6(gvec_fcmlas_idx, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_6(gvec_fcmlad, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_paddh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_pmaxh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_pminh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_padds, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_pmaxs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_pmins, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-
DEF_HELPER_FLAGS_4(gvec_sstoh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sitos, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ustoh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index 144f18ba22..2326a05a0a 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -1144,6 +1144,9 @@ DO_3S_FP_GVEC(VFMA, gen_helper_gvec_vfma_s, gen_helper_gvec_vfma_h)
DO_3S_FP_GVEC(VFMS, gen_helper_gvec_vfms_s, gen_helper_gvec_vfms_h)
DO_3S_FP_GVEC(VRECPS, gen_helper_gvec_recps_nf_s, gen_helper_gvec_recps_nf_h)
DO_3S_FP_GVEC(VRSQRTS, gen_helper_gvec_rsqrts_nf_s, gen_helper_gvec_rsqrts_nf_h)
+DO_3S_FP_GVEC(VPADD, gen_helper_gvec_faddp_s, gen_helper_gvec_faddp_h)
+DO_3S_FP_GVEC(VPMAX, gen_helper_gvec_fmaxp_s, gen_helper_gvec_fmaxp_h)
+DO_3S_FP_GVEC(VPMIN, gen_helper_gvec_fminp_s, gen_helper_gvec_fminp_h)
WRAP_FP_GVEC(gen_VMAXNM_fp32_3s, FPST_STD, gen_helper_gvec_fmaxnum_s)
WRAP_FP_GVEC(gen_VMAXNM_fp16_3s, FPST_STD_F16, gen_helper_gvec_fmaxnum_h)
@@ -1180,58 +1183,6 @@ static bool trans_VMINNM_fp_3s(DisasContext *s, arg_3same *a)
return do_3same(s, a, gen_VMINNM_fp32_3s);
}
-static bool do_3same_fp_pair(DisasContext *s, arg_3same *a,
- gen_helper_gvec_3_ptr *fn)
-{
- /* FP pairwise operations */
- TCGv_ptr fpstatus;
-
- if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
- return false;
- }
-
- /* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_simd_r32, s) &&
- ((a->vd | a->vn | a->vm) & 0x10)) {
- return false;
- }
-
- if (!vfp_access_check(s)) {
- return true;
- }
-
- assert(a->q == 0); /* enforced by decode patterns */
-
-
- fpstatus = fpstatus_ptr(a->size == MO_16 ? FPST_STD_F16 : FPST_STD);
- tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
- vfp_reg_offset(1, a->vn),
- vfp_reg_offset(1, a->vm),
- fpstatus, 8, 8, 0, fn);
-
- return true;
-}
-
-/*
- * For all the functions using this macro, size == 1 means fp16,
- * which is an architecture extension we don't implement yet.
- */
-#define DO_3S_FP_PAIR(INSN,FUNC) \
- static bool trans_##INSN##_fp_3s(DisasContext *s, arg_3same *a) \
- { \
- if (a->size == MO_16) { \
- if (!dc_isar_feature(aa32_fp16_arith, s)) { \
- return false; \
- } \
- return do_3same_fp_pair(s, a, FUNC##h); \
- } \
- return do_3same_fp_pair(s, a, FUNC##s); \
- }
-
-DO_3S_FP_PAIR(VPADD, gen_helper_neon_padd)
-DO_3S_FP_PAIR(VPMAX, gen_helper_neon_pmax)
-DO_3S_FP_PAIR(VPMIN, gen_helper_neon_pmin)
-
static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
{
/* Handle a 2-reg-shift insn which can be vectorized. */
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 79e1fdcaa9..26a9ca9c14 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -2192,51 +2192,6 @@ DO_ABA(gvec_uaba_d, uint64_t)
#undef DO_ABA
-#define DO_NEON_PAIRWISE(NAME, OP) \
- void HELPER(NAME##s)(void *vd, void *vn, void *vm, \
- void *stat, uint32_t oprsz) \
- { \
- float_status *fpst = stat; \
- float32 *d = vd; \
- float32 *n = vn; \
- float32 *m = vm; \
- float32 r0, r1; \
- \
- /* Read all inputs before writing outputs in case vm == vd */ \
- r0 = float32_##OP(n[H4(0)], n[H4(1)], fpst); \
- r1 = float32_##OP(m[H4(0)], m[H4(1)], fpst); \
- \
- d[H4(0)] = r0; \
- d[H4(1)] = r1; \
- } \
- \
- void HELPER(NAME##h)(void *vd, void *vn, void *vm, \
- void *stat, uint32_t oprsz) \
- { \
- float_status *fpst = stat; \
- float16 *d = vd; \
- float16 *n = vn; \
- float16 *m = vm; \
- float16 r0, r1, r2, r3; \
- \
- /* Read all inputs before writing outputs in case vm == vd */ \
- r0 = float16_##OP(n[H2(0)], n[H2(1)], fpst); \
- r1 = float16_##OP(n[H2(2)], n[H2(3)], fpst); \
- r2 = float16_##OP(m[H2(0)], m[H2(1)], fpst); \
- r3 = float16_##OP(m[H2(2)], m[H2(3)], fpst); \
- \
- d[H2(0)] = r0; \
- d[H2(1)] = r1; \
- d[H2(2)] = r2; \
- d[H2(3)] = r3; \
- }
-
-DO_NEON_PAIRWISE(neon_padd, add)
-DO_NEON_PAIRWISE(neon_pmax, max)
-DO_NEON_PAIRWISE(neon_pmin, min)
-
-#undef DO_NEON_PAIRWISE
-
#define DO_3OP_PAIR(NAME, FUNC, TYPE, H) \
void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
{ \
--
2.34.1
* [PATCH v2 31/67] target/arm: Convert ADDP to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (29 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 30/67] target/arm: Use gvec for neon faddp, fmaxp, fminp Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-24 23:20 ` [PATCH v2 32/67] target/arm: Use gvec for neon padd Richard Henderson
` (36 subsequent siblings)
67 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 5 ++
target/arm/tcg/translate.h | 3 +
target/arm/tcg/a64.decode | 6 ++
target/arm/tcg/gengvec.c | 12 ++++
target/arm/tcg/translate-a64.c | 128 ++++++---------------------------
target/arm/tcg/vec_helper.c | 30 ++++++++
6 files changed, 77 insertions(+), 107 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 065460ea80..d3579a101f 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1061,6 +1061,11 @@ DEF_HELPER_FLAGS_5(gvec_fminnump_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i
DEF_HELPER_FLAGS_5(gvec_fminnump_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fminnump_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_addp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_addp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_addp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_addp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
#ifdef TARGET_AARCH64
#include "tcg/helper-a64.h"
#include "tcg/helper-sve.h"
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index b05a9eb668..04771f483b 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -514,6 +514,9 @@ void gen_gvec_saba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_addp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+
/*
* Forward to the isar_feature_* tests given a DisasContext pointer.
*/
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 43557fdccc..84f5bcc0e0 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -38,6 +38,7 @@
&qrrrr_e q rd rn rm ra esz
@rr_h ........ ... ..... ...... rn:5 rd:5 &rr_e esz=1
+@rr_d ........ ... ..... ...... rn:5 rd:5 &rr_e esz=3
@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
@@ -56,6 +57,7 @@
@qrrr_h . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=1
@qrrr_sd . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=%esz_sd
+@qrrr_e . q:1 ...... esz:2 . rm:5 ...... rn:5 rd:5 &qrrr_e
@qrrx_h . q:1 .. .... .. .. rm:4 .... . . rn:5 rd:5 \
&qrrx_e esz=1 idx=%hlm
@@ -758,6 +760,8 @@ FMAXNMP_s 0111 1110 0.11 0000 1100 10 ..... ..... @rr_sd
FMINNMP_s 0101 1110 1011 0000 1100 10 ..... ..... @rr_h
FMINNMP_s 0111 1110 1.11 0000 1100 10 ..... ..... @rr_sd
+ADDP_s 0101 1110 1111 0001 1011 10 ..... ..... @rr_d
+
### Advanced SIMD three same
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -832,6 +836,8 @@ FMAXNMP_v 0.10 1110 0.1 ..... 11000 1 ..... ..... @qrrr_sd
FMINNMP_v 0.10 1110 110 ..... 00000 1 ..... ..... @qrrr_h
FMINNMP_v 0.10 1110 1.1 ..... 11000 1 ..... ..... @qrrr_sd
+ADDP_v 0.00 1110 ..1 ..... 10111 1 ..... ..... @qrrr_e
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index 7a1856253f..f010dd5a0e 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -1610,3 +1610,15 @@ void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
};
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
+
+void gen_gvec_addp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ gen_helper_gvec_addp_b,
+ gen_helper_gvec_addp_h,
+ gen_helper_gvec_addp_s,
+ gen_helper_gvec_addp_d,
+ };
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 07415bd285..b8add91112 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5245,6 +5245,8 @@ static gen_helper_gvec_3_ptr * const f_vector_fminnmp[3] = {
};
TRANS(FMINNMP_v, do_fp3_vector, a, f_vector_fminnmp)
+TRANS(ADDP_v, do_gvec_fn3, a, gen_gvec_addp)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -5485,6 +5487,20 @@ TRANS(FMINP_s, do_fp3_scalar_pair, a, &f_scalar_fmin)
TRANS(FMAXNMP_s, do_fp3_scalar_pair, a, &f_scalar_fmaxnm)
TRANS(FMINNMP_s, do_fp3_scalar_pair, a, &f_scalar_fminnm)
+static bool trans_ADDP_s(DisasContext *s, arg_rr_e *a)
+{
+ if (fp_access_check(s)) {
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ TCGv_i64 t1 = tcg_temp_new_i64();
+
+ read_vec_element(s, t0, a->rn, 0, MO_64);
+ read_vec_element(s, t1, a->rn, 1, MO_64);
+ tcg_gen_add_i64(t0, t0, t1);
+ write_fp_dreg(s, a->rd, t0);
+ }
+ return true;
+}
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -8412,73 +8428,6 @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
}
}
-/* AdvSIMD scalar pairwise
- * 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
- * +-----+---+-----------+------+-----------+--------+-----+------+------+
- * | 0 1 | U | 1 1 1 1 0 | size | 1 1 0 0 0 | opcode | 1 0 | Rn | Rd |
- * +-----+---+-----------+------+-----------+--------+-----+------+------+
- */
-static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
-{
- int u = extract32(insn, 29, 1);
- int size = extract32(insn, 22, 2);
- int opcode = extract32(insn, 12, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
-
- /* For some ops (the FP ones), size[1] is part of the encoding.
- * For ADDP strictly it is not but size[1] is always 1 for valid
- * encodings.
- */
- opcode |= (extract32(size, 1, 1) << 5);
-
- switch (opcode) {
- case 0x3b: /* ADDP */
- if (u || size != 3) {
- unallocated_encoding(s);
- return;
- }
- if (!fp_access_check(s)) {
- return;
- }
- break;
- default:
- case 0xc: /* FMAXNMP */
- case 0xd: /* FADDP */
- case 0xf: /* FMAXP */
- case 0x2c: /* FMINNMP */
- case 0x2f: /* FMINP */
- unallocated_encoding(s);
- return;
- }
-
- if (size == MO_64) {
- TCGv_i64 tcg_op1 = tcg_temp_new_i64();
- TCGv_i64 tcg_op2 = tcg_temp_new_i64();
- TCGv_i64 tcg_res = tcg_temp_new_i64();
-
- read_vec_element(s, tcg_op1, rn, 0, MO_64);
- read_vec_element(s, tcg_op2, rn, 1, MO_64);
-
- switch (opcode) {
- case 0x3b: /* ADDP */
- tcg_gen_add_i64(tcg_res, tcg_op1, tcg_op2);
- break;
- default:
- case 0xc: /* FMAXNMP */
- case 0xd: /* FADDP */
- case 0xf: /* FMAXP */
- case 0x2c: /* FMINNMP */
- case 0x2f: /* FMINP */
- g_assert_not_reached();
- }
-
- write_fp_dreg(s, rd, tcg_res);
- } else {
- g_assert_not_reached();
- }
-}
-
/*
* Common SSHR[RA]/USHR[RA] - Shift right (optional rounding/accumulate)
*
@@ -10965,34 +10914,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
* adjacent elements being operated on to produce an element in the result.
*/
if (size == 3) {
- TCGv_i64 tcg_res[2];
-
- for (pass = 0; pass < 2; pass++) {
- TCGv_i64 tcg_op1 = tcg_temp_new_i64();
- TCGv_i64 tcg_op2 = tcg_temp_new_i64();
- int passreg = (pass == 0) ? rn : rm;
-
- read_vec_element(s, tcg_op1, passreg, 0, MO_64);
- read_vec_element(s, tcg_op2, passreg, 1, MO_64);
- tcg_res[pass] = tcg_temp_new_i64();
-
- switch (opcode) {
- case 0x17: /* ADDP */
- tcg_gen_add_i64(tcg_res[pass], tcg_op1, tcg_op2);
- break;
- default:
- case 0x58: /* FMAXNMP */
- case 0x5a: /* FADDP */
- case 0x5e: /* FMAXP */
- case 0x78: /* FMINNMP */
- case 0x7e: /* FMINP */
- g_assert_not_reached();
- }
- }
-
- for (pass = 0; pass < 2; pass++) {
- write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
- }
+ g_assert_not_reached();
} else {
int maxpass = is_q ? 4 : 2;
TCGv_i32 tcg_res[4];
@@ -11009,16 +10931,6 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
tcg_res[pass] = tcg_temp_new_i32();
switch (opcode) {
- case 0x17: /* ADDP */
- {
- static NeonGenTwoOpFn * const fns[3] = {
- gen_helper_neon_padd_u8,
- gen_helper_neon_padd_u16,
- tcg_gen_add_i32,
- };
- genfn = fns[size];
- break;
- }
case 0x14: /* SMAXP, UMAXP */
{
static NeonGenTwoOpFn * const fns[3][2] = {
@@ -11040,6 +10952,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
break;
}
default:
+ case 0x17: /* ADDP */
case 0x58: /* FMAXNMP */
case 0x5a: /* FADDP */
case 0x5e: /* FMAXP */
@@ -11401,7 +11314,6 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
case 0x3: /* logic ops */
disas_simd_3same_logic(s, insn);
break;
- case 0x17: /* ADDP */
case 0x14: /* SMAXP, UMAXP */
case 0x15: /* SMINP, UMINP */
{
@@ -11433,6 +11345,9 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
default:
disas_simd_3same_int(s, insn);
break;
+ case 0x17: /* ADDP */
+ unallocated_encoding(s);
+ break;
}
}
@@ -13195,7 +13110,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e008400, 0xdf208400, disas_simd_scalar_three_reg_same_extra },
{ 0x5e200000, 0xdf200c00, disas_simd_scalar_three_reg_diff },
{ 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
- { 0x5e300800, 0xdf3e0c00, disas_simd_scalar_pairwise },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 26a9ca9c14..5069899415 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -2231,6 +2231,36 @@ DO_3OP_PAIR(gvec_fminnump_h, float16_minnum, float16, H2)
DO_3OP_PAIR(gvec_fminnump_s, float32_minnum, float32, H4)
DO_3OP_PAIR(gvec_fminnump_d, float64_minnum, float64, )
+#undef DO_3OP_PAIR
+
+#define DO_3OP_PAIR(NAME, FUNC, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
+{ \
+ ARMVectorReg scratch; \
+ intptr_t oprsz = simd_oprsz(desc); \
+ intptr_t half = oprsz / sizeof(TYPE) / 2; \
+ TYPE *d = vd, *n = vn, *m = vm; \
+ if (unlikely(d == m)) { \
+ m = memcpy(&scratch, m, oprsz); \
+ } \
+ for (intptr_t i = 0; i < half; ++i) { \
+ d[H(i)] = FUNC(n[H(i * 2)], n[H(i * 2 + 1)]); \
+ } \
+ for (intptr_t i = 0; i < half; ++i) { \
+ d[H(i + half)] = FUNC(m[H(i * 2)], m[H(i * 2 + 1)]); \
+ } \
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
+}
+
+#define ADD(A, B) (A + B)
+DO_3OP_PAIR(gvec_addp_b, ADD, uint8_t, H1)
+DO_3OP_PAIR(gvec_addp_h, ADD, uint16_t, H2)
+DO_3OP_PAIR(gvec_addp_s, ADD, uint32_t, H4)
+DO_3OP_PAIR(gvec_addp_d, ADD, uint64_t, )
+#undef ADD
+
+#undef DO_3OP_PAIR
+
#define DO_VCVT_FIXED(NAME, FUNC, TYPE) \
void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc) \
{ \
--
2.34.1
* [PATCH v2 32/67] target/arm: Use gvec for neon padd
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (30 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 31/67] target/arm: Convert ADDP to decodetree Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-24 23:20 ` [PATCH v2 33/67] target/arm: Convert SMAXP, SMINP, UMAXP, UMINP to decodetree Richard Henderson
` (35 subsequent siblings)
67 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 2 --
target/arm/tcg/neon_helper.c | 5 -----
target/arm/tcg/translate-neon.c | 3 +--
3 files changed, 1 insertion(+), 9 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index d3579a101f..51ed49aa50 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -354,8 +354,6 @@ DEF_HELPER_3(neon_qrshl_s64, i64, env, i64, i64)
DEF_HELPER_2(neon_add_u8, i32, i32, i32)
DEF_HELPER_2(neon_add_u16, i32, i32, i32)
-DEF_HELPER_2(neon_padd_u8, i32, i32, i32)
-DEF_HELPER_2(neon_padd_u16, i32, i32, i32)
DEF_HELPER_2(neon_sub_u8, i32, i32, i32)
DEF_HELPER_2(neon_sub_u16, i32, i32, i32)
DEF_HELPER_2(neon_mul_u8, i32, i32, i32)
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
index bc6c4a54e9..a0b51c8809 100644
--- a/target/arm/tcg/neon_helper.c
+++ b/target/arm/tcg/neon_helper.c
@@ -745,11 +745,6 @@ uint32_t HELPER(neon_add_u16)(uint32_t a, uint32_t b)
return (a + b) ^ mask;
}
-#define NEON_FN(dest, src1, src2) dest = src1 + src2
-NEON_POP(padd_u8, neon_u8, 4)
-NEON_POP(padd_u16, neon_u16, 2)
-#undef NEON_FN
-
#define NEON_FN(dest, src1, src2) dest = src1 - src2
NEON_VOP(sub_u8, neon_u8, 4)
NEON_VOP(sub_u16, neon_u16, 2)
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index 2326a05a0a..6c5a7a98e1 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -830,6 +830,7 @@ DO_3SAME_NO_SZ_3(VABD_S, gen_gvec_sabd)
DO_3SAME_NO_SZ_3(VABA_S, gen_gvec_saba)
DO_3SAME_NO_SZ_3(VABD_U, gen_gvec_uabd)
DO_3SAME_NO_SZ_3(VABA_U, gen_gvec_uaba)
+DO_3SAME_NO_SZ_3(VPADD, gen_gvec_addp)
#define DO_3SAME_CMP(INSN, COND) \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
@@ -1070,13 +1071,11 @@ static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn)
#define gen_helper_neon_pmax_u32 tcg_gen_umax_i32
#define gen_helper_neon_pmin_s32 tcg_gen_smin_i32
#define gen_helper_neon_pmin_u32 tcg_gen_umin_i32
-#define gen_helper_neon_padd_u32 tcg_gen_add_i32
DO_3SAME_PAIR(VPMAX_S, pmax_s)
DO_3SAME_PAIR(VPMIN_S, pmin_s)
DO_3SAME_PAIR(VPMAX_U, pmax_u)
DO_3SAME_PAIR(VPMIN_U, pmin_u)
-DO_3SAME_PAIR(VPADD, padd_u)
#define DO_3SAME_VQDMULH(INSN, FUNC) \
WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##_s16); \
--
2.34.1
* [PATCH v2 33/67] target/arm: Convert SMAXP, SMINP, UMAXP, UMINP to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (31 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 32/67] target/arm: Use gvec for neon padd Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-24 23:20 ` [PATCH v2 34/67] target/arm: Use gvec for neon pmax, pmin Richard Henderson
` (34 subsequent siblings)
67 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
These are the last instructions within handle_simd_3same_pair,
so remove it.
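For context, each pairwise op reduces adjacent element pairs of the
concatenation n:m into the destination, one half of the result per
input, matching the DO_3OP_PAIR layout added earlier in the series.
A scalar model of SMAXP (illustrative only; smaxp_s is not a QEMU
helper, and the real macro copies m to a scratch buffer when d == m):

#include <stdint.h>

/* Pairwise signed max over the concatenation of n and m; each
 * input contributes `half` elements of the result. */
static void smaxp_s(int32_t *d, const int32_t *n, const int32_t *m, int half)
{
    for (int i = 0; i < half; i++) {
        int32_t a = n[2 * i], b = n[2 * i + 1];
        d[i] = a > b ? a : b;
    }
    for (int i = 0; i < half; i++) {
        int32_t a = m[2 * i], b = m[2 * i + 1];
        d[i + half] = a > b ? a : b;
    }
}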
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 16 +++++
target/arm/tcg/translate.h | 8 +++
target/arm/tcg/a64.decode | 4 ++
target/arm/tcg/gengvec.c | 48 +++++++++++++
target/arm/tcg/translate-a64.c | 119 +++++----------------------------
target/arm/tcg/vec_helper.c | 16 +++++
6 files changed, 109 insertions(+), 102 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 51ed49aa50..f830531dd3 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1064,6 +1064,22 @@ DEF_HELPER_FLAGS_4(gvec_addp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_addp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_addp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_smaxp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_smaxp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_smaxp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(gvec_sminp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_sminp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_sminp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(gvec_umaxp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_umaxp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_umaxp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(gvec_uminp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_uminp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_uminp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
#ifdef TARGET_AARCH64
#include "tcg/helper-a64.h"
#include "tcg/helper-sve.h"
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index 04771f483b..3abdbedfe5 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -516,6 +516,14 @@ void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
void gen_gvec_addp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_smaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_sminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_umaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_uminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
/*
* Forward to the isar_feature_* tests given a DisasContext pointer.
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 84f5bcc0e0..22dfe8568d 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -837,6 +837,10 @@ FMINNMP_v 0.10 1110 110 ..... 00000 1 ..... ..... @qrrr_h
FMINNMP_v 0.10 1110 1.1 ..... 11000 1 ..... ..... @qrrr_sd
ADDP_v 0.00 1110 ..1 ..... 10111 1 ..... ..... @qrrr_e
+SMAXP_v 0.00 1110 ..1 ..... 10100 1 ..... ..... @qrrr_e
+SMINP_v 0.00 1110 ..1 ..... 10101 1 ..... ..... @qrrr_e
+UMAXP_v 0.10 1110 ..1 ..... 10100 1 ..... ..... @qrrr_e
+UMINP_v 0.10 1110 ..1 ..... 10101 1 ..... ..... @qrrr_e
### Advanced SIMD scalar x indexed element
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index f010dd5a0e..22c9d17dce 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -1622,3 +1622,51 @@ void gen_gvec_addp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
};
tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
}
+
+void gen_gvec_smaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ gen_helper_gvec_smaxp_b,
+ gen_helper_gvec_smaxp_h,
+ gen_helper_gvec_smaxp_s,
+ };
+ tcg_debug_assert(vece <= MO_32);
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
+}
+
+void gen_gvec_sminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ gen_helper_gvec_sminp_b,
+ gen_helper_gvec_sminp_h,
+ gen_helper_gvec_sminp_s,
+ };
+ tcg_debug_assert(vece <= MO_32);
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
+}
+
+void gen_gvec_umaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ gen_helper_gvec_umaxp_b,
+ gen_helper_gvec_umaxp_h,
+ gen_helper_gvec_umaxp_s,
+ };
+ tcg_debug_assert(vece <= MO_32);
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
+}
+
+void gen_gvec_uminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ gen_helper_gvec_uminp_b,
+ gen_helper_gvec_uminp_h,
+ gen_helper_gvec_uminp_s,
+ };
+ tcg_debug_assert(vece <= MO_32);
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index b8add91112..9fe70a939b 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -1352,6 +1352,17 @@ static bool do_gvec_fn3(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
return true;
}
+static bool do_gvec_fn3_no64(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
+{
+ if (a->esz == MO_64) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_fn3(s, a->q, a->rd, a->rn, a->rm, fn, a->esz);
+ }
+ return true;
+}
+
static bool do_gvec_fn4(DisasContext *s, arg_qrrrr_e *a, GVecGen4Fn *fn)
{
if (!a->q && a->esz == MO_64) {
@@ -5246,6 +5257,10 @@ static gen_helper_gvec_3_ptr * const f_vector_fminnmp[3] = {
TRANS(FMINNMP_v, do_fp3_vector, a, f_vector_fminnmp)
TRANS(ADDP_v, do_gvec_fn3, a, gen_gvec_addp)
+TRANS(SMAXP_v, do_gvec_fn3_no64, a, gen_gvec_smaxp)
+TRANS(SMINP_v, do_gvec_fn3_no64, a, gen_gvec_sminp)
+TRANS(UMAXP_v, do_gvec_fn3_no64, a, gen_gvec_umaxp)
+TRANS(UMINP_v, do_gvec_fn3_no64, a, gen_gvec_uminp)
/*
* Advanced SIMD scalar/vector x indexed element
@@ -10896,84 +10911,6 @@ static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
}
}
-/* Pairwise op subgroup of C3.6.16.
- *
- * This is called directly for float pairwise
- * operations where the opcode and size are calculated differently.
- */
-static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
- int size, int rn, int rm, int rd)
-{
- int pass;
-
- if (!fp_access_check(s)) {
- return;
- }
-
- /* These operations work on the concatenated rm:rn, with each pair of
- * adjacent elements being operated on to produce an element in the result.
- */
- if (size == 3) {
- g_assert_not_reached();
- } else {
- int maxpass = is_q ? 4 : 2;
- TCGv_i32 tcg_res[4];
-
- for (pass = 0; pass < maxpass; pass++) {
- TCGv_i32 tcg_op1 = tcg_temp_new_i32();
- TCGv_i32 tcg_op2 = tcg_temp_new_i32();
- NeonGenTwoOpFn *genfn = NULL;
- int passreg = pass < (maxpass / 2) ? rn : rm;
- int passelt = (is_q && (pass & 1)) ? 2 : 0;
-
- read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_32);
- read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_32);
- tcg_res[pass] = tcg_temp_new_i32();
-
- switch (opcode) {
- case 0x14: /* SMAXP, UMAXP */
- {
- static NeonGenTwoOpFn * const fns[3][2] = {
- { gen_helper_neon_pmax_s8, gen_helper_neon_pmax_u8 },
- { gen_helper_neon_pmax_s16, gen_helper_neon_pmax_u16 },
- { tcg_gen_smax_i32, tcg_gen_umax_i32 },
- };
- genfn = fns[size][u];
- break;
- }
- case 0x15: /* SMINP, UMINP */
- {
- static NeonGenTwoOpFn * const fns[3][2] = {
- { gen_helper_neon_pmin_s8, gen_helper_neon_pmin_u8 },
- { gen_helper_neon_pmin_s16, gen_helper_neon_pmin_u16 },
- { tcg_gen_smin_i32, tcg_gen_umin_i32 },
- };
- genfn = fns[size][u];
- break;
- }
- default:
- case 0x17: /* ADDP */
- case 0x58: /* FMAXNMP */
- case 0x5a: /* FADDP */
- case 0x5e: /* FMAXP */
- case 0x78: /* FMINNMP */
- case 0x7e: /* FMINP */
- g_assert_not_reached();
- }
-
- /* FP ops called directly, otherwise call now */
- if (genfn) {
- genfn(tcg_res[pass], tcg_op1, tcg_op2);
- }
- }
-
- for (pass = 0; pass < maxpass; pass++) {
- write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_32);
- }
- clear_vec_high(s, is_q, rd);
- }
-}
-
/* Floating point op subgroup of C3.6.16. */
static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
{
@@ -11314,30 +11251,6 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
case 0x3: /* logic ops */
disas_simd_3same_logic(s, insn);
break;
- case 0x14: /* SMAXP, UMAXP */
- case 0x15: /* SMINP, UMINP */
- {
- /* Pairwise operations */
- int is_q = extract32(insn, 30, 1);
- int u = extract32(insn, 29, 1);
- int size = extract32(insn, 22, 2);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- if (opcode == 0x17) {
- if (u || (size == 3 && !is_q)) {
- unallocated_encoding(s);
- return;
- }
- } else {
- if (size == 3) {
- unallocated_encoding(s);
- return;
- }
- }
- handle_simd_3same_pair(s, is_q, u, opcode, size, rn, rm, rd);
- break;
- }
case 0x18 ... 0x31:
/* floating point ops, sz[1] and U are part of opcode */
disas_simd_3same_float(s, insn);
@@ -11345,6 +11258,8 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
default:
disas_simd_3same_int(s, insn);
break;
+ case 0x14: /* SMAXP, UMAXP */
+ case 0x15: /* SMINP, UMINP */
case 0x17: /* ADDP */
unallocated_encoding(s);
break;
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 5069899415..56fea14edb 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -2259,6 +2259,22 @@ DO_3OP_PAIR(gvec_addp_s, ADD, uint32_t, H4)
DO_3OP_PAIR(gvec_addp_d, ADD, uint64_t, )
#undef ADD
+DO_3OP_PAIR(gvec_smaxp_b, MAX, int8_t, H1)
+DO_3OP_PAIR(gvec_smaxp_h, MAX, int16_t, H2)
+DO_3OP_PAIR(gvec_smaxp_s, MAX, int32_t, H4)
+
+DO_3OP_PAIR(gvec_umaxp_b, MAX, uint8_t, H1)
+DO_3OP_PAIR(gvec_umaxp_h, MAX, uint16_t, H2)
+DO_3OP_PAIR(gvec_umaxp_s, MAX, uint32_t, H4)
+
+DO_3OP_PAIR(gvec_sminp_b, MIN, int8_t, H1)
+DO_3OP_PAIR(gvec_sminp_h, MIN, int16_t, H2)
+DO_3OP_PAIR(gvec_sminp_s, MIN, int32_t, H4)
+
+DO_3OP_PAIR(gvec_uminp_b, MIN, uint8_t, H1)
+DO_3OP_PAIR(gvec_uminp_h, MIN, uint16_t, H2)
+DO_3OP_PAIR(gvec_uminp_s, MIN, uint32_t, H4)
+
#undef DO_3OP_PAIR
#define DO_VCVT_FIXED(NAME, FUNC, TYPE) \
--
2.34.1
* [PATCH v2 34/67] target/arm: Use gvec for neon pmax, pmin
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (32 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 33/67] target/arm: Convert SMAXP, SMINP, UMAXP, UMINP to decodetree Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-24 23:20 ` [PATCH v2 35/67] target/arm: Convert FMLAL, FMLSL to decodetree Richard Henderson
` (33 subsequent siblings)
67 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/translate-neon.c | 78 ++-------------------------------
1 file changed, 4 insertions(+), 74 deletions(-)
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index 6c5a7a98e1..18b048611b 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -831,6 +831,10 @@ DO_3SAME_NO_SZ_3(VABA_S, gen_gvec_saba)
DO_3SAME_NO_SZ_3(VABD_U, gen_gvec_uabd)
DO_3SAME_NO_SZ_3(VABA_U, gen_gvec_uaba)
DO_3SAME_NO_SZ_3(VPADD, gen_gvec_addp)
+DO_3SAME_NO_SZ_3(VPMAX_S, gen_gvec_smaxp)
+DO_3SAME_NO_SZ_3(VPMIN_S, gen_gvec_sminp)
+DO_3SAME_NO_SZ_3(VPMAX_U, gen_gvec_umaxp)
+DO_3SAME_NO_SZ_3(VPMIN_U, gen_gvec_uminp)
#define DO_3SAME_CMP(INSN, COND) \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
@@ -1003,80 +1007,6 @@ DO_3SAME_32_ENV(VQSHL_U, qshl_u)
DO_3SAME_32_ENV(VQRSHL_S, qrshl_s)
DO_3SAME_32_ENV(VQRSHL_U, qrshl_u)
-static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn)
-{
- /* Operations handled pairwise 32 bits at a time */
- TCGv_i32 tmp, tmp2, tmp3;
-
- if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
- return false;
- }
-
- /* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_simd_r32, s) &&
- ((a->vd | a->vn | a->vm) & 0x10)) {
- return false;
- }
-
- if (a->size == 3) {
- return false;
- }
-
- if (!vfp_access_check(s)) {
- return true;
- }
-
- assert(a->q == 0); /* enforced by decode patterns */
-
- /*
- * Note that we have to be careful not to clobber the source operands
- * in the "vm == vd" case by storing the result of the first pass too
- * early. Since Q is 0 there are always just two passes, so instead
- * of a complicated loop over each pass we just unroll.
- */
- tmp = tcg_temp_new_i32();
- tmp2 = tcg_temp_new_i32();
- tmp3 = tcg_temp_new_i32();
-
- read_neon_element32(tmp, a->vn, 0, MO_32);
- read_neon_element32(tmp2, a->vn, 1, MO_32);
- fn(tmp, tmp, tmp2);
-
- read_neon_element32(tmp3, a->vm, 0, MO_32);
- read_neon_element32(tmp2, a->vm, 1, MO_32);
- fn(tmp3, tmp3, tmp2);
-
- write_neon_element32(tmp, a->vd, 0, MO_32);
- write_neon_element32(tmp3, a->vd, 1, MO_32);
-
- return true;
-}
-
-#define DO_3SAME_PAIR(INSN, func) \
- static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
- { \
- static NeonGenTwoOpFn * const fns[] = { \
- gen_helper_neon_##func##8, \
- gen_helper_neon_##func##16, \
- gen_helper_neon_##func##32, \
- }; \
- if (a->size > 2) { \
- return false; \
- } \
- return do_3same_pair(s, a, fns[a->size]); \
- }
-
-/* 32-bit pairwise ops end up the same as the elementwise versions. */
-#define gen_helper_neon_pmax_s32 tcg_gen_smax_i32
-#define gen_helper_neon_pmax_u32 tcg_gen_umax_i32
-#define gen_helper_neon_pmin_s32 tcg_gen_smin_i32
-#define gen_helper_neon_pmin_u32 tcg_gen_umin_i32
-
-DO_3SAME_PAIR(VPMAX_S, pmax_s)
-DO_3SAME_PAIR(VPMIN_S, pmin_s)
-DO_3SAME_PAIR(VPMAX_U, pmax_u)
-DO_3SAME_PAIR(VPMIN_U, pmin_u)
-
#define DO_3SAME_VQDMULH(INSN, FUNC) \
WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##_s16); \
WRAP_ENV_FN(gen_##INSN##_tramp32, gen_helper_neon_##FUNC##_s32); \
--
2.34.1
* [PATCH v2 35/67] target/arm: Convert FMLAL, FMLSL to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (33 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 34/67] target/arm: Use gvec for neon pmax, pmin Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-24 23:20 ` [PATCH v2 36/67] target/arm: Convert disas_simd_3same_logic " Richard Henderson
` (32 subsequent siblings)
67 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 10 +++
target/arm/tcg/translate-a64.c | 144 ++++++++++-----------------------
2 files changed, 51 insertions(+), 103 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 22dfe8568d..7e993ed345 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -797,6 +797,11 @@ FMLA_v 0.00 1110 0.1 ..... 11001 1 ..... ..... @qrrr_sd
FMLS_v 0.00 1110 110 ..... 00001 1 ..... ..... @qrrr_h
FMLS_v 0.00 1110 1.1 ..... 11001 1 ..... ..... @qrrr_sd
+FMLAL_v 0.00 1110 001 ..... 11101 1 ..... ..... @qrrr_h
+FMLSL_v 0.00 1110 101 ..... 11101 1 ..... ..... @qrrr_h
+FMLAL2_v 0.10 1110 001 ..... 11001 1 ..... ..... @qrrr_h
+FMLSL2_v 0.10 1110 101 ..... 11001 1 ..... ..... @qrrr_h
+
FCMEQ_v 0.00 1110 010 ..... 00100 1 ..... ..... @qrrr_h
FCMEQ_v 0.00 1110 0.1 ..... 11100 1 ..... ..... @qrrr_sd
@@ -877,3 +882,8 @@ FMLS_vi 0.00 1111 11 0 ..... 0101 . 0 ..... ..... @qrrx_d
FMULX_vi 0.10 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
FMULX_vi 0.10 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
FMULX_vi 0.10 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
+
+FMLAL_vi 0.00 1111 10 .. .... 0000 . 0 ..... ..... @qrrx_h
+FMLSL_vi 0.00 1111 10 .. .... 0100 . 0 ..... ..... @qrrx_h
+FMLAL2_vi 0.10 1111 10 .. .... 1000 . 0 ..... ..... @qrrx_h
+FMLSL2_vi 0.10 1111 10 .. .... 1100 . 0 ..... ..... @qrrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 9fe70a939b..a4ff1fd202 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5256,6 +5256,24 @@ static gen_helper_gvec_3_ptr * const f_vector_fminnmp[3] = {
};
TRANS(FMINNMP_v, do_fp3_vector, a, f_vector_fminnmp)
+static bool do_fmlal(DisasContext *s, arg_qrrr_e *a, bool is_s, bool is_2)
+{
+ if (fp_access_check(s)) {
+ int data = (is_2 << 1) | is_s;
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm), tcg_env,
+ a->q ? 16 : 8, vec_full_reg_size(s),
+ data, gen_helper_gvec_fmlal_a64);
+ }
+ return true;
+}
+
+TRANS_FEAT(FMLAL_v, aa64_fhm, do_fmlal, a, false, false)
+TRANS_FEAT(FMLSL_v, aa64_fhm, do_fmlal, a, true, false)
+TRANS_FEAT(FMLAL2_v, aa64_fhm, do_fmlal, a, false, true)
+TRANS_FEAT(FMLSL2_v, aa64_fhm, do_fmlal, a, true, true)
+
TRANS(ADDP_v, do_gvec_fn3, a, gen_gvec_addp)
TRANS(SMAXP_v, do_gvec_fn3_no64, a, gen_gvec_smaxp)
TRANS(SMINP_v, do_gvec_fn3_no64, a, gen_gvec_sminp)
@@ -5447,6 +5465,24 @@ static bool do_fmla_vector_idx(DisasContext *s, arg_qrrx_e *a, bool neg)
TRANS(FMLA_vi, do_fmla_vector_idx, a, false)
TRANS(FMLS_vi, do_fmla_vector_idx, a, true)
+static bool do_fmlal_idx(DisasContext *s, arg_qrrx_e *a, bool is_s, bool is_2)
+{
+ if (fp_access_check(s)) {
+ int data = (a->idx << 2) | (is_2 << 1) | is_s;
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm), tcg_env,
+ a->q ? 16 : 8, vec_full_reg_size(s),
+ data, gen_helper_gvec_fmlal_idx_a64);
+ }
+ return true;
+}
+
+TRANS_FEAT(FMLAL_vi, aa64_fhm, do_fmlal_idx, a, false, false)
+TRANS_FEAT(FMLSL_vi, aa64_fhm, do_fmlal_idx, a, true, false)
+TRANS_FEAT(FMLAL2_vi, aa64_fhm, do_fmlal_idx, a, false, true)
+TRANS_FEAT(FMLSL2_vi, aa64_fhm, do_fmlal_idx, a, true, true)
+
/*
* Advanced SIMD scalar pairwise
*/
@@ -10911,78 +10947,6 @@ static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
}
}
-/* Floating point op subgroup of C3.6.16. */
-static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
-{
- /* For floating point ops, the U, size[1] and opcode bits
- * together indicate the operation. size[0] indicates single
- * or double.
- */
- int fpopcode = extract32(insn, 11, 5)
- | (extract32(insn, 23, 1) << 5)
- | (extract32(insn, 29, 1) << 6);
- int is_q = extract32(insn, 30, 1);
- int size = extract32(insn, 22, 1);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
-
- if (size == 1 && !is_q) {
- unallocated_encoding(s);
- return;
- }
-
- switch (fpopcode) {
- case 0x1d: /* FMLAL */
- case 0x3d: /* FMLSL */
- case 0x59: /* FMLAL2 */
- case 0x79: /* FMLSL2 */
- if (size & 1 || !dc_isar_feature(aa64_fhm, s)) {
- unallocated_encoding(s);
- return;
- }
- if (fp_access_check(s)) {
- int is_s = extract32(insn, 23, 1);
- int is_2 = extract32(insn, 29, 1);
- int data = (is_2 << 1) | is_s;
- tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm), tcg_env,
- is_q ? 16 : 8, vec_full_reg_size(s),
- data, gen_helper_gvec_fmlal_a64);
- }
- return;
-
- default:
- case 0x18: /* FMAXNM */
- case 0x19: /* FMLA */
- case 0x1a: /* FADD */
- case 0x1b: /* FMULX */
- case 0x1c: /* FCMEQ */
- case 0x1e: /* FMAX */
- case 0x1f: /* FRECPS */
- case 0x38: /* FMINNM */
- case 0x39: /* FMLS */
- case 0x3a: /* FSUB */
- case 0x3e: /* FMIN */
- case 0x3f: /* FRSQRTS */
- case 0x58: /* FMAXNMP */
- case 0x5a: /* FADDP */
- case 0x5b: /* FMUL */
- case 0x5c: /* FCMGE */
- case 0x5d: /* FACGE */
- case 0x5e: /* FMAXP */
- case 0x5f: /* FDIV */
- case 0x78: /* FMINNMP */
- case 0x7a: /* FABD */
- case 0x7d: /* FACGT */
- case 0x7c: /* FCMGT */
- case 0x7e: /* FMINP */
- unallocated_encoding(s);
- return;
- }
-}
-
/* Integer op subgroup of C3.6.16. */
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
{
@@ -11251,16 +11215,13 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
case 0x3: /* logic ops */
disas_simd_3same_logic(s, insn);
break;
- case 0x18 ... 0x31:
- /* floating point ops, sz[1] and U are part of opcode */
- disas_simd_3same_float(s, insn);
- break;
default:
disas_simd_3same_int(s, insn);
break;
case 0x14: /* SMAXP, UMAXP */
case 0x15: /* SMINP, UMINP */
case 0x17: /* ADDP */
+ case 0x18 ... 0x31: /* floating point ops */
unallocated_encoding(s);
break;
}
@@ -12526,22 +12487,15 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
is_fp = 2;
break;
- case 0x00: /* FMLAL */
- case 0x04: /* FMLSL */
- case 0x18: /* FMLAL2 */
- case 0x1c: /* FMLSL2 */
- if (is_scalar || size != MO_32 || !dc_isar_feature(aa64_fhm, s)) {
- unallocated_encoding(s);
- return;
- }
- size = MO_16;
- /* is_fp, but we pass tcg_env not fp_status. */
- break;
default:
+ case 0x00: /* FMLAL */
case 0x01: /* FMLA */
+ case 0x04: /* FMLSL */
case 0x05: /* FMLS */
case 0x09: /* FMUL */
+ case 0x18: /* FMLAL2 */
case 0x19: /* FMULX */
+ case 0x1c: /* FMLSL2 */
unallocated_encoding(s);
return;
}
@@ -12660,22 +12614,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
return;
- case 0x00: /* FMLAL */
- case 0x04: /* FMLSL */
- case 0x18: /* FMLAL2 */
- case 0x1c: /* FMLSL2 */
- {
- int is_s = extract32(opcode, 2, 1);
- int is_2 = u;
- int data = (index << 2) | (is_2 << 1) | is_s;
- tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm), tcg_env,
- is_q ? 16 : 8, vec_full_reg_size(s),
- data, gen_helper_gvec_fmlal_idx_a64);
- }
- return;
-
case 0x08: /* MUL */
if (!is_long && !is_scalar) {
static gen_helper_gvec_3 * const fns[3] = {
--
2.34.1
* [PATCH v2 36/67] target/arm: Convert disas_simd_3same_logic to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (34 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 35/67] target/arm: Convert FMLAL, FMLSL to decodetree Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-24 23:20 ` [PATCH v2 37/67] target/arm: Improve vector UQADD, UQSUB, SQADD, SQSUB Richard Henderson
` (31 subsequent siblings)
67 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm, Peter Maydell
This includes AND, ORR, EOR, BIC, ORN, BSL, BIT, BIF.
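For reference, BSL, BIT and BIF share one bitwise-select primitive and
differ only in operand order, as the do_bitsel calls in the patch show.
A scalar sketch (bitsel64 is an illustrative name, not a QEMU function):

#include <stdint.h>

/* Select bits from t where sel is 1, from f where sel is 0.
 * BSL: sel = Rd, t = Rn, f = Rm
 * BIT: sel = Rm, t = Rn, f = Rd
 * BIF: sel = Rm, t = Rd, f = Rn
 */
static inline uint64_t bitsel64(uint64_t sel, uint64_t t, uint64_t f)
{
    return (t & sel) | (f & ~sel);
}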
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 10 +++++
target/arm/tcg/translate-a64.c | 68 ++++++++++------------------------
2 files changed, 29 insertions(+), 49 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 7e993ed345..f48adef5bb 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -55,6 +55,7 @@
@rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3
@rrrr_q1e3 ........ ... rm:5 . ra:5 rn:5 rd:5 &qrrrr_e q=1 esz=3
+@qrrr_b . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=0
@qrrr_h . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=1
@qrrr_sd . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=%esz_sd
@qrrr_e . q:1 ...... esz:2 . rm:5 ...... rn:5 rd:5 &qrrr_e
@@ -847,6 +848,15 @@ SMINP_v 0.00 1110 ..1 ..... 10101 1 ..... ..... @qrrr_e
UMAXP_v 0.10 1110 ..1 ..... 10100 1 ..... ..... @qrrr_e
UMINP_v 0.10 1110 ..1 ..... 10101 1 ..... ..... @qrrr_e
+AND_v 0.00 1110 001 ..... 00011 1 ..... ..... @qrrr_b
+BIC_v 0.00 1110 011 ..... 00011 1 ..... ..... @qrrr_b
+ORR_v 0.00 1110 101 ..... 00011 1 ..... ..... @qrrr_b
+ORN_v 0.00 1110 111 ..... 00011 1 ..... ..... @qrrr_b
+EOR_v 0.10 1110 001 ..... 00011 1 ..... ..... @qrrr_b
+BSL_v 0.10 1110 011 ..... 00011 1 ..... ..... @qrrr_b
+BIT_v 0.10 1110 101 ..... 00011 1 ..... ..... @qrrr_b
+BIF_v 0.10 1110 111 ..... 00011 1 ..... ..... @qrrr_b
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index a4ff1fd202..9167e4d0bd 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5280,6 +5280,24 @@ TRANS(SMINP_v, do_gvec_fn3_no64, a, gen_gvec_sminp)
TRANS(UMAXP_v, do_gvec_fn3_no64, a, gen_gvec_umaxp)
TRANS(UMINP_v, do_gvec_fn3_no64, a, gen_gvec_uminp)
+TRANS(AND_v, do_gvec_fn3, a, tcg_gen_gvec_and)
+TRANS(BIC_v, do_gvec_fn3, a, tcg_gen_gvec_andc)
+TRANS(ORR_v, do_gvec_fn3, a, tcg_gen_gvec_or)
+TRANS(ORN_v, do_gvec_fn3, a, tcg_gen_gvec_orc)
+TRANS(EOR_v, do_gvec_fn3, a, tcg_gen_gvec_xor)
+
+static bool do_bitsel(DisasContext *s, bool is_q, int d, int a, int b, int c)
+{
+ if (fp_access_check(s)) {
+ gen_gvec_fn4(s, is_q, d, a, b, c, tcg_gen_gvec_bitsel, 0);
+ }
+ return true;
+}
+
+TRANS(BSL_v, do_bitsel, a->q, a->rd, a->rd, a->rn, a->rm)
+TRANS(BIT_v, do_bitsel, a->q, a->rd, a->rm, a->rn, a->rd)
+TRANS(BIF_v, do_bitsel, a->q, a->rd, a->rm, a->rd, a->rn)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -10901,52 +10919,6 @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
}
}
-/* Logic op (opcode == 3) subgroup of C3.6.16. */
-static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
-{
- int rd = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int rm = extract32(insn, 16, 5);
- int size = extract32(insn, 22, 2);
- bool is_u = extract32(insn, 29, 1);
- bool is_q = extract32(insn, 30, 1);
-
- if (!fp_access_check(s)) {
- return;
- }
-
- switch (size + 4 * is_u) {
- case 0: /* AND */
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_and, 0);
- return;
- case 1: /* BIC */
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_andc, 0);
- return;
- case 2: /* ORR */
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_or, 0);
- return;
- case 3: /* ORN */
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_orc, 0);
- return;
- case 4: /* EOR */
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_xor, 0);
- return;
-
- case 5: /* BSL bitwise select */
- gen_gvec_fn4(s, is_q, rd, rd, rn, rm, tcg_gen_gvec_bitsel, 0);
- return;
- case 6: /* BIT, bitwise insert if true */
- gen_gvec_fn4(s, is_q, rd, rm, rn, rd, tcg_gen_gvec_bitsel, 0);
- return;
- case 7: /* BIF, bitwise insert if false */
- gen_gvec_fn4(s, is_q, rd, rm, rd, rn, tcg_gen_gvec_bitsel, 0);
- return;
-
- default:
- g_assert_not_reached();
- }
-}
-
/* Integer op subgroup of C3.6.16. */
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
{
@@ -11212,12 +11184,10 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
int opcode = extract32(insn, 11, 5);
switch (opcode) {
- case 0x3: /* logic ops */
- disas_simd_3same_logic(s, insn);
- break;
default:
disas_simd_3same_int(s, insn);
break;
+ case 0x3: /* logic ops */
case 0x14: /* SMAXP, UMAXP */
case 0x15: /* SMINP, UMINP */
case 0x17: /* ADDP */
--
2.34.1
* [PATCH v2 37/67] target/arm: Improve vector UQADD, UQSUB, SQADD, SQSUB
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (35 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 36/67] target/arm: Convert disas_simd_3same_logic " Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-28 15:24 ` Peter Maydell
2024-05-24 23:20 ` [PATCH v2 38/67] target/arm: Convert SUQADD and USQADD to gvec Richard Henderson
` (30 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
No need for a full comparison; XOR of the wrapped and saturated
results produces non-zero bits for QC just fine.
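In scalar form, for one unsigned byte lane (a minimal sketch; uqadd8
is illustrative, not the generated vector code): the wrapped and
saturated sums differ exactly when the operation saturates, so their
XOR is non-zero precisely then.

#include <stdint.h>

/* UQADD on one lane: XOR of the wrapping and saturating sums is
 * non-zero iff the add saturated, so it can be ORed into the QC
 * accumulator without an explicit compare. */
static uint8_t uqadd8(uint8_t a, uint8_t b, uint8_t *qc)
{
    uint8_t wrapped = a + b;                         /* add_vec */
    uint8_t sat = wrapped < a ? UINT8_MAX : wrapped; /* usadd_vec */
    *qc |= wrapped ^ sat;                            /* xor_vec, or_vec */
    return sat;
}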
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/gengvec.c | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index 22c9d17dce..bfe6885a01 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -1217,21 +1217,21 @@ void gen_gvec_sshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
-static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
+static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
TCGv_vec x = tcg_temp_new_vec_matching(t);
tcg_gen_add_vec(vece, x, a, b);
tcg_gen_usadd_vec(vece, t, a, b);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
- tcg_gen_or_vec(vece, sat, sat, x);
+ tcg_gen_xor_vec(vece, x, x, t);
+ tcg_gen_or_vec(vece, qc, qc, x);
}
void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
- INDEX_op_usadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
+ INDEX_op_usadd_vec, INDEX_op_add_vec, 0
};
static const GVecGen4 ops[4] = {
{ .fniv = gen_uqadd_vec,
@@ -1259,21 +1259,21 @@ void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
-static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
+static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
TCGv_vec x = tcg_temp_new_vec_matching(t);
tcg_gen_add_vec(vece, x, a, b);
tcg_gen_ssadd_vec(vece, t, a, b);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
- tcg_gen_or_vec(vece, sat, sat, x);
+ tcg_gen_xor_vec(vece, x, x, t);
+ tcg_gen_or_vec(vece, qc, qc, x);
}
void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
- INDEX_op_ssadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
+ INDEX_op_ssadd_vec, INDEX_op_add_vec, 0
};
static const GVecGen4 ops[4] = {
{ .fniv = gen_sqadd_vec,
@@ -1301,21 +1301,21 @@ void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
-static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
+static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
TCGv_vec x = tcg_temp_new_vec_matching(t);
tcg_gen_sub_vec(vece, x, a, b);
tcg_gen_ussub_vec(vece, t, a, b);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
- tcg_gen_or_vec(vece, sat, sat, x);
+ tcg_gen_xor_vec(vece, x, x, t);
+ tcg_gen_or_vec(vece, qc, qc, x);
}
void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
- INDEX_op_ussub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
+ INDEX_op_ussub_vec, INDEX_op_sub_vec, 0
};
static const GVecGen4 ops[4] = {
{ .fniv = gen_uqsub_vec,
@@ -1343,21 +1343,21 @@ void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
-static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
+static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
TCGv_vec x = tcg_temp_new_vec_matching(t);
tcg_gen_sub_vec(vece, x, a, b);
tcg_gen_sssub_vec(vece, t, a, b);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
- tcg_gen_or_vec(vece, sat, sat, x);
+ tcg_gen_xor_vec(vece, x, x, t);
+ tcg_gen_or_vec(vece, qc, qc, x);
}
void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
- INDEX_op_sssub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
+ INDEX_op_sssub_vec, INDEX_op_sub_vec, 0
};
static const GVecGen4 ops[4] = {
{ .fniv = gen_sqsub_vec,
--
2.34.1
* [PATCH v2 38/67] target/arm: Convert SUQADD and USQADD to gvec
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 16 +++++
target/arm/tcg/translate-a64.h | 6 ++
target/arm/tcg/gengvec64.c | 106 +++++++++++++++++++++++++++++++
target/arm/tcg/translate-a64.c | 113 ++++++++++++++-------------------
target/arm/tcg/vec_helper.c | 64 +++++++++++++++++++
5 files changed, 241 insertions(+), 64 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index f830531dd3..de2c5c9aef 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -836,6 +836,22 @@ DEF_HELPER_FLAGS_5(gvec_sqsub_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_sqsub_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_usqadd_b, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_usqadd_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_usqadd_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_usqadd_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_suqadd_b, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_suqadd_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_suqadd_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_suqadd_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmlal_a32, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/translate-a64.h b/target/arm/tcg/translate-a64.h
index 91750f0ca9..b5cb26f8a2 100644
--- a/target/arm/tcg/translate-a64.h
+++ b/target/arm/tcg/translate-a64.h
@@ -197,6 +197,12 @@ void gen_gvec_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
uint32_t a, uint32_t oprsz, uint32_t maxsz);
void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
uint32_t a, uint32_t oprsz, uint32_t maxsz);
+void gen_gvec_suqadd_qc(unsigned vece, uint32_t rd_ofs,
+ uint32_t rn_ofs, uint32_t rm_ofs,
+ uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_usqadd_qc(unsigned vece, uint32_t rd_ofs,
+ uint32_t rn_ofs, uint32_t rm_ofs,
+ uint32_t opr_sz, uint32_t max_sz);
void gen_sve_ldr(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int imm);
void gen_sve_str(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int imm);
diff --git a/target/arm/tcg/gengvec64.c b/target/arm/tcg/gengvec64.c
index 093b498b13..4b76e476a0 100644
--- a/target/arm/tcg/gengvec64.c
+++ b/target/arm/tcg/gengvec64.c
@@ -188,3 +188,109 @@ void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
}
+static void gen_suqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
+ TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec max =
+ tcg_constant_vec_matching(t, vece, (1ull << ((8 << vece) - 1)) - 1);
+ TCGv_vec u = tcg_temp_new_vec_matching(t);
+
+ /* Maximum value that can be added to @a without overflow. */
+ tcg_gen_sub_vec(vece, u, max, a);
+
+ /* Constrain addend so that the next addition never overflows. */
+ tcg_gen_umin_vec(vece, u, u, b);
+ tcg_gen_add_vec(vece, t, u, a);
+
+ /* Compute QC by comparing the adjusted @b. */
+ tcg_gen_xor_vec(vece, u, u, b);
+ tcg_gen_or_vec(vece, qc, qc, u);
+}
+
+void gen_gvec_suqadd_qc(unsigned vece, uint32_t rd_ofs,
+ uint32_t rn_ofs, uint32_t rm_ofs,
+ uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_add_vec, INDEX_op_sub_vec, INDEX_op_umin_vec, 0
+ };
+ static const GVecGen4 ops[4] = {
+ { .fniv = gen_suqadd_vec,
+ .fno = gen_helper_gvec_suqadd_b,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_8 },
+ { .fniv = gen_suqadd_vec,
+ .fno = gen_helper_gvec_suqadd_h,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_16 },
+ { .fniv = gen_suqadd_vec,
+ .fno = gen_helper_gvec_suqadd_s,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_32 },
+ { .fniv = gen_suqadd_vec,
+ .fno = gen_helper_gvec_suqadd_d,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+static void gen_usqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
+ TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec u = tcg_temp_new_vec_matching(t);
+ TCGv_vec z = tcg_constant_vec_matching(t, vece, 0);
+
+ /* Compute unsigned saturation of add for +b and sub for -b. */
+ tcg_gen_neg_vec(vece, t, b);
+ tcg_gen_usadd_vec(vece, u, a, b);
+ tcg_gen_ussub_vec(vece, t, a, t);
+
+ /* Select the correct result depending on the sign of b. */
+ tcg_gen_cmpsel_vec(TCG_COND_LT, vece, t, b, z, t, u);
+
+ /* Compute QC by comparing against the non-saturated result. */
+ tcg_gen_add_vec(vece, u, a, b);
+ tcg_gen_xor_vec(vece, u, u, t);
+ tcg_gen_or_vec(vece, qc, qc, u);
+}
+
+void gen_gvec_usqadd_qc(unsigned vece, uint32_t rd_ofs,
+ uint32_t rn_ofs, uint32_t rm_ofs,
+ uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_neg_vec, INDEX_op_add_vec,
+ INDEX_op_usadd_vec, INDEX_op_ussub_vec,
+ INDEX_op_cmpsel_vec, 0
+ };
+ static const GVecGen4 ops[4] = {
+ { .fniv = gen_usqadd_vec,
+ .fno = gen_helper_gvec_usqadd_b,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_8 },
+ { .fniv = gen_usqadd_vec,
+ .fno = gen_helper_gvec_usqadd_h,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_16 },
+ { .fniv = gen_usqadd_vec,
+ .fno = gen_helper_gvec_usqadd_s,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_32 },
+ { .fniv = gen_usqadd_vec,
+ .fno = gen_helper_gvec_usqadd_d,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 9167e4d0bd..9f948e033e 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -9983,83 +9983,68 @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
/* Remaining saturating accumulating ops */
static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
- bool is_q, int size, int rn, int rd)
+ bool is_q, unsigned size, int rn, int rd)
{
- bool is_double = (size == 3);
+ if (!is_scalar) {
+ gen_gvec_fn3(s, is_q, rd, rd, rn,
+ is_u ? gen_gvec_usqadd_qc : gen_gvec_suqadd_qc, size);
+ return;
+ }
- if (is_double) {
+ if (size == 3) {
TCGv_i64 tcg_rn = tcg_temp_new_i64();
TCGv_i64 tcg_rd = tcg_temp_new_i64();
- int pass;
- for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
- read_vec_element(s, tcg_rn, rn, pass, MO_64);
- read_vec_element(s, tcg_rd, rd, pass, MO_64);
+ read_vec_element(s, tcg_rn, rn, 0, MO_64);
+ read_vec_element(s, tcg_rd, rd, 0, MO_64);
- if (is_u) { /* USQADD */
- gen_helper_neon_uqadd_s64(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- } else { /* SUQADD */
- gen_helper_neon_sqadd_u64(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- }
- write_vec_element(s, tcg_rd, rd, pass, MO_64);
+ if (is_u) { /* USQADD */
+ gen_helper_neon_uqadd_s64(tcg_rd, tcg_env, tcg_rn, tcg_rd);
+ } else { /* SUQADD */
+ gen_helper_neon_sqadd_u64(tcg_rd, tcg_env, tcg_rn, tcg_rd);
}
- clear_vec_high(s, !is_scalar, rd);
+ write_vec_element(s, tcg_rd, rd, 0, MO_64);
+ clear_vec_high(s, false, rd);
} else {
TCGv_i32 tcg_rn = tcg_temp_new_i32();
TCGv_i32 tcg_rd = tcg_temp_new_i32();
- int pass, maxpasses;
- if (is_scalar) {
- maxpasses = 1;
- } else {
- maxpasses = is_q ? 4 : 2;
+ read_vec_element_i32(s, tcg_rn, rn, 0, size);
+ read_vec_element_i32(s, tcg_rd, rd, 0, size);
+
+ if (is_u) { /* USQADD */
+ switch (size) {
+ case 0:
+ gen_helper_neon_uqadd_s8(tcg_rd, tcg_env, tcg_rn, tcg_rd);
+ break;
+ case 1:
+ gen_helper_neon_uqadd_s16(tcg_rd, tcg_env, tcg_rn, tcg_rd);
+ break;
+ case 2:
+ gen_helper_neon_uqadd_s32(tcg_rd, tcg_env, tcg_rn, tcg_rd);
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ } else { /* SUQADD */
+ switch (size) {
+ case 0:
+ gen_helper_neon_sqadd_u8(tcg_rd, tcg_env, tcg_rn, tcg_rd);
+ break;
+ case 1:
+ gen_helper_neon_sqadd_u16(tcg_rd, tcg_env, tcg_rn, tcg_rd);
+ break;
+ case 2:
+ gen_helper_neon_sqadd_u32(tcg_rd, tcg_env, tcg_rn, tcg_rd);
+ break;
+ default:
+ g_assert_not_reached();
+ }
}
- for (pass = 0; pass < maxpasses; pass++) {
- if (is_scalar) {
- read_vec_element_i32(s, tcg_rn, rn, pass, size);
- read_vec_element_i32(s, tcg_rd, rd, pass, size);
- } else {
- read_vec_element_i32(s, tcg_rn, rn, pass, MO_32);
- read_vec_element_i32(s, tcg_rd, rd, pass, MO_32);
- }
-
- if (is_u) { /* USQADD */
- switch (size) {
- case 0:
- gen_helper_neon_uqadd_s8(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- break;
- case 1:
- gen_helper_neon_uqadd_s16(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- break;
- case 2:
- gen_helper_neon_uqadd_s32(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- break;
- default:
- g_assert_not_reached();
- }
- } else { /* SUQADD */
- switch (size) {
- case 0:
- gen_helper_neon_sqadd_u8(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- break;
- case 1:
- gen_helper_neon_sqadd_u16(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- break;
- case 2:
- gen_helper_neon_sqadd_u32(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- break;
- default:
- g_assert_not_reached();
- }
- }
-
- if (is_scalar) {
- write_vec_element(s, tcg_constant_i64(0), rd, 0, MO_64);
- }
- write_vec_element_i32(s, tcg_rd, rd, pass, MO_32);
- }
- clear_vec_high(s, is_q, rd);
+ write_vec_element(s, tcg_constant_i64(0), rd, 0, MO_64);
+ write_vec_element_i32(s, tcg_rd, rd, 0, MO_32);
+ clear_vec_high(s, false, rd);
}
}
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 56fea14edb..d8e96386be 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -1555,6 +1555,14 @@ DO_SAT(gvec_sqsub_b, int, int8_t, int8_t, -, INT8_MIN, INT8_MAX)
DO_SAT(gvec_sqsub_h, int, int16_t, int16_t, -, INT16_MIN, INT16_MAX)
DO_SAT(gvec_sqsub_s, int64_t, int32_t, int32_t, -, INT32_MIN, INT32_MAX)
+DO_SAT(gvec_usqadd_b, int, uint8_t, int8_t, +, 0, UINT8_MAX)
+DO_SAT(gvec_usqadd_h, int, uint16_t, int16_t, +, 0, UINT16_MAX)
+DO_SAT(gvec_usqadd_s, int64_t, uint32_t, int32_t, +, 0, UINT32_MAX)
+
+DO_SAT(gvec_suqadd_b, int, int8_t, uint8_t, +, INT8_MIN, INT8_MAX)
+DO_SAT(gvec_suqadd_h, int, int16_t, uint16_t, +, INT16_MIN, INT16_MAX)
+DO_SAT(gvec_suqadd_s, int64_t, int32_t, uint32_t, +, INT32_MIN, INT32_MAX)
+
#undef DO_SAT
void HELPER(gvec_uqadd_d)(void *vd, void *vq, void *vn,
@@ -1645,6 +1653,62 @@ void HELPER(gvec_sqsub_d)(void *vd, void *vq, void *vn,
clear_tail(d, oprsz, simd_maxsz(desc));
}
+void HELPER(gvec_usqadd_d)(void *vd, void *vq, void *vn,
+ void *vm, uint32_t desc)
+{
+ intptr_t i, oprsz = simd_oprsz(desc);
+ uint64_t *d = vd, *n = vn, *m = vm;
+ bool q = false;
+
+ for (i = 0; i < oprsz / 8; i++) {
+ uint64_t nn = n[i];
+ int64_t mm = m[i];
+ uint64_t dd = nn + mm;
+
+ if (mm < 0) {
+ if (nn < (uint64_t)-mm) {
+ dd = 0;
+ q = true;
+ }
+ } else {
+ if (dd < nn) {
+ dd = UINT64_MAX;
+ q = true;
+ }
+ }
+ d[i] = dd;
+ }
+ if (q) {
+ uint32_t *qc = vq;
+ qc[0] = 1;
+ }
+ clear_tail(d, oprsz, simd_maxsz(desc));
+}
+
+void HELPER(gvec_suqadd_d)(void *vd, void *vq, void *vn,
+ void *vm, uint32_t desc)
+{
+ intptr_t i, oprsz = simd_oprsz(desc);
+ uint64_t *d = vd, *n = vn, *m = vm;
+ bool q = false;
+
+ for (i = 0; i < oprsz / 8; i++) {
+ int64_t nn = n[i];
+ uint64_t mm = m[i];
+ int64_t dd = nn + mm;
+
+ if (mm > (uint64_t)(INT64_MAX - nn)) {
+ dd = INT64_MAX;
+ q = true;
+ }
+ d[i] = dd;
+ }
+ if (q) {
+ uint32_t *qc = vq;
+ qc[0] = 1;
+ }
+ clear_tail(d, oprsz, simd_maxsz(desc));
+}
#define DO_SRA(NAME, TYPE) \
void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
--
2.34.1
* [PATCH v2 39/67] target/arm: Inline scalar SUQADD and USQADD
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
This eliminates the last uses of the neon_uqadd_s* and neon_sqadd_u* helpers.
Incorporate the MO_64 expanders as the .fni8 option of the vector expanders.
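As a rough standalone model of the *_bhs technique (suqadd16 below is my
own illustrative name, not a QEMU function): widen both operands to 64
bits so the add itself cannot overflow, clamp, and derive QC from the xor
of the unclamped and clamped values. SUQADD only needs the upper clamp,
since the unsigned addend can never drive the sum below the signed minimum.

#include <stdint.h>
#include <stdio.h>

/* One 16-bit SUQADD element: a is the sign-extended accumulator,
 * b the zero-extended addend. */
static int64_t suqadd16(int64_t a, uint64_t b, uint64_t *qc)
{
    int64_t t = a + (int64_t)b;                   /* exact in 64 bits */
    int64_t res = t > INT16_MAX ? INT16_MAX : t;  /* smin against max */
    *qc |= (uint64_t)(t ^ res);                   /* non-zero iff clamped */
    return res;
}

int main(void)
{
    uint64_t qc = 0;
    /* 30000 + 10000 saturates to 32767 and sets QC. */
    printf("%d qc=%d\n", (int)suqadd16(30000, 10000, &qc), qc != 0);
    return 0;
}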
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 8 --
target/arm/tcg/translate-a64.h | 8 ++
target/arm/tcg/gengvec64.c | 71 ++++++++++++++
target/arm/tcg/neon_helper.c | 165 ---------------------------------
target/arm/tcg/translate-a64.c | 73 +++++----------
5 files changed, 103 insertions(+), 222 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index de2c5c9aef..c76158d6d3 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -274,14 +274,6 @@ DEF_HELPER_FLAGS_3(neon_qadd_u16, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_qadd_s16, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_qadd_u32, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_qadd_s32, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_FLAGS_3(neon_uqadd_s8, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_FLAGS_3(neon_uqadd_s16, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_FLAGS_3(neon_uqadd_s32, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_FLAGS_3(neon_uqadd_s64, TCG_CALL_NO_RWG, i64, env, i64, i64)
-DEF_HELPER_FLAGS_3(neon_sqadd_u8, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_FLAGS_3(neon_sqadd_u16, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_FLAGS_3(neon_sqadd_u32, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_FLAGS_3(neon_sqadd_u64, TCG_CALL_NO_RWG, i64, env, i64, i64)
DEF_HELPER_3(neon_qsub_u8, i32, env, i32, i32)
DEF_HELPER_3(neon_qsub_s8, i32, env, i32, i32)
DEF_HELPER_3(neon_qsub_u16, i32, env, i32, i32)
diff --git a/target/arm/tcg/translate-a64.h b/target/arm/tcg/translate-a64.h
index b5cb26f8a2..0fcf7cb63a 100644
--- a/target/arm/tcg/translate-a64.h
+++ b/target/arm/tcg/translate-a64.h
@@ -197,9 +197,17 @@ void gen_gvec_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
uint32_t a, uint32_t oprsz, uint32_t maxsz);
void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
uint32_t a, uint32_t oprsz, uint32_t maxsz);
+
+void gen_suqadd_bhs(TCGv_i64 res, TCGv_i64 qc,
+ TCGv_i64 a, TCGv_i64 b, MemOp esz);
+void gen_suqadd_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b);
void gen_gvec_suqadd_qc(unsigned vece, uint32_t rd_ofs,
uint32_t rn_ofs, uint32_t rm_ofs,
uint32_t opr_sz, uint32_t max_sz);
+
+void gen_usqadd_bhs(TCGv_i64 res, TCGv_i64 qc,
+ TCGv_i64 a, TCGv_i64 b, MemOp esz);
+void gen_usqadd_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b);
void gen_gvec_usqadd_qc(unsigned vece, uint32_t rd_ofs,
uint32_t rn_ofs, uint32_t rm_ofs,
uint32_t opr_sz, uint32_t max_sz);
diff --git a/target/arm/tcg/gengvec64.c b/target/arm/tcg/gengvec64.c
index 4b76e476a0..dad4c1853b 100644
--- a/target/arm/tcg/gengvec64.c
+++ b/target/arm/tcg/gengvec64.c
@@ -188,6 +188,38 @@ void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
}
+/*
+ * Set @res to the correctly saturated result.
+ * Set @qc non-zero if saturation occurred.
+ */
+void gen_suqadd_bhs(TCGv_i64 res, TCGv_i64 qc,
+ TCGv_i64 a, TCGv_i64 b, MemOp esz)
+{
+ TCGv_i64 max = tcg_constant_i64((1ull << ((8 << esz) - 1)) - 1);
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_add_i64(t, a, b);
+ tcg_gen_smin_i64(res, t, max);
+ tcg_gen_xor_i64(t, t, res);
+ tcg_gen_or_i64(qc, qc, t);
+}
+
+void gen_suqadd_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 max = tcg_constant_i64(INT64_MAX);
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ /* Maximum value that can be added to @a without overflow. */
+ tcg_gen_sub_i64(t, max, a);
+
+ /* Constrain addend so that the next addition never overflows. */
+ tcg_gen_umin_i64(t, t, b);
+ tcg_gen_add_i64(res, a, t);
+
+ tcg_gen_xor_i64(t, t, b);
+ tcg_gen_or_i64(qc, qc, t);
+}
+
static void gen_suqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
@@ -231,6 +263,7 @@ void gen_gvec_suqadd_qc(unsigned vece, uint32_t rd_ofs,
.write_aofs = true,
.vece = MO_32 },
{ .fniv = gen_suqadd_vec,
+ .fni8 = gen_suqadd_d,
.fno = gen_helper_gvec_suqadd_d,
.opt_opc = vecop_list,
.write_aofs = true,
@@ -240,6 +273,43 @@ void gen_gvec_suqadd_qc(unsigned vece, uint32_t rd_ofs,
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
+void gen_usqadd_bhs(TCGv_i64 res, TCGv_i64 qc,
+ TCGv_i64 a, TCGv_i64 b, MemOp esz)
+{
+ TCGv_i64 max = tcg_constant_i64(MAKE_64BIT_MASK(0, 8 << esz));
+ TCGv_i64 zero = tcg_constant_i64(0);
+ TCGv_i64 tmp = tcg_temp_new_i64();
+
+ tcg_gen_add_i64(tmp, a, b);
+ tcg_gen_smin_i64(res, tmp, max);
+ tcg_gen_smax_i64(res, res, zero);
+ tcg_gen_xor_i64(tmp, tmp, res);
+ tcg_gen_or_i64(qc, qc, tmp);
+}
+
+void gen_usqadd_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 tmp = tcg_temp_new_i64();
+ TCGv_i64 tneg = tcg_temp_new_i64();
+ TCGv_i64 tpos = tcg_temp_new_i64();
+ TCGv_i64 max = tcg_constant_i64(UINT64_MAX);
+ TCGv_i64 zero = tcg_constant_i64(0);
+
+ tcg_gen_add_i64(tmp, a, b);
+
+ /* If @b is positive, saturate if (a + b) < a, aka unsigned overflow. */
+ tcg_gen_movcond_i64(TCG_COND_LTU, tpos, tmp, a, max, tmp);
+
+ /* If @b is negative, saturate if a < -b, ie subtraction is negative. */
+ tcg_gen_neg_i64(tneg, b);
+ tcg_gen_movcond_i64(TCG_COND_LTU, tneg, a, tneg, zero, tmp);
+
+ /* Select correct result from sign of @b. */
+ tcg_gen_movcond_i64(TCG_COND_LT, res, b, zero, tneg, tpos);
+ tcg_gen_xor_i64(tmp, tmp, res);
+ tcg_gen_or_i64(qc, qc, tmp);
+}
+
static void gen_usqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
@@ -286,6 +356,7 @@ void gen_gvec_usqadd_qc(unsigned vece, uint32_t rd_ofs,
.write_aofs = true,
.vece = MO_32 },
{ .fniv = gen_usqadd_vec,
+ .fni8 = gen_usqadd_d,
.fno = gen_helper_gvec_usqadd_d,
.opt_opc = vecop_list,
.write_aofs = true,
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
index a0b51c8809..9505a5fd18 100644
--- a/target/arm/tcg/neon_helper.c
+++ b/target/arm/tcg/neon_helper.c
@@ -236,171 +236,6 @@ uint64_t HELPER(neon_qadd_s64)(CPUARMState *env, uint64_t src1, uint64_t src2)
return res;
}
-/* Unsigned saturating accumulate of signed value
- *
- * Op1/Rn is treated as signed
- * Op2/Rd is treated as unsigned
- *
- * Explicit casting is used to ensure the correct sign extension of
- * inputs. The result is treated as a unsigned value and saturated as such.
- *
- * We use a macro for the 8/16 bit cases which expects signed integers of va,
- * vb, and vr for interim calculation and an unsigned 32 bit result value r.
- */
-
-#define USATACC(bits, shift) \
- do { \
- va = sextract32(a, shift, bits); \
- vb = extract32(b, shift, bits); \
- vr = va + vb; \
- if (vr > UINT##bits##_MAX) { \
- SET_QC(); \
- vr = UINT##bits##_MAX; \
- } else if (vr < 0) { \
- SET_QC(); \
- vr = 0; \
- } \
- r = deposit32(r, shift, bits, vr); \
- } while (0)
-
-uint32_t HELPER(neon_uqadd_s8)(CPUARMState *env, uint32_t a, uint32_t b)
-{
- int16_t va, vb, vr;
- uint32_t r = 0;
-
- USATACC(8, 0);
- USATACC(8, 8);
- USATACC(8, 16);
- USATACC(8, 24);
- return r;
-}
-
-uint32_t HELPER(neon_uqadd_s16)(CPUARMState *env, uint32_t a, uint32_t b)
-{
- int32_t va, vb, vr;
- uint64_t r = 0;
-
- USATACC(16, 0);
- USATACC(16, 16);
- return r;
-}
-
-#undef USATACC
-
-uint32_t HELPER(neon_uqadd_s32)(CPUARMState *env, uint32_t a, uint32_t b)
-{
- int64_t va = (int32_t)a;
- int64_t vb = (uint32_t)b;
- int64_t vr = va + vb;
- if (vr > UINT32_MAX) {
- SET_QC();
- vr = UINT32_MAX;
- } else if (vr < 0) {
- SET_QC();
- vr = 0;
- }
- return vr;
-}
-
-uint64_t HELPER(neon_uqadd_s64)(CPUARMState *env, uint64_t a, uint64_t b)
-{
- uint64_t res;
- res = a + b;
- /* We only need to look at the pattern of SIGN bits to detect
- * +ve/-ve saturation
- */
- if (~a & b & ~res & SIGNBIT64) {
- SET_QC();
- res = UINT64_MAX;
- } else if (a & ~b & res & SIGNBIT64) {
- SET_QC();
- res = 0;
- }
- return res;
-}
-
-/* Signed saturating accumulate of unsigned value
- *
- * Op1/Rn is treated as unsigned
- * Op2/Rd is treated as signed
- *
- * The result is treated as a signed value and saturated as such
- *
- * We use a macro for the 8/16 bit cases which expects signed integers of va,
- * vb, and vr for interim calculation and an unsigned 32 bit result value r.
- */
-
-#define SSATACC(bits, shift) \
- do { \
- va = extract32(a, shift, bits); \
- vb = sextract32(b, shift, bits); \
- vr = va + vb; \
- if (vr > INT##bits##_MAX) { \
- SET_QC(); \
- vr = INT##bits##_MAX; \
- } else if (vr < INT##bits##_MIN) { \
- SET_QC(); \
- vr = INT##bits##_MIN; \
- } \
- r = deposit32(r, shift, bits, vr); \
- } while (0)
-
-uint32_t HELPER(neon_sqadd_u8)(CPUARMState *env, uint32_t a, uint32_t b)
-{
- int16_t va, vb, vr;
- uint32_t r = 0;
-
- SSATACC(8, 0);
- SSATACC(8, 8);
- SSATACC(8, 16);
- SSATACC(8, 24);
- return r;
-}
-
-uint32_t HELPER(neon_sqadd_u16)(CPUARMState *env, uint32_t a, uint32_t b)
-{
- int32_t va, vb, vr;
- uint32_t r = 0;
-
- SSATACC(16, 0);
- SSATACC(16, 16);
-
- return r;
-}
-
-#undef SSATACC
-
-uint32_t HELPER(neon_sqadd_u32)(CPUARMState *env, uint32_t a, uint32_t b)
-{
- int64_t res;
- int64_t op1 = (uint32_t)a;
- int64_t op2 = (int32_t)b;
- res = op1 + op2;
- if (res > INT32_MAX) {
- SET_QC();
- res = INT32_MAX;
- } else if (res < INT32_MIN) {
- SET_QC();
- res = INT32_MIN;
- }
- return res;
-}
-
-uint64_t HELPER(neon_sqadd_u64)(CPUARMState *env, uint64_t a, uint64_t b)
-{
- uint64_t res;
- res = a + b;
- /* We only need to look at the pattern of SIGN bits to detect an overflow */
- if (((a & res)
- | (~b & res)
- | (a & ~b)) & SIGNBIT64) {
- SET_QC();
- res = INT64_MAX;
- }
- return res;
-}
-
-
#define NEON_USAT(dest, src1, src2, type) do { \
uint32_t tmp = (uint32_t)src1 - (uint32_t)src2; \
if (tmp != (type)tmp) { \
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 9f948e033e..781b224972 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -9985,67 +9985,42 @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
bool is_q, unsigned size, int rn, int rd)
{
+ TCGv_i64 res, qc, a, b;
+
if (!is_scalar) {
gen_gvec_fn3(s, is_q, rd, rd, rn,
is_u ? gen_gvec_usqadd_qc : gen_gvec_suqadd_qc, size);
return;
}
- if (size == 3) {
- TCGv_i64 tcg_rn = tcg_temp_new_i64();
- TCGv_i64 tcg_rd = tcg_temp_new_i64();
+ res = tcg_temp_new_i64();
+ qc = tcg_temp_new_i64();
+ a = tcg_temp_new_i64();
+ b = tcg_temp_new_i64();
- read_vec_element(s, tcg_rn, rn, 0, MO_64);
- read_vec_element(s, tcg_rd, rd, 0, MO_64);
+ /* Read and extend scalar inputs to 64-bits. */
+ read_vec_element(s, a, rd, 0, size | (is_u ? 0 : MO_SIGN));
+ read_vec_element(s, b, rn, 0, size | (is_u ? MO_SIGN : 0));
+ tcg_gen_ld_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
- if (is_u) { /* USQADD */
- gen_helper_neon_uqadd_s64(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- } else { /* SUQADD */
- gen_helper_neon_sqadd_u64(tcg_rd, tcg_env, tcg_rn, tcg_rd);
+ if (size == MO_64) {
+ if (is_u) {
+ gen_usqadd_d(res, qc, a, b);
+ } else {
+ gen_suqadd_d(res, qc, a, b);
}
- write_vec_element(s, tcg_rd, rd, 0, MO_64);
- clear_vec_high(s, false, rd);
} else {
- TCGv_i32 tcg_rn = tcg_temp_new_i32();
- TCGv_i32 tcg_rd = tcg_temp_new_i32();
-
- read_vec_element_i32(s, tcg_rn, rn, 0, size);
- read_vec_element_i32(s, tcg_rd, rd, 0, size);
-
- if (is_u) { /* USQADD */
- switch (size) {
- case 0:
- gen_helper_neon_uqadd_s8(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- break;
- case 1:
- gen_helper_neon_uqadd_s16(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- break;
- case 2:
- gen_helper_neon_uqadd_s32(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- break;
- default:
- g_assert_not_reached();
- }
- } else { /* SUQADD */
- switch (size) {
- case 0:
- gen_helper_neon_sqadd_u8(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- break;
- case 1:
- gen_helper_neon_sqadd_u16(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- break;
- case 2:
- gen_helper_neon_sqadd_u32(tcg_rd, tcg_env, tcg_rn, tcg_rd);
- break;
- default:
- g_assert_not_reached();
- }
+ if (is_u) {
+ gen_usqadd_bhs(res, qc, a, b, size);
+ } else {
+ gen_suqadd_bhs(res, qc, a, b, size);
+ /* Truncate signed 64-bit result for writeback. */
+ tcg_gen_ext_i64(res, res, size);
}
-
- write_vec_element(s, tcg_constant_i64(0), rd, 0, MO_64);
- write_vec_element_i32(s, tcg_rd, rd, 0, MO_32);
- clear_vec_high(s, false, rd);
}
+
+ write_fp_dreg(s, rd, res);
+ tcg_gen_st_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
}
/* AdvSIMD scalar two reg misc
--
2.34.1
* [PATCH v2 40/67] target/arm: Inline scalar SQADD, UQADD, SQSUB, UQSUB
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
This eliminates the last uses of the neon_qadd_* and neon_qsub_* helpers.
Incorporate the MO_64 expanders as the .fni8 option of the vector expanders.
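The MO_64 case has no wider type to fall back on, so gen_sqadd_d detects
overflow from the sign bits alone. A standalone sketch of that identity
(illustrative only, not QEMU code):

#include <stdint.h>
#include <stdio.h>

static int64_t sqadd64(int64_t a, int64_t b, uint64_t *qc)
{
    uint64_t t = (uint64_t)a + (uint64_t)b;   /* wrapping add */
    /* Overflow iff a and b share a sign that the sum does not:
     * the sign bit of ~(a ^ b) & (t ^ a). */
    uint64_t ovf = ~((uint64_t)a ^ (uint64_t)b) & (t ^ (uint64_t)a);
    /* Saturate toward the sign of the inputs: INT64_MAX for a >= 0,
     * INT64_MIN for a < 0; the patch computes this branch-free as
     * (a >> 63) ^ INT64_MAX. */
    uint64_t sat = a < 0 ? (uint64_t)INT64_MIN : (uint64_t)INT64_MAX;
    uint64_t res = (int64_t)ovf < 0 ? sat : t;

    *qc |= t ^ res;                           /* non-zero iff saturated */
    return (int64_t)res;
}

int main(void)
{
    uint64_t qc = 0;
    printf("%lld qc=%d\n", (long long)sqadd64(INT64_MAX, 1, &qc), qc != 0);
    return 0;
}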
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 17 ----
target/arm/tcg/translate.h | 15 +++
target/arm/tcg/gengvec.c | 116 +++++++++++++++++++++++
target/arm/tcg/neon_helper.c | 162 ---------------------------------
target/arm/tcg/translate-a64.c | 67 ++++++++------
5 files changed, 169 insertions(+), 208 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index c76158d6d3..a14c040451 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -268,23 +268,6 @@ DEF_HELPER_FLAGS_2(fjcvtzs, TCG_CALL_NO_RWG, i64, f64, ptr)
DEF_HELPER_FLAGS_3(check_hcr_el2_trap, TCG_CALL_NO_WG, void, env, i32, i32)
/* neon_helper.c */
-DEF_HELPER_FLAGS_3(neon_qadd_u8, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_FLAGS_3(neon_qadd_s8, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_FLAGS_3(neon_qadd_u16, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_FLAGS_3(neon_qadd_s16, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_FLAGS_3(neon_qadd_u32, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_FLAGS_3(neon_qadd_s32, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_3(neon_qsub_u8, i32, env, i32, i32)
-DEF_HELPER_3(neon_qsub_s8, i32, env, i32, i32)
-DEF_HELPER_3(neon_qsub_u16, i32, env, i32, i32)
-DEF_HELPER_3(neon_qsub_s16, i32, env, i32, i32)
-DEF_HELPER_3(neon_qsub_u32, i32, env, i32, i32)
-DEF_HELPER_3(neon_qsub_s32, i32, env, i32, i32)
-DEF_HELPER_3(neon_qadd_u64, i64, env, i64, i64)
-DEF_HELPER_3(neon_qadd_s64, i64, env, i64, i64)
-DEF_HELPER_3(neon_qsub_u64, i64, env, i64, i64)
-DEF_HELPER_3(neon_qsub_s64, i64, env, i64, i64)
-
DEF_HELPER_2(neon_hadd_s8, i32, i32, i32)
DEF_HELPER_2(neon_hadd_u8, i32, i32, i32)
DEF_HELPER_2(neon_hadd_s16, i32, i32, i32)
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index 3abdbedfe5..87439dcc61 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -466,12 +466,27 @@ void gen_sshl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
void gen_ushl_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
void gen_sshl_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
+void gen_uqadd_bhs(TCGv_i64 res, TCGv_i64 qc,
+ TCGv_i64 a, TCGv_i64 b, MemOp esz);
+void gen_uqadd_d(TCGv_i64 d, TCGv_i64 q, TCGv_i64 a, TCGv_i64 b);
void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+
+void gen_sqadd_bhs(TCGv_i64 res, TCGv_i64 qc,
+ TCGv_i64 a, TCGv_i64 b, MemOp esz);
+void gen_sqadd_d(TCGv_i64 d, TCGv_i64 q, TCGv_i64 a, TCGv_i64 b);
void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+
+void gen_uqsub_bhs(TCGv_i64 res, TCGv_i64 qc,
+ TCGv_i64 a, TCGv_i64 b, MemOp esz);
+void gen_uqsub_d(TCGv_i64 d, TCGv_i64 q, TCGv_i64 a, TCGv_i64 b);
void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+
+void gen_sqsub_bhs(TCGv_i64 res, TCGv_i64 qc,
+ TCGv_i64 a, TCGv_i64 b, MemOp esz);
+void gen_sqsub_d(TCGv_i64 d, TCGv_i64 q, TCGv_i64 a, TCGv_i64 b);
void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index bfe6885a01..66a514ba86 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -1217,6 +1217,28 @@ void gen_gvec_sshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
+void gen_uqadd_bhs(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b, MemOp esz)
+{
+ uint64_t max = MAKE_64BIT_MASK(0, 8 << esz);
+ TCGv_i64 tmp = tcg_temp_new_i64();
+
+ tcg_gen_add_i64(tmp, a, b);
+ tcg_gen_umin_i64(res, tmp, tcg_constant_i64(max));
+ tcg_gen_xor_i64(tmp, tmp, res);
+ tcg_gen_or_i64(qc, qc, tmp);
+}
+
+void gen_uqadd_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_add_i64(t, a, b);
+ tcg_gen_movcond_i64(TCG_COND_LTU, res, t, a,
+ tcg_constant_i64(UINT64_MAX), t);
+ tcg_gen_xor_i64(t, t, res);
+ tcg_gen_or_i64(qc, qc, t);
+}
+
static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
@@ -1250,6 +1272,7 @@ void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
.opt_opc = vecop_list,
.vece = MO_32 },
{ .fniv = gen_uqadd_vec,
+ .fni8 = gen_uqadd_d,
.fno = gen_helper_gvec_uqadd_d,
.write_aofs = true,
.opt_opc = vecop_list,
@@ -1259,6 +1282,41 @@ void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
+void gen_sqadd_bhs(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b, MemOp esz)
+{
+ int64_t max = MAKE_64BIT_MASK(0, (8 << esz) - 1);
+ int64_t min = -1ll - max;
+ TCGv_i64 tmp = tcg_temp_new_i64();
+
+ tcg_gen_add_i64(tmp, a, b);
+ tcg_gen_smin_i64(res, tmp, tcg_constant_i64(max));
+ tcg_gen_smax_i64(res, res, tcg_constant_i64(min));
+ tcg_gen_xor_i64(tmp, tmp, res);
+ tcg_gen_or_i64(qc, qc, tmp);
+}
+
+void gen_sqadd_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ TCGv_i64 t1 = tcg_temp_new_i64();
+ TCGv_i64 t2 = tcg_temp_new_i64();
+
+ tcg_gen_add_i64(t0, a, b);
+
+ /* Compute signed overflow indication into T1 */
+ tcg_gen_xor_i64(t1, a, b);
+ tcg_gen_xor_i64(t2, t0, a);
+ tcg_gen_andc_i64(t1, t2, t1);
+
+ /* Compute saturated value into T2 */
+ tcg_gen_sari_i64(t2, a, 63);
+ tcg_gen_xori_i64(t2, t2, INT64_MAX);
+
+ tcg_gen_movcond_i64(TCG_COND_LT, res, t1, tcg_constant_i64(0), t2, t0);
+ tcg_gen_xor_i64(t0, t0, res);
+ tcg_gen_or_i64(qc, qc, t0);
+}
+
static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
@@ -1292,6 +1350,7 @@ void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
.write_aofs = true,
.vece = MO_32 },
{ .fniv = gen_sqadd_vec,
+ .fni8 = gen_sqadd_d,
.fno = gen_helper_gvec_sqadd_d,
.opt_opc = vecop_list,
.write_aofs = true,
@@ -1301,6 +1360,26 @@ void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
+void gen_uqsub_bhs(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b, MemOp esz)
+{
+ TCGv_i64 tmp = tcg_temp_new_i64();
+
+ tcg_gen_sub_i64(tmp, a, b);
+ tcg_gen_smax_i64(res, tmp, tcg_constant_i64(0));
+ tcg_gen_xor_i64(tmp, tmp, res);
+ tcg_gen_or_i64(qc, qc, tmp);
+}
+
+void gen_uqsub_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_sub_i64(t, a, b);
+ tcg_gen_movcond_i64(TCG_COND_LTU, res, a, b, tcg_constant_i64(0), t);
+ tcg_gen_xor_i64(t, t, res);
+ tcg_gen_or_i64(qc, qc, t);
+}
+
static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
@@ -1334,6 +1413,7 @@ void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
.write_aofs = true,
.vece = MO_32 },
{ .fniv = gen_uqsub_vec,
+ .fni8 = gen_uqsub_d,
.fno = gen_helper_gvec_uqsub_d,
.opt_opc = vecop_list,
.write_aofs = true,
@@ -1343,6 +1423,41 @@ void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
+void gen_sqsub_bhs(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b, MemOp esz)
+{
+ int64_t max = MAKE_64BIT_MASK(0, (8 << esz) - 1);
+ int64_t min = -1ll - max;
+ TCGv_i64 tmp = tcg_temp_new_i64();
+
+ tcg_gen_sub_i64(tmp, a, b);
+ tcg_gen_smin_i64(res, tmp, tcg_constant_i64(max));
+ tcg_gen_smax_i64(res, res, tcg_constant_i64(min));
+ tcg_gen_xor_i64(tmp, tmp, res);
+ tcg_gen_or_i64(qc, qc, tmp);
+}
+
+void gen_sqsub_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ TCGv_i64 t1 = tcg_temp_new_i64();
+ TCGv_i64 t2 = tcg_temp_new_i64();
+
+ tcg_gen_sub_i64(t0, a, b);
+
+ /* Compute signed overflow indication into T1 */
+ tcg_gen_xor_i64(t1, a, b);
+ tcg_gen_xor_i64(t2, t0, a);
+ tcg_gen_and_i64(t1, t1, t2);
+
+ /* Compute saturated value into T2 */
+ tcg_gen_sari_i64(t2, a, 63);
+ tcg_gen_xori_i64(t2, t2, INT64_MAX);
+
+ tcg_gen_movcond_i64(TCG_COND_LT, res, t1, tcg_constant_i64(0), t2, t0);
+ tcg_gen_xor_i64(t0, t0, res);
+ tcg_gen_or_i64(qc, qc, t0);
+}
+
static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
@@ -1376,6 +1491,7 @@ void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
.write_aofs = true,
.vece = MO_32 },
{ .fniv = gen_sqsub_vec,
+ .fni8 = gen_sqsub_d,
.fno = gen_helper_gvec_sqsub_d,
.opt_opc = vecop_list,
.write_aofs = true,
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
index 9505a5fd18..0af15e9f6e 100644
--- a/target/arm/tcg/neon_helper.c
+++ b/target/arm/tcg/neon_helper.c
@@ -155,168 +155,6 @@ uint32_t HELPER(glue(neon_,name))(uint32_t arg) \
return arg; \
}
-
-#define NEON_USAT(dest, src1, src2, type) do { \
- uint32_t tmp = (uint32_t)src1 + (uint32_t)src2; \
- if (tmp != (type)tmp) { \
- SET_QC(); \
- dest = ~0; \
- } else { \
- dest = tmp; \
- }} while(0)
-#define NEON_FN(dest, src1, src2) NEON_USAT(dest, src1, src2, uint8_t)
-NEON_VOP_ENV(qadd_u8, neon_u8, 4)
-#undef NEON_FN
-#define NEON_FN(dest, src1, src2) NEON_USAT(dest, src1, src2, uint16_t)
-NEON_VOP_ENV(qadd_u16, neon_u16, 2)
-#undef NEON_FN
-#undef NEON_USAT
-
-uint32_t HELPER(neon_qadd_u32)(CPUARMState *env, uint32_t a, uint32_t b)
-{
- uint32_t res = a + b;
- if (res < a) {
- SET_QC();
- res = ~0;
- }
- return res;
-}
-
-uint64_t HELPER(neon_qadd_u64)(CPUARMState *env, uint64_t src1, uint64_t src2)
-{
- uint64_t res;
-
- res = src1 + src2;
- if (res < src1) {
- SET_QC();
- res = ~(uint64_t)0;
- }
- return res;
-}
-
-#define NEON_SSAT(dest, src1, src2, type) do { \
- int32_t tmp = (uint32_t)src1 + (uint32_t)src2; \
- if (tmp != (type)tmp) { \
- SET_QC(); \
- if (src2 > 0) { \
- tmp = (1 << (sizeof(type) * 8 - 1)) - 1; \
- } else { \
- tmp = 1 << (sizeof(type) * 8 - 1); \
- } \
- } \
- dest = tmp; \
- } while(0)
-#define NEON_FN(dest, src1, src2) NEON_SSAT(dest, src1, src2, int8_t)
-NEON_VOP_ENV(qadd_s8, neon_s8, 4)
-#undef NEON_FN
-#define NEON_FN(dest, src1, src2) NEON_SSAT(dest, src1, src2, int16_t)
-NEON_VOP_ENV(qadd_s16, neon_s16, 2)
-#undef NEON_FN
-#undef NEON_SSAT
-
-uint32_t HELPER(neon_qadd_s32)(CPUARMState *env, uint32_t a, uint32_t b)
-{
- uint32_t res = a + b;
- if (((res ^ a) & SIGNBIT) && !((a ^ b) & SIGNBIT)) {
- SET_QC();
- res = ~(((int32_t)a >> 31) ^ SIGNBIT);
- }
- return res;
-}
-
-uint64_t HELPER(neon_qadd_s64)(CPUARMState *env, uint64_t src1, uint64_t src2)
-{
- uint64_t res;
-
- res = src1 + src2;
- if (((res ^ src1) & SIGNBIT64) && !((src1 ^ src2) & SIGNBIT64)) {
- SET_QC();
- res = ((int64_t)src1 >> 63) ^ ~SIGNBIT64;
- }
- return res;
-}
-
-#define NEON_USAT(dest, src1, src2, type) do { \
- uint32_t tmp = (uint32_t)src1 - (uint32_t)src2; \
- if (tmp != (type)tmp) { \
- SET_QC(); \
- dest = 0; \
- } else { \
- dest = tmp; \
- }} while(0)
-#define NEON_FN(dest, src1, src2) NEON_USAT(dest, src1, src2, uint8_t)
-NEON_VOP_ENV(qsub_u8, neon_u8, 4)
-#undef NEON_FN
-#define NEON_FN(dest, src1, src2) NEON_USAT(dest, src1, src2, uint16_t)
-NEON_VOP_ENV(qsub_u16, neon_u16, 2)
-#undef NEON_FN
-#undef NEON_USAT
-
-uint32_t HELPER(neon_qsub_u32)(CPUARMState *env, uint32_t a, uint32_t b)
-{
- uint32_t res = a - b;
- if (res > a) {
- SET_QC();
- res = 0;
- }
- return res;
-}
-
-uint64_t HELPER(neon_qsub_u64)(CPUARMState *env, uint64_t src1, uint64_t src2)
-{
- uint64_t res;
-
- if (src1 < src2) {
- SET_QC();
- res = 0;
- } else {
- res = src1 - src2;
- }
- return res;
-}
-
-#define NEON_SSAT(dest, src1, src2, type) do { \
- int32_t tmp = (uint32_t)src1 - (uint32_t)src2; \
- if (tmp != (type)tmp) { \
- SET_QC(); \
- if (src2 < 0) { \
- tmp = (1 << (sizeof(type) * 8 - 1)) - 1; \
- } else { \
- tmp = 1 << (sizeof(type) * 8 - 1); \
- } \
- } \
- dest = tmp; \
- } while(0)
-#define NEON_FN(dest, src1, src2) NEON_SSAT(dest, src1, src2, int8_t)
-NEON_VOP_ENV(qsub_s8, neon_s8, 4)
-#undef NEON_FN
-#define NEON_FN(dest, src1, src2) NEON_SSAT(dest, src1, src2, int16_t)
-NEON_VOP_ENV(qsub_s16, neon_s16, 2)
-#undef NEON_FN
-#undef NEON_SSAT
-
-uint32_t HELPER(neon_qsub_s32)(CPUARMState *env, uint32_t a, uint32_t b)
-{
- uint32_t res = a - b;
- if (((res ^ a) & SIGNBIT) && ((a ^ b) & SIGNBIT)) {
- SET_QC();
- res = ~(((int32_t)a >> 31) ^ SIGNBIT);
- }
- return res;
-}
-
-uint64_t HELPER(neon_qsub_s64)(CPUARMState *env, uint64_t src1, uint64_t src2)
-{
- uint64_t res;
-
- res = src1 - src2;
- if (((res ^ src1) & SIGNBIT64) && ((src1 ^ src2) & SIGNBIT64)) {
- SET_QC();
- res = ((int64_t)src1 >> 63) ^ ~SIGNBIT64;
- }
- return res;
-}
-
#define NEON_FN(dest, src1, src2) dest = (src1 + src2) >> 1
NEON_VOP(hadd_s8, neon_s8, 4)
NEON_VOP(hadd_u8, neon_u8, 4)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 781b224972..ca7ba6b1e8 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -9291,21 +9291,28 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
* or scalar-three-reg-same groups.
*/
TCGCond cond;
+ TCGv_i64 qc;
switch (opcode) {
case 0x1: /* SQADD */
+ qc = tcg_temp_new_i64();
+ tcg_gen_ld_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
if (u) {
- gen_helper_neon_qadd_u64(tcg_rd, tcg_env, tcg_rn, tcg_rm);
+ gen_uqadd_d(tcg_rd, qc, tcg_rn, tcg_rm);
} else {
- gen_helper_neon_qadd_s64(tcg_rd, tcg_env, tcg_rn, tcg_rm);
+ gen_sqadd_d(tcg_rd, qc, tcg_rn, tcg_rm);
}
+ tcg_gen_st_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
break;
case 0x5: /* SQSUB */
+ qc = tcg_temp_new_i64();
+ tcg_gen_ld_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
if (u) {
- gen_helper_neon_qsub_u64(tcg_rd, tcg_env, tcg_rn, tcg_rm);
+ gen_uqsub_d(tcg_rd, qc, tcg_rn, tcg_rm);
} else {
- gen_helper_neon_qsub_s64(tcg_rd, tcg_env, tcg_rn, tcg_rm);
+ gen_sqsub_d(tcg_rd, qc, tcg_rn, tcg_rm);
}
+ tcg_gen_st_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
break;
case 0x6: /* CMGT, CMHI */
cond = u ? TCG_COND_GTU : TCG_COND_GT;
@@ -9425,35 +9432,16 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
* OPTME: special-purpose helpers would avoid doing some
* unnecessary work in the helper for the 8 and 16 bit cases.
*/
- NeonGenTwoOpEnvFn *genenvfn;
- TCGv_i32 tcg_rn = tcg_temp_new_i32();
- TCGv_i32 tcg_rm = tcg_temp_new_i32();
- TCGv_i32 tcg_rd32 = tcg_temp_new_i32();
-
- read_vec_element_i32(s, tcg_rn, rn, 0, size);
- read_vec_element_i32(s, tcg_rm, rm, 0, size);
+ NeonGenTwoOpEnvFn *genenvfn = NULL;
+ void (*genfn)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_i64, MemOp) = NULL;
switch (opcode) {
case 0x1: /* SQADD, UQADD */
- {
- static NeonGenTwoOpEnvFn * const fns[3][2] = {
- { gen_helper_neon_qadd_s8, gen_helper_neon_qadd_u8 },
- { gen_helper_neon_qadd_s16, gen_helper_neon_qadd_u16 },
- { gen_helper_neon_qadd_s32, gen_helper_neon_qadd_u32 },
- };
- genenvfn = fns[size][u];
+ genfn = u ? gen_uqadd_bhs : gen_sqadd_bhs;
break;
- }
case 0x5: /* SQSUB, UQSUB */
- {
- static NeonGenTwoOpEnvFn * const fns[3][2] = {
- { gen_helper_neon_qsub_s8, gen_helper_neon_qsub_u8 },
- { gen_helper_neon_qsub_s16, gen_helper_neon_qsub_u16 },
- { gen_helper_neon_qsub_s32, gen_helper_neon_qsub_u32 },
- };
- genenvfn = fns[size][u];
+ genfn = u ? gen_uqsub_bhs : gen_sqsub_bhs;
break;
- }
case 0x9: /* SQSHL, UQSHL */
{
static NeonGenTwoOpEnvFn * const fns[3][2] = {
@@ -9488,8 +9476,29 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
g_assert_not_reached();
}
- genenvfn(tcg_rd32, tcg_env, tcg_rn, tcg_rm);
- tcg_gen_extu_i32_i64(tcg_rd, tcg_rd32);
+ if (genenvfn) {
+ TCGv_i32 tcg_rn = tcg_temp_new_i32();
+ TCGv_i32 tcg_rm = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, tcg_rn, rn, 0, size);
+ read_vec_element_i32(s, tcg_rm, rm, 0, size);
+ genenvfn(tcg_rn, tcg_env, tcg_rn, tcg_rm);
+ tcg_gen_extu_i32_i64(tcg_rd, tcg_rn);
+ } else {
+ TCGv_i64 tcg_rn = tcg_temp_new_i64();
+ TCGv_i64 tcg_rm = tcg_temp_new_i64();
+ TCGv_i64 qc = tcg_temp_new_i64();
+
+ read_vec_element(s, tcg_rn, rn, 0, size | (u ? 0 : MO_SIGN));
+ read_vec_element(s, tcg_rm, rm, 0, size | (u ? 0 : MO_SIGN));
+ tcg_gen_ld_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
+ genfn(tcg_rd, qc, tcg_rn, tcg_rm, size);
+ tcg_gen_st_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
+ if (!u) {
+ /* Truncate signed 64-bit result for writeback. */
+ tcg_gen_ext_i64(tcg_rd, tcg_rd, size);
+ }
+ }
}
write_fp_dreg(s, rd, tcg_rd);
--
2.34.1
* [PATCH v2 41/67] target/arm: Convert SQADD, SQSUB, UQADD, UQSUB to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 11 ++++
target/arm/tcg/translate-a64.c | 100 +++++++++++++++++++--------------
2 files changed, 68 insertions(+), 43 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index f48adef5bb..19010af03b 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -44,6 +44,7 @@
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
@rrr_hsd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_hsd
+@rrr_e ........ esz:2 . rm:5 ...... rn:5 rd:5 &rrr_e
@rrx_h ........ .. .. rm:4 .... . . rn:5 rd:5 &rrx_e esz=1 idx=%hlm
@rrx_s ........ .. . rm:5 .... . . rn:5 rd:5 &rrx_e esz=2 idx=%hl
@@ -744,6 +745,11 @@ FRECPS_s 0101 1110 0.1 ..... 11111 1 ..... ..... @rrr_sd
FRSQRTS_s 0101 1110 110 ..... 00111 1 ..... ..... @rrr_h
FRSQRTS_s 0101 1110 1.1 ..... 11111 1 ..... ..... @rrr_sd
+SQADD_s 0101 1110 ..1 ..... 00001 1 ..... ..... @rrr_e
+UQADD_s 0111 1110 ..1 ..... 00001 1 ..... ..... @rrr_e
+SQSUB_s 0101 1110 ..1 ..... 00101 1 ..... ..... @rrr_e
+UQSUB_s 0111 1110 ..1 ..... 00101 1 ..... ..... @rrr_e
+
### Advanced SIMD scalar pairwise
FADDP_s 0101 1110 0011 0000 1101 10 ..... ..... @rr_h
@@ -857,6 +863,11 @@ BSL_v 0.10 1110 011 ..... 00011 1 ..... ..... @qrrr_b
BIT_v 0.10 1110 101 ..... 00011 1 ..... ..... @qrrr_b
BIF_v 0.10 1110 111 ..... 00011 1 ..... ..... @qrrr_b
+SQADD_v 0.00 1110 ..1 ..... 00001 1 ..... ..... @qrrr_e
+UQADD_v 0.10 1110 ..1 ..... 00001 1 ..... ..... @qrrr_e
+SQSUB_v 0.00 1110 ..1 ..... 00101 1 ..... ..... @qrrr_e
+UQSUB_v 0.10 1110 ..1 ..... 00101 1 ..... ..... @qrrr_e
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index ca7ba6b1e8..2f7298811d 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5060,6 +5060,43 @@ static const FPScalar f_scalar_frsqrts = {
};
TRANS(FRSQRTS_s, do_fp3_scalar, a, &f_scalar_frsqrts)
+static bool do_satacc_s(DisasContext *s, arg_rrr_e *a,
+ MemOp sgn_n, MemOp sgn_m,
+ void (*gen_bhs)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_i64, MemOp),
+ void (*gen_d)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_i64))
+{
+ TCGv_i64 t0, t1, t2, qc;
+ MemOp esz = a->esz;
+
+ if (!fp_access_check(s)) {
+ return true;
+ }
+
+ t0 = tcg_temp_new_i64();
+ t1 = tcg_temp_new_i64();
+ t2 = tcg_temp_new_i64();
+ qc = tcg_temp_new_i64();
+ read_vec_element(s, t1, a->rn, 0, esz | sgn_n);
+ read_vec_element(s, t2, a->rm, 0, esz | sgn_m);
+ tcg_gen_ld_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
+
+ if (esz == MO_64) {
+ gen_d(t0, qc, t1, t2);
+ } else {
+ gen_bhs(t0, qc, t1, t2, esz);
+ tcg_gen_ext_i64(t0, t0, esz);
+ }
+
+ write_fp_dreg(s, a->rd, t0);
+ tcg_gen_st_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
+ return true;
+}
+
+TRANS(SQADD_s, do_satacc_s, a, MO_SIGN, MO_SIGN, gen_sqadd_bhs, gen_sqadd_d)
+TRANS(SQSUB_s, do_satacc_s, a, MO_SIGN, MO_SIGN, gen_sqsub_bhs, gen_sqsub_d)
+TRANS(UQADD_s, do_satacc_s, a, 0, 0, gen_uqadd_bhs, gen_uqadd_d)
+TRANS(UQSUB_s, do_satacc_s, a, 0, 0, gen_uqsub_bhs, gen_uqsub_d)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5298,6 +5335,11 @@ TRANS(BSL_v, do_bitsel, a->q, a->rd, a->rd, a->rn, a->rm)
TRANS(BIT_v, do_bitsel, a->q, a->rd, a->rm, a->rn, a->rd)
TRANS(BIF_v, do_bitsel, a->q, a->rd, a->rm, a->rd, a->rn)
+TRANS(SQADD_v, do_gvec_fn3, a, gen_gvec_sqadd_qc)
+TRANS(UQADD_v, do_gvec_fn3, a, gen_gvec_uqadd_qc)
+TRANS(SQSUB_v, do_gvec_fn3, a, gen_gvec_sqsub_qc)
+TRANS(UQSUB_v, do_gvec_fn3, a, gen_gvec_uqsub_qc)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -9291,29 +9333,8 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
* or scalar-three-reg-same groups.
*/
TCGCond cond;
- TCGv_i64 qc;
switch (opcode) {
- case 0x1: /* SQADD */
- qc = tcg_temp_new_i64();
- tcg_gen_ld_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
- if (u) {
- gen_uqadd_d(tcg_rd, qc, tcg_rn, tcg_rm);
- } else {
- gen_sqadd_d(tcg_rd, qc, tcg_rn, tcg_rm);
- }
- tcg_gen_st_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
- break;
- case 0x5: /* SQSUB */
- qc = tcg_temp_new_i64();
- tcg_gen_ld_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
- if (u) {
- gen_uqsub_d(tcg_rd, qc, tcg_rn, tcg_rm);
- } else {
- gen_sqsub_d(tcg_rd, qc, tcg_rn, tcg_rm);
- }
- tcg_gen_st_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
- break;
case 0x6: /* CMGT, CMHI */
cond = u ? TCG_COND_GTU : TCG_COND_GT;
do_cmop:
@@ -9366,6 +9387,8 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
}
break;
default:
+ case 0x1: /* SQADD / UQADD */
+ case 0x5: /* SQSUB / UQSUB */
g_assert_not_reached();
}
}
@@ -9387,8 +9410,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
TCGv_i64 tcg_rd;
switch (opcode) {
- case 0x1: /* SQADD, UQADD */
- case 0x5: /* SQSUB, UQSUB */
case 0x9: /* SQSHL, UQSHL */
case 0xb: /* SQRSHL, UQRSHL */
break;
@@ -9410,6 +9431,8 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
}
break;
default:
+ case 0x1: /* SQADD, UQADD */
+ case 0x5: /* SQSUB, UQSUB */
unallocated_encoding(s);
return;
}
@@ -9436,12 +9459,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
void (*genfn)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_i64, MemOp) = NULL;
switch (opcode) {
- case 0x1: /* SQADD, UQADD */
- genfn = u ? gen_uqadd_bhs : gen_sqadd_bhs;
- break;
- case 0x5: /* SQSUB, UQSUB */
- genfn = u ? gen_uqsub_bhs : gen_sqsub_bhs;
- break;
case 0x9: /* SQSHL, UQSHL */
{
static NeonGenTwoOpEnvFn * const fns[3][2] = {
@@ -9473,6 +9490,8 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
break;
}
default:
+ case 0x1: /* SQADD, UQADD */
+ case 0x5: /* SQSUB, UQSUB */
g_assert_not_reached();
}
@@ -10933,6 +10952,11 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
return;
}
break;
+
+ case 0x01: /* SQADD, UQADD */
+ case 0x05: /* SQSUB, UQSUB */
+ unallocated_encoding(s);
+ return;
}
if (!fp_access_check(s)) {
@@ -10940,20 +10964,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
- case 0x01: /* SQADD, UQADD */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_uqadd_qc, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sqadd_qc, size);
- }
- return;
- case 0x05: /* SQSUB, UQSUB */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_uqsub_qc, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sqsub_qc, size);
- }
- return;
case 0x08: /* SSHL, USHL */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_ushl, size);
@@ -11038,6 +11048,10 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
vec_full_reg_offset(s, rm),
is_q ? 16 : 8, vec_full_reg_size(s));
return;
+
+ case 0x01: /* SQADD, UQADD */
+ case 0x05: /* SQSUB, UQSUB */
+ g_assert_not_reached();
}
if (size == 3) {
--
2.34.1
* [PATCH v2 42/67] target/arm: Convert SUQADD, USQADD to decodetree
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
These are faux two-operand instructions, reading from rd as well as writing it.
Sort them next to the other three-register-same insns for clarity.
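As a rough standalone model of the element semantics (illustrative only,
not QEMU code), showing why rd doubles as a source: it is the accumulator,
which the decode maps into the rn operand slot via rn=%rd.

#include <stdint.h>
#include <stdio.h>

/* SUQADD: signed accumulator plus unsigned addend, signed saturation.
 * Only the upper bound can be hit, since the addend is non-negative. */
static int8_t suqadd8(int8_t rd, uint8_t rn)
{
    int16_t t = (int16_t)rd + rn;        /* widen: cannot overflow */
    return t > INT8_MAX ? INT8_MAX : (int8_t)t;
}

/* USQADD: unsigned accumulator plus signed addend, unsigned saturation. */
static uint8_t usqadd8(uint8_t rd, int8_t rn)
{
    int16_t t = (int16_t)rd + rn;
    return t < 0 ? 0 : t > UINT8_MAX ? UINT8_MAX : (uint8_t)t;
}

int main(void)
{
    printf("%d %d\n", suqadd8(100, 200), usqadd8(10, -50)); /* 127 0 */
    return 0;
}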
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 8 +++++
target/arm/tcg/translate-a64.c | 64 ++++------------------------------
2 files changed, 14 insertions(+), 58 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 19010af03b..7c350ba833 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -45,6 +45,7 @@
@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
@rrr_hsd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_hsd
@rrr_e ........ esz:2 . rm:5 ...... rn:5 rd:5 &rrr_e
+@r2r_e ........ esz:2 . ..... ...... rm:5 rd:5 &rrr_e rn=%rd
@rrx_h ........ .. .. rm:4 .... . . rn:5 rd:5 &rrx_e esz=1 idx=%hlm
@rrx_s ........ .. . rm:5 .... . . rn:5 rd:5 &rrx_e esz=2 idx=%hl
@@ -60,6 +61,7 @@
@qrrr_h . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=1
@qrrr_sd . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=%esz_sd
@qrrr_e . q:1 ...... esz:2 . rm:5 ...... rn:5 rd:5 &qrrr_e
+@qr2r_e . q:1 ...... esz:2 . ..... ...... rm:5 rd:5 &qrrr_e rn=%rd
@qrrx_h . q:1 .. .... .. .. rm:4 .... . . rn:5 rd:5 \
&qrrx_e esz=1 idx=%hlm
@@ -750,6 +752,9 @@ UQADD_s 0111 1110 ..1 ..... 00001 1 ..... ..... @rrr_e
SQSUB_s 0101 1110 ..1 ..... 00101 1 ..... ..... @rrr_e
UQSUB_s 0111 1110 ..1 ..... 00101 1 ..... ..... @rrr_e
+SUQADD_s 0101 1110 ..1 00000 00111 0 ..... ..... @r2r_e
+USQADD_s 0111 1110 ..1 00000 00111 0 ..... ..... @r2r_e
+
### Advanced SIMD scalar pairwise
FADDP_s 0101 1110 0011 0000 1101 10 ..... ..... @rr_h
@@ -868,6 +873,9 @@ UQADD_v 0.10 1110 ..1 ..... 00001 1 ..... ..... @qrrr_e
SQSUB_v 0.00 1110 ..1 ..... 00101 1 ..... ..... @qrrr_e
UQSUB_v 0.10 1110 ..1 ..... 00101 1 ..... ..... @qrrr_e
+SUQADD_v 0.00 1110 ..1 00000 00111 0 ..... ..... @qr2r_e
+USQADD_v 0.10 1110 ..1 00000 00111 0 ..... ..... @qr2r_e
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 2f7298811d..fbcf18f92a 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5096,6 +5096,8 @@ TRANS(SQADD_s, do_satacc_s, a, MO_SIGN, MO_SIGN, gen_sqadd_bhs, gen_sqadd_d)
TRANS(SQSUB_s, do_satacc_s, a, MO_SIGN, MO_SIGN, gen_sqsub_bhs, gen_sqsub_d)
TRANS(UQADD_s, do_satacc_s, a, 0, 0, gen_uqadd_bhs, gen_uqadd_d)
TRANS(UQSUB_s, do_satacc_s, a, 0, 0, gen_uqsub_bhs, gen_uqsub_d)
+TRANS(SUQADD_s, do_satacc_s, a, MO_SIGN, 0, gen_suqadd_bhs, gen_suqadd_d)
+TRANS(USQADD_s, do_satacc_s, a, 0, MO_SIGN, gen_usqadd_bhs, gen_usqadd_d)
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
@@ -5339,6 +5341,8 @@ TRANS(SQADD_v, do_gvec_fn3, a, gen_gvec_sqadd_qc)
TRANS(UQADD_v, do_gvec_fn3, a, gen_gvec_uqadd_qc)
TRANS(SQSUB_v, do_gvec_fn3, a, gen_gvec_sqsub_qc)
TRANS(UQSUB_v, do_gvec_fn3, a, gen_gvec_uqsub_qc)
+TRANS(SUQADD_v, do_gvec_fn3, a, gen_gvec_suqadd_qc)
+TRANS(USQADD_v, do_gvec_fn3, a, gen_gvec_usqadd_qc)
/*
* Advanced SIMD scalar/vector x indexed element
@@ -10009,48 +10013,6 @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
clear_vec_high(s, is_q, rd);
}
-/* Remaining saturating accumulating ops */
-static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
- bool is_q, unsigned size, int rn, int rd)
-{
- TCGv_i64 res, qc, a, b;
-
- if (!is_scalar) {
- gen_gvec_fn3(s, is_q, rd, rd, rn,
- is_u ? gen_gvec_usqadd_qc : gen_gvec_suqadd_qc, size);
- return;
- }
-
- res = tcg_temp_new_i64();
- qc = tcg_temp_new_i64();
- a = tcg_temp_new_i64();
- b = tcg_temp_new_i64();
-
- /* Read and extend scalar inputs to 64-bits. */
- read_vec_element(s, a, rd, 0, size | (is_u ? 0 : MO_SIGN));
- read_vec_element(s, b, rn, 0, size | (is_u ? MO_SIGN : 0));
- tcg_gen_ld_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
-
- if (size == MO_64) {
- if (is_u) {
- gen_usqadd_d(res, qc, a, b);
- } else {
- gen_suqadd_d(res, qc, a, b);
- }
- } else {
- if (is_u) {
- gen_usqadd_bhs(res, qc, a, b, size);
- } else {
- gen_suqadd_bhs(res, qc, a, b, size);
- /* Truncate signed 64-bit result for writeback. */
- tcg_gen_ext_i64(res, res, size);
- }
- }
-
- write_fp_dreg(s, rd, res);
- tcg_gen_st_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
-}
-
/* AdvSIMD scalar two reg misc
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
* +-----+---+-----------+------+-----------+--------+-----+------+------+
@@ -10070,12 +10032,6 @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
TCGv_ptr tcg_fpstatus;
switch (opcode) {
- case 0x3: /* USQADD / SUQADD*/
- if (!fp_access_check(s)) {
- return;
- }
- handle_2misc_satacc(s, true, u, false, size, rn, rd);
- return;
case 0x7: /* SQABS / SQNEG */
break;
case 0xa: /* CMLT */
@@ -10175,6 +10131,7 @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
}
break;
default:
+ case 0x3: /* USQADD / SUQADD */
unallocated_encoding(s);
return;
}
@@ -11666,16 +11623,6 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
return;
}
break;
- case 0x3: /* SUQADD, USQADD */
- if (size == 3 && !is_q) {
- unallocated_encoding(s);
- return;
- }
- if (!fp_access_check(s)) {
- return;
- }
- handle_2misc_satacc(s, false, u, is_q, size, rn, rd);
- return;
case 0x7: /* SQABS, SQNEG */
if (size == 3 && !is_q) {
unallocated_encoding(s);
@@ -11850,6 +11797,7 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
break;
}
default:
+ case 0x3: /* SUQADD, USQADD */
unallocated_encoding(s);
return;
}
--
2.34.1
* [PATCH v2 43/67] target/arm: Convert SSHL, USHL to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (41 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 42/67] target/arm: Convert SUQADD, USQADD " Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-28 15:46 ` Peter Maydell
2024-05-24 23:20 ` [PATCH v2 44/67] target/arm: Convert SRSHL and URSHL (register) to gvec Richard Henderson
` (24 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 7 ++++++
target/arm/tcg/translate-a64.c | 40 +++++++++++++++++++++-------------
2 files changed, 32 insertions(+), 15 deletions(-)
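For context, a sketch (not QEMU code) of the per-element SSHL
semantics: the shift count is the signed low byte of the second
operand, so one instruction shifts left or right depending on sign.

static int64_t sshl64(int64_t val, int64_t shift_elt)
{
    int8_t shift = shift_elt;          /* only the low byte is used */

    if (shift >= 64) {
        return 0;                      /* all bits shifted out */
    }
    if (shift >= 0) {
        return (uint64_t)val << shift;
    }
    if (shift <= -64) {
        shift = -63;                   /* clamps result to 0 or -1 */
    }
    return val >> -shift;              /* arithmetic right shift */
}

USHL is identical except that negative counts use a logical right
shift.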
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 7c350ba833..ea897d6732 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -42,6 +42,7 @@
@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
+@rrr_d ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=3
@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
@rrr_hsd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_hsd
@rrr_e ........ esz:2 . rm:5 ...... rn:5 rd:5 &rrr_e
@@ -755,6 +756,9 @@ UQSUB_s 0111 1110 ..1 ..... 00101 1 ..... ..... @rrr_e
SUQADD_s 0101 1110 ..1 00000 00111 0 ..... ..... @r2r_e
USQADD_s 0111 1110 ..1 00000 00111 0 ..... ..... @r2r_e
+SSHL_s 0101 1110 111 ..... 01000 1 ..... ..... @rrr_d
+USHL_s 0111 1110 111 ..... 01000 1 ..... ..... @rrr_d
+
### Advanced SIMD scalar pairwise
FADDP_s 0101 1110 0011 0000 1101 10 ..... ..... @rr_h
@@ -876,6 +880,9 @@ UQSUB_v 0.10 1110 ..1 ..... 00101 1 ..... ..... @qrrr_e
SUQADD_v 0.00 1110 ..1 00000 00111 0 ..... ..... @qr2r_e
USQADD_v 0.10 1110 ..1 00000 00111 0 ..... ..... @qr2r_e
+SSHL_v 0.00 1110 ..1 ..... 01000 1 ..... ..... @qrrr_e
+USHL_v 0.10 1110 ..1 ..... 01000 1 ..... ..... @qrrr_e
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index fbcf18f92a..8d39a9663e 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5099,6 +5099,24 @@ TRANS(UQSUB_s, do_satacc_s, a, 0, 0, gen_uqsub_bhs, gen_uqsub_d)
TRANS(SUQADD_s, do_satacc_s, a, MO_SIGN, 0, gen_suqadd_bhs, gen_suqadd_d)
TRANS(USQADD_s, do_satacc_s, a, 0, MO_SIGN, gen_usqadd_bhs, gen_usqadd_d)
+static bool do_int3_scalar_d(DisasContext *s, arg_rrr_e *a,
+ void (*fn)(TCGv_i64, TCGv_i64, TCGv_i64))
+{
+ if (fp_access_check(s)) {
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ TCGv_i64 t1 = tcg_temp_new_i64();
+
+ read_vec_element(s, t0, a->rn, 0, MO_64);
+ read_vec_element(s, t1, a->rm, 0, MO_64);
+ fn(t0, t0, t1);
+ write_fp_dreg(s, a->rd, t0);
+ }
+ return true;
+}
+
+TRANS(SSHL_s, do_int3_scalar_d, a, gen_sshl_i64)
+TRANS(USHL_s, do_int3_scalar_d, a, gen_ushl_i64)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5344,6 +5362,10 @@ TRANS(UQSUB_v, do_gvec_fn3, a, gen_gvec_uqsub_qc)
TRANS(SUQADD_v, do_gvec_fn3, a, gen_gvec_suqadd_qc)
TRANS(USQADD_v, do_gvec_fn3, a, gen_gvec_usqadd_qc)
+TRANS(SSHL_v, do_gvec_fn3, a, gen_gvec_sshl)
+TRANS(USHL_v, do_gvec_fn3, a, gen_gvec_ushl)
+
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -9355,13 +9377,6 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
}
gen_cmtst_i64(tcg_rd, tcg_rn, tcg_rm);
break;
- case 0x8: /* SSHL, USHL */
- if (u) {
- gen_ushl_i64(tcg_rd, tcg_rn, tcg_rm);
- } else {
- gen_sshl_i64(tcg_rd, tcg_rn, tcg_rm);
- }
- break;
case 0x9: /* SQSHL, UQSHL */
if (u) {
gen_helper_neon_qshl_u64(tcg_rd, tcg_env, tcg_rn, tcg_rm);
@@ -9393,6 +9408,7 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
default:
case 0x1: /* SQADD / UQADD */
case 0x5: /* SQSUB / UQSUB */
+ case 0x8: /* SSHL, USHL */
g_assert_not_reached();
}
}
@@ -9417,7 +9433,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
case 0x9: /* SQSHL, UQSHL */
case 0xb: /* SQRSHL, UQRSHL */
break;
- case 0x8: /* SSHL, USHL */
case 0xa: /* SRSHL, URSHL */
case 0x6: /* CMGT, CMHI */
case 0x7: /* CMGE, CMHS */
@@ -9437,6 +9452,7 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
default:
case 0x1: /* SQADD, UQADD */
case 0x5: /* SQSUB, UQSUB */
+ case 0x8: /* SSHL, USHL */
unallocated_encoding(s);
return;
}
@@ -10921,13 +10937,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
- case 0x08: /* SSHL, USHL */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_ushl, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sshl, size);
- }
- return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
@@ -11008,6 +11017,7 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
case 0x01: /* SQADD, UQADD */
case 0x05: /* SQSUB, UQSUB */
+ case 0x08: /* SSHL, USHL */
g_assert_not_reached();
}
--
2.34.1
* [PATCH v2 44/67] target/arm: Convert SRSHL and URSHL (register) to gvec
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (42 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 43/67] target/arm: Convert SSHL, USHL " Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-28 15:51 ` Peter Maydell
2024-05-24 23:20 ` [PATCH v2 45/67] target/arm: Convert SRSHL, URSHL to decodetree Richard Henderson
` (23 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 10 +++++++++
target/arm/tcg/translate.h | 4 ++++
target/arm/tcg/neon-dp.decode | 10 ++-------
target/arm/tcg/gengvec.c | 22 +++++++++++++++++++
target/arm/tcg/neon_helper.c | 38 ++++++++++++++++++++++++++++++++-
target/arm/tcg/translate-a64.c | 17 ++++++---------
target/arm/tcg/translate-neon.c | 6 ++----
7 files changed, 84 insertions(+), 23 deletions(-)
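The new NEON_GVEC_VOP2 macro stamps out flat element loops over the
existing do_{s,u}qrshl_* primitives; for gvec_srshl_b it expands to
roughly:

void HELPER(gvec_srshl_b)(void *vd, void *vn, void *vm, uint32_t desc)
{
    intptr_t i, opr_sz = simd_oprsz(desc);
    int8_t *d = vd, *n = vn, *m = vm;

    for (i = 0; i < opr_sz / sizeof(int8_t); i++) {
        /* round=true, qc=NULL: rounding shift, no saturation */
        d[i] = do_sqrshl_bhs(n[i], (int8_t)m[i], 8, true, NULL);
    }
    clear_tail(d, opr_sz, simd_maxsz(desc));
}

"Rounding" means a right shift by n adds 1 << (n - 1) before
shifting, e.g. 7 shifted right by 2 truncates to 1 but rounds to
(7 + 2) >> 2 = 2.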
diff --git a/target/arm/helper.h b/target/arm/helper.h
index a14c040451..25eb7bf5df 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -327,6 +327,16 @@ DEF_HELPER_3(neon_qrshl_s32, i32, env, i32, i32)
DEF_HELPER_3(neon_qrshl_u64, i64, env, i64, i64)
DEF_HELPER_3(neon_qrshl_s64, i64, env, i64, i64)
+DEF_HELPER_FLAGS_4(gvec_srshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_srshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_srshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_srshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(gvec_urshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_urshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_urshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_urshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
DEF_HELPER_2(neon_add_u8, i32, i32, i32)
DEF_HELPER_2(neon_add_u16, i32, i32, i32)
DEF_HELPER_2(neon_sub_u8, i32, i32, i32)
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index 87439dcc61..ea63ffc47b 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -459,6 +459,10 @@ void gen_gvec_sshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_ushl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_srshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_urshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
void gen_ushl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
diff --git a/target/arm/tcg/neon-dp.decode b/target/arm/tcg/neon-dp.decode
index fd3a01bfa0..8525c65c0d 100644
--- a/target/arm/tcg/neon-dp.decode
+++ b/target/arm/tcg/neon-dp.decode
@@ -117,14 +117,8 @@ VSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 0 .... @3same_rev
VQSHL_U64_3s 1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_64_rev
VQSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_rev
}
-{
- VRSHL_S64_3s 1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_64_rev
- VRSHL_S_3s 1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_rev
-}
-{
- VRSHL_U64_3s 1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_64_rev
- VRSHL_U_3s 1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_rev
-}
+VRSHL_S_3s 1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_rev
+VRSHL_U_3s 1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_rev
{
VQRSHL_S64_3s 1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_64_rev
VQRSHL_S_3s 1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_rev
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index 66a514ba86..d9a9132722 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -1217,6 +1217,28 @@ void gen_gvec_sshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
+void gen_gvec_srshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3 * const fns[] = {
+ gen_helper_gvec_srshl_b, gen_helper_gvec_srshl_h,
+ gen_helper_gvec_srshl_s, gen_helper_gvec_srshl_d,
+ };
+ tcg_debug_assert(vece <= MO_64);
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
+}
+
+void gen_gvec_urshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3 * const fns[] = {
+ gen_helper_gvec_urshl_b, gen_helper_gvec_urshl_h,
+ gen_helper_gvec_urshl_s, gen_helper_gvec_urshl_d,
+ };
+ tcg_debug_assert(vece <= MO_64);
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
+}
+
void gen_uqadd_bhs(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b, MemOp esz)
{
uint64_t max = MAKE_64BIT_MASK(0, 8 << esz);
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
index 0af15e9f6e..516ecc1dcb 100644
--- a/target/arm/tcg/neon_helper.c
+++ b/target/arm/tcg/neon_helper.c
@@ -6,10 +6,11 @@
*
* This code is licensed under the GNU GPL v2.
*/
-#include "qemu/osdep.h"
+#include "qemu/osdep.h"
#include "cpu.h"
#include "exec/helper-proto.h"
+#include "tcg/tcg-gvec-desc.h"
#include "fpu/softfloat.h"
#include "vec_internal.h"
@@ -117,6 +118,17 @@ NEON_VOP_BODY(vtype, n)
uint32_t HELPER(glue(neon_,name))(CPUARMState *env, uint32_t arg1, uint32_t arg2) \
NEON_VOP_BODY(vtype, n)
+#define NEON_GVEC_VOP2(name, vtype) \
+void HELPER(name)(void *vd, void *vn, void *vm, uint32_t desc) \
+{ \
+ intptr_t i, opr_sz = simd_oprsz(desc); \
+ vtype *d = vd, *n = vn, *m = vm; \
+ for (i = 0; i < opr_sz / sizeof(vtype); i++) { \
+ NEON_FN(d[i], n[i], m[i]); \
+ } \
+ clear_tail(d, opr_sz, simd_maxsz(desc)); \
+}
+
/* Pairwise operations. */
/* For 32-bit elements each segment only contains a single element, so
the elementwise and pairwise operations are the same. */
@@ -263,11 +275,23 @@ NEON_VOP(shl_s16, neon_s16, 2)
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 8, true, NULL))
NEON_VOP(rshl_s8, neon_s8, 4)
+NEON_GVEC_VOP2(gvec_srshl_b, int8_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 16, true, NULL))
NEON_VOP(rshl_s16, neon_s16, 2)
+NEON_GVEC_VOP2(gvec_srshl_h, int16_t)
+#undef NEON_FN
+
+#define NEON_FN(dest, src1, src2) \
+ (dest = do_sqrshl_bhs(src1, (int8_t)src2, 32, true, NULL))
+NEON_GVEC_VOP2(gvec_srshl_s, int32_t)
+#undef NEON_FN
+
+#define NEON_FN(dest, src1, src2) \
+ (dest = do_sqrshl_d(src1, (int8_t)src2, true, NULL))
+NEON_GVEC_VOP2(gvec_srshl_d, int64_t)
#undef NEON_FN
uint32_t HELPER(neon_rshl_s32)(uint32_t val, uint32_t shift)
@@ -283,11 +307,23 @@ uint64_t HELPER(neon_rshl_s64)(uint64_t val, uint64_t shift)
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 8, true, NULL))
NEON_VOP(rshl_u8, neon_u8, 4)
+NEON_GVEC_VOP2(gvec_urshl_b, uint8_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 16, true, NULL))
NEON_VOP(rshl_u16, neon_u16, 2)
+NEON_GVEC_VOP2(gvec_urshl_h, uint16_t)
+#undef NEON_FN
+
+#define NEON_FN(dest, src1, src2) \
+ (dest = do_uqrshl_bhs(src1, (int8_t)src2, 32, true, NULL))
+NEON_GVEC_VOP2(gvec_urshl_s, int32_t)
+#undef NEON_FN
+
+#define NEON_FN(dest, src1, src2) \
+ (dest = do_uqrshl_d(src1, (int8_t)src2, true, NULL))
+NEON_GVEC_VOP2(gvec_urshl_d, int64_t)
#undef NEON_FN
uint32_t HELPER(neon_rshl_u32)(uint32_t val, uint32_t shift)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 8d39a9663e..2dffda36a8 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -10937,6 +10937,13 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
+ case 0x0a: /* SRSHL, URSHL */
+ if (u) {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_urshl, size);
+ } else {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_srshl, size);
+ }
+ return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
@@ -11087,16 +11094,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
genenvfn = fns[size][u];
break;
}
- case 0xa: /* SRSHL, URSHL */
- {
- static NeonGenTwoOpFn * const fns[3][2] = {
- { gen_helper_neon_rshl_s8, gen_helper_neon_rshl_u8 },
- { gen_helper_neon_rshl_s16, gen_helper_neon_rshl_u16 },
- { gen_helper_neon_rshl_s32, gen_helper_neon_rshl_u32 },
- };
- genfn = fns[size][u];
- break;
- }
case 0xb: /* SQRSHL, UQRSHL */
{
static NeonGenTwoOpEnvFn * const fns[3][2] = {
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index 18b048611b..337488bbf1 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -794,6 +794,8 @@ DO_3SAME(VQADD_S, gen_gvec_sqadd_qc)
DO_3SAME(VQADD_U, gen_gvec_uqadd_qc)
DO_3SAME(VQSUB_S, gen_gvec_sqsub_qc)
DO_3SAME(VQSUB_U, gen_gvec_uqsub_qc)
+DO_3SAME(VRSHL_S, gen_gvec_srshl)
+DO_3SAME(VRSHL_U, gen_gvec_urshl)
/* These insns are all gvec_bitsel but with the inputs in various orders. */
#define DO_3SAME_BITSEL(INSN, O1, O2, O3) \
@@ -929,8 +931,6 @@ DO_SHA2(SHA256SU1, gen_helper_crypto_sha256su1)
} \
DO_3SAME_64(INSN, gen_##INSN##_elt)
-DO_3SAME_64(VRSHL_S64, gen_helper_neon_rshl_s64)
-DO_3SAME_64(VRSHL_U64, gen_helper_neon_rshl_u64)
DO_3SAME_64_ENV(VQSHL_S64, gen_helper_neon_qshl_s64)
DO_3SAME_64_ENV(VQSHL_U64, gen_helper_neon_qshl_u64)
DO_3SAME_64_ENV(VQRSHL_S64, gen_helper_neon_qrshl_s64)
@@ -999,8 +999,6 @@ DO_3SAME_32(VHSUB_S, hsub_s)
DO_3SAME_32(VHSUB_U, hsub_u)
DO_3SAME_32(VRHADD_S, rhadd_s)
DO_3SAME_32(VRHADD_U, rhadd_u)
-DO_3SAME_32(VRSHL_S, rshl_s)
-DO_3SAME_32(VRSHL_U, rshl_u)
DO_3SAME_32_ENV(VQSHL_S, qshl_s)
DO_3SAME_32_ENV(VQSHL_U, qshl_u)
--
2.34.1
* [PATCH v2 45/67] target/arm: Convert SRSHL, URSHL to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (43 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 44/67] target/arm: Convert SRSHL and URSHL (register) to gvec Richard Henderson
@ 2024-05-24 23:20 ` Richard Henderson
2024-05-28 15:51 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 46/67] target/arm: Convert SQSHL and UQSHL (register) to gvec Richard Henderson
` (22 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:20 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 4 ++++
target/arm/tcg/translate-a64.c | 22 +++++++---------------
2 files changed, 11 insertions(+), 15 deletions(-)
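The scalar forms can pass gen_helper_neon_rshl_{s,u}64 straight to
do_int3_scalar_d because the rounding-shift helpers take no env
pointer: rounding alone cannot saturate, so there is no QC side
effect. As a reading aid for the new decode lines (field layout only):

# SRSHL_s 0101 1110 111 mmmmm 01010 1 nnnnn ddddd
# @rrr_d fixes esz=3, so trans_SRSHL_s receives {rd, rn, rm, esz=MO_64}.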
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index ea897d6732..9e02776036 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -758,6 +758,8 @@ USQADD_s 0111 1110 ..1 00000 00111 0 ..... ..... @r2r_e
SSHL_s 0101 1110 111 ..... 01000 1 ..... ..... @rrr_d
USHL_s 0111 1110 111 ..... 01000 1 ..... ..... @rrr_d
+SRSHL_s 0101 1110 111 ..... 01010 1 ..... ..... @rrr_d
+URSHL_s 0111 1110 111 ..... 01010 1 ..... ..... @rrr_d
### Advanced SIMD scalar pairwise
@@ -882,6 +884,8 @@ USQADD_v 0.10 1110 ..1 00000 00111 0 ..... ..... @qr2r_e
SSHL_v 0.00 1110 ..1 ..... 01000 1 ..... ..... @qrrr_e
USHL_v 0.10 1110 ..1 ..... 01000 1 ..... ..... @qrrr_e
+SRSHL_v 0.00 1110 ..1 ..... 01010 1 ..... ..... @qrrr_e
+URSHL_v 0.10 1110 ..1 ..... 01010 1 ..... ..... @qrrr_e
### Advanced SIMD scalar x indexed element
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 2dffda36a8..24f2025997 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5116,6 +5116,8 @@ static bool do_int3_scalar_d(DisasContext *s, arg_rrr_e *a,
TRANS(SSHL_s, do_int3_scalar_d, a, gen_sshl_i64)
TRANS(USHL_s, do_int3_scalar_d, a, gen_ushl_i64)
+TRANS(SRSHL_s, do_int3_scalar_d, a, gen_helper_neon_rshl_s64)
+TRANS(URSHL_s, do_int3_scalar_d, a, gen_helper_neon_rshl_u64)
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
@@ -5364,6 +5366,8 @@ TRANS(USQADD_v, do_gvec_fn3, a, gen_gvec_usqadd_qc)
TRANS(SSHL_v, do_gvec_fn3, a, gen_gvec_sshl)
TRANS(USHL_v, do_gvec_fn3, a, gen_gvec_ushl)
+TRANS(SRSHL_v, do_gvec_fn3, a, gen_gvec_srshl)
+TRANS(URSHL_v, do_gvec_fn3, a, gen_gvec_urshl)
/*
@@ -9384,13 +9388,6 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
gen_helper_neon_qshl_s64(tcg_rd, tcg_env, tcg_rn, tcg_rm);
}
break;
- case 0xa: /* SRSHL, URSHL */
- if (u) {
- gen_helper_neon_rshl_u64(tcg_rd, tcg_rn, tcg_rm);
- } else {
- gen_helper_neon_rshl_s64(tcg_rd, tcg_rn, tcg_rm);
- }
- break;
case 0xb: /* SQRSHL, UQRSHL */
if (u) {
gen_helper_neon_qrshl_u64(tcg_rd, tcg_env, tcg_rn, tcg_rm);
@@ -9409,6 +9406,7 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
case 0x1: /* SQADD / UQADD */
case 0x5: /* SQSUB / UQSUB */
case 0x8: /* SSHL, USHL */
+ case 0xa: /* SRSHL, URSHL */
g_assert_not_reached();
}
}
@@ -9433,7 +9431,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
case 0x9: /* SQSHL, UQSHL */
case 0xb: /* SQRSHL, UQRSHL */
break;
- case 0xa: /* SRSHL, URSHL */
case 0x6: /* CMGT, CMHI */
case 0x7: /* CMGE, CMHS */
case 0x11: /* CMTST, CMEQ */
@@ -9453,6 +9450,7 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
case 0x1: /* SQADD, UQADD */
case 0x5: /* SQSUB, UQSUB */
case 0x8: /* SSHL, USHL */
+ case 0xa: /* SRSHL, URSHL */
unallocated_encoding(s);
return;
}
@@ -10937,13 +10935,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
- case 0x0a: /* SRSHL, URSHL */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_urshl, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_srshl, size);
- }
- return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
@@ -11025,6 +11016,7 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
case 0x01: /* SQADD, UQADD */
case 0x05: /* SQSUB, UQSUB */
case 0x08: /* SSHL, USHL */
+ case 0x0a: /* SRSHL, URSHL */
g_assert_not_reached();
}
--
2.34.1
* [PATCH v2 46/67] target/arm: Convert SQSHL and UQSHL (register) to gvec
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (44 preceding siblings ...)
2024-05-24 23:20 ` [PATCH v2 45/67] target/arm: Convert SRSHL, URSHL to decodetree Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 15:53 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 47/67] target/arm: Convert SQSHL, UQSHL to decodetree Richard Henderson
` (21 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 8 ++++++++
target/arm/tcg/translate.h | 4 ++++
target/arm/tcg/neon-dp.decode | 10 ++-------
target/arm/tcg/gengvec.c | 24 ++++++++++++++++++++++
target/arm/tcg/neon_helper.c | 36 +++++++++++++++++++++++++++++++++
target/arm/tcg/translate-a64.c | 17 +++++++---------
target/arm/tcg/translate-neon.c | 6 ++----
7 files changed, 83 insertions(+), 22 deletions(-)
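Unlike the rounding shifts of the previous patch, these helpers must
update FPSR.QC, hence the new NEON_GVEC_VOP2_ENV variant that also
receives the CPU env. A sketch (not QEMU code) of the byte-wide
saturating shift each element performs:

static int8_t sqshl8(int8_t val, int8_t shift, bool *qc)
{
    if (shift <= -8) {
        return val >> 7;               /* sign fill: 0 or -1 */
    }
    if (shift < 0) {
        return val >> -shift;          /* plain arithmetic shift */
    }
    if (shift >= 8) {
        if (val == 0) {
            return 0;
        }
        *qc = true;                    /* every bit shifted out */
        return val < 0 ? INT8_MIN : INT8_MAX;
    }
    int32_t res = (int32_t)val << shift;
    if (res != (int8_t)res) {
        *qc = true;                    /* overflow: saturate */
        return val < 0 ? INT8_MIN : INT8_MAX;
    }
    return res;
}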
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 25eb7bf5df..f345087ddb 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -326,6 +326,14 @@ DEF_HELPER_3(neon_qrshl_u32, i32, env, i32, i32)
DEF_HELPER_3(neon_qrshl_s32, i32, env, i32, i32)
DEF_HELPER_3(neon_qrshl_u64, i64, env, i64, i64)
DEF_HELPER_3(neon_qrshl_s64, i64, env, i64, i64)
+DEF_HELPER_FLAGS_5(neon_sqshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_sqshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_sqshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_sqshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_uqshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_uqshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_uqshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_uqshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_srshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_srshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index ea63ffc47b..6c6d4d49e7 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -463,6 +463,10 @@ void gen_gvec_srshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_urshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_neon_sqshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_neon_uqshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
void gen_ushl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
diff --git a/target/arm/tcg/neon-dp.decode b/target/arm/tcg/neon-dp.decode
index 8525c65c0d..6d4996b8d8 100644
--- a/target/arm/tcg/neon-dp.decode
+++ b/target/arm/tcg/neon-dp.decode
@@ -109,14 +109,8 @@ VSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 0 .... @3same_rev
@3same_64_rev .... ... . . . 11 .... .... .... . q:1 . . .... \
&3same vm=%vn_dp vn=%vm_dp vd=%vd_dp size=3
-{
- VQSHL_S64_3s 1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_64_rev
- VQSHL_S_3s 1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_rev
-}
-{
- VQSHL_U64_3s 1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_64_rev
- VQSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_rev
-}
+VQSHL_S_3s 1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_rev
+VQSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_rev
VRSHL_S_3s 1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_rev
VRSHL_U_3s 1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_rev
{
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index d9a9132722..773dbf41d3 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -1239,6 +1239,30 @@ void gen_gvec_urshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
}
+void gen_neon_sqshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3_ptr * const fns[] = {
+ gen_helper_neon_sqshl_b, gen_helper_neon_sqshl_h,
+ gen_helper_neon_sqshl_s, gen_helper_neon_sqshl_d,
+ };
+ tcg_debug_assert(vece <= MO_64);
+ tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, tcg_env,
+ opr_sz, max_sz, 0, fns[vece]);
+}
+
+void gen_neon_uqshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3_ptr * const fns[] = {
+ gen_helper_neon_uqshl_b, gen_helper_neon_uqshl_h,
+ gen_helper_neon_uqshl_s, gen_helper_neon_uqshl_d,
+ };
+ tcg_debug_assert(vece <= MO_64);
+ tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, tcg_env,
+ opr_sz, max_sz, 0, fns[vece]);
+}
+
void gen_uqadd_bhs(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b, MemOp esz)
{
uint64_t max = MAKE_64BIT_MASK(0, 8 << esz);
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
index 516ecc1dcb..88301f0dcb 100644
--- a/target/arm/tcg/neon_helper.c
+++ b/target/arm/tcg/neon_helper.c
@@ -129,6 +129,18 @@ void HELPER(name)(void *vd, void *vn, void *vm, uint32_t desc) \
clear_tail(d, opr_sz, simd_maxsz(desc)); \
}
+#define NEON_GVEC_VOP2_ENV(name, vtype) \
+void HELPER(name)(void *vd, void *vn, void *vm, void *venv, uint32_t desc) \
+{ \
+ intptr_t i, opr_sz = simd_oprsz(desc); \
+ vtype *d = vd, *n = vn, *m = vm; \
+ CPUARMState *env = venv; \
+ for (i = 0; i < opr_sz / sizeof(vtype); i++) { \
+ NEON_FN(d[i], n[i], m[i]); \
+ } \
+ clear_tail(d, opr_sz, simd_maxsz(desc)); \
+}
+
/* Pairwise operations. */
/* For 32-bit elements each segment only contains a single element, so
the elementwise and pairwise operations are the same. */
@@ -339,11 +351,23 @@ uint64_t HELPER(neon_rshl_u64)(uint64_t val, uint64_t shift)
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 8, false, env->vfp.qc))
NEON_VOP_ENV(qshl_u8, neon_u8, 4)
+NEON_GVEC_VOP2_ENV(neon_uqshl_b, uint8_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 16, false, env->vfp.qc))
NEON_VOP_ENV(qshl_u16, neon_u16, 2)
+NEON_GVEC_VOP2_ENV(neon_uqshl_h, uint16_t)
+#undef NEON_FN
+
+#define NEON_FN(dest, src1, src2) \
+ (dest = do_uqrshl_bhs(src1, (int8_t)src2, 32, false, env->vfp.qc))
+NEON_GVEC_VOP2_ENV(neon_uqshl_s, uint32_t)
+#undef NEON_FN
+
+#define NEON_FN(dest, src1, src2) \
+ (dest = do_uqrshl_d(src1, (int8_t)src2, false, env->vfp.qc))
+NEON_GVEC_VOP2_ENV(neon_uqshl_d, uint64_t)
#undef NEON_FN
uint32_t HELPER(neon_qshl_u32)(CPUARMState *env, uint32_t val, uint32_t shift)
@@ -359,11 +383,23 @@ uint64_t HELPER(neon_qshl_u64)(CPUARMState *env, uint64_t val, uint64_t shift)
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 8, false, env->vfp.qc))
NEON_VOP_ENV(qshl_s8, neon_s8, 4)
+NEON_GVEC_VOP2_ENV(neon_sqshl_b, int8_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 16, false, env->vfp.qc))
NEON_VOP_ENV(qshl_s16, neon_s16, 2)
+NEON_GVEC_VOP2_ENV(neon_sqshl_h, int16_t)
+#undef NEON_FN
+
+#define NEON_FN(dest, src1, src2) \
+ (dest = do_sqrshl_bhs(src1, (int8_t)src2, 32, false, env->vfp.qc))
+NEON_GVEC_VOP2_ENV(neon_sqshl_s, int32_t)
+#undef NEON_FN
+
+#define NEON_FN(dest, src1, src2) \
+ (dest = do_sqrshl_d(src1, (int8_t)src2, false, env->vfp.qc))
+NEON_GVEC_VOP2_ENV(neon_sqshl_d, int64_t)
#undef NEON_FN
uint32_t HELPER(neon_qshl_s32)(CPUARMState *env, uint32_t val, uint32_t shift)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 24f2025997..50b653bb4d 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -10935,6 +10935,13 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
+ case 0x09: /* SQSHL, UQSHL */
+ if (u) {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_neon_uqshl, size);
+ } else {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_neon_sqshl, size);
+ }
+ return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
@@ -11076,16 +11083,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
genfn = fns[size][u];
break;
}
- case 0x9: /* SQSHL, UQSHL */
- {
- static NeonGenTwoOpEnvFn * const fns[3][2] = {
- { gen_helper_neon_qshl_s8, gen_helper_neon_qshl_u8 },
- { gen_helper_neon_qshl_s16, gen_helper_neon_qshl_u16 },
- { gen_helper_neon_qshl_s32, gen_helper_neon_qshl_u32 },
- };
- genenvfn = fns[size][u];
- break;
- }
case 0xb: /* SQRSHL, UQRSHL */
{
static NeonGenTwoOpEnvFn * const fns[3][2] = {
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index 337488bbf1..a3eec47092 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -796,6 +796,8 @@ DO_3SAME(VQSUB_S, gen_gvec_sqsub_qc)
DO_3SAME(VQSUB_U, gen_gvec_uqsub_qc)
DO_3SAME(VRSHL_S, gen_gvec_srshl)
DO_3SAME(VRSHL_U, gen_gvec_urshl)
+DO_3SAME(VQSHL_S, gen_neon_sqshl)
+DO_3SAME(VQSHL_U, gen_neon_uqshl)
/* These insns are all gvec_bitsel but with the inputs in various orders. */
#define DO_3SAME_BITSEL(INSN, O1, O2, O3) \
@@ -931,8 +933,6 @@ DO_SHA2(SHA256SU1, gen_helper_crypto_sha256su1)
} \
DO_3SAME_64(INSN, gen_##INSN##_elt)
-DO_3SAME_64_ENV(VQSHL_S64, gen_helper_neon_qshl_s64)
-DO_3SAME_64_ENV(VQSHL_U64, gen_helper_neon_qshl_u64)
DO_3SAME_64_ENV(VQRSHL_S64, gen_helper_neon_qrshl_s64)
DO_3SAME_64_ENV(VQRSHL_U64, gen_helper_neon_qrshl_u64)
@@ -1000,8 +1000,6 @@ DO_3SAME_32(VHSUB_U, hsub_u)
DO_3SAME_32(VRHADD_S, rhadd_s)
DO_3SAME_32(VRHADD_U, rhadd_u)
-DO_3SAME_32_ENV(VQSHL_S, qshl_s)
-DO_3SAME_32_ENV(VQSHL_U, qshl_u)
DO_3SAME_32_ENV(VQRSHL_S, qrshl_s)
DO_3SAME_32_ENV(VQRSHL_U, qrshl_u)
--
2.34.1
* [PATCH v2 47/67] target/arm: Convert SQSHL, UQSHL to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (45 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 46/67] target/arm: Convert SQSHL and UQSHL (register) to gvec Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 15:53 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 48/67] target/arm: Convert SQRSHL and UQRSHL (register) to gvec Richard Henderson
` (20 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 4 ++
target/arm/tcg/translate-a64.c | 74 ++++++++++++++++++++++------------
2 files changed, 53 insertions(+), 25 deletions(-)
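The ENVScalar2 table introduced here pairs the size-indexed 8/16/32-bit
helpers with the 64-bit one, and TRANS expands to a one-line callback,
roughly (a sketch of what the existing TRANS macro produces):

static bool trans_SQSHL_s(DisasContext *s, arg_SQSHL_s *a)
{
    return do_env_scalar2(s, a, &f_scalar_sqshl);
}

For esz < MO_64 the operands are widened into i32 temporaries to match
the legacy i32-typed NEON helpers, with writeback via write_fp_sreg.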
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 9e02776036..85caf37948 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -760,6 +760,8 @@ SSHL_s 0101 1110 111 ..... 01000 1 ..... ..... @rrr_d
USHL_s 0111 1110 111 ..... 01000 1 ..... ..... @rrr_d
SRSHL_s 0101 1110 111 ..... 01010 1 ..... ..... @rrr_d
URSHL_s 0111 1110 111 ..... 01010 1 ..... ..... @rrr_d
+SQSHL_s 0101 1110 ..1 ..... 01001 1 ..... ..... @rrr_e
+UQSHL_s 0111 1110 ..1 ..... 01001 1 ..... ..... @rrr_e
### Advanced SIMD scalar pairwise
@@ -886,6 +888,8 @@ SSHL_v 0.00 1110 ..1 ..... 01000 1 ..... ..... @qrrr_e
USHL_v 0.10 1110 ..1 ..... 01000 1 ..... ..... @qrrr_e
SRSHL_v 0.00 1110 ..1 ..... 01010 1 ..... ..... @qrrr_e
URSHL_v 0.10 1110 ..1 ..... 01010 1 ..... ..... @qrrr_e
+SQSHL_v 0.00 1110 ..1 ..... 01001 1 ..... ..... @qrrr_e
+UQSHL_v 0.10 1110 ..1 ..... 01001 1 ..... ..... @qrrr_e
### Advanced SIMD scalar x indexed element
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 50b653bb4d..f8d2760bea 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5119,6 +5119,49 @@ TRANS(USHL_s, do_int3_scalar_d, a, gen_ushl_i64)
TRANS(SRSHL_s, do_int3_scalar_d, a, gen_helper_neon_rshl_s64)
TRANS(URSHL_s, do_int3_scalar_d, a, gen_helper_neon_rshl_u64)
+typedef struct ENVScalar2 {
+ NeonGenTwoOpEnvFn *gen_bhs[3];
+ NeonGenTwo64OpEnvFn *gen_d;
+} ENVScalar2;
+
+static bool do_env_scalar2(DisasContext *s, arg_rrr_e *a, const ENVScalar2 *f)
+{
+ if (!fp_access_check(s)) {
+ return true;
+ }
+ if (a->esz == MO_64) {
+ TCGv_i64 t0 = read_fp_dreg(s, a->rn);
+ TCGv_i64 t1 = read_fp_dreg(s, a->rm);
+ f->gen_d(t0, tcg_env, t0, t1);
+ write_fp_dreg(s, a->rd, t0);
+ } else {
+ TCGv_i32 t0 = tcg_temp_new_i32();
+ TCGv_i32 t1 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t0, a->rn, 0, a->esz);
+ read_vec_element_i32(s, t1, a->rm, 0, a->esz);
+ f->gen_bhs[a->esz](t0, tcg_env, t0, t1);
+ write_fp_sreg(s, a->rd, t0);
+ }
+ return true;
+}
+
+static const ENVScalar2 f_scalar_sqshl = {
+ { gen_helper_neon_qshl_s8,
+ gen_helper_neon_qshl_s16,
+ gen_helper_neon_qshl_s32 },
+ gen_helper_neon_qshl_s64,
+};
+TRANS(SQSHL_s, do_env_scalar2, a, &f_scalar_sqshl)
+
+static const ENVScalar2 f_scalar_uqshl = {
+ { gen_helper_neon_qshl_u8,
+ gen_helper_neon_qshl_u16,
+ gen_helper_neon_qshl_u32 },
+ gen_helper_neon_qshl_u64,
+};
+TRANS(UQSHL_s, do_env_scalar2, a, &f_scalar_uqshl)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5368,6 +5411,8 @@ TRANS(SSHL_v, do_gvec_fn3, a, gen_gvec_sshl)
TRANS(USHL_v, do_gvec_fn3, a, gen_gvec_ushl)
TRANS(SRSHL_v, do_gvec_fn3, a, gen_gvec_srshl)
TRANS(URSHL_v, do_gvec_fn3, a, gen_gvec_urshl)
+TRANS(SQSHL_v, do_gvec_fn3, a, gen_neon_sqshl)
+TRANS(UQSHL_v, do_gvec_fn3, a, gen_neon_uqshl)
/*
@@ -9381,13 +9426,6 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
}
gen_cmtst_i64(tcg_rd, tcg_rn, tcg_rm);
break;
- case 0x9: /* SQSHL, UQSHL */
- if (u) {
- gen_helper_neon_qshl_u64(tcg_rd, tcg_env, tcg_rn, tcg_rm);
- } else {
- gen_helper_neon_qshl_s64(tcg_rd, tcg_env, tcg_rn, tcg_rm);
- }
- break;
case 0xb: /* SQRSHL, UQRSHL */
if (u) {
gen_helper_neon_qrshl_u64(tcg_rd, tcg_env, tcg_rn, tcg_rm);
@@ -9406,6 +9444,7 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
case 0x1: /* SQADD / UQADD */
case 0x5: /* SQSUB / UQSUB */
case 0x8: /* SSHL, USHL */
+ case 0x9: /* SQSHL, UQSHL */
case 0xa: /* SRSHL, URSHL */
g_assert_not_reached();
}
@@ -9428,7 +9467,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
TCGv_i64 tcg_rd;
switch (opcode) {
- case 0x9: /* SQSHL, UQSHL */
case 0xb: /* SQRSHL, UQRSHL */
break;
case 0x6: /* CMGT, CMHI */
@@ -9450,6 +9488,7 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
case 0x1: /* SQADD, UQADD */
case 0x5: /* SQSUB, UQSUB */
case 0x8: /* SSHL, USHL */
+ case 0x9: /* SQSHL, UQSHL */
case 0xa: /* SRSHL, URSHL */
unallocated_encoding(s);
return;
@@ -9477,16 +9516,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
void (*genfn)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_i64, MemOp) = NULL;
switch (opcode) {
- case 0x9: /* SQSHL, UQSHL */
- {
- static NeonGenTwoOpEnvFn * const fns[3][2] = {
- { gen_helper_neon_qshl_s8, gen_helper_neon_qshl_u8 },
- { gen_helper_neon_qshl_s16, gen_helper_neon_qshl_u16 },
- { gen_helper_neon_qshl_s32, gen_helper_neon_qshl_u32 },
- };
- genenvfn = fns[size][u];
- break;
- }
case 0xb: /* SQRSHL, UQRSHL */
{
static NeonGenTwoOpEnvFn * const fns[3][2] = {
@@ -9510,6 +9539,7 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
default:
case 0x1: /* SQADD, UQADD */
case 0x5: /* SQSUB, UQSUB */
+ case 0x9: /* SQSHL, UQSHL */
g_assert_not_reached();
}
@@ -10935,13 +10965,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
- case 0x09: /* SQSHL, UQSHL */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_neon_uqshl, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_neon_sqshl, size);
- }
- return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
@@ -11023,6 +11046,7 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
case 0x01: /* SQADD, UQADD */
case 0x05: /* SQSUB, UQSUB */
case 0x08: /* SSHL, USHL */
+ case 0x09: /* SQSHL, UQSHL */
case 0x0a: /* SRSHL, URSHL */
g_assert_not_reached();
}
--
2.34.1
* [PATCH v2 48/67] target/arm: Convert SQRSHL and UQRSHL (register) to gvec
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (46 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 47/67] target/arm: Convert SQSHL, UQSHL to decodetree Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 15:54 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 49/67] target/arm: Convert SQRSHL, UQRSHL to decodetree Richard Henderson
` (19 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 8 ++++++
target/arm/tcg/translate.h | 4 +++
target/arm/tcg/neon-dp.decode | 17 ++----------
target/arm/tcg/gengvec.c | 24 ++++++++++++++++
target/arm/tcg/neon_helper.c | 24 ++++++++++++++++
target/arm/tcg/translate-a64.c | 17 +++++-------
target/arm/tcg/translate-neon.c | 49 ++-------------------------------
7 files changed, 71 insertions(+), 72 deletions(-)
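These are the round=true counterparts of the SQSHL/UQSHL helpers from
two patches back: the same NEON_GVEC_VOP2_ENV loops and QC plumbing,
with only the rounding flag changed. A sketch (not QEMU code) of the
rounding right-shift path that distinguishes them:

/* Unsigned rounding right shift for a count in [1, 8]; right
 * shifts cannot saturate, so QC is untouched on this path. */
static uint8_t urshr8(uint8_t val, int shift)
{
    return ((uint32_t)val + (1u << (shift - 1))) >> shift;
}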
diff --git a/target/arm/helper.h b/target/arm/helper.h
index f345087ddb..9a89c9cea7 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -334,6 +334,14 @@ DEF_HELPER_FLAGS_5(neon_uqshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_uqshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_uqshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_uqshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_sqrshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_sqrshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_sqrshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_sqrshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_uqrshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_uqrshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_uqrshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_uqrshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_srshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_srshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index 6c6d4d49e7..048cb45ebe 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -467,6 +467,10 @@ void gen_neon_sqshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_neon_uqshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_neon_sqrshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_neon_uqrshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
void gen_ushl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
diff --git a/target/arm/tcg/neon-dp.decode b/target/arm/tcg/neon-dp.decode
index 6d4996b8d8..788578c8fa 100644
--- a/target/arm/tcg/neon-dp.decode
+++ b/target/arm/tcg/neon-dp.decode
@@ -102,25 +102,12 @@ VCGE_U_3s 1111 001 1 0 . .. .... .... 0011 . . . 1 .... @3same
VSHL_S_3s 1111 001 0 0 . .. .... .... 0100 . . . 0 .... @3same_rev
VSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 0 .... @3same_rev
-
-# Insns operating on 64-bit elements (size!=0b11 handled elsewhere)
-# The _rev suffix indicates that Vn and Vm are reversed (as explained
-# by the comment for the @3same_rev format).
-@3same_64_rev .... ... . . . 11 .... .... .... . q:1 . . .... \
- &3same vm=%vn_dp vn=%vm_dp vd=%vd_dp size=3
-
VQSHL_S_3s 1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_rev
VQSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_rev
VRSHL_S_3s 1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_rev
VRSHL_U_3s 1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_rev
-{
- VQRSHL_S64_3s 1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_64_rev
- VQRSHL_S_3s 1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_rev
-}
-{
- VQRSHL_U64_3s 1111 001 1 0 . .. .... .... 0101 . . . 1 .... @3same_64_rev
- VQRSHL_U_3s 1111 001 1 0 . .. .... .... 0101 . . . 1 .... @3same_rev
-}
+VQRSHL_S_3s 1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_rev
+VQRSHL_U_3s 1111 001 1 0 . .. .... .... 0101 . . . 1 .... @3same_rev
VMAX_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 0 .... @3same
VMAX_U_3s 1111 001 1 0 . .. .... .... 0110 . . . 0 .... @3same
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index 773dbf41d3..51e66ccf5f 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -1263,6 +1263,30 @@ void gen_neon_uqshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
opr_sz, max_sz, 0, fns[vece]);
}
+void gen_neon_sqrshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3_ptr * const fns[] = {
+ gen_helper_neon_sqrshl_b, gen_helper_neon_sqrshl_h,
+ gen_helper_neon_sqrshl_s, gen_helper_neon_sqrshl_d,
+ };
+ tcg_debug_assert(vece <= MO_64);
+ tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, tcg_env,
+ opr_sz, max_sz, 0, fns[vece]);
+}
+
+void gen_neon_uqrshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3_ptr * const fns[] = {
+ gen_helper_neon_uqrshl_b, gen_helper_neon_uqrshl_h,
+ gen_helper_neon_uqrshl_s, gen_helper_neon_uqrshl_d,
+ };
+ tcg_debug_assert(vece <= MO_64);
+ tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, tcg_env,
+ opr_sz, max_sz, 0, fns[vece]);
+}
+
void gen_uqadd_bhs(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b, MemOp esz)
{
uint64_t max = MAKE_64BIT_MASK(0, 8 << esz);
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
index 88301f0dcb..b29a7c725f 100644
--- a/target/arm/tcg/neon_helper.c
+++ b/target/arm/tcg/neon_helper.c
@@ -435,11 +435,23 @@ uint64_t HELPER(neon_qshlu_s64)(CPUARMState *env, uint64_t val, uint64_t shift)
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 8, true, env->vfp.qc))
NEON_VOP_ENV(qrshl_u8, neon_u8, 4)
+NEON_GVEC_VOP2_ENV(neon_uqrshl_b, uint8_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 16, true, env->vfp.qc))
NEON_VOP_ENV(qrshl_u16, neon_u16, 2)
+NEON_GVEC_VOP2_ENV(neon_uqrshl_h, uint16_t)
+#undef NEON_FN
+
+#define NEON_FN(dest, src1, src2) \
+ (dest = do_uqrshl_bhs(src1, (int8_t)src2, 32, true, env->vfp.qc))
+NEON_GVEC_VOP2_ENV(neon_uqrshl_s, uint32_t)
+#undef NEON_FN
+
+#define NEON_FN(dest, src1, src2) \
+ (dest = do_uqrshl_d(src1, (int8_t)src2, true, env->vfp.qc))
+NEON_GVEC_VOP2_ENV(neon_uqrshl_d, uint64_t)
#undef NEON_FN
uint32_t HELPER(neon_qrshl_u32)(CPUARMState *env, uint32_t val, uint32_t shift)
@@ -455,11 +467,23 @@ uint64_t HELPER(neon_qrshl_u64)(CPUARMState *env, uint64_t val, uint64_t shift)
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 8, true, env->vfp.qc))
NEON_VOP_ENV(qrshl_s8, neon_s8, 4)
+NEON_GVEC_VOP2_ENV(neon_sqrshl_b, int8_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 16, true, env->vfp.qc))
NEON_VOP_ENV(qrshl_s16, neon_s16, 2)
+NEON_GVEC_VOP2_ENV(neon_sqrshl_h, int16_t)
+#undef NEON_FN
+
+#define NEON_FN(dest, src1, src2) \
+ (dest = do_sqrshl_bhs(src1, (int8_t)src2, 32, true, env->vfp.qc))
+NEON_GVEC_VOP2_ENV(neon_sqrshl_s, int32_t)
+#undef NEON_FN
+
+#define NEON_FN(dest, src1, src2) \
+ (dest = do_sqrshl_d(src1, (int8_t)src2, true, env->vfp.qc))
+NEON_GVEC_VOP2_ENV(neon_sqrshl_d, int64_t)
#undef NEON_FN
uint32_t HELPER(neon_qrshl_s32)(CPUARMState *env, uint32_t val, uint32_t shift)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index f8d2760bea..b0004e2c6f 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -10965,6 +10965,13 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
+ case 0x0b: /* SQRSHL, UQRSHL */
+ if (u) {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_neon_uqrshl, size);
+ } else {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_neon_sqrshl, size);
+ }
+ return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
@@ -11107,16 +11114,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
genfn = fns[size][u];
break;
}
- case 0xb: /* SQRSHL, UQRSHL */
- {
- static NeonGenTwoOpEnvFn * const fns[3][2] = {
- { gen_helper_neon_qrshl_s8, gen_helper_neon_qrshl_u8 },
- { gen_helper_neon_qrshl_s16, gen_helper_neon_qrshl_u16 },
- { gen_helper_neon_qrshl_s32, gen_helper_neon_qrshl_u32 },
- };
- genenvfn = fns[size][u];
- break;
- }
default:
g_assert_not_reached();
}
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index a3eec47092..5f1576393e 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -798,6 +798,8 @@ DO_3SAME(VRSHL_S, gen_gvec_srshl)
DO_3SAME(VRSHL_U, gen_gvec_urshl)
DO_3SAME(VQSHL_S, gen_neon_sqshl)
DO_3SAME(VQSHL_U, gen_neon_uqshl)
+DO_3SAME(VQRSHL_S, gen_neon_sqrshl)
+DO_3SAME(VQRSHL_U, gen_neon_uqrshl)
/* These insns are all gvec_bitsel but with the inputs in various orders. */
#define DO_3SAME_BITSEL(INSN, O1, O2, O3) \
@@ -916,26 +918,6 @@ DO_SHA2(SHA256H, gen_helper_crypto_sha256h)
DO_SHA2(SHA256H2, gen_helper_crypto_sha256h2)
DO_SHA2(SHA256SU1, gen_helper_crypto_sha256su1)
-#define DO_3SAME_64(INSN, FUNC) \
- static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
- uint32_t rn_ofs, uint32_t rm_ofs, \
- uint32_t oprsz, uint32_t maxsz) \
- { \
- static const GVecGen3 op = { .fni8 = FUNC }; \
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &op); \
- } \
- DO_3SAME(INSN, gen_##INSN##_3s)
-
-#define DO_3SAME_64_ENV(INSN, FUNC) \
- static void gen_##INSN##_elt(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m) \
- { \
- FUNC(d, tcg_env, n, m); \
- } \
- DO_3SAME_64(INSN, gen_##INSN##_elt)
-
-DO_3SAME_64_ENV(VQRSHL_S64, gen_helper_neon_qrshl_s64)
-DO_3SAME_64_ENV(VQRSHL_U64, gen_helper_neon_qrshl_u64)
-
#define DO_3SAME_32(INSN, FUNC) \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
uint32_t rn_ofs, uint32_t rm_ofs, \
@@ -969,30 +951,6 @@ DO_3SAME_64_ENV(VQRSHL_U64, gen_helper_neon_qrshl_u64)
FUNC(d, tcg_env, n, m); \
}
-#define DO_3SAME_32_ENV(INSN, FUNC) \
- WRAP_ENV_FN(gen_##INSN##_tramp8, gen_helper_neon_##FUNC##8); \
- WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##16); \
- WRAP_ENV_FN(gen_##INSN##_tramp32, gen_helper_neon_##FUNC##32); \
- static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
- uint32_t rn_ofs, uint32_t rm_ofs, \
- uint32_t oprsz, uint32_t maxsz) \
- { \
- static const GVecGen3 ops[4] = { \
- { .fni4 = gen_##INSN##_tramp8 }, \
- { .fni4 = gen_##INSN##_tramp16 }, \
- { .fni4 = gen_##INSN##_tramp32 }, \
- { 0 }, \
- }; \
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &ops[vece]); \
- } \
- static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
- { \
- if (a->size > 2) { \
- return false; \
- } \
- return do_3same(s, a, gen_##INSN##_3s); \
- }
-
DO_3SAME_32(VHADD_S, hadd_s)
DO_3SAME_32(VHADD_U, hadd_u)
DO_3SAME_32(VHSUB_S, hsub_s)
@@ -1000,9 +958,6 @@ DO_3SAME_32(VHSUB_U, hsub_u)
DO_3SAME_32(VRHADD_S, rhadd_s)
DO_3SAME_32(VRHADD_U, rhadd_u)
-DO_3SAME_32_ENV(VQRSHL_S, qrshl_s)
-DO_3SAME_32_ENV(VQRSHL_U, qrshl_u)
-
#define DO_3SAME_VQDMULH(INSN, FUNC) \
WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##_s16); \
WRAP_ENV_FN(gen_##INSN##_tramp32, gen_helper_neon_##FUNC##_s32); \
--
2.34.1
* [PATCH v2 49/67] target/arm: Convert SQRSHL, UQRSHL to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (47 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 48/67] target/arm: Convert SQRSHL and UQRSHL (register) to gvec Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 15:55 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 50/67] target/arm: Convert ADD, SUB (vector) " Richard Henderson
` (18 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 4 +++
target/arm/tcg/translate-a64.c | 48 ++++++++++++++++------------------
2 files changed, 26 insertions(+), 26 deletions(-)
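No new helpers are needed: f_scalar_sqrshl and f_scalar_uqrshl feed the
existing env-taking neon_qrshl_* helpers through the same ENVScalar2
path used for SQSHL. A few worked byte-lane values for intuition
(illustrative, not from the patch):

/*   sqrshl8( 64,  2) -> 127, QC set    (64 << 2 = 256 overflows int8)
 *   sqrshl8( 65, -1) -> 33             ((65 + 1) >> 1, rounded)
 *   sqrshl8(-65, -1) -> -32            ((-65 + 1) >> 1)
 */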
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 85caf37948..96ce35ad40 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -762,6 +762,8 @@ SRSHL_s 0101 1110 111 ..... 01010 1 ..... ..... @rrr_d
URSHL_s 0111 1110 111 ..... 01010 1 ..... ..... @rrr_d
SQSHL_s 0101 1110 ..1 ..... 01001 1 ..... ..... @rrr_e
UQSHL_s 0111 1110 ..1 ..... 01001 1 ..... ..... @rrr_e
+SQRSHL_s 0101 1110 ..1 ..... 01011 1 ..... ..... @rrr_e
+UQRSHL_s 0111 1110 ..1 ..... 01011 1 ..... ..... @rrr_e
### Advanced SIMD scalar pairwise
@@ -890,6 +892,8 @@ SRSHL_v 0.00 1110 ..1 ..... 01010 1 ..... ..... @qrrr_e
URSHL_v 0.10 1110 ..1 ..... 01010 1 ..... ..... @qrrr_e
SQSHL_v 0.00 1110 ..1 ..... 01001 1 ..... ..... @qrrr_e
UQSHL_v 0.10 1110 ..1 ..... 01001 1 ..... ..... @qrrr_e
+SQRSHL_v 0.00 1110 ..1 ..... 01011 1 ..... ..... @qrrr_e
+UQRSHL_v 0.10 1110 ..1 ..... 01011 1 ..... ..... @qrrr_e
### Advanced SIMD scalar x indexed element
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index b0004e2c6f..b76682cabf 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5162,6 +5162,22 @@ static const ENVScalar2 f_scalar_uqshl = {
};
TRANS(UQSHL_s, do_env_scalar2, a, &f_scalar_uqshl)
+static const ENVScalar2 f_scalar_sqrshl = {
+ { gen_helper_neon_qrshl_s8,
+ gen_helper_neon_qrshl_s16,
+ gen_helper_neon_qrshl_s32 },
+ gen_helper_neon_qrshl_s64,
+};
+TRANS(SQRSHL_s, do_env_scalar2, a, &f_scalar_sqrshl)
+
+static const ENVScalar2 f_scalar_uqrshl = {
+ { gen_helper_neon_qrshl_u8,
+ gen_helper_neon_qrshl_u16,
+ gen_helper_neon_qrshl_u32 },
+ gen_helper_neon_qrshl_u64,
+};
+TRANS(UQRSHL_s, do_env_scalar2, a, &f_scalar_uqrshl)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5413,6 +5429,8 @@ TRANS(SRSHL_v, do_gvec_fn3, a, gen_gvec_srshl)
TRANS(URSHL_v, do_gvec_fn3, a, gen_gvec_urshl)
TRANS(SQSHL_v, do_gvec_fn3, a, gen_neon_sqshl)
TRANS(UQSHL_v, do_gvec_fn3, a, gen_neon_uqshl)
+TRANS(SQRSHL_v, do_gvec_fn3, a, gen_neon_sqrshl)
+TRANS(UQRSHL_v, do_gvec_fn3, a, gen_neon_uqrshl)
/*
@@ -9426,13 +9444,6 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
}
gen_cmtst_i64(tcg_rd, tcg_rn, tcg_rm);
break;
- case 0xb: /* SQRSHL, UQRSHL */
- if (u) {
- gen_helper_neon_qrshl_u64(tcg_rd, tcg_env, tcg_rn, tcg_rm);
- } else {
- gen_helper_neon_qrshl_s64(tcg_rd, tcg_env, tcg_rn, tcg_rm);
- }
- break;
case 0x10: /* ADD, SUB */
if (u) {
tcg_gen_sub_i64(tcg_rd, tcg_rn, tcg_rm);
@@ -9446,6 +9457,7 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
case 0x8: /* SSHL, USHL */
case 0x9: /* SQSHL, UQSHL */
case 0xa: /* SRSHL, URSHL */
+ case 0xb: /* SQRSHL, UQRSHL */
g_assert_not_reached();
}
}
@@ -9467,8 +9479,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
TCGv_i64 tcg_rd;
switch (opcode) {
- case 0xb: /* SQRSHL, UQRSHL */
- break;
case 0x6: /* CMGT, CMHI */
case 0x7: /* CMGE, CMHS */
case 0x11: /* CMTST, CMEQ */
@@ -9490,6 +9500,7 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
case 0x8: /* SSHL, USHL */
case 0x9: /* SQSHL, UQSHL */
case 0xa: /* SRSHL, URSHL */
+ case 0xb: /* SQRSHL, UQRSHL */
unallocated_encoding(s);
return;
}
@@ -9516,16 +9527,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
void (*genfn)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_i64, MemOp) = NULL;
switch (opcode) {
- case 0xb: /* SQRSHL, UQRSHL */
- {
- static NeonGenTwoOpEnvFn * const fns[3][2] = {
- { gen_helper_neon_qrshl_s8, gen_helper_neon_qrshl_u8 },
- { gen_helper_neon_qrshl_s16, gen_helper_neon_qrshl_u16 },
- { gen_helper_neon_qrshl_s32, gen_helper_neon_qrshl_u32 },
- };
- genenvfn = fns[size][u];
- break;
- }
case 0x16: /* SQDMULH, SQRDMULH */
{
static NeonGenTwoOpEnvFn * const fns[2][2] = {
@@ -9540,6 +9541,7 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
case 0x1: /* SQADD, UQADD */
case 0x5: /* SQSUB, UQSUB */
case 0x9: /* SQSHL, UQSHL */
+ case 0xb: /* SQRSHL, UQRSHL */
g_assert_not_reached();
}
@@ -10965,13 +10967,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
- case 0x0b: /* SQRSHL, UQRSHL */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_neon_uqrshl, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_neon_sqrshl, size);
- }
- return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
@@ -11055,6 +11050,7 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
case 0x08: /* SSHL, USHL */
case 0x09: /* SQSHL, UQSHL */
case 0x0a: /* SRSHL, URSHL */
+ case 0x0b: /* SQRSHL, UQRSHL */
g_assert_not_reached();
}
--
2.34.1
* [PATCH v2 50/67] target/arm: Convert ADD, SUB (vector) to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (48 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 49/67] target/arm: Convert SQRSHL, UQRSHL to decodetree Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 15:56 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 51/67] target/arm: Convert CMGT, CMHI, CMGE, CMHS, CMTST, CMEQ " Richard Henderson
` (17 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 6 ++++++
target/arm/tcg/translate-a64.c | 34 +++++++++++-----------------------
2 files changed, 17 insertions(+), 23 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 96ce35ad40..44383b4fc7 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -765,6 +765,9 @@ UQSHL_s 0111 1110 ..1 ..... 01001 1 ..... ..... @rrr_e
SQRSHL_s 0101 1110 ..1 ..... 01011 1 ..... ..... @rrr_e
UQRSHL_s 0111 1110 ..1 ..... 01011 1 ..... ..... @rrr_e
+ADD_s 0101 1110 111 ..... 10000 1 ..... ..... @rrr_d
+SUB_s 0111 1110 111 ..... 10000 1 ..... ..... @rrr_d
+
### Advanced SIMD scalar pairwise
FADDP_s 0101 1110 0011 0000 1101 10 ..... ..... @rr_h
@@ -895,6 +898,9 @@ UQSHL_v 0.10 1110 ..1 ..... 01001 1 ..... ..... @qrrr_e
SQRSHL_v 0.00 1110 ..1 ..... 01011 1 ..... ..... @qrrr_e
UQRSHL_v 0.10 1110 ..1 ..... 01011 1 ..... ..... @qrrr_e
+ADD_v 0.00 1110 ..1 ..... 10000 1 ..... ..... @qrrr_e
+SUB_v 0.10 1110 ..1 ..... 10000 1 ..... ..... @qrrr_e
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index b76682cabf..77a64923e7 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5118,6 +5118,8 @@ TRANS(SSHL_s, do_int3_scalar_d, a, gen_sshl_i64)
TRANS(USHL_s, do_int3_scalar_d, a, gen_ushl_i64)
TRANS(SRSHL_s, do_int3_scalar_d, a, gen_helper_neon_rshl_s64)
TRANS(URSHL_s, do_int3_scalar_d, a, gen_helper_neon_rshl_u64)
+TRANS(ADD_s, do_int3_scalar_d, a, tcg_gen_add_i64)
+TRANS(SUB_s, do_int3_scalar_d, a, tcg_gen_sub_i64)
typedef struct ENVScalar2 {
NeonGenTwoOpEnvFn *gen_bhs[3];
@@ -5432,6 +5434,8 @@ TRANS(UQSHL_v, do_gvec_fn3, a, gen_neon_uqshl)
TRANS(SQRSHL_v, do_gvec_fn3, a, gen_neon_sqrshl)
TRANS(UQRSHL_v, do_gvec_fn3, a, gen_neon_uqrshl)
+TRANS(ADD_v, do_gvec_fn3, a, tcg_gen_gvec_add)
+TRANS(SUB_v, do_gvec_fn3, a, tcg_gen_gvec_sub)
/*
* Advanced SIMD scalar/vector x indexed element
@@ -9444,13 +9448,6 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
}
gen_cmtst_i64(tcg_rd, tcg_rn, tcg_rm);
break;
- case 0x10: /* ADD, SUB */
- if (u) {
- tcg_gen_sub_i64(tcg_rd, tcg_rn, tcg_rm);
- } else {
- tcg_gen_add_i64(tcg_rd, tcg_rn, tcg_rm);
- }
- break;
default:
case 0x1: /* SQADD / UQADD */
case 0x5: /* SQSUB / UQSUB */
@@ -9458,6 +9455,7 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
case 0x9: /* SQSHL, UQSHL */
case 0xa: /* SRSHL, URSHL */
case 0xb: /* SQRSHL, UQRSHL */
+ case 0x10: /* ADD, SUB */
g_assert_not_reached();
}
}
@@ -9482,7 +9480,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
case 0x6: /* CMGT, CMHI */
case 0x7: /* CMGE, CMHS */
case 0x11: /* CMTST, CMEQ */
- case 0x10: /* ADD, SUB (vector) */
if (size != 3) {
unallocated_encoding(s);
return;
@@ -9501,6 +9498,7 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
case 0x9: /* SQSHL, UQSHL */
case 0xa: /* SRSHL, URSHL */
case 0xb: /* SQRSHL, UQRSHL */
+ case 0x10: /* ADD, SUB (vector) */
unallocated_encoding(s);
return;
}
@@ -10958,6 +10956,11 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
case 0x01: /* SQADD, UQADD */
case 0x05: /* SQSUB, UQSUB */
+ case 0x08: /* SSHL, USHL */
+ case 0x09: /* SQSHL, UQSHL */
+ case 0x0a: /* SRSHL, URSHL */
+ case 0x0b: /* SQRSHL, UQRSHL */
+ case 0x10: /* ADD, SUB */
unallocated_encoding(s);
return;
}
@@ -10995,13 +10998,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_saba, size);
}
return;
- case 0x10: /* ADD, SUB */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_sub, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_add, size);
- }
- return;
case 0x13: /* MUL, PMUL */
if (!u) { /* MUL */
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_mul, size);
@@ -11044,14 +11040,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
vec_full_reg_offset(s, rm),
is_q ? 16 : 8, vec_full_reg_size(s));
return;
-
- case 0x01: /* SQADD, UQADD */
- case 0x05: /* SQSUB, UQSUB */
- case 0x08: /* SSHL, USHL */
- case 0x09: /* SQSHL, UQSHL */
- case 0x0a: /* SRSHL, URSHL */
- case 0x0b: /* SQRSHL, UQRSHL */
- g_assert_not_reached();
}
if (size == 3) {
--
2.34.1
* [PATCH v2 51/67] target/arm: Convert CMGT, CMHI, CMGE, CMHS, CMTST, CMEQ to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (49 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 50/67] target/arm: Convert ADD, SUB (vector) " Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 15:57 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 52/67] target/arm: Use TCG_COND_TSTNE in gen_cmtst_{i32,i64} Richard Henderson
` (16 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
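Notes for review: every scalar compare here reduces to tcg_gen_negsetcond_i64,
which writes -1 to the destination when the condition holds and 0 otherwise --
the "result = test ? -1 : 0" behaviour of the old handle_3same_64 -- and CMTST
folds its and+compare pair into the single TCG_COND_TSTNE test. A standalone
reference model of the per-element results (sketch only; the *_ref names are
hypothetical and not part of the patch):

#include <stdint.h>

/* Destination is all-ones if the test passes, else zero. */
static uint64_t cmgt_ref(int64_t n, int64_t m)    { return n > m  ? ~0ull : 0; } /* TCG_COND_GT    */
static uint64_t cmhi_ref(uint64_t n, uint64_t m)  { return n > m  ? ~0ull : 0; } /* TCG_COND_GTU   */
static uint64_t cmge_ref(int64_t n, int64_t m)    { return n >= m ? ~0ull : 0; } /* TCG_COND_GE    */
static uint64_t cmhs_ref(uint64_t n, uint64_t m)  { return n >= m ? ~0ull : 0; } /* TCG_COND_GEU   */
static uint64_t cmeq_ref(uint64_t n, uint64_t m)  { return n == m ? ~0ull : 0; } /* TCG_COND_EQ    */
static uint64_t cmtst_ref(uint64_t n, uint64_t m) { return (n & m) ? ~0ull : 0; } /* TCG_COND_TSTNE */
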
target/arm/tcg/a64.decode | 12 +++
target/arm/tcg/translate-a64.c | 132 ++++++++++++---------------------
2 files changed, 60 insertions(+), 84 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 44383b4fc7..3061e26242 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -767,6 +767,12 @@ UQRSHL_s 0111 1110 ..1 ..... 01011 1 ..... ..... @rrr_e
ADD_s 0101 1110 111 ..... 10000 1 ..... ..... @rrr_d
SUB_s 0111 1110 111 ..... 10000 1 ..... ..... @rrr_d
+CMGT_s 0101 1110 111 ..... 00110 1 ..... ..... @rrr_d
+CMHI_s 0111 1110 111 ..... 00110 1 ..... ..... @rrr_d
+CMGE_s 0101 1110 111 ..... 00111 1 ..... ..... @rrr_d
+CMHS_s 0111 1110 111 ..... 00111 1 ..... ..... @rrr_d
+CMTST_s 0101 1110 111 ..... 10001 1 ..... ..... @rrr_d
+CMEQ_s 0111 1110 111 ..... 10001 1 ..... ..... @rrr_d
### Advanced SIMD scalar pairwise
@@ -900,6 +906,12 @@ UQRSHL_v 0.10 1110 ..1 ..... 01011 1 ..... ..... @qrrr_e
ADD_v 0.00 1110 ..1 ..... 10000 1 ..... ..... @qrrr_e
SUB_v 0.10 1110 ..1 ..... 10000 1 ..... ..... @qrrr_e
+CMGT_v 0.00 1110 ..1 ..... 00110 1 ..... ..... @qrrr_e
+CMHI_v 0.10 1110 ..1 ..... 00110 1 ..... ..... @qrrr_e
+CMGE_v 0.00 1110 ..1 ..... 00111 1 ..... ..... @qrrr_e
+CMHS_v 0.10 1110 ..1 ..... 00111 1 ..... ..... @qrrr_e
+CMTST_v 0.00 1110 ..1 ..... 10001 1 ..... ..... @qrrr_e
+CMEQ_v 0.10 1110 ..1 ..... 10001 1 ..... ..... @qrrr_e
### Advanced SIMD scalar x indexed element
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 77a64923e7..3c6cfc2952 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5180,6 +5180,24 @@ static const ENVScalar2 f_scalar_uqrshl = {
};
TRANS(UQRSHL_s, do_env_scalar2, a, &f_scalar_uqrshl)
+static bool do_cmop_d(DisasContext *s, arg_rrr_e *a, TCGCond cond)
+{
+ if (fp_access_check(s)) {
+ TCGv_i64 t0 = read_fp_dreg(s, a->rn);
+ TCGv_i64 t1 = read_fp_dreg(s, a->rm);
+ tcg_gen_negsetcond_i64(cond, t0, t0, t1);
+ write_fp_dreg(s, a->rd, t0);
+ }
+ return true;
+}
+
+TRANS(CMGT_s, do_cmop_d, a, TCG_COND_GT)
+TRANS(CMHI_s, do_cmop_d, a, TCG_COND_GTU)
+TRANS(CMGE_s, do_cmop_d, a, TCG_COND_GE)
+TRANS(CMHS_s, do_cmop_d, a, TCG_COND_GEU)
+TRANS(CMEQ_s, do_cmop_d, a, TCG_COND_EQ)
+TRANS(CMTST_s, do_cmop_d, a, TCG_COND_TSTNE)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5437,6 +5455,28 @@ TRANS(UQRSHL_v, do_gvec_fn3, a, gen_neon_uqrshl)
TRANS(ADD_v, do_gvec_fn3, a, tcg_gen_gvec_add)
TRANS(SUB_v, do_gvec_fn3, a, tcg_gen_gvec_sub)
+static bool do_cmop_v(DisasContext *s, arg_qrrr_e *a, TCGCond cond)
+{
+ if (a->esz == MO_64 && !a->q) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ tcg_gen_gvec_cmp(cond, a->esz,
+ vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm),
+ a->q ? 16 : 8, vec_full_reg_size(s));
+ }
+ return true;
+}
+
+TRANS(CMGT_v, do_cmop_v, a, TCG_COND_GT)
+TRANS(CMHI_v, do_cmop_v, a, TCG_COND_GTU)
+TRANS(CMGE_v, do_cmop_v, a, TCG_COND_GE)
+TRANS(CMHS_v, do_cmop_v, a, TCG_COND_GEU)
+TRANS(CMEQ_v, do_cmop_v, a, TCG_COND_EQ)
+TRANS(CMTST_v, do_gvec_fn3, a, gen_gvec_cmtst)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -9421,45 +9461,6 @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
}
}
-static void handle_3same_64(DisasContext *s, int opcode, bool u,
- TCGv_i64 tcg_rd, TCGv_i64 tcg_rn, TCGv_i64 tcg_rm)
-{
- /* Handle 64x64->64 opcodes which are shared between the scalar
- * and vector 3-same groups. We cover every opcode where size == 3
- * is valid in either the three-reg-same (integer, not pairwise)
- * or scalar-three-reg-same groups.
- */
- TCGCond cond;
-
- switch (opcode) {
- case 0x6: /* CMGT, CMHI */
- cond = u ? TCG_COND_GTU : TCG_COND_GT;
- do_cmop:
- /* 64 bit integer comparison, result = test ? -1 : 0. */
- tcg_gen_negsetcond_i64(cond, tcg_rd, tcg_rn, tcg_rm);
- break;
- case 0x7: /* CMGE, CMHS */
- cond = u ? TCG_COND_GEU : TCG_COND_GE;
- goto do_cmop;
- case 0x11: /* CMTST, CMEQ */
- if (u) {
- cond = TCG_COND_EQ;
- goto do_cmop;
- }
- gen_cmtst_i64(tcg_rd, tcg_rn, tcg_rm);
- break;
- default:
- case 0x1: /* SQADD / UQADD */
- case 0x5: /* SQSUB / UQSUB */
- case 0x8: /* SSHL, USHL */
- case 0x9: /* SQSHL, UQSHL */
- case 0xa: /* SRSHL, URSHL */
- case 0xb: /* SQRSHL, UQRSHL */
- case 0x10: /* ADD, SUB */
- g_assert_not_reached();
- }
-}
-
/* AdvSIMD scalar three same
* 31 30 29 28 24 23 22 21 20 16 15 11 10 9 5 4 0
* +-----+---+-----------+------+---+------+--------+---+------+------+
@@ -9477,14 +9478,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
TCGv_i64 tcg_rd;
switch (opcode) {
- case 0x6: /* CMGT, CMHI */
- case 0x7: /* CMGE, CMHS */
- case 0x11: /* CMTST, CMEQ */
- if (size != 3) {
- unallocated_encoding(s);
- return;
- }
- break;
case 0x16: /* SQDMULH, SQRDMULH (vector) */
if (size != 1 && size != 2) {
unallocated_encoding(s);
@@ -9494,11 +9487,14 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
default:
case 0x1: /* SQADD, UQADD */
case 0x5: /* SQSUB, UQSUB */
+ case 0x6: /* CMGT, CMHI */
+ case 0x7: /* CMGE, CMHS */
case 0x8: /* SSHL, USHL */
case 0x9: /* SQSHL, UQSHL */
case 0xa: /* SRSHL, URSHL */
case 0xb: /* SQRSHL, UQRSHL */
case 0x10: /* ADD, SUB (vector) */
+ case 0x11: /* CMTST, CMEQ */
unallocated_encoding(s);
return;
}
@@ -9510,10 +9506,7 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
tcg_rd = tcg_temp_new_i64();
if (size == 3) {
- TCGv_i64 tcg_rn = read_fp_dreg(s, rn);
- TCGv_i64 tcg_rm = read_fp_dreg(s, rm);
-
- handle_3same_64(s, opcode, u, tcg_rd, tcg_rn, tcg_rm);
+ g_assert_not_reached();
} else {
/* Do a single operation on the lowest element in the vector.
* We use the standard Neon helpers and rely on 0 OP 0 == 0 with
@@ -10919,7 +10912,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
int rn = extract32(insn, 5, 5);
int rd = extract32(insn, 0, 5);
int pass;
- TCGCond cond;
switch (opcode) {
case 0x13: /* MUL, PMUL */
@@ -10956,11 +10948,14 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
case 0x01: /* SQADD, UQADD */
case 0x05: /* SQSUB, UQSUB */
+ case 0x06: /* CMGT, CMHI */
+ case 0x07: /* CMGE, CMHS */
case 0x08: /* SSHL, USHL */
case 0x09: /* SQSHL, UQSHL */
case 0x0a: /* SRSHL, URSHL */
case 0x0b: /* SQRSHL, UQRSHL */
case 0x10: /* ADD, SUB */
+ case 0x11: /* CMTST, CMEQ */
unallocated_encoding(s);
return;
}
@@ -11021,41 +11016,10 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
gen_gvec_op3_qc(s, is_q, rd, rn, rm, fns[size - 1][u]);
}
return;
- case 0x11:
- if (!u) { /* CMTST */
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_cmtst, size);
- return;
- }
- /* else CMEQ */
- cond = TCG_COND_EQ;
- goto do_gvec_cmp;
- case 0x06: /* CMGT, CMHI */
- cond = u ? TCG_COND_GTU : TCG_COND_GT;
- goto do_gvec_cmp;
- case 0x07: /* CMGE, CMHS */
- cond = u ? TCG_COND_GEU : TCG_COND_GE;
- do_gvec_cmp:
- tcg_gen_gvec_cmp(cond, size, vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm),
- is_q ? 16 : 8, vec_full_reg_size(s));
- return;
}
if (size == 3) {
- assert(is_q);
- for (pass = 0; pass < 2; pass++) {
- TCGv_i64 tcg_op1 = tcg_temp_new_i64();
- TCGv_i64 tcg_op2 = tcg_temp_new_i64();
- TCGv_i64 tcg_res = tcg_temp_new_i64();
-
- read_vec_element(s, tcg_op1, rn, pass, MO_64);
- read_vec_element(s, tcg_op2, rm, pass, MO_64);
-
- handle_3same_64(s, opcode, u, tcg_res, tcg_op1, tcg_op2);
-
- write_vec_element(s, tcg_res, rd, pass, MO_64);
- }
+ g_assert_not_reached();
} else {
for (pass = 0; pass < (is_q ? 4 : 2); pass++) {
TCGv_i32 tcg_op1 = tcg_temp_new_i32();
--
2.34.1
* [PATCH v2 52/67] target/arm: Use TCG_COND_TSTNE in gen_cmtst_{i32,i64}
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (50 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 51/67] target/arm: Convert CMGT, CMHI, CMGE, CMHS, CMTST, CMEQ " Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 15:58 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 53/67] target/arm: Use TCG_COND_TSTNE in gen_cmtst_vec Richard Henderson
` (15 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
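Notes for review: TCG_COND_TSTNE(a, b) holds exactly when (a & b) != 0, so the
two-op and + negsetcond(NE, 0) sequence collapses into a single negsetcond.
A standalone before/after sketch (cmtst_old/cmtst_new are hypothetical names,
not part of the patch):

#include <assert.h>
#include <stdint.h>

/* Old expansion: mask = a & b, then d = -(mask != 0). */
static uint64_t cmtst_old(uint64_t a, uint64_t b)
{
    uint64_t d = a & b;
    return -(uint64_t)(d != 0);
}

/* New expansion: d = -TSTNE(a, b) in one operation. */
static uint64_t cmtst_new(uint64_t a, uint64_t b)
{
    return -(uint64_t)((a & b) != 0);
}

int main(void)
{
    assert(cmtst_old(0x10, 0x30) == cmtst_new(0x10, 0x30)); /* both ~0 */
    assert(cmtst_old(0x10, 0x01) == cmtst_new(0x10, 0x01)); /* both 0  */
    return 0;
}
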
target/arm/tcg/gengvec.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index 51e66ccf5f..1d6bc6021d 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -933,14 +933,12 @@ void gen_gvec_mls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
/* CMTST : test is "if (X & Y != 0)". */
static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
{
- tcg_gen_and_i32(d, a, b);
- tcg_gen_negsetcond_i32(TCG_COND_NE, d, d, tcg_constant_i32(0));
+ tcg_gen_negsetcond_i32(TCG_COND_TSTNE, d, a, b);
}
void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
- tcg_gen_and_i64(d, a, b);
- tcg_gen_negsetcond_i64(TCG_COND_NE, d, d, tcg_constant_i64(0));
+ tcg_gen_negsetcond_i64(TCG_COND_TSTNE, d, a, b);
}
static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
--
2.34.1
* [PATCH v2 53/67] target/arm: Use TCG_COND_TSTNE in gen_cmtst_vec
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (51 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 52/67] target/arm: Use TCG_COND_TSTNE in gen_cmtst_{i32,i64} Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 15:59 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 54/67] target/arm: Convert SHADD, UHADD to gvec Richard Henderson
` (14 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/gengvec.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index 1d6bc6021d..1895c3b19f 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -943,9 +943,7 @@ void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
{
- tcg_gen_and_vec(vece, d, a, b);
- tcg_gen_dupi_vec(vece, a, 0);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
+ tcg_gen_cmp_vec(TCG_COND_TSTNE, vece, d, a, b);
}
void gen_gvec_cmtst(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
--
2.34.1
* [PATCH v2 54/67] target/arm: Convert SHADD, UHADD to gvec
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (52 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 53/67] target/arm: Use TCG_COND_TSTNE in gen_cmtst_vec Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:01 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 55/67] target/arm: Convert SHADD, UHADD to decodetree Richard Henderson
` (13 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
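Notes for review: the removed neon_hadd_* helpers compute (src1 + src2) >> 1
without widening by using (a + b) >> 1 == (a >> 1) + (b >> 1) + (a & b & 1);
the new expanders apply the same identity per lane, with an arithmetic shift
(sari) for SHADD and a logical shift (shri) for UHADD. A standalone sketch of
the identity (the *_ref names are hypothetical, not part of the patch; the
signed case assumes arithmetic right shift on signed ints, as QEMU does):

#include <assert.h>
#include <stdint.h>

/* Signed halving add: carry-in comes from the two low bits. */
static int32_t shadd_ref(int32_t a, int32_t b)
{
    return (a >> 1) + (b >> 1) + (a & b & 1);
}

/* Unsigned halving add: same carry term, logical shifts. */
static uint32_t uhadd_ref(uint32_t a, uint32_t b)
{
    return (a >> 1) + (b >> 1) + (a & b & 1);
}

int main(void)
{
    /* No intermediate overflow, unlike a plain (a + b) >> 1. */
    assert(shadd_ref(INT32_MAX, INT32_MAX) == INT32_MAX);
    assert(uhadd_ref(UINT32_MAX, 1) == 0x80000000u);
    assert(shadd_ref(-1, 0) == -1); /* rounds toward minus infinity */
    return 0;
}
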
target/arm/helper.h | 6 --
target/arm/tcg/translate.h | 5 ++
target/arm/tcg/gengvec.c | 144 ++++++++++++++++++++++++++++++++
target/arm/tcg/neon_helper.c | 27 ------
target/arm/tcg/translate-a64.c | 17 ++--
target/arm/tcg/translate-neon.c | 4 +-
6 files changed, 158 insertions(+), 45 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 9a89c9cea7..b26bfcb079 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -268,12 +268,6 @@ DEF_HELPER_FLAGS_2(fjcvtzs, TCG_CALL_NO_RWG, i64, f64, ptr)
DEF_HELPER_FLAGS_3(check_hcr_el2_trap, TCG_CALL_NO_WG, void, env, i32, i32)
/* neon_helper.c */
-DEF_HELPER_2(neon_hadd_s8, i32, i32, i32)
-DEF_HELPER_2(neon_hadd_u8, i32, i32, i32)
-DEF_HELPER_2(neon_hadd_s16, i32, i32, i32)
-DEF_HELPER_2(neon_hadd_u16, i32, i32, i32)
-DEF_HELPER_2(neon_hadd_s32, s32, s32, s32)
-DEF_HELPER_2(neon_hadd_u32, i32, i32, i32)
DEF_HELPER_2(neon_rhadd_s8, i32, i32, i32)
DEF_HELPER_2(neon_rhadd_u8, i32, i32, i32)
DEF_HELPER_2(neon_rhadd_s16, i32, i32, i32)
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index 048cb45ebe..dd99d76bf2 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -472,6 +472,11 @@ void gen_neon_sqrshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
void gen_neon_uqrshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_shadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_uhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+
void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
void gen_ushl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
void gen_sshl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index 1895c3b19f..0627cec6b2 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -1852,3 +1852,147 @@ void gen_gvec_uminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
tcg_debug_assert(vece <= MO_32);
tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
}
+
+static void gen_shadd8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_and_i64(t, a, b);
+ tcg_gen_vec_sar8i_i64(a, a, 1);
+ tcg_gen_vec_sar8i_i64(b, b, 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
+ tcg_gen_vec_add8_i64(d, a, b);
+ tcg_gen_vec_add8_i64(d, d, t);
+}
+
+static void gen_shadd16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_and_i64(t, a, b);
+ tcg_gen_vec_sar16i_i64(a, a, 1);
+ tcg_gen_vec_sar16i_i64(b, b, 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
+ tcg_gen_vec_add16_i64(d, a, b);
+ tcg_gen_vec_add16_i64(d, d, t);
+}
+
+static void gen_shadd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ tcg_gen_and_i32(t, a, b);
+ tcg_gen_sari_i32(a, a, 1);
+ tcg_gen_sari_i32(b, b, 1);
+ tcg_gen_andi_i32(t, t, 1);
+ tcg_gen_add_i32(d, a, b);
+ tcg_gen_add_i32(d, d, t);
+}
+
+static void gen_shadd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_and_vec(vece, t, a, b);
+ tcg_gen_sari_vec(vece, a, a, 1);
+ tcg_gen_sari_vec(vece, b, b, 1);
+ tcg_gen_and_vec(vece, t, t, tcg_constant_vec_matching(d, vece, 1));
+ tcg_gen_add_vec(vece, d, a, b);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_shadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sari_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen3 g[] = {
+ { .fni8 = gen_shadd8_i64,
+ .fniv = gen_shadd_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_shadd16_i64,
+ .fniv = gen_shadd_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_shadd_i32,
+ .fniv = gen_shadd_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ };
+ tcg_debug_assert(vece <= MO_32);
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
+}
+
+static void gen_uhadd8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_and_i64(t, a, b);
+ tcg_gen_vec_shr8i_i64(a, a, 1);
+ tcg_gen_vec_shr8i_i64(b, b, 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
+ tcg_gen_vec_add8_i64(d, a, b);
+ tcg_gen_vec_add8_i64(d, d, t);
+}
+
+static void gen_uhadd16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_and_i64(t, a, b);
+ tcg_gen_vec_shr16i_i64(a, a, 1);
+ tcg_gen_vec_shr16i_i64(b, b, 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
+ tcg_gen_vec_add16_i64(d, a, b);
+ tcg_gen_vec_add16_i64(d, d, t);
+}
+
+static void gen_uhadd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ tcg_gen_and_i32(t, a, b);
+ tcg_gen_shri_i32(a, a, 1);
+ tcg_gen_shri_i32(b, b, 1);
+ tcg_gen_andi_i32(t, t, 1);
+ tcg_gen_add_i32(d, a, b);
+ tcg_gen_add_i32(d, d, t);
+}
+
+static void gen_uhadd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_and_vec(vece, t, a, b);
+ tcg_gen_shri_vec(vece, a, a, 1);
+ tcg_gen_shri_vec(vece, b, b, 1);
+ tcg_gen_and_vec(vece, t, t, tcg_constant_vec_matching(d, vece, 1));
+ tcg_gen_add_vec(vece, d, a, b);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_uhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen3 g[] = {
+ { .fni8 = gen_uhadd8_i64,
+ .fniv = gen_uhadd_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_uhadd16_i64,
+ .fniv = gen_uhadd_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_uhadd_i32,
+ .fniv = gen_uhadd_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ };
+ tcg_debug_assert(vece <= MO_32);
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
+}
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
index b29a7c725f..defd28a6f7 100644
--- a/target/arm/tcg/neon_helper.c
+++ b/target/arm/tcg/neon_helper.c
@@ -179,33 +179,6 @@ uint32_t HELPER(glue(neon_,name))(uint32_t arg) \
return arg; \
}
-#define NEON_FN(dest, src1, src2) dest = (src1 + src2) >> 1
-NEON_VOP(hadd_s8, neon_s8, 4)
-NEON_VOP(hadd_u8, neon_u8, 4)
-NEON_VOP(hadd_s16, neon_s16, 2)
-NEON_VOP(hadd_u16, neon_u16, 2)
-#undef NEON_FN
-
-int32_t HELPER(neon_hadd_s32)(int32_t src1, int32_t src2)
-{
- int32_t dest;
-
- dest = (src1 >> 1) + (src2 >> 1);
- if (src1 & src2 & 1)
- dest++;
- return dest;
-}
-
-uint32_t HELPER(neon_hadd_u32)(uint32_t src1, uint32_t src2)
-{
- uint32_t dest;
-
- dest = (src1 >> 1) + (src2 >> 1);
- if (src1 & src2 & 1)
- dest++;
- return dest;
-}
-
#define NEON_FN(dest, src1, src2) dest = (src1 + src2 + 1) >> 1
NEON_VOP(rhadd_s8, neon_s8, 4)
NEON_VOP(rhadd_u8, neon_u8, 4)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 3c6cfc2952..5f3423513d 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -10965,6 +10965,13 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
+ case 0x00: /* SHADD, UHADD */
+ if (u) {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_uhadd, size);
+ } else {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_shadd, size);
+ }
+ return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
@@ -11032,16 +11039,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
switch (opcode) {
- case 0x0: /* SHADD, UHADD */
- {
- static NeonGenTwoOpFn * const fns[3][2] = {
- { gen_helper_neon_hadd_s8, gen_helper_neon_hadd_u8 },
- { gen_helper_neon_hadd_s16, gen_helper_neon_hadd_u16 },
- { gen_helper_neon_hadd_s32, gen_helper_neon_hadd_u32 },
- };
- genfn = fns[size][u];
- break;
- }
case 0x2: /* SRHADD, URHADD */
{
static NeonGenTwoOpFn * const fns[3][2] = {
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index 5f1576393e..29e5c4a0a3 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -841,6 +841,8 @@ DO_3SAME_NO_SZ_3(VPMAX_S, gen_gvec_smaxp)
DO_3SAME_NO_SZ_3(VPMIN_S, gen_gvec_sminp)
DO_3SAME_NO_SZ_3(VPMAX_U, gen_gvec_umaxp)
DO_3SAME_NO_SZ_3(VPMIN_U, gen_gvec_uminp)
+DO_3SAME_NO_SZ_3(VHADD_S, gen_gvec_shadd)
+DO_3SAME_NO_SZ_3(VHADD_U, gen_gvec_uhadd)
#define DO_3SAME_CMP(INSN, COND) \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
@@ -951,8 +953,6 @@ DO_SHA2(SHA256SU1, gen_helper_crypto_sha256su1)
FUNC(d, tcg_env, n, m); \
}
-DO_3SAME_32(VHADD_S, hadd_s)
-DO_3SAME_32(VHADD_U, hadd_u)
DO_3SAME_32(VHSUB_S, hsub_s)
DO_3SAME_32(VHSUB_U, hsub_u)
DO_3SAME_32(VRHADD_S, rhadd_s)
--
2.34.1
* [PATCH v2 55/67] target/arm: Convert SHADD, UHADD to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (53 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 54/67] target/arm: Convert SHADD, UHADD to gvec Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:01 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 56/67] target/arm: Convert SHSUB, UHSUB to gvec Richard Henderson
` (12 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 2 ++
target/arm/tcg/translate-a64.c | 11 +++--------
2 files changed, 5 insertions(+), 8 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 3061e26242..e33d91fd0a 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -912,6 +912,8 @@ CMGE_v 0.00 1110 ..1 ..... 00111 1 ..... ..... @qrrr_e
CMHS_v 0.10 1110 ..1 ..... 00111 1 ..... ..... @qrrr_e
CMTST_v 0.00 1110 ..1 ..... 10001 1 ..... ..... @qrrr_e
CMEQ_v 0.10 1110 ..1 ..... 10001 1 ..... ..... @qrrr_e
+SHADD_v 0.00 1110 ..1 ..... 00000 1 ..... ..... @qrrr_e
+UHADD_v 0.10 1110 ..1 ..... 00000 1 ..... ..... @qrrr_e
### Advanced SIMD scalar x indexed element
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 5f3423513d..00c04425c1 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5454,6 +5454,8 @@ TRANS(UQRSHL_v, do_gvec_fn3, a, gen_neon_uqrshl)
TRANS(ADD_v, do_gvec_fn3, a, tcg_gen_gvec_add)
TRANS(SUB_v, do_gvec_fn3, a, tcg_gen_gvec_sub)
+TRANS(SHADD_v, do_gvec_fn3_no64, a, gen_gvec_shadd)
+TRANS(UHADD_v, do_gvec_fn3_no64, a, gen_gvec_uhadd)
static bool do_cmop_v(DisasContext *s, arg_qrrr_e *a, TCGCond cond)
{
@@ -10920,7 +10922,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
return;
}
/* fall through */
- case 0x0: /* SHADD, UHADD */
case 0x2: /* SRHADD, URHADD */
case 0x4: /* SHSUB, UHSUB */
case 0xc: /* SMAX, UMAX */
@@ -10946,6 +10947,7 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
break;
+ case 0x0: /* SHADD, UHADD */
case 0x01: /* SQADD, UQADD */
case 0x05: /* SQSUB, UQSUB */
case 0x06: /* CMGT, CMHI */
@@ -10965,13 +10967,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
- case 0x00: /* SHADD, UHADD */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_uhadd, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_shadd, size);
- }
- return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
--
2.34.1
* [PATCH v2 56/67] target/arm: Convert SHSUB, UHSUB to gvec
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (54 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 55/67] target/arm: Convert SHADD, UHADD to decodetree Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:02 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 57/67] target/arm: Convert SHSUB, UHSUB to decodetree Richard Henderson
` (11 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
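Notes for review: same shape as the SHADD/UHADD expansion, but for subtraction
the borrow term comes from andc: the removed neon_hsub_* helpers compute
(src1 - src2) >> 1 as (a >> 1) - (b >> 1) - (~a & b & 1), which is exactly
what the expanders do per lane. A standalone sketch (hypothetical *_ref names,
not part of the patch; the signed case assumes arithmetic right shift):

#include <assert.h>
#include <stdint.h>

/* Signed halving subtract: borrow when a's low bit is 0 and b's is 1. */
static int32_t shsub_ref(int32_t a, int32_t b)
{
    return (a >> 1) - (b >> 1) - (~a & b & 1);
}

/* Unsigned halving subtract: same borrow term, logical shifts. */
static uint32_t uhsub_ref(uint32_t a, uint32_t b)
{
    return (a >> 1) - (b >> 1) - (~a & b & 1);
}

int main(void)
{
    assert(shsub_ref(5, 2) == 1);  /* (5 - 2) >> 1 */
    assert(shsub_ref(0, 1) == -1); /* rounds toward minus infinity */
    assert(uhsub_ref(1, 2) == UINT32_MAX); /* wraps, as the old helper did */
    return 0;
}
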
target/arm/helper.h | 6 --
target/arm/tcg/translate.h | 4 +
target/arm/tcg/gengvec.c | 144 ++++++++++++++++++++++++++++++++
target/arm/tcg/neon_helper.c | 27 ------
target/arm/tcg/translate-a64.c | 17 ++--
target/arm/tcg/translate-neon.c | 4 +-
6 files changed, 157 insertions(+), 45 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index b26bfcb079..b95f24ed0a 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -274,12 +274,6 @@ DEF_HELPER_2(neon_rhadd_s16, i32, i32, i32)
DEF_HELPER_2(neon_rhadd_u16, i32, i32, i32)
DEF_HELPER_2(neon_rhadd_s32, s32, s32, s32)
DEF_HELPER_2(neon_rhadd_u32, i32, i32, i32)
-DEF_HELPER_2(neon_hsub_s8, i32, i32, i32)
-DEF_HELPER_2(neon_hsub_u8, i32, i32, i32)
-DEF_HELPER_2(neon_hsub_s16, i32, i32, i32)
-DEF_HELPER_2(neon_hsub_u16, i32, i32, i32)
-DEF_HELPER_2(neon_hsub_s32, s32, s32, s32)
-DEF_HELPER_2(neon_hsub_u32, i32, i32, i32)
DEF_HELPER_2(neon_pmin_u8, i32, i32, i32)
DEF_HELPER_2(neon_pmin_s8, i32, i32, i32)
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index dd99d76bf2..315e0afd04 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -476,6 +476,10 @@ void gen_gvec_shadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_uhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_shsub(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_uhsub(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
void gen_ushl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index 0627cec6b2..6a54ad2d21 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -1996,3 +1996,147 @@ void gen_gvec_uhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
tcg_debug_assert(vece <= MO_32);
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
}
+
+static void gen_shsub8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_andc_i64(t, b, a);
+ tcg_gen_vec_sar8i_i64(a, a, 1);
+ tcg_gen_vec_sar8i_i64(b, b, 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
+ tcg_gen_vec_sub8_i64(d, a, b);
+ tcg_gen_vec_sub8_i64(d, d, t);
+}
+
+static void gen_shsub16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_andc_i64(t, b, a);
+ tcg_gen_vec_sar16i_i64(a, a, 1);
+ tcg_gen_vec_sar16i_i64(b, b, 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
+ tcg_gen_vec_sub16_i64(d, a, b);
+ tcg_gen_vec_sub16_i64(d, d, t);
+}
+
+static void gen_shsub_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ tcg_gen_andc_i32(t, b, a);
+ tcg_gen_sari_i32(a, a, 1);
+ tcg_gen_sari_i32(b, b, 1);
+ tcg_gen_andi_i32(t, t, 1);
+ tcg_gen_sub_i32(d, a, b);
+ tcg_gen_sub_i32(d, d, t);
+}
+
+static void gen_shsub_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_andc_vec(vece, t, b, a);
+ tcg_gen_sari_vec(vece, a, a, 1);
+ tcg_gen_sari_vec(vece, b, b, 1);
+ tcg_gen_and_vec(vece, t, t, tcg_constant_vec_matching(d, vece, 1));
+ tcg_gen_sub_vec(vece, d, a, b);
+ tcg_gen_sub_vec(vece, d, d, t);
+}
+
+void gen_gvec_shsub(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sari_vec, INDEX_op_sub_vec, 0
+ };
+ static const GVecGen3 g[4] = {
+ { .fni8 = gen_shsub8_i64,
+ .fniv = gen_shsub_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_shsub16_i64,
+ .fniv = gen_shsub_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_shsub_i32,
+ .fniv = gen_shsub_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ };
+ assert(vece <= MO_32);
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
+}
+
+static void gen_uhsub8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_andc_i64(t, b, a);
+ tcg_gen_vec_shr8i_i64(a, a, 1);
+ tcg_gen_vec_shr8i_i64(b, b, 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
+ tcg_gen_vec_sub8_i64(d, a, b);
+ tcg_gen_vec_sub8_i64(d, d, t);
+}
+
+static void gen_uhsub16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_andc_i64(t, b, a);
+ tcg_gen_vec_shr16i_i64(a, a, 1);
+ tcg_gen_vec_shr16i_i64(b, b, 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
+ tcg_gen_vec_sub16_i64(d, a, b);
+ tcg_gen_vec_sub16_i64(d, d, t);
+}
+
+static void gen_uhsub_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ tcg_gen_andc_i32(t, b, a);
+ tcg_gen_shri_i32(a, a, 1);
+ tcg_gen_shri_i32(b, b, 1);
+ tcg_gen_andi_i32(t, t, 1);
+ tcg_gen_sub_i32(d, a, b);
+ tcg_gen_sub_i32(d, d, t);
+}
+
+static void gen_uhsub_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_andc_vec(vece, t, b, a);
+ tcg_gen_shri_vec(vece, a, a, 1);
+ tcg_gen_shri_vec(vece, b, b, 1);
+ tcg_gen_and_vec(vece, t, t, tcg_constant_vec_matching(d, vece, 1));
+ tcg_gen_sub_vec(vece, d, a, b);
+ tcg_gen_sub_vec(vece, d, d, t);
+}
+
+void gen_gvec_uhsub(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_shri_vec, INDEX_op_sub_vec, 0
+ };
+ static const GVecGen3 g[4] = {
+ { .fni8 = gen_uhsub8_i64,
+ .fniv = gen_uhsub_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_uhsub16_i64,
+ .fniv = gen_uhsub_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_uhsub_i32,
+ .fniv = gen_uhsub_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ };
+ assert(vece <= MO_32);
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
+}
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
index defd28a6f7..d1641a5252 100644
--- a/target/arm/tcg/neon_helper.c
+++ b/target/arm/tcg/neon_helper.c
@@ -206,33 +206,6 @@ uint32_t HELPER(neon_rhadd_u32)(uint32_t src1, uint32_t src2)
return dest;
}
-#define NEON_FN(dest, src1, src2) dest = (src1 - src2) >> 1
-NEON_VOP(hsub_s8, neon_s8, 4)
-NEON_VOP(hsub_u8, neon_u8, 4)
-NEON_VOP(hsub_s16, neon_s16, 2)
-NEON_VOP(hsub_u16, neon_u16, 2)
-#undef NEON_FN
-
-int32_t HELPER(neon_hsub_s32)(int32_t src1, int32_t src2)
-{
- int32_t dest;
-
- dest = (src1 >> 1) - (src2 >> 1);
- if ((~src1) & src2 & 1)
- dest--;
- return dest;
-}
-
-uint32_t HELPER(neon_hsub_u32)(uint32_t src1, uint32_t src2)
-{
- uint32_t dest;
-
- dest = (src1 >> 1) - (src2 >> 1);
- if ((~src1) & src2 & 1)
- dest--;
- return dest;
-}
-
#define NEON_FN(dest, src1, src2) dest = (src1 < src2) ? src1 : src2
NEON_POP(pmin_s8, neon_s8, 4)
NEON_POP(pmin_u8, neon_u8, 4)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 00c04425c1..63f7a59f94 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -10967,6 +10967,13 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
+ case 0x04: /* SHSUB, UHSUB */
+ if (u) {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_uhsub, size);
+ } else {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_shsub, size);
+ }
+ return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
@@ -11044,16 +11051,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
genfn = fns[size][u];
break;
}
- case 0x4: /* SHSUB, UHSUB */
- {
- static NeonGenTwoOpFn * const fns[3][2] = {
- { gen_helper_neon_hsub_s8, gen_helper_neon_hsub_u8 },
- { gen_helper_neon_hsub_s16, gen_helper_neon_hsub_u16 },
- { gen_helper_neon_hsub_s32, gen_helper_neon_hsub_u32 },
- };
- genfn = fns[size][u];
- break;
- }
default:
g_assert_not_reached();
}
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index 29e5c4a0a3..d59d5804c5 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -843,6 +843,8 @@ DO_3SAME_NO_SZ_3(VPMAX_U, gen_gvec_umaxp)
DO_3SAME_NO_SZ_3(VPMIN_U, gen_gvec_uminp)
DO_3SAME_NO_SZ_3(VHADD_S, gen_gvec_shadd)
DO_3SAME_NO_SZ_3(VHADD_U, gen_gvec_uhadd)
+DO_3SAME_NO_SZ_3(VHSUB_S, gen_gvec_shsub)
+DO_3SAME_NO_SZ_3(VHSUB_U, gen_gvec_uhsub)
#define DO_3SAME_CMP(INSN, COND) \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
@@ -953,8 +955,6 @@ DO_SHA2(SHA256SU1, gen_helper_crypto_sha256su1)
FUNC(d, tcg_env, n, m); \
}
-DO_3SAME_32(VHSUB_S, hsub_s)
-DO_3SAME_32(VHSUB_U, hsub_u)
DO_3SAME_32(VRHADD_S, rhadd_s)
DO_3SAME_32(VRHADD_U, rhadd_u)
--
2.34.1
* [PATCH v2 57/67] target/arm: Convert SHSUB, UHSUB to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (55 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 56/67] target/arm: Convert SHSUB, UHSUB to gvec Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:02 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 58/67] target/arm: Convert SRHADD, URHADD to gvec Richard Henderson
` (10 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 2 ++
target/arm/tcg/translate-a64.c | 11 +++--------
2 files changed, 5 insertions(+), 8 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index e33d91fd0a..b1bbcb144e 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -914,6 +914,8 @@ CMTST_v 0.00 1110 ..1 ..... 10001 1 ..... ..... @qrrr_e
CMEQ_v 0.10 1110 ..1 ..... 10001 1 ..... ..... @qrrr_e
SHADD_v 0.00 1110 ..1 ..... 00000 1 ..... ..... @qrrr_e
UHADD_v 0.10 1110 ..1 ..... 00000 1 ..... ..... @qrrr_e
+SHSUB_v 0.00 1110 ..1 ..... 00100 1 ..... ..... @qrrr_e
+UHSUB_v 0.10 1110 ..1 ..... 00100 1 ..... ..... @qrrr_e
### Advanced SIMD scalar x indexed element
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 63f7a59f94..6571b999f4 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5456,6 +5456,8 @@ TRANS(ADD_v, do_gvec_fn3, a, tcg_gen_gvec_add)
TRANS(SUB_v, do_gvec_fn3, a, tcg_gen_gvec_sub)
TRANS(SHADD_v, do_gvec_fn3_no64, a, gen_gvec_shadd)
TRANS(UHADD_v, do_gvec_fn3_no64, a, gen_gvec_uhadd)
+TRANS(SHSUB_v, do_gvec_fn3_no64, a, gen_gvec_shsub)
+TRANS(UHSUB_v, do_gvec_fn3_no64, a, gen_gvec_uhsub)
static bool do_cmop_v(DisasContext *s, arg_qrrr_e *a, TCGCond cond)
{
@@ -10923,7 +10925,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
/* fall through */
case 0x2: /* SRHADD, URHADD */
- case 0x4: /* SHSUB, UHSUB */
case 0xc: /* SMAX, UMAX */
case 0xd: /* SMIN, UMIN */
case 0xe: /* SABD, UABD */
@@ -10949,6 +10950,7 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
case 0x0: /* SHADD, UHADD */
case 0x01: /* SQADD, UQADD */
+ case 0x04: /* SHSUB, UHSUB */
case 0x05: /* SQSUB, UQSUB */
case 0x06: /* CMGT, CMHI */
case 0x07: /* CMGE, CMHS */
@@ -10967,13 +10969,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
- case 0x04: /* SHSUB, UHSUB */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_uhsub, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_shsub, size);
- }
- return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
--
2.34.1
* [PATCH v2 58/67] target/arm: Convert SRHADD, URHADD to gvec
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (56 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 57/67] target/arm: Convert SHSUB, UHSUB to decodetree Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:03 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 59/67] target/arm: Convert SRHADD, URHADD to decodetree Richard Henderson
` (9 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
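Notes for review: the rounding variant only changes the carry term relative to
SHADD/UHADD: the removed neon_rhadd_* helpers compute (src1 + src2 + 1) >> 1
as (a >> 1) + (b >> 1) + ((a | b) & 1), so the expanders are the halving-add
ones with the initial and swapped for an or. A standalone sketch (hypothetical
*_ref names, not part of the patch):

#include <assert.h>
#include <stdint.h>

/* Rounding halving add: round up whenever either low bit is set. */
static int32_t srhadd_ref(int32_t a, int32_t b)
{
    return (a >> 1) + (b >> 1) + ((a | b) & 1);
}

static uint32_t urhadd_ref(uint32_t a, uint32_t b)
{
    return (a >> 1) + (b >> 1) + ((a | b) & 1);
}

int main(void)
{
    assert(srhadd_ref(1, 2) == 2); /* (1 + 2 + 1) >> 1 */
    assert(srhadd_ref(2, 2) == 2); /* exact sum, no rounding */
    assert(urhadd_ref(UINT32_MAX, UINT32_MAX) == UINT32_MAX);
    return 0;
}
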
target/arm/helper.h | 7 --
target/arm/tcg/translate.h | 4 +
target/arm/tcg/gengvec.c | 144 ++++++++++++++++++++++++++++++++
target/arm/tcg/neon_helper.c | 27 ------
target/arm/tcg/translate-a64.c | 48 ++---------
target/arm/tcg/translate-neon.c | 26 +-----
6 files changed, 158 insertions(+), 98 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index b95f24ed0a..85f9302563 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -268,13 +268,6 @@ DEF_HELPER_FLAGS_2(fjcvtzs, TCG_CALL_NO_RWG, i64, f64, ptr)
DEF_HELPER_FLAGS_3(check_hcr_el2_trap, TCG_CALL_NO_WG, void, env, i32, i32)
/* neon_helper.c */
-DEF_HELPER_2(neon_rhadd_s8, i32, i32, i32)
-DEF_HELPER_2(neon_rhadd_u8, i32, i32, i32)
-DEF_HELPER_2(neon_rhadd_s16, i32, i32, i32)
-DEF_HELPER_2(neon_rhadd_u16, i32, i32, i32)
-DEF_HELPER_2(neon_rhadd_s32, s32, s32, s32)
-DEF_HELPER_2(neon_rhadd_u32, i32, i32, i32)
-
DEF_HELPER_2(neon_pmin_u8, i32, i32, i32)
DEF_HELPER_2(neon_pmin_s8, i32, i32, i32)
DEF_HELPER_2(neon_pmin_u16, i32, i32, i32)
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index 315e0afd04..3b1e68b779 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -480,6 +480,10 @@ void gen_gvec_shsub(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_uhsub(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_srhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_urhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
void gen_ushl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index 6a54ad2d21..32caabd126 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -2140,3 +2140,147 @@ void gen_gvec_uhsub(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
assert(vece <= MO_32);
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
}
+
+static void gen_srhadd8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_or_i64(t, a, b);
+ tcg_gen_vec_sar8i_i64(a, a, 1);
+ tcg_gen_vec_sar8i_i64(b, b, 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
+ tcg_gen_vec_add8_i64(d, a, b);
+ tcg_gen_vec_add8_i64(d, d, t);
+}
+
+static void gen_srhadd16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_or_i64(t, a, b);
+ tcg_gen_vec_sar16i_i64(a, a, 1);
+ tcg_gen_vec_sar16i_i64(b, b, 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
+ tcg_gen_vec_add16_i64(d, a, b);
+ tcg_gen_vec_add16_i64(d, d, t);
+}
+
+static void gen_srhadd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ tcg_gen_or_i32(t, a, b);
+ tcg_gen_sari_i32(a, a, 1);
+ tcg_gen_sari_i32(b, b, 1);
+ tcg_gen_andi_i32(t, t, 1);
+ tcg_gen_add_i32(d, a, b);
+ tcg_gen_add_i32(d, d, t);
+}
+
+static void gen_srhadd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_or_vec(vece, t, a, b);
+ tcg_gen_sari_vec(vece, a, a, 1);
+ tcg_gen_sari_vec(vece, b, b, 1);
+ tcg_gen_and_vec(vece, t, t, tcg_constant_vec_matching(d, vece, 1));
+ tcg_gen_add_vec(vece, d, a, b);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_srhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sari_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen3 g[] = {
+ { .fni8 = gen_srhadd8_i64,
+ .fniv = gen_srhadd_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_srhadd16_i64,
+ .fniv = gen_srhadd_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_srhadd_i32,
+ .fniv = gen_srhadd_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ };
+ assert(vece <= MO_32);
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
+}
+
+static void gen_urhadd8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_or_i64(t, a, b);
+ tcg_gen_vec_shr8i_i64(a, a, 1);
+ tcg_gen_vec_shr8i_i64(b, b, 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
+ tcg_gen_vec_add8_i64(d, a, b);
+ tcg_gen_vec_add8_i64(d, d, t);
+}
+
+static void gen_urhadd16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_or_i64(t, a, b);
+ tcg_gen_vec_shr16i_i64(a, a, 1);
+ tcg_gen_vec_shr16i_i64(b, b, 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
+ tcg_gen_vec_add16_i64(d, a, b);
+ tcg_gen_vec_add16_i64(d, d, t);
+}
+
+static void gen_urhadd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ tcg_gen_or_i32(t, a, b);
+ tcg_gen_shri_i32(a, a, 1);
+ tcg_gen_shri_i32(b, b, 1);
+ tcg_gen_andi_i32(t, t, 1);
+ tcg_gen_add_i32(d, a, b);
+ tcg_gen_add_i32(d, d, t);
+}
+
+static void gen_urhadd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_or_vec(vece, t, a, b);
+ tcg_gen_shri_vec(vece, a, a, 1);
+ tcg_gen_shri_vec(vece, b, b, 1);
+ tcg_gen_and_vec(vece, t, t, tcg_constant_vec_matching(d, vece, 1));
+ tcg_gen_add_vec(vece, d, a, b);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_urhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen3 g[] = {
+ { .fni8 = gen_urhadd8_i64,
+ .fniv = gen_urhadd_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_urhadd16_i64,
+ .fniv = gen_urhadd_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_urhadd_i32,
+ .fniv = gen_urhadd_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ };
+ assert(vece <= MO_32);
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
+}
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
index d1641a5252..082bfd88ad 100644
--- a/target/arm/tcg/neon_helper.c
+++ b/target/arm/tcg/neon_helper.c
@@ -179,33 +179,6 @@ uint32_t HELPER(glue(neon_,name))(uint32_t arg) \
return arg; \
}
-#define NEON_FN(dest, src1, src2) dest = (src1 + src2 + 1) >> 1
-NEON_VOP(rhadd_s8, neon_s8, 4)
-NEON_VOP(rhadd_u8, neon_u8, 4)
-NEON_VOP(rhadd_s16, neon_s16, 2)
-NEON_VOP(rhadd_u16, neon_u16, 2)
-#undef NEON_FN
-
-int32_t HELPER(neon_rhadd_s32)(int32_t src1, int32_t src2)
-{
- int32_t dest;
-
- dest = (src1 >> 1) + (src2 >> 1);
- if ((src1 | src2) & 1)
- dest++;
- return dest;
-}
-
-uint32_t HELPER(neon_rhadd_u32)(uint32_t src1, uint32_t src2)
-{
- uint32_t dest;
-
- dest = (src1 >> 1) + (src2 >> 1);
- if ((src1 | src2) & 1)
- dest++;
- return dest;
-}
-
#define NEON_FN(dest, src1, src2) dest = (src1 < src2) ? src1 : src2
NEON_POP(pmin_s8, neon_s8, 4)
NEON_POP(pmin_u8, neon_u8, 4)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 6571b999f4..40aa7a9d57 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -10915,7 +10915,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
int rm = extract32(insn, 16, 5);
int rn = extract32(insn, 5, 5);
int rd = extract32(insn, 0, 5);
- int pass;
switch (opcode) {
case 0x13: /* MUL, PMUL */
@@ -10969,6 +10968,13 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
+ case 0x02: /* SRHADD, URHADD */
+ if (u) {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_urhadd, size);
+ } else {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_srhadd, size);
+ }
+ return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
@@ -11021,45 +11027,7 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
return;
}
-
- if (size == 3) {
- g_assert_not_reached();
- } else {
- for (pass = 0; pass < (is_q ? 4 : 2); pass++) {
- TCGv_i32 tcg_op1 = tcg_temp_new_i32();
- TCGv_i32 tcg_op2 = tcg_temp_new_i32();
- TCGv_i32 tcg_res = tcg_temp_new_i32();
- NeonGenTwoOpFn *genfn = NULL;
- NeonGenTwoOpEnvFn *genenvfn = NULL;
-
- read_vec_element_i32(s, tcg_op1, rn, pass, MO_32);
- read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
-
- switch (opcode) {
- case 0x2: /* SRHADD, URHADD */
- {
- static NeonGenTwoOpFn * const fns[3][2] = {
- { gen_helper_neon_rhadd_s8, gen_helper_neon_rhadd_u8 },
- { gen_helper_neon_rhadd_s16, gen_helper_neon_rhadd_u16 },
- { gen_helper_neon_rhadd_s32, gen_helper_neon_rhadd_u32 },
- };
- genfn = fns[size][u];
- break;
- }
- default:
- g_assert_not_reached();
- }
-
- if (genenvfn) {
- genenvfn(tcg_res, tcg_env, tcg_op1, tcg_op2);
- } else {
- genfn(tcg_res, tcg_op1, tcg_op2);
- }
-
- write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
- }
- }
- clear_vec_high(s, is_q, rd);
+ g_assert_not_reached();
}
/* AdvSIMD three same
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index d59d5804c5..f9a8753906 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -845,6 +845,8 @@ DO_3SAME_NO_SZ_3(VHADD_S, gen_gvec_shadd)
DO_3SAME_NO_SZ_3(VHADD_U, gen_gvec_uhadd)
DO_3SAME_NO_SZ_3(VHSUB_S, gen_gvec_shsub)
DO_3SAME_NO_SZ_3(VHSUB_U, gen_gvec_uhsub)
+DO_3SAME_NO_SZ_3(VRHADD_S, gen_gvec_srhadd)
+DO_3SAME_NO_SZ_3(VRHADD_U, gen_gvec_urhadd)
#define DO_3SAME_CMP(INSN, COND) \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
@@ -922,27 +924,6 @@ DO_SHA2(SHA256H, gen_helper_crypto_sha256h)
DO_SHA2(SHA256H2, gen_helper_crypto_sha256h2)
DO_SHA2(SHA256SU1, gen_helper_crypto_sha256su1)
-#define DO_3SAME_32(INSN, FUNC) \
- static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
- uint32_t rn_ofs, uint32_t rm_ofs, \
- uint32_t oprsz, uint32_t maxsz) \
- { \
- static const GVecGen3 ops[4] = { \
- { .fni4 = gen_helper_neon_##FUNC##8 }, \
- { .fni4 = gen_helper_neon_##FUNC##16 }, \
- { .fni4 = gen_helper_neon_##FUNC##32 }, \
- { 0 }, \
- }; \
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &ops[vece]); \
- } \
- static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
- { \
- if (a->size > 2) { \
- return false; \
- } \
- return do_3same(s, a, gen_##INSN##_3s); \
- }
-
/*
* Some helper functions need to be passed the tcg_env. In order
* to use those with the gvec APIs like tcg_gen_gvec_3() we need
@@ -955,9 +936,6 @@ DO_SHA2(SHA256SU1, gen_helper_crypto_sha256su1)
FUNC(d, tcg_env, n, m); \
}
-DO_3SAME_32(VRHADD_S, rhadd_s)
-DO_3SAME_32(VRHADD_U, rhadd_u)
-
#define DO_3SAME_VQDMULH(INSN, FUNC) \
WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##_s16); \
WRAP_ENV_FN(gen_##INSN##_tramp32, gen_helper_neon_##FUNC##_s32); \
--
2.34.1
* [PATCH v2 59/67] target/arm: Convert SRHADD, URHADD to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (57 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 58/67] target/arm: Convert SRHADD, URHADD to gvec Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:04 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 60/67] target/arm: Convert SMAX, SMIN, UMAX, UMIN " Richard Henderson
` (8 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 2 ++
target/arm/tcg/translate-a64.c | 11 +++--------
2 files changed, 5 insertions(+), 8 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index b1bbcb144e..1c448b4f7c 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -916,6 +916,8 @@ SHADD_v 0.00 1110 ..1 ..... 00000 1 ..... ..... @qrrr_e
UHADD_v 0.10 1110 ..1 ..... 00000 1 ..... ..... @qrrr_e
SHSUB_v 0.00 1110 ..1 ..... 00100 1 ..... ..... @qrrr_e
UHSUB_v 0.10 1110 ..1 ..... 00100 1 ..... ..... @qrrr_e
+SRHADD_v 0.00 1110 ..1 ..... 00010 1 ..... ..... @qrrr_e
+URHADD_v 0.10 1110 ..1 ..... 00010 1 ..... ..... @qrrr_e
### Advanced SIMD scalar x indexed element
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 40aa7a9d57..9ef5de6755 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5458,6 +5458,8 @@ TRANS(SHADD_v, do_gvec_fn3_no64, a, gen_gvec_shadd)
TRANS(UHADD_v, do_gvec_fn3_no64, a, gen_gvec_uhadd)
TRANS(SHSUB_v, do_gvec_fn3_no64, a, gen_gvec_shsub)
TRANS(UHSUB_v, do_gvec_fn3_no64, a, gen_gvec_uhsub)
+TRANS(SRHADD_v, do_gvec_fn3_no64, a, gen_gvec_srhadd)
+TRANS(URHADD_v, do_gvec_fn3_no64, a, gen_gvec_urhadd)
static bool do_cmop_v(DisasContext *s, arg_qrrr_e *a, TCGCond cond)
{
@@ -10923,7 +10925,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
return;
}
/* fall through */
- case 0x2: /* SRHADD, URHADD */
case 0xc: /* SMAX, UMAX */
case 0xd: /* SMIN, UMIN */
case 0xe: /* SABD, UABD */
@@ -10949,6 +10950,7 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
case 0x0: /* SHADD, UHADD */
case 0x01: /* SQADD, UQADD */
+ case 0x02: /* SRHADD, URHADD */
case 0x04: /* SHSUB, UHSUB */
case 0x05: /* SQSUB, UQSUB */
case 0x06: /* CMGT, CMHI */
@@ -10968,13 +10970,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
- case 0x02: /* SRHADD, URHADD */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_urhadd, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_srhadd, size);
- }
- return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
--
2.34.1
* [PATCH v2 60/67] target/arm: Convert SMAX, SMIN, UMAX, UMIN to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (58 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 59/67] target/arm: Convert SRHADD, URHADD to decodetree Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:04 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 61/67] target/arm: Convert SABA, SABD, UABA, UABD " Richard Henderson
` (7 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 4 ++++
target/arm/tcg/translate-a64.c | 22 ++++++----------------
2 files changed, 10 insertions(+), 16 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 1c448b4f7c..bc98963bc5 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -918,6 +918,10 @@ SHSUB_v 0.00 1110 ..1 ..... 00100 1 ..... ..... @qrrr_e
UHSUB_v 0.10 1110 ..1 ..... 00100 1 ..... ..... @qrrr_e
SRHADD_v 0.00 1110 ..1 ..... 00010 1 ..... ..... @qrrr_e
URHADD_v 0.10 1110 ..1 ..... 00010 1 ..... ..... @qrrr_e
+SMAX_v 0.00 1110 ..1 ..... 01100 1 ..... ..... @qrrr_e
+UMAX_v 0.10 1110 ..1 ..... 01100 1 ..... ..... @qrrr_e
+SMIN_v 0.00 1110 ..1 ..... 01101 1 ..... ..... @qrrr_e
+UMIN_v 0.10 1110 ..1 ..... 01101 1 ..... ..... @qrrr_e
### Advanced SIMD scalar x indexed element
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 9ef5de6755..db6f59df17 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5460,6 +5460,10 @@ TRANS(SHSUB_v, do_gvec_fn3_no64, a, gen_gvec_shsub)
TRANS(UHSUB_v, do_gvec_fn3_no64, a, gen_gvec_uhsub)
TRANS(SRHADD_v, do_gvec_fn3_no64, a, gen_gvec_srhadd)
TRANS(URHADD_v, do_gvec_fn3_no64, a, gen_gvec_urhadd)
+TRANS(SMAX_v, do_gvec_fn3_no64, a, tcg_gen_gvec_smax)
+TRANS(UMAX_v, do_gvec_fn3_no64, a, tcg_gen_gvec_umax)
+TRANS(SMIN_v, do_gvec_fn3_no64, a, tcg_gen_gvec_smin)
+TRANS(UMIN_v, do_gvec_fn3_no64, a, tcg_gen_gvec_umin)
static bool do_cmop_v(DisasContext *s, arg_qrrr_e *a, TCGCond cond)
{
@@ -10925,8 +10929,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
return;
}
/* fall through */
- case 0xc: /* SMAX, UMAX */
- case 0xd: /* SMIN, UMIN */
case 0xe: /* SABD, UABD */
case 0xf: /* SABA, UABA */
case 0x12: /* MLA, MLS */
@@ -10959,6 +10961,8 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
case 0x09: /* SQSHL, UQSHL */
case 0x0a: /* SRSHL, URSHL */
case 0x0b: /* SQRSHL, UQRSHL */
+ case 0x0c: /* SMAX, UMAX */
+ case 0x0d: /* SMIN, UMIN */
case 0x10: /* ADD, SUB */
case 0x11: /* CMTST, CMEQ */
unallocated_encoding(s);
@@ -10970,20 +10974,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
- case 0x0c: /* SMAX, UMAX */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_smax, size);
- }
- return;
- case 0x0d: /* SMIN, UMIN */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umin, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_smin, size);
- }
- return;
case 0xe: /* SABD, UABD */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_uabd, size);
--
2.34.1
* [PATCH v2 61/67] target/arm: Convert SABA, SABD, UABA, UABD to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (59 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 60/67] target/arm: Convert SMAX, SMIN, UMAX, UMIN " Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:05 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 62/67] target/arm: Convert MUL, PMUL " Richard Henderson
` (6 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 4 ++++
target/arm/tcg/translate-a64.c | 22 ++++++----------------
2 files changed, 10 insertions(+), 16 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index bc98963bc5..07b604ec30 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -922,6 +922,10 @@ SMAX_v 0.00 1110 ..1 ..... 01100 1 ..... ..... @qrrr_e
UMAX_v 0.10 1110 ..1 ..... 01100 1 ..... ..... @qrrr_e
SMIN_v 0.00 1110 ..1 ..... 01101 1 ..... ..... @qrrr_e
UMIN_v 0.10 1110 ..1 ..... 01101 1 ..... ..... @qrrr_e
+SABD_v 0.00 1110 ..1 ..... 01110 1 ..... ..... @qrrr_e
+UABD_v 0.10 1110 ..1 ..... 01110 1 ..... ..... @qrrr_e
+SABA_v 0.00 1110 ..1 ..... 01111 1 ..... ..... @qrrr_e
+UABA_v 0.10 1110 ..1 ..... 01111 1 ..... ..... @qrrr_e
### Advanced SIMD scalar x indexed element
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index db6f59df17..61afbc434f 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5464,6 +5464,10 @@ TRANS(SMAX_v, do_gvec_fn3_no64, a, tcg_gen_gvec_smax)
TRANS(UMAX_v, do_gvec_fn3_no64, a, tcg_gen_gvec_umax)
TRANS(SMIN_v, do_gvec_fn3_no64, a, tcg_gen_gvec_smin)
TRANS(UMIN_v, do_gvec_fn3_no64, a, tcg_gen_gvec_umin)
+TRANS(SABA_v, do_gvec_fn3_no64, a, gen_gvec_saba)
+TRANS(UABA_v, do_gvec_fn3_no64, a, gen_gvec_uaba)
+TRANS(SABD_v, do_gvec_fn3_no64, a, gen_gvec_sabd)
+TRANS(UABD_v, do_gvec_fn3_no64, a, gen_gvec_uabd)
static bool do_cmop_v(DisasContext *s, arg_qrrr_e *a, TCGCond cond)
{
@@ -10929,8 +10933,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
return;
}
/* fall through */
- case 0xe: /* SABD, UABD */
- case 0xf: /* SABA, UABA */
case 0x12: /* MLA, MLS */
if (size == 3) {
unallocated_encoding(s);
@@ -10963,6 +10965,8 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
case 0x0b: /* SQRSHL, UQRSHL */
case 0x0c: /* SMAX, UMAX */
case 0x0d: /* SMIN, UMIN */
+ case 0x0e: /* SABD, UABD */
+ case 0x0f: /* SABA, UABA */
case 0x10: /* ADD, SUB */
case 0x11: /* CMTST, CMEQ */
unallocated_encoding(s);
@@ -10974,20 +10978,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
- case 0xe: /* SABD, UABD */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_uabd, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sabd, size);
- }
- return;
- case 0xf: /* SABA, UABA */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_uaba, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_saba, size);
- }
- return;
case 0x13: /* MUL, PMUL */
if (!u) { /* MUL */
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_mul, size);
--
2.34.1
* [PATCH v2 62/67] target/arm: Convert MUL, PMUL to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (60 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 61/67] target/arm: Convert SABA, SABD, UABA, UABD " Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:06 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 63/67] target/arm: Convert MLA, MLS " Richard Henderson
` (5 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 5 ++++
target/arm/tcg/translate-a64.c | 51 +++++++++++++---------------------
2 files changed, 25 insertions(+), 31 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 07b604ec30..3ea0643370 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -926,6 +926,8 @@ SABD_v 0.00 1110 ..1 ..... 01110 1 ..... ..... @qrrr_e
UABD_v 0.10 1110 ..1 ..... 01110 1 ..... ..... @qrrr_e
SABA_v 0.00 1110 ..1 ..... 01111 1 ..... ..... @qrrr_e
UABA_v 0.10 1110 ..1 ..... 01111 1 ..... ..... @qrrr_e
+MUL_v 0.00 1110 ..1 ..... 10011 1 ..... ..... @qrrr_e
+PMUL_v 0.10 1110 001 ..... 10011 1 ..... ..... @qrrr_b
### Advanced SIMD scalar x indexed element
@@ -967,3 +969,6 @@ FMLAL_vi 0.00 1111 10 .. .... 0000 . 0 ..... ..... @qrrx_h
FMLSL_vi 0.00 1111 10 .. .... 0100 . 0 ..... ..... @qrrx_h
FMLAL2_vi 0.10 1111 10 .. .... 1000 . 0 ..... ..... @qrrx_h
FMLSL2_vi 0.10 1111 10 .. .... 1100 . 0 ..... ..... @qrrx_h
+
+MUL_vi 0.00 1111 01 .. .... 1000 . 0 ..... ..... @qrrx_h
+MUL_vi 0.00 1111 10 . ..... 1000 . 0 ..... ..... @qrrx_s
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 61afbc434f..1909d1426c 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5468,6 +5468,8 @@ TRANS(SABA_v, do_gvec_fn3_no64, a, gen_gvec_saba)
TRANS(UABA_v, do_gvec_fn3_no64, a, gen_gvec_uaba)
TRANS(SABD_v, do_gvec_fn3_no64, a, gen_gvec_sabd)
TRANS(UABD_v, do_gvec_fn3_no64, a, gen_gvec_uabd)
+TRANS(MUL_v, do_gvec_fn3_no64, a, tcg_gen_gvec_mul)
+TRANS(PMUL_v, do_gvec_op3_ool, a, 0, gen_helper_gvec_pmul_b)
static bool do_cmop_v(DisasContext *s, arg_qrrr_e *a, TCGCond cond)
{
@@ -5694,6 +5696,22 @@ TRANS_FEAT(FMLSL_vi, aa64_fhm, do_fmlal_idx, a, true, false)
TRANS_FEAT(FMLAL2_vi, aa64_fhm, do_fmlal_idx, a, false, true)
TRANS_FEAT(FMLSL2_vi, aa64_fhm, do_fmlal_idx, a, true, true)
+static bool do_int3_vector_idx(DisasContext *s, arg_qrrx_e *a,
+ gen_helper_gvec_3 * const fns[2])
+{
+ assert(a->esz == MO_16 || a->esz == MO_32);
+ if (fp_access_check(s)) {
+ gen_gvec_op3_ool(s, a->q, a->rd, a->rn, a->rm, a->idx, fns[a->esz - 1]);
+ }
+ return true;
+}
+
+static gen_helper_gvec_3 * const f_vector_idx_mul[2] = {
+ gen_helper_gvec_mul_idx_h,
+ gen_helper_gvec_mul_idx_s,
+};
+TRANS(MUL_vi, do_int3_vector_idx, a, f_vector_idx_mul)
+
/*
* Advanced SIMD scalar pairwise
*/
@@ -10927,12 +10945,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
int rd = extract32(insn, 0, 5);
switch (opcode) {
- case 0x13: /* MUL, PMUL */
- if (u && size != 0) {
- unallocated_encoding(s);
- return;
- }
- /* fall through */
case 0x12: /* MLA, MLS */
if (size == 3) {
unallocated_encoding(s);
@@ -10969,6 +10981,7 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
case 0x0f: /* SABA, UABA */
case 0x10: /* ADD, SUB */
case 0x11: /* CMTST, CMEQ */
+ case 0x13: /* MUL, PMUL */
unallocated_encoding(s);
return;
}
@@ -10978,13 +10991,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
- case 0x13: /* MUL, PMUL */
- if (!u) { /* MUL */
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_mul, size);
- } else { /* PMUL */
- gen_gvec_op3_ool(s, is_q, rd, rn, rm, 0, gen_helper_gvec_pmul_b);
- }
- return;
case 0x12: /* MLA, MLS */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_mls, size);
@@ -12198,7 +12204,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
TCGv_ptr fpst;
switch (16 * u + opcode) {
- case 0x08: /* MUL */
case 0x10: /* MLA */
case 0x14: /* MLS */
if (is_scalar) {
@@ -12285,6 +12290,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
case 0x01: /* FMLA */
case 0x04: /* FMLSL */
case 0x05: /* FMLS */
+ case 0x08: /* MUL */
case 0x09: /* FMUL */
case 0x18: /* FMLAL2 */
case 0x19: /* FMULX */
@@ -12407,22 +12413,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
return;
- case 0x08: /* MUL */
- if (!is_long && !is_scalar) {
- static gen_helper_gvec_3 * const fns[3] = {
- gen_helper_gvec_mul_idx_h,
- gen_helper_gvec_mul_idx_s,
- gen_helper_gvec_mul_idx_d,
- };
- tcg_gen_gvec_3_ool(vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm),
- is_q ? 16 : 8, vec_full_reg_size(s),
- index, fns[size - 1]);
- return;
- }
- break;
-
case 0x10: /* MLA */
if (!is_long && !is_scalar) {
static gen_helper_gvec_4 * const fns[3] = {
@@ -12491,7 +12481,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
read_vec_element_i32(s, tcg_op, rn, pass, is_scalar ? size : MO_32);
switch (16 * u + opcode) {
- case 0x08: /* MUL */
case 0x10: /* MLA */
case 0x14: /* MLS */
{
--
2.34.1
* [PATCH v2 63/67] target/arm: Convert MLA, MLS to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (61 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 62/67] target/arm: Convert MUL, PMUL " Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:07 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 64/67] target/arm: Tidy SQDMULH, SQRDMULH (vector) Richard Henderson
` (4 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 8 ++++
target/arm/tcg/translate-a64.c | 77 ++++++++++------------------------
2 files changed, 31 insertions(+), 54 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 3ea0643370..2dea68a0a9 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -928,6 +928,8 @@ SABA_v 0.00 1110 ..1 ..... 01111 1 ..... ..... @qrrr_e
UABA_v 0.10 1110 ..1 ..... 01111 1 ..... ..... @qrrr_e
MUL_v 0.00 1110 ..1 ..... 10011 1 ..... ..... @qrrr_e
PMUL_v 0.10 1110 001 ..... 10011 1 ..... ..... @qrrr_b
+MLA_v 0.00 1110 ..1 ..... 10010 1 ..... ..... @qrrr_e
+MLS_v 0.10 1110 ..1 ..... 10010 1 ..... ..... @qrrr_e
### Advanced SIMD scalar x indexed element
@@ -972,3 +974,9 @@ FMLSL2_vi 0.10 1111 10 .. .... 1100 . 0 ..... ..... @qrrx_h
MUL_vi 0.00 1111 01 .. .... 1000 . 0 ..... ..... @qrrx_h
MUL_vi 0.00 1111 10 . ..... 1000 . 0 ..... ..... @qrrx_s
+
+MLA_vi 0.10 1111 01 .. .... 0000 . 0 ..... ..... @qrrx_h
+MLA_vi 0.10 1111 10 . ..... 0000 . 0 ..... ..... @qrrx_s
+
+MLS_vi 0.10 1111 01 .. .... 0100 . 0 ..... ..... @qrrx_h
+MLS_vi 0.10 1111 10 . ..... 0100 . 0 ..... ..... @qrrx_s
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 1909d1426c..c4601cde2f 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5470,6 +5470,8 @@ TRANS(SABD_v, do_gvec_fn3_no64, a, gen_gvec_sabd)
TRANS(UABD_v, do_gvec_fn3_no64, a, gen_gvec_uabd)
TRANS(MUL_v, do_gvec_fn3_no64, a, tcg_gen_gvec_mul)
TRANS(PMUL_v, do_gvec_op3_ool, a, 0, gen_helper_gvec_pmul_b)
+TRANS(MLA_v, do_gvec_fn3_no64, a, gen_gvec_mla)
+TRANS(MLS_v, do_gvec_fn3_no64, a, gen_gvec_mls)
static bool do_cmop_v(DisasContext *s, arg_qrrr_e *a, TCGCond cond)
{
@@ -5712,6 +5714,24 @@ static gen_helper_gvec_3 * const f_vector_idx_mul[2] = {
};
TRANS(MUL_vi, do_int3_vector_idx, a, f_vector_idx_mul)
+static bool do_mla_vector_idx(DisasContext *s, arg_qrrx_e *a, bool sub)
+{
+ static gen_helper_gvec_4 * const fns[2][2] = {
+ { gen_helper_gvec_mla_idx_h, gen_helper_gvec_mls_idx_h },
+ { gen_helper_gvec_mla_idx_s, gen_helper_gvec_mls_idx_s },
+ };
+
+ assert(a->esz == MO_16 || a->esz == MO_32);
+ if (fp_access_check(s)) {
+ gen_gvec_op4_ool(s, a->q, a->rd, a->rn, a->rm, a->rd,
+ a->idx, fns[a->esz - 1][sub]);
+ }
+ return true;
+}
+
+TRANS(MLA_vi, do_mla_vector_idx, a, false)
+TRANS(MLS_vi, do_mla_vector_idx, a, true)
+
/*
* Advanced SIMD scalar pairwise
*/
@@ -10945,12 +10965,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
int rd = extract32(insn, 0, 5);
switch (opcode) {
- case 0x12: /* MLA, MLS */
- if (size == 3) {
- unallocated_encoding(s);
- return;
- }
- break;
case 0x16: /* SQDMULH, SQRDMULH */
if (size == 0 || size == 3) {
unallocated_encoding(s);
@@ -10981,6 +10995,7 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
case 0x0f: /* SABA, UABA */
case 0x10: /* ADD, SUB */
case 0x11: /* CMTST, CMEQ */
+ case 0x12: /* MLA, MLS */
case 0x13: /* MUL, PMUL */
unallocated_encoding(s);
return;
@@ -10991,13 +11006,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
}
switch (opcode) {
- case 0x12: /* MLA, MLS */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_mls, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_mla, size);
- }
- return;
case 0x16: /* SQDMULH, SQRDMULH */
{
static gen_helper_gvec_3_ptr * const fns[2][2] = {
@@ -12204,13 +12212,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
TCGv_ptr fpst;
switch (16 * u + opcode) {
- case 0x10: /* MLA */
- case 0x14: /* MLS */
- if (is_scalar) {
- unallocated_encoding(s);
- return;
- }
- break;
case 0x02: /* SMLAL, SMLAL2 */
case 0x12: /* UMLAL, UMLAL2 */
case 0x06: /* SMLSL, SMLSL2 */
@@ -12292,6 +12293,8 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
case 0x05: /* FMLS */
case 0x08: /* MUL */
case 0x09: /* FMUL */
+ case 0x10: /* MLA */
+ case 0x14: /* MLS */
case 0x18: /* FMLAL2 */
case 0x19: /* FMULX */
case 0x1c: /* FMLSL2 */
@@ -12412,40 +12415,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
: gen_helper_gvec_fcmlah_idx);
}
return;
-
- case 0x10: /* MLA */
- if (!is_long && !is_scalar) {
- static gen_helper_gvec_4 * const fns[3] = {
- gen_helper_gvec_mla_idx_h,
- gen_helper_gvec_mla_idx_s,
- gen_helper_gvec_mla_idx_d,
- };
- tcg_gen_gvec_4_ool(vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm),
- vec_full_reg_offset(s, rd),
- is_q ? 16 : 8, vec_full_reg_size(s),
- index, fns[size - 1]);
- return;
- }
- break;
-
- case 0x14: /* MLS */
- if (!is_long && !is_scalar) {
- static gen_helper_gvec_4 * const fns[3] = {
- gen_helper_gvec_mls_idx_h,
- gen_helper_gvec_mls_idx_s,
- gen_helper_gvec_mls_idx_d,
- };
- tcg_gen_gvec_4_ool(vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm),
- vec_full_reg_offset(s, rd),
- is_q ? 16 : 8, vec_full_reg_size(s),
- index, fns[size - 1]);
- return;
- }
- break;
}
if (size == 3) {
--
2.34.1
* [PATCH v2 64/67] target/arm: Tidy SQDMULH, SQRDMULH (vector)
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (62 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 63/67] target/arm: Convert MLA, MLS " Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:08 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 65/67] target/arm: Convert SQDMULH, SQRDMULH to decodetree Richard Henderson
` (3 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
We already have a gvec helper for the operations, but we aren't
using it on the aa32 neon side. Create a unified expander for
use by both aa32 and aa64 translators.
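To illustrate the result (a sketch only; the signatures and call sites below are taken from the diff that follows), both translators now go through the same entry point:

    /* unified expander, declared in translate.h */
    void gen_gvec_sqdmulh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
                             uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);

    /* aa64, translate-a64.c */
    gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sqdmulh_qc, size);

    /* aa32 neon, translate-neon.c, via DO_3SAME_VQDMULH */
    do_3same(s, a, gen_gvec_sqdmulh_qc);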
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/translate.h | 4 ++++
target/arm/tcg/gengvec.c | 20 ++++++++++++++++++++
target/arm/tcg/translate-a64.c | 23 ++++-------------------
target/arm/tcg/translate-neon.c | 23 +++--------------------
4 files changed, 31 insertions(+), 39 deletions(-)
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index 3b1e68b779..aba21f730f 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -539,6 +539,10 @@ void gen_gvec_sri(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
void gen_gvec_sli(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
int64_t shift, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_sqdmulh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_sqrdmulh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index 32caabd126..462c185f9a 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -34,6 +34,26 @@ static void gen_gvec_fn3_qc(uint32_t rd_ofs, uint32_t rn_ofs, uint32_t rm_ofs,
opr_sz, max_sz, 0, fn);
}
+void gen_gvec_sqdmulh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3_ptr * const fns[2] = {
+ gen_helper_neon_sqdmulh_h, gen_helper_neon_sqdmulh_s
+ };
+ tcg_debug_assert(vece >= 1 && vece <= 2);
+ gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
+}
+
+void gen_gvec_sqrdmulh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3_ptr * const fns[2] = {
+ gen_helper_neon_sqrdmulh_h, gen_helper_neon_sqrdmulh_s
+ };
+ tcg_debug_assert(vece >= 1 && vece <= 2);
+ gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
+}
+
void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index c4601cde2f..c673b95ec7 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -724,19 +724,6 @@ static void gen_gvec_op3_fpst(DisasContext *s, bool is_q, int rd, int rn,
is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
}
-/* Expand a 3-operand + qc + operation using an out-of-line helper. */
-static void gen_gvec_op3_qc(DisasContext *s, bool is_q, int rd, int rn,
- int rm, gen_helper_gvec_3_ptr *fn)
-{
- TCGv_ptr qc_ptr = tcg_temp_new_ptr();
-
- tcg_gen_addi_ptr(qc_ptr, tcg_env, offsetof(CPUARMState, vfp.qc));
- tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm), qc_ptr,
- is_q ? 16 : 8, vec_full_reg_size(s), 0, fn);
-}
-
/* Expand a 4-operand operation using an out-of-line helper. */
static void gen_gvec_op4_ool(DisasContext *s, bool is_q, int rd, int rn,
int rm, int ra, int data, gen_helper_gvec_4 *fn)
@@ -11007,12 +10994,10 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
switch (opcode) {
case 0x16: /* SQDMULH, SQRDMULH */
- {
- static gen_helper_gvec_3_ptr * const fns[2][2] = {
- { gen_helper_neon_sqdmulh_h, gen_helper_neon_sqrdmulh_h },
- { gen_helper_neon_sqdmulh_s, gen_helper_neon_sqrdmulh_s },
- };
- gen_gvec_op3_qc(s, is_q, rd, rn, rm, fns[size - 1][u]);
+ if (u) {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sqrdmulh_qc, size);
+ } else {
+ gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sqdmulh_qc, size);
}
return;
}
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index f9a8753906..915c9e56db 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -937,28 +937,11 @@ DO_SHA2(SHA256SU1, gen_helper_crypto_sha256su1)
}
#define DO_3SAME_VQDMULH(INSN, FUNC) \
- WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##_s16); \
- WRAP_ENV_FN(gen_##INSN##_tramp32, gen_helper_neon_##FUNC##_s32); \
- static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
- uint32_t rn_ofs, uint32_t rm_ofs, \
- uint32_t oprsz, uint32_t maxsz) \
- { \
- static const GVecGen3 ops[2] = { \
- { .fni4 = gen_##INSN##_tramp16 }, \
- { .fni4 = gen_##INSN##_tramp32 }, \
- }; \
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &ops[vece - 1]); \
- } \
static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
- { \
- if (a->size != 1 && a->size != 2) { \
- return false; \
- } \
- return do_3same(s, a, gen_##INSN##_3s); \
- }
+ { return a->size >= 1 && a->size <= 2 && do_3same(s, a, FUNC); }
-DO_3SAME_VQDMULH(VQDMULH, qdmulh)
-DO_3SAME_VQDMULH(VQRDMULH, qrdmulh)
+DO_3SAME_VQDMULH(VQDMULH, gen_gvec_sqdmulh_qc)
+DO_3SAME_VQDMULH(VQRDMULH, gen_gvec_sqrdmulh_qc)
#define WRAP_FP_GVEC(WRAPNAME, FPST, FUNC) \
static void WRAPNAME(unsigned vece, uint32_t rd_ofs, \
--
2.34.1
* [PATCH v2 65/67] target/arm: Convert SQDMULH, SQRDMULH to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (63 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 64/67] target/arm: Tidy SQDMULH, SQRDMULH (vector) Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:10 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 66/67] target/arm: Convert FMADD, FMSUB, FNMADD, FNMSUB " Richard Henderson
` (2 subsequent siblings)
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
These are the last instructions within disas_simd_three_reg_same
and disas_simd_scalar_three_reg_same, so remove them.
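Each TRANS() line added earlier in the series provides the trans_* function that the generated decoder dispatches to, which is what makes the hand-written opcode switches unreachable. As a sketch (assuming the usual TRANS macro definition in translate-a64.c), a line such as

    TRANS(SQDMULH_v, do_gvec_fn3_no8_no64, a, gen_gvec_sqdmulh_qc)

expands to approximately:

    static bool trans_SQDMULH_v(DisasContext *s, arg_SQDMULH_v *a)
    {
        return do_gvec_fn3_no8_no64(s, a, gen_gvec_sqdmulh_qc);
    }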
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper.h | 10 ++
target/arm/tcg/a64.decode | 18 +++
target/arm/tcg/translate-a64.c | 276 ++++++++++-----------------------
target/arm/tcg/vec_helper.c | 64 ++++++++
4 files changed, 172 insertions(+), 196 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 85f9302563..24feecee9b 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -968,6 +968,16 @@ DEF_HELPER_FLAGS_5(neon_sqrdmulh_h, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_5(neon_sqrdmulh_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_sqdmulh_idx_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_sqdmulh_idx_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(neon_sqrdmulh_idx_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(neon_sqrdmulh_idx_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_4(sve2_sqdmulh_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(sve2_sqdmulh_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(sve2_sqdmulh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 2dea68a0a9..f7f897f9fc 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -774,6 +774,9 @@ CMHS_s 0111 1110 111 ..... 00111 1 ..... ..... @rrr_d
CMTST_s 0101 1110 111 ..... 10001 1 ..... ..... @rrr_d
CMEQ_s 0111 1110 111 ..... 10001 1 ..... ..... @rrr_d
+SQDMULH_s 0101 1110 ..1 ..... 10110 1 ..... ..... @rrr_e
+SQRDMULH_s 0111 1110 ..1 ..... 10110 1 ..... ..... @rrr_e
+
### Advanced SIMD scalar pairwise
FADDP_s 0101 1110 0011 0000 1101 10 ..... ..... @rr_h
@@ -931,6 +934,9 @@ PMUL_v 0.10 1110 001 ..... 10011 1 ..... ..... @qrrr_b
MLA_v 0.00 1110 ..1 ..... 10010 1 ..... ..... @qrrr_e
MLS_v 0.10 1110 ..1 ..... 10010 1 ..... ..... @qrrr_e
+SQDMULH_v 0.00 1110 ..1 ..... 10110 1 ..... ..... @qrrr_e
+SQRDMULH_v 0.10 1110 ..1 ..... 10110 1 ..... ..... @qrrr_e
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
@@ -949,6 +955,12 @@ FMULX_si 0111 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
FMULX_si 0111 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
FMULX_si 0111 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
+SQDMULH_si 0101 1111 01 .. .... 1100 . 0 ..... ..... @rrx_h
+SQDMULH_si 0101 1111 10 . ..... 1100 . 0 ..... ..... @rrx_s
+
+SQRDMULH_si 0101 1111 01 .. .... 1101 . 0 ..... ..... @rrx_h
+SQRDMULH_si 0101 1111 10 . ..... 1101 . 0 ..... ..... @rrx_s
+
### Advanced SIMD vector x indexed element
FMUL_vi 0.00 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
@@ -980,3 +992,9 @@ MLA_vi 0.10 1111 10 . ..... 0000 . 0 ..... ..... @qrrx_s
MLS_vi 0.10 1111 01 .. .... 0100 . 0 ..... ..... @qrrx_h
MLS_vi 0.10 1111 10 . ..... 0100 . 0 ..... ..... @qrrx_s
+
+SQDMULH_vi 0.00 1111 01 .. .... 1100 . 0 ..... ..... @qrrx_h
+SQDMULH_vi 0.00 1111 10 . ..... 1100 . 0 ..... ..... @qrrx_s
+
+SQRDMULH_vi 0.00 1111 01 .. .... 1101 . 0 ..... ..... @qrrx_h
+SQRDMULH_vi 0.00 1111 10 . ..... 1101 . 0 ..... ..... @qrrx_s
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index c673b95ec7..14226c56cf 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -1350,6 +1350,14 @@ static bool do_gvec_fn3_no64(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
return true;
}
+static bool do_gvec_fn3_no8_no64(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
+{
+ if (a->esz == MO_8) {
+ return false;
+ }
+ return do_gvec_fn3_no64(s, a, fn);
+}
+
static bool do_gvec_fn4(DisasContext *s, arg_qrrrr_e *a, GVecGen4Fn *fn)
{
if (!a->q && a->esz == MO_64) {
@@ -5167,6 +5175,25 @@ static const ENVScalar2 f_scalar_uqrshl = {
};
TRANS(UQRSHL_s, do_env_scalar2, a, &f_scalar_uqrshl)
+static bool do_env_scalar2_hs(DisasContext *s, arg_rrr_e *a,
+ const ENVScalar2 *f)
+{
+ if (a->esz == MO_16 || a->esz == MO_32) {
+ return do_env_scalar2(s, a, f);
+ }
+ return false;
+}
+
+static const ENVScalar2 f_scalar_sqdmulh = {
+ { NULL, gen_helper_neon_qdmulh_s16, gen_helper_neon_qdmulh_s32 }
+};
+TRANS(SQDMULH_s, do_env_scalar2_hs, a, &f_scalar_sqdmulh)
+
+static const ENVScalar2 f_scalar_sqrdmulh = {
+ { NULL, gen_helper_neon_qrdmulh_s16, gen_helper_neon_qrdmulh_s32 }
+};
+TRANS(SQRDMULH_s, do_env_scalar2_hs, a, &f_scalar_sqrdmulh)
+
static bool do_cmop_d(DisasContext *s, arg_rrr_e *a, TCGCond cond)
{
if (fp_access_check(s)) {
@@ -5482,6 +5509,9 @@ TRANS(CMHS_v, do_cmop_v, a, TCG_COND_GEU)
TRANS(CMEQ_v, do_cmop_v, a, TCG_COND_EQ)
TRANS(CMTST_v, do_gvec_fn3, a, gen_gvec_cmtst)
+TRANS(SQDMULH_v, do_gvec_fn3_no8_no64, a, gen_gvec_sqdmulh_qc)
+TRANS(SQRDMULH_v, do_gvec_fn3_no8_no64, a, gen_gvec_sqrdmulh_qc)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -5589,6 +5619,27 @@ static bool do_fmla_scalar_idx(DisasContext *s, arg_rrx_e *a, bool neg)
TRANS(FMLA_si, do_fmla_scalar_idx, a, false)
TRANS(FMLS_si, do_fmla_scalar_idx, a, true)
+static bool do_env_scalar2_idx_hs(DisasContext *s, arg_rrx_e *a,
+ const ENVScalar2 *f)
+{
+ if (a->esz < MO_16 || a->esz > MO_32) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = tcg_temp_new_i32();
+ TCGv_i32 t1 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t0, a->rn, 0, a->esz);
+ read_vec_element_i32(s, t1, a->rm, a->idx, a->esz);
+ f->gen_bhs[a->esz](t0, tcg_env, t0, t1);
+ write_fp_sreg(s, a->rd, t0);
+ }
+ return true;
+}
+
+TRANS(SQDMULH_si, do_env_scalar2_idx_hs, a, &f_scalar_sqdmulh)
+TRANS(SQRDMULH_si, do_env_scalar2_idx_hs, a, &f_scalar_sqrdmulh)
+
static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5719,6 +5770,33 @@ static bool do_mla_vector_idx(DisasContext *s, arg_qrrx_e *a, bool sub)
TRANS(MLA_vi, do_mla_vector_idx, a, false)
TRANS(MLS_vi, do_mla_vector_idx, a, true)
+static bool do_int3_qc_vector_idx(DisasContext *s, arg_qrrx_e *a,
+ gen_helper_gvec_4 * const fns[2])
+{
+ assert(a->esz == MO_16 || a->esz == MO_32);
+ if (fp_access_check(s)) {
+ tcg_gen_gvec_4_ool(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm),
+ offsetof(CPUARMState, vfp.qc),
+ a->q ? 16 : 8, vec_full_reg_size(s),
+ a->idx, fns[a->esz - 1]);
+ }
+ return true;
+}
+
+static gen_helper_gvec_4 * const f_vector_idx_sqdmulh[2] = {
+ gen_helper_neon_sqdmulh_idx_h,
+ gen_helper_neon_sqdmulh_idx_s,
+};
+TRANS(SQDMULH_vi, do_int3_qc_vector_idx, a, f_vector_idx_sqdmulh)
+
+static gen_helper_gvec_4 * const f_vector_idx_sqrdmulh[2] = {
+ gen_helper_neon_sqrdmulh_idx_h,
+ gen_helper_neon_sqrdmulh_idx_s,
+};
+TRANS(SQRDMULH_vi, do_int3_qc_vector_idx, a, f_vector_idx_sqrdmulh)
+
/*
* Advanced SIMD scalar pairwise
*/
@@ -9500,109 +9578,6 @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
}
}
-/* AdvSIMD scalar three same
- * 31 30 29 28 24 23 22 21 20 16 15 11 10 9 5 4 0
- * +-----+---+-----------+------+---+------+--------+---+------+------+
- * | 0 1 | U | 1 1 1 1 0 | size | 1 | Rm | opcode | 1 | Rn | Rd |
- * +-----+---+-----------+------+---+------+--------+---+------+------+
- */
-static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
-{
- int rd = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int opcode = extract32(insn, 11, 5);
- int rm = extract32(insn, 16, 5);
- int size = extract32(insn, 22, 2);
- bool u = extract32(insn, 29, 1);
- TCGv_i64 tcg_rd;
-
- switch (opcode) {
- case 0x16: /* SQDMULH, SQRDMULH (vector) */
- if (size != 1 && size != 2) {
- unallocated_encoding(s);
- return;
- }
- break;
- default:
- case 0x1: /* SQADD, UQADD */
- case 0x5: /* SQSUB, UQSUB */
- case 0x6: /* CMGT, CMHI */
- case 0x7: /* CMGE, CMHS */
- case 0x8: /* SSHL, USHL */
- case 0x9: /* SQSHL, UQSHL */
- case 0xa: /* SRSHL, URSHL */
- case 0xb: /* SQRSHL, UQRSHL */
- case 0x10: /* ADD, SUB (vector) */
- case 0x11: /* CMTST, CMEQ */
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- tcg_rd = tcg_temp_new_i64();
-
- if (size == 3) {
- g_assert_not_reached();
- } else {
- /* Do a single operation on the lowest element in the vector.
- * We use the standard Neon helpers and rely on 0 OP 0 == 0 with
- * no side effects for all these operations.
- * OPTME: special-purpose helpers would avoid doing some
- * unnecessary work in the helper for the 8 and 16 bit cases.
- */
- NeonGenTwoOpEnvFn *genenvfn = NULL;
- void (*genfn)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_i64, MemOp) = NULL;
-
- switch (opcode) {
- case 0x16: /* SQDMULH, SQRDMULH */
- {
- static NeonGenTwoOpEnvFn * const fns[2][2] = {
- { gen_helper_neon_qdmulh_s16, gen_helper_neon_qrdmulh_s16 },
- { gen_helper_neon_qdmulh_s32, gen_helper_neon_qrdmulh_s32 },
- };
- assert(size == 1 || size == 2);
- genenvfn = fns[size - 1][u];
- break;
- }
- default:
- case 0x1: /* SQADD, UQADD */
- case 0x5: /* SQSUB, UQSUB */
- case 0x9: /* SQSHL, UQSHL */
- case 0xb: /* SQRSHL, UQRSHL */
- g_assert_not_reached();
- }
-
- if (genenvfn) {
- TCGv_i32 tcg_rn = tcg_temp_new_i32();
- TCGv_i32 tcg_rm = tcg_temp_new_i32();
-
- read_vec_element_i32(s, tcg_rn, rn, 0, size);
- read_vec_element_i32(s, tcg_rm, rm, 0, size);
- genenvfn(tcg_rn, tcg_env, tcg_rn, tcg_rm);
- tcg_gen_extu_i32_i64(tcg_rd, tcg_rn);
- } else {
- TCGv_i64 tcg_rn = tcg_temp_new_i64();
- TCGv_i64 tcg_rm = tcg_temp_new_i64();
- TCGv_i64 qc = tcg_temp_new_i64();
-
- read_vec_element(s, tcg_rn, rn, 0, size | (u ? 0 : MO_SIGN));
- read_vec_element(s, tcg_rm, rm, 0, size | (u ? 0 : MO_SIGN));
- tcg_gen_ld_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
- genfn(tcg_rd, qc, tcg_rn, tcg_rm, size);
- tcg_gen_st_i64(qc, tcg_env, offsetof(CPUARMState, vfp.qc));
- if (!u) {
- /* Truncate signed 64-bit result for writeback. */
- tcg_gen_ext_i64(tcg_rd, tcg_rd, size);
- }
- }
- }
-
- write_fp_dreg(s, rd, tcg_rd);
-}
-
/* AdvSIMD scalar three same extra
* 31 30 29 28 24 23 22 21 20 16 15 14 11 10 9 5 4 0
* +-----+---+-----------+------+---+------+---+--------+---+----+----+
@@ -10940,94 +10915,6 @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
}
}
-/* Integer op subgroup of C3.6.16. */
-static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
-{
- int is_q = extract32(insn, 30, 1);
- int u = extract32(insn, 29, 1);
- int size = extract32(insn, 22, 2);
- int opcode = extract32(insn, 11, 5);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
-
- switch (opcode) {
- case 0x16: /* SQDMULH, SQRDMULH */
- if (size == 0 || size == 3) {
- unallocated_encoding(s);
- return;
- }
- break;
- default:
- if (size == 3 && !is_q) {
- unallocated_encoding(s);
- return;
- }
- break;
-
- case 0x0: /* SHADD, UHADD */
- case 0x01: /* SQADD, UQADD */
- case 0x02: /* SRHADD, URHADD */
- case 0x04: /* SHSUB, UHSUB */
- case 0x05: /* SQSUB, UQSUB */
- case 0x06: /* CMGT, CMHI */
- case 0x07: /* CMGE, CMHS */
- case 0x08: /* SSHL, USHL */
- case 0x09: /* SQSHL, UQSHL */
- case 0x0a: /* SRSHL, URSHL */
- case 0x0b: /* SQRSHL, UQRSHL */
- case 0x0c: /* SMAX, UMAX */
- case 0x0d: /* SMIN, UMIN */
- case 0x0e: /* SABD, UABD */
- case 0x0f: /* SABA, UABA */
- case 0x10: /* ADD, SUB */
- case 0x11: /* CMTST, CMEQ */
- case 0x12: /* MLA, MLS */
- case 0x13: /* MUL, PMUL */
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- switch (opcode) {
- case 0x16: /* SQDMULH, SQRDMULH */
- if (u) {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sqrdmulh_qc, size);
- } else {
- gen_gvec_fn3(s, is_q, rd, rn, rm, gen_gvec_sqdmulh_qc, size);
- }
- return;
- }
- g_assert_not_reached();
-}
-
-/* AdvSIMD three same
- * 31 30 29 28 24 23 22 21 20 16 15 11 10 9 5 4 0
- * +---+---+---+-----------+------+---+------+--------+---+------+------+
- * | 0 | Q | U | 0 1 1 1 0 | size | 1 | Rm | opcode | 1 | Rn | Rd |
- * +---+---+---+-----------+------+---+------+--------+---+------+------+
- */
-static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
-{
- int opcode = extract32(insn, 11, 5);
-
- switch (opcode) {
- default:
- disas_simd_3same_int(s, insn);
- break;
- case 0x3: /* logic ops */
- case 0x14: /* SMAXP, UMAXP */
- case 0x15: /* SMINP, UMINP */
- case 0x17: /* ADDP */
- case 0x18 ... 0x31: /* floating point ops */
- unallocated_encoding(s);
- break;
- }
-}
-
/* AdvSIMD three same extra
* 31 30 29 28 24 23 22 21 20 16 15 14 11 10 9 5 4 0
* +---+---+---+-----------+------+---+------+---+--------+---+----+----+
@@ -12214,9 +12101,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
case 0x0b: /* SQDMULL, SQDMULL2 */
is_long = true;
break;
- case 0x0c: /* SQDMULH */
- case 0x0d: /* SQRDMULH */
- break;
case 0x1d: /* SQRDMLAH */
case 0x1f: /* SQRDMLSH */
if (!dc_isar_feature(aa64_rdm, s)) {
@@ -12278,6 +12162,8 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
case 0x05: /* FMLS */
case 0x08: /* MUL */
case 0x09: /* FMUL */
+ case 0x0c: /* SQDMULH */
+ case 0x0d: /* SQRDMULH */
case 0x10: /* MLA */
case 0x14: /* MLS */
case 0x18: /* FMLAL2 */
@@ -12683,7 +12569,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
*/
static const AArch64DecodeTable data_proc_simd[] = {
/* pattern , mask , fn */
- { 0x0e200400, 0x9f200400, disas_simd_three_reg_same },
{ 0x0e008400, 0x9f208400, disas_simd_three_reg_same_extra },
{ 0x0e200000, 0x9f200c00, disas_simd_three_reg_diff },
{ 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
@@ -12695,7 +12580,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x0e000000, 0xbf208c00, disas_simd_tb },
{ 0x0e000800, 0xbf208c00, disas_simd_zip_trn },
{ 0x2e000000, 0xbf208400, disas_simd_ext },
- { 0x5e200400, 0xdf200400, disas_simd_scalar_three_reg_same },
{ 0x5e008400, 0xdf208400, disas_simd_scalar_three_reg_same_extra },
{ 0x5e200000, 0xdf200c00, disas_simd_scalar_three_reg_diff },
{ 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index d8e96386be..b05922b425 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -311,6 +311,38 @@ void HELPER(neon_sqrdmulh_h)(void *vd, void *vn, void *vm,
clear_tail(d, opr_sz, simd_maxsz(desc));
}
+void HELPER(neon_sqdmulh_idx_h)(void *vd, void *vn, void *vm,
+ void *vq, uint32_t desc)
+{
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
+ int idx = simd_data(desc);
+ int16_t *d = vd, *n = vn, *m = (int16_t *)vm + H2(idx);
+
+ for (i = 0; i < opr_sz / 2; i += 16 / 2) {
+ int16_t mm = m[i];
+ for (j = 0; j < 16 / 2; ++j) {
+ d[i + j] = do_sqrdmlah_h(n[i + j], mm, 0, false, false, vq);
+ }
+ }
+ clear_tail(d, opr_sz, simd_maxsz(desc));
+}
+
+void HELPER(neon_sqrdmulh_idx_h)(void *vd, void *vn, void *vm,
+ void *vq, uint32_t desc)
+{
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
+ int idx = simd_data(desc);
+ int16_t *d = vd, *n = vn, *m = (int16_t *)vm + H2(idx);
+
+ for (i = 0; i < opr_sz / 2; i += 16 / 2) {
+ int16_t mm = m[i];
+ for (j = 0; j < 16 / 2; ++j) {
+ d[i + j] = do_sqrdmlah_h(n[i + j], mm, 0, false, true, vq);
+ }
+ }
+ clear_tail(d, opr_sz, simd_maxsz(desc));
+}
+
void HELPER(sve2_sqrdmlah_h)(void *vd, void *vn, void *vm,
void *va, uint32_t desc)
{
@@ -474,6 +506,38 @@ void HELPER(neon_sqrdmulh_s)(void *vd, void *vn, void *vm,
clear_tail(d, opr_sz, simd_maxsz(desc));
}
+void HELPER(neon_sqdmulh_idx_s)(void *vd, void *vn, void *vm,
+ void *vq, uint32_t desc)
+{
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
+ int idx = simd_data(desc);
+ int32_t *d = vd, *n = vn, *m = (int32_t *)vm + H4(idx);
+
+ for (i = 0; i < opr_sz / 4; i += 16 / 4) {
+ int32_t mm = m[i];
+ for (j = 0; j < 16 / 4; ++j) {
+ d[i + j] = do_sqrdmlah_s(n[i + j], mm, 0, false, false, vq);
+ }
+ }
+ clear_tail(d, opr_sz, simd_maxsz(desc));
+}
+
+void HELPER(neon_sqrdmulh_idx_s)(void *vd, void *vn, void *vm,
+ void *vq, uint32_t desc)
+{
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
+ int idx = simd_data(desc);
+ int32_t *d = vd, *n = vn, *m = (int32_t *)vm + H4(idx);
+
+ for (i = 0; i < opr_sz / 4; i += 16 / 4) {
+ int32_t mm = m[i];
+ for (j = 0; j < 16 / 4; ++j) {
+ d[i + j] = do_sqrdmlah_s(n[i + j], mm, 0, false, true, vq);
+ }
+ }
+ clear_tail(d, opr_sz, simd_maxsz(desc));
+}
+
void HELPER(sve2_sqrdmlah_s)(void *vd, void *vn, void *vm,
void *va, uint32_t desc)
{
--
2.34.1
* [PATCH v2 66/67] target/arm: Convert FMADD, FMSUB, FNMADD, FNMSUB to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (64 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 65/67] target/arm: Convert SQDMULH, SQRDMULH to decodetree Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:15 ` Peter Maydell
2024-05-24 23:21 ` [PATCH v2 67/67] target/arm: Convert FCSEL " Richard Henderson
2024-05-28 16:20 ` [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Peter Maydell
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
These are the only instructions in the 3 source scalar class.
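For reference, the (neg_a, neg_n) flags used by do_fmadd below map onto the four instructions as follows (a worked sketch of the intended arithmetic; the multiply-add itself is fused, with no rounding between the multiplication and addition):

    FMADD:  rd =  ra + rn * rm   ->  neg_a = false, neg_n = false
    FMSUB:  rd =  ra - rn * rm   ->  neg_a = false, neg_n = true
    FNMADD: rd = -ra - rn * rm   ->  neg_a = true,  neg_n = true
    FNMSUB: rd = -ra + rn * rm   ->  neg_a = true,  neg_n = false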
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 10 ++
target/arm/tcg/translate-a64.c | 233 ++++++++++++---------------------
2 files changed, 93 insertions(+), 150 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index f7f897f9fc..6f6cd805b7 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -32,6 +32,7 @@
&rr_e rd rn esz
&rrr_e rd rn rm esz
&rrx_e rd rn rm idx esz
+&rrrr_e rd rn rm ra esz
&qrr_e q rd rn esz
&qrrr_e q rd rn rm esz
&qrrx_e q rd rn rm idx esz
@@ -998,3 +999,12 @@ SQDMULH_vi 0.00 1111 10 . ..... 1100 . 0 ..... ..... @qrrx_s
SQRDMULH_vi 0.00 1111 01 .. .... 1101 . 0 ..... ..... @qrrx_h
SQRDMULH_vi 0.00 1111 10 . ..... 1101 . 0 ..... ..... @qrrx_s
+
+# Floating-point data-processing (3 source)
+
+@rrrr_hsd .... .... .. . rm:5 . ra:5 rn:5 rd:5 &rrrr_e esz=%esz_hsd
+
+FMADD 0001 1111 .. 0 ..... 0 ..... ..... ..... @rrrr_hsd
+FMSUB 0001 1111 .. 0 ..... 1 ..... ..... ..... @rrrr_hsd
+FNMADD 0001 1111 .. 1 ..... 0 ..... ..... ..... @rrrr_hsd
+FNMSUB 0001 1111 .. 1 ..... 1 ..... ..... ..... @rrrr_hsd
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 14226c56cf..3c2963ebaa 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5866,6 +5866,88 @@ static bool trans_ADDP_s(DisasContext *s, arg_rr_e *a)
return true;
}
+/*
+ * Floating-point data-processing (3 source)
+ */
+
+static bool do_fmadd(DisasContext *s, arg_rrrr_e *a, bool neg_a, bool neg_n)
+{
+ TCGv_ptr fpst;
+
+ /*
+ * These are fused multiply-add. Note that doing the negations here
+ * as separate steps is correct: an input NaN should come out with
+ * its sign bit flipped if it is a negated-input.
+ */
+ switch (a->esz) {
+ case MO_64:
+ if (fp_access_check(s)) {
+ TCGv_i64 tn = read_fp_dreg(s, a->rn);
+ TCGv_i64 tm = read_fp_dreg(s, a->rm);
+ TCGv_i64 ta = read_fp_dreg(s, a->ra);
+
+ if (neg_a) {
+ gen_vfp_negd(ta, ta);
+ }
+ if (neg_n) {
+ gen_vfp_negd(tn, tn);
+ }
+ fpst = fpstatus_ptr(FPST_FPCR);
+ gen_helper_vfp_muladdd(ta, tn, tm, ta, fpst);
+ write_fp_dreg(s, a->rd, ta);
+ }
+ break;
+
+ case MO_32:
+ if (fp_access_check(s)) {
+ TCGv_i32 tn = read_fp_sreg(s, a->rn);
+ TCGv_i32 tm = read_fp_sreg(s, a->rm);
+ TCGv_i32 ta = read_fp_sreg(s, a->ra);
+
+ if (neg_a) {
+ gen_vfp_negs(ta, ta);
+ }
+ if (neg_n) {
+ gen_vfp_negs(tn, tn);
+ }
+ fpst = fpstatus_ptr(FPST_FPCR);
+ gen_helper_vfp_muladds(ta, tn, tm, ta, fpst);
+ write_fp_sreg(s, a->rd, ta);
+ }
+ break;
+
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ TCGv_i32 tn = read_fp_hreg(s, a->rn);
+ TCGv_i32 tm = read_fp_hreg(s, a->rm);
+ TCGv_i32 ta = read_fp_hreg(s, a->ra);
+
+ if (neg_a) {
+ gen_vfp_negh(ta, ta);
+ }
+ if (neg_n) {
+ gen_vfp_negh(tn, tn);
+ }
+ fpst = fpstatus_ptr(FPST_FPCR_F16);
+ gen_helper_advsimd_muladdh(ta, tn, tm, ta, fpst);
+ write_fp_sreg(s, a->rd, ta);
+ }
+ break;
+
+ default:
+ return false;
+ }
+ return true;
+}
+
+TRANS(FMADD, do_fmadd, a, false, false)
+TRANS(FNMADD, do_fmadd, a, true, true)
+TRANS(FMSUB, do_fmadd, a, false, true)
+TRANS(FNMSUB, do_fmadd, a, true, false)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -7665,152 +7747,6 @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
}
}
-/* Floating-point data-processing (3 source) - single precision */
-static void handle_fp_3src_single(DisasContext *s, bool o0, bool o1,
- int rd, int rn, int rm, int ra)
-{
- TCGv_i32 tcg_op1, tcg_op2, tcg_op3;
- TCGv_i32 tcg_res = tcg_temp_new_i32();
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
-
- tcg_op1 = read_fp_sreg(s, rn);
- tcg_op2 = read_fp_sreg(s, rm);
- tcg_op3 = read_fp_sreg(s, ra);
-
- /* These are fused multiply-add, and must be done as one
- * floating point operation with no rounding between the
- * multiplication and addition steps.
- * NB that doing the negations here as separate steps is
- * correct : an input NaN should come out with its sign bit
- * flipped if it is a negated-input.
- */
- if (o1 == true) {
- gen_vfp_negs(tcg_op3, tcg_op3);
- }
-
- if (o0 != o1) {
- gen_vfp_negs(tcg_op1, tcg_op1);
- }
-
- gen_helper_vfp_muladds(tcg_res, tcg_op1, tcg_op2, tcg_op3, fpst);
-
- write_fp_sreg(s, rd, tcg_res);
-}
-
-/* Floating-point data-processing (3 source) - double precision */
-static void handle_fp_3src_double(DisasContext *s, bool o0, bool o1,
- int rd, int rn, int rm, int ra)
-{
- TCGv_i64 tcg_op1, tcg_op2, tcg_op3;
- TCGv_i64 tcg_res = tcg_temp_new_i64();
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
-
- tcg_op1 = read_fp_dreg(s, rn);
- tcg_op2 = read_fp_dreg(s, rm);
- tcg_op3 = read_fp_dreg(s, ra);
-
- /* These are fused multiply-add, and must be done as one
- * floating point operation with no rounding between the
- * multiplication and addition steps.
- * NB that doing the negations here as separate steps is
- * correct : an input NaN should come out with its sign bit
- * flipped if it is a negated-input.
- */
- if (o1 == true) {
- gen_vfp_negd(tcg_op3, tcg_op3);
- }
-
- if (o0 != o1) {
- gen_vfp_negd(tcg_op1, tcg_op1);
- }
-
- gen_helper_vfp_muladdd(tcg_res, tcg_op1, tcg_op2, tcg_op3, fpst);
-
- write_fp_dreg(s, rd, tcg_res);
-}
-
-/* Floating-point data-processing (3 source) - half precision */
-static void handle_fp_3src_half(DisasContext *s, bool o0, bool o1,
- int rd, int rn, int rm, int ra)
-{
- TCGv_i32 tcg_op1, tcg_op2, tcg_op3;
- TCGv_i32 tcg_res = tcg_temp_new_i32();
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR_F16);
-
- tcg_op1 = read_fp_hreg(s, rn);
- tcg_op2 = read_fp_hreg(s, rm);
- tcg_op3 = read_fp_hreg(s, ra);
-
- /* These are fused multiply-add, and must be done as one
- * floating point operation with no rounding between the
- * multiplication and addition steps.
- * NB that doing the negations here as separate steps is
- * correct : an input NaN should come out with its sign bit
- * flipped if it is a negated-input.
- */
- if (o1 == true) {
- tcg_gen_xori_i32(tcg_op3, tcg_op3, 0x8000);
- }
-
- if (o0 != o1) {
- tcg_gen_xori_i32(tcg_op1, tcg_op1, 0x8000);
- }
-
- gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_op3, fpst);
-
- write_fp_sreg(s, rd, tcg_res);
-}
-
-/* Floating point data-processing (3 source)
- * 31 30 29 28 24 23 22 21 20 16 15 14 10 9 5 4 0
- * +---+---+---+-----------+------+----+------+----+------+------+------+
- * | M | 0 | S | 1 1 1 1 1 | type | o1 | Rm | o0 | Ra | Rn | Rd |
- * +---+---+---+-----------+------+----+------+----+------+------+------+
- */
-static void disas_fp_3src(DisasContext *s, uint32_t insn)
-{
- int mos = extract32(insn, 29, 3);
- int type = extract32(insn, 22, 2);
- int rd = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int ra = extract32(insn, 10, 5);
- int rm = extract32(insn, 16, 5);
- bool o0 = extract32(insn, 15, 1);
- bool o1 = extract32(insn, 21, 1);
-
- if (mos) {
- unallocated_encoding(s);
- return;
- }
-
- switch (type) {
- case 0:
- if (!fp_access_check(s)) {
- return;
- }
- handle_fp_3src_single(s, o0, o1, rd, rn, rm, ra);
- break;
- case 1:
- if (!fp_access_check(s)) {
- return;
- }
- handle_fp_3src_double(s, o0, o1, rd, rn, rm, ra);
- break;
- case 3:
- if (!dc_isar_feature(aa64_fp16, s)) {
- unallocated_encoding(s);
- return;
- }
- if (!fp_access_check(s)) {
- return;
- }
- handle_fp_3src_half(s, o0, o1, rd, rn, rm, ra);
- break;
- default:
- unallocated_encoding(s);
- }
-}
-
/* Floating point immediate
* 31 30 29 28 24 23 22 21 20 13 12 10 9 5 4 0
* +---+---+---+-----------+------+---+------------+-------+------+------+
@@ -8254,10 +8190,7 @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
*/
static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
{
- if (extract32(insn, 24, 1)) {
- /* Floating point data-processing (3 source) */
- disas_fp_3src(s, insn);
- } else if (extract32(insn, 21, 1) == 0) {
+ if (extract32(insn, 21, 1) == 0) {
/* Floating point to fixed point conversions */
disas_fp_fixed_conv(s, insn);
} else {
--
2.34.1
* [PATCH v2 67/67] target/arm: Convert FCSEL to decodetree
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (65 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 66/67] target/arm: Convert FMADD, FMSUB, FNMADD, FNMSUB " Richard Henderson
@ 2024-05-24 23:21 ` Richard Henderson
2024-05-28 16:16 ` Peter Maydell
2024-05-28 16:20 ` [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Peter Maydell
67 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-24 23:21 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-arm
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/a64.decode | 4 ++
target/arm/tcg/translate-a64.c | 108 ++++++++++++++-------------------
2 files changed, 49 insertions(+), 63 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 6f6cd805b7..5dadbc74d7 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -1000,6 +1000,10 @@ SQDMULH_vi 0.00 1111 10 . ..... 1100 . 0 ..... ..... @qrrx_s
SQRDMULH_vi 0.00 1111 01 .. .... 1101 . 0 ..... ..... @qrrx_h
SQRDMULH_vi 0.00 1111 10 . ..... 1101 . 0 ..... ..... @qrrx_s
+# Floating-point conditional select
+
+FCSEL 0001 1110 .. 1 rm:5 cond:4 11 rn:5 rd:5 esz=%esz_hsd
+
# Floating-point data-processing (3 source)
@rrrr_hsd .... .... .. . rm:5 . ra:5 rn:5 rd:5 &rrrr_e esz=%esz_hsd
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 3c2963ebaa..845aaa2bfb 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5866,6 +5866,50 @@ static bool trans_ADDP_s(DisasContext *s, arg_rr_e *a)
return true;
}
+/*
+ * Floating-point conditional select
+ */
+
+static bool trans_FCSEL(DisasContext *s, arg_FCSEL *a)
+{
+ TCGv_i64 t_true, t_false;
+ DisasCompare64 c;
+
+ switch (a->esz) {
+ case MO_32:
+ case MO_64:
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ break;
+ default:
+ return false;
+ }
+
+ if (!fp_access_check(s)) {
+ return true;
+ }
+
+ /* Zero extend sreg & hreg inputs to 64 bits now. */
+ t_true = tcg_temp_new_i64();
+ t_false = tcg_temp_new_i64();
+ read_vec_element(s, t_true, a->rn, 0, a->esz);
+ read_vec_element(s, t_false, a->rm, 0, a->esz);
+
+ a64_test_cc(&c, a->cond);
+ tcg_gen_movcond_i64(c.cond, t_true, c.value, tcg_constant_i64(0),
+ t_true, t_false);
+
+ /*
+ * Note that sregs & hregs write back zeros to the high bits,
+ * and we've already done the zero-extension.
+ */
+ write_fp_dreg(s, a->rd, t_true);
+ return true;
+}
+
/*
* Floating-point data-processing (3 source)
*/
@@ -7332,68 +7376,6 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
}
}
-/* Floating point conditional select
- * 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 0
- * +---+---+---+-----------+------+---+------+------+-----+------+------+
- * | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | cond | 1 1 | Rn | Rd |
- * +---+---+---+-----------+------+---+------+------+-----+------+------+
- */
-static void disas_fp_csel(DisasContext *s, uint32_t insn)
-{
- unsigned int mos, type, rm, cond, rn, rd;
- TCGv_i64 t_true, t_false;
- DisasCompare64 c;
- MemOp sz;
-
- mos = extract32(insn, 29, 3);
- type = extract32(insn, 22, 2);
- rm = extract32(insn, 16, 5);
- cond = extract32(insn, 12, 4);
- rn = extract32(insn, 5, 5);
- rd = extract32(insn, 0, 5);
-
- if (mos) {
- unallocated_encoding(s);
- return;
- }
-
- switch (type) {
- case 0:
- sz = MO_32;
- break;
- case 1:
- sz = MO_64;
- break;
- case 3:
- sz = MO_16;
- if (dc_isar_feature(aa64_fp16, s)) {
- break;
- }
- /* fallthru */
- default:
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- /* Zero extend sreg & hreg inputs to 64 bits now. */
- t_true = tcg_temp_new_i64();
- t_false = tcg_temp_new_i64();
- read_vec_element(s, t_true, rn, 0, sz);
- read_vec_element(s, t_false, rm, 0, sz);
-
- a64_test_cc(&c, cond);
- tcg_gen_movcond_i64(c.cond, t_true, c.value, tcg_constant_i64(0),
- t_true, t_false);
-
- /* Note that sregs & hregs write back zeros to the high bits,
- and we've already done the zero-extension. */
- write_fp_dreg(s, rd, t_true);
-}
-
/* Floating-point data-processing (1 source) - half precision */
static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
{
@@ -8205,7 +8187,7 @@ static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
break;
case 3:
/* Floating point conditional select */
- disas_fp_csel(s, insn);
+ unallocated_encoding(s); /* in decodetree */
break;
case 0:
switch (ctz32(extract32(insn, 12, 4))) {
--
2.34.1
* Re: [PATCH v2 21/67] target/arm: Introduce vfp_load_reg16
2024-05-24 23:20 ` [PATCH v2 21/67] target/arm: Introduce vfp_load_reg16 Richard Henderson
@ 2024-05-28 12:22 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 12:22 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:23, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Load and zero-extend float16 into a TCGv_i32 before
> all scalar operations.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 06/67] target/arm: Verify sz=0 for Advanced SIMD scalar pairwise (fp16)
2024-05-24 23:20 ` [PATCH v2 06/67] target/arm: Verify sz=0 for Advanced SIMD scalar pairwise (fp16) Richard Henderson
@ 2024-05-28 12:25 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 12:25 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:22, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> All of these insns have "if sz == '1' then UNDEFINED" in their pseudocode.
> Fixes a RISU miscompare for invalid insn 0x5ef0c87a.
>
> Fixes: 5c36d89567c ("arm/translate-a64: add all FP16 ops in simd_scalar_pairwise")
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 05/67] target/arm: Fix decode of FMOV (hp) vs MOVI
2024-05-24 23:20 ` [PATCH v2 05/67] target/arm: Fix decode of FMOV (hp) vs MOVI Richard Henderson
@ 2024-05-28 12:34 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 12:34 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:30, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> The decode of FMOV (vector, immediate, half-precision) vs
> invalid cases of MOVI is incorrect.
>
> Fixes RISU mismatch for invalid insn 0x2f01fd31.
>
> Fixes: 70b4e6a4457 ("arm/translate-a64: add FP16 FMOV to simd_mod_imm")
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 04/67] target/arm: Zero-extend writeback for fp16 FCVTZS (scalar, integer)
2024-05-24 23:20 ` [PATCH v2 04/67] target/arm: Zero-extend writeback for fp16 FCVTZS (scalar, integer) Richard Henderson
@ 2024-05-28 12:48 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 12:48 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:23, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Fixes RISU mismatch for "fcvtzs h31, h0, #14".
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/translate-a64.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
> index 4126aaa27e..d97acdbaf9 100644
> --- a/target/arm/tcg/translate-a64.c
> +++ b/target/arm/tcg/translate-a64.c
> @@ -8707,6 +8707,9 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
> read_vec_element_i32(s, tcg_op, rn, pass, size);
> fn(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
> if (is_scalar) {
> + if (size == MO_16 && !is_u) {
> + tcg_gen_ext16u_i32(tcg_op, tcg_op);
> + }
> write_fp_sreg(s, rd, tcg_op);
> } else {
> write_vec_element_i32(s, tcg_op, rd, pass, size);
> --
> 2.34.1
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 02/67] target/arm: Use PLD, PLDW, PLI not NOP for t32
2024-05-24 23:20 ` [PATCH v2 02/67] target/arm: Use PLD, PLDW, PLI not NOP for t32 Richard Henderson
@ 2024-05-28 13:13 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 13:13 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:23, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> This fixes a bug: neither PLI nor PLDW is present in ARMv6T2; they are
> introduced with ARMv7 and ARMv7MP respectively.
> For clarity, do not use NOP for PLD.
>
> Note that there is no PLDW (literal) -- bit 5 of the first word
> is not decoded, and the insn is PLD (literal) either way. Confirmed on a
> neoverse-n1 host, which does *not* trap on the (0) bit in the decode.
Handling of "(0)" and "(1)" bits in decode is CONSTRAINED UNPREDICTABLE
(see Arm ARM DDI0487K.a F1.7.2): implementations might:
* UNDEF
* NOP
* ignore the bit and execute the insn
* set any destination registers to UNKNOWN values
Usually (but not always) in QEMU we opt to UNDEF. If I'm
reading the decode lines correctly, in this case we're
opting to ignore the bit, because we have both:
+ PLD 1111 1000 -001 1111 1111 ------------ # (literal)
+ PLD 1111 1000 -011 1111 1111 ------------ # (literal)
And this isn't a change in this commit because the NOP lines
we used to have also meant we ignored the bit.
With a tweaked commit message
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 03/67] target/arm: Reject incorrect operands to PLD, PLDW, PLI
2024-05-24 23:20 ` [PATCH v2 03/67] target/arm: Reject incorrect operands to PLD, PLDW, PLI Richard Henderson
@ 2024-05-28 13:18 ` Peter Maydell
2024-05-28 17:36 ` Richard Henderson
0 siblings, 1 reply; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 13:18 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:25, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> For all, rm == 15 is invalid.
> Prior to v8, thumb with rm == 13 is invalid.
> For PLDW, rn == 15 is invalid.
> Fixes a RISU mismatch for the HINTSPACE pattern in t32.risu
> compared to a neoverse-n1 host.
These are UNPREDICTABLE cases, not invalid. In general
we don't try to match a specific implementation's
UNPREDICTABLE choices.
I think we're better off avoiding the mismatch by improving
the risu patterns to avoid the UNPREDICTABLE cases.
thanks
-- PMM
* Re: [PATCH v2 37/67] target/arm: Improve vector UQADD, UQSUB, SQADD, SQSUB
2024-05-24 23:20 ` [PATCH v2 37/67] target/arm: Improve vector UQADD, UQSUB, SQADD, SQSUB Richard Henderson
@ 2024-05-28 15:24 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:24 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:28, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> No need for a full comparison; xor produces non-zero bits
> for QC just fine.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 38/67] target/arm: Convert SUQADD and USQADD to gvec
2024-05-24 23:20 ` [PATCH v2 38/67] target/arm: Convert SUQADD and USQADD to gvec Richard Henderson
@ 2024-05-28 15:37 ` Peter Maydell
2024-05-28 17:41 ` Richard Henderson
0 siblings, 1 reply; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:37 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:32, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/helper.h | 16 +++++
> target/arm/tcg/translate-a64.h | 6 ++
> target/arm/tcg/gengvec64.c | 106 +++++++++++++++++++++++++++++++
> target/arm/tcg/translate-a64.c | 113 ++++++++++++++-------------------
> target/arm/tcg/vec_helper.c | 64 +++++++++++++++++++
> 5 files changed, 241 insertions(+), 64 deletions(-)
> diff --git a/target/arm/tcg/gengvec64.c b/target/arm/tcg/gengvec64.c
> index 093b498b13..4b76e476a0 100644
> --- a/target/arm/tcg/gengvec64.c
> +++ b/target/arm/tcg/gengvec64.c
> @@ -188,3 +188,109 @@ void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
> tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
> }
>
> +static void gen_suqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
> + TCGv_vec a, TCGv_vec b)
> +{
> + TCGv_vec max =
> + tcg_constant_vec_matching(t, vece, (1ull << ((8 << vece) - 1)) - 1);
> + TCGv_vec u = tcg_temp_new_vec_matching(t);
> +
> + /* Maximum value that can be added to @a without overflow. */
> + tcg_gen_sub_vec(vece, u, max, a);
> +
> + /* Constrain addend so that the next addition never overflows. */
> + tcg_gen_umin_vec(vece, u, u, b);
> + tcg_gen_add_vec(vece, t, u, a);
> +
> + /* Compute QC by comparing the adjusted @b. */
> + tcg_gen_xor_vec(vece, u, u, b);
> + tcg_gen_or_vec(vece, qc, qc, u);
With this kind of code where we wind up doing a vector op
into vfp.qc, is there anything somewhere that asserts that
we don't try to do it with a vector length bigger than
sizeof(vfp.qc) (i.e. 128 bits)?
thanks
-- PMM
* Re: [PATCH v2 39/67] target/arm: Inline scalar SUQADD and USQADD
2024-05-24 23:20 ` [PATCH v2 39/67] target/arm: Inline scalar SUQADD and USQADD Richard Henderson
@ 2024-05-28 15:40 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:40 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:23, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> This eliminates the last uses of these neon helpers.
> Incorporate the MO_64 expanders as an option to the vector expander.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/helper.h | 8 --
> target/arm/tcg/translate-a64.h | 8 ++
> target/arm/tcg/gengvec64.c | 71 ++++++++++++++
> target/arm/tcg/neon_helper.c | 165 ---------------------------------
> target/arm/tcg/translate-a64.c | 73 +++++----------
> 5 files changed, 103 insertions(+), 222 deletions(-)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 40/67] target/arm: Inline scalar SQADD, UQADD, SQSUB, UQSUB
2024-05-24 23:20 ` [PATCH v2 40/67] target/arm: Inline scalar SQADD, UQADD, SQSUB, UQSUB Richard Henderson
@ 2024-05-28 15:42 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:42 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:27, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> This eliminates the last uses of these neon helpers.
> Incorporate the MO_64 expanders as an option to the vector expander.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 41/67] target/arm: Convert SQADD, SQSUB, UQADD, UQSUB to decodetree
2024-05-24 23:20 ` [PATCH v2 41/67] target/arm: Convert SQADD, SQSUB, UQADD, UQSUB to decodetree Richard Henderson
@ 2024-05-28 15:44 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:44 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:28, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 11 ++++
> target/arm/tcg/translate-a64.c | 100 +++++++++++++++++++--------------
> 2 files changed, 68 insertions(+), 43 deletions(-)
>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 42/67] target/arm: Convert SUQADD, USQADD to decodetree
2024-05-24 23:20 ` [PATCH v2 42/67] target/arm: Convert SUQADD, USQADD " Richard Henderson
@ 2024-05-28 15:44 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:44 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:26, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> These are faux 2-operand instructions, reading from rd.
> Sort them next to the other three-operand same insns for clarity.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 8 +++++
> target/arm/tcg/translate-a64.c | 64 ++++------------------------------
> 2 files changed, 14 insertions(+), 58 deletions(-)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 43/67] target/arm: Convert SSHL, USHL to decodetree
2024-05-24 23:20 ` [PATCH v2 43/67] target/arm: Convert SSHL, USHL " Richard Henderson
@ 2024-05-28 15:46 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:46 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:29, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 7 ++++++
> target/arm/tcg/translate-a64.c | 40 +++++++++++++++++++++-------------
> 2 files changed, 32 insertions(+), 15 deletions(-)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 44/67] target/arm: Convert SRSHL and URSHL (register) to gvec
2024-05-24 23:20 ` [PATCH v2 44/67] target/arm: Convert SRSHL and URSHL (register) to gvec Richard Henderson
@ 2024-05-28 15:51 ` Peter Maydell
2024-05-28 17:30 ` Richard Henderson
0 siblings, 1 reply; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:51 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:27, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/helper.h | 10 +++++++++
> target/arm/tcg/translate.h | 4 ++++
> target/arm/tcg/neon-dp.decode | 10 ++-------
> target/arm/tcg/gengvec.c | 22 +++++++++++++++++++
> target/arm/tcg/neon_helper.c | 38 ++++++++++++++++++++++++++++++++-
> target/arm/tcg/translate-a64.c | 17 ++++++---------
> target/arm/tcg/translate-neon.c | 6 ++----
> 7 files changed, 84 insertions(+), 23 deletions(-)
> uint32_t HELPER(glue(neon_,name))(CPUARMState *env, uint32_t arg1, uint32_t arg2) \
> NEON_VOP_BODY(vtype, n)
>
> +#define NEON_GVEC_VOP2(name, vtype) \
> +void HELPER(name)(void *vd, void *vn, void *vm, uint32_t desc) \
> +{ \
> + intptr_t i, opr_sz = simd_oprsz(desc); \
> + vtype *d = vd, *n = vn, *m = vm; \
> + for (i = 0; i < opr_sz / sizeof(vtype); i++) { \
> + NEON_FN(d[i], n[i], m[i]); \
Does this need H macros for the big-endian case? It looks
like we use it for smaller-than-64-bit element cases.
> + } \
> + clear_tail(d, opr_sz, simd_maxsz(desc)); \
> +}
> +
thanks
-- PMM
* Re: [PATCH v2 45/67] target/arm: Convert SRSHL, URSHL to decodetree
2024-05-24 23:20 ` [PATCH v2 45/67] target/arm: Convert SRSHL, URSHL to decodetree Richard Henderson
@ 2024-05-28 15:51 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:51 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:29, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 4 ++++
> target/arm/tcg/translate-a64.c | 22 +++++++---------------
> 2 files changed, 11 insertions(+), 15 deletions(-)
>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 46/67] target/arm: Convert SQSHL and UQSHL (register) to gvec
2024-05-24 23:21 ` [PATCH v2 46/67] target/arm: Convert SQSHL and UQSHL (register) to gvec Richard Henderson
@ 2024-05-28 15:53 ` Peter Maydell
2024-05-28 17:31 ` Richard Henderson
0 siblings, 1 reply; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:53 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:28, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/helper.h | 8 ++++++++
> target/arm/tcg/translate.h | 4 ++++
> target/arm/tcg/neon-dp.decode | 10 ++-------
> target/arm/tcg/gengvec.c | 24 ++++++++++++++++++++++
> target/arm/tcg/neon_helper.c | 36 +++++++++++++++++++++++++++++++++
> target/arm/tcg/translate-a64.c | 17 +++++++---------
> target/arm/tcg/translate-neon.c | 6 ++----
> 7 files changed, 83 insertions(+), 22 deletions(-)
> +#define NEON_GVEC_VOP2_ENV(name, vtype) \
> +void HELPER(name)(void *vd, void *vn, void *vm, void *venv, uint32_t desc) \
> +{ \
> + intptr_t i, opr_sz = simd_oprsz(desc); \
> + vtype *d = vd, *n = vn, *m = vm; \
> + CPUARMState *env = venv; \
> + for (i = 0; i < opr_sz / sizeof(vtype); i++) { \
> + NEON_FN(d[i], n[i], m[i]); \
> + } \
> + clear_tail(d, opr_sz, simd_maxsz(desc)); \
> +}
> +
Same question about H macros as for patch 44.
-- PMM
* Re: [PATCH v2 47/67] target/arm: Convert SQSHL, UQSHL to decodetree
2024-05-24 23:21 ` [PATCH v2 47/67] target/arm: Convert SQSHL, UQSHL to decodetree Richard Henderson
@ 2024-05-28 15:53 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:53 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:32, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 4 ++
> target/arm/tcg/translate-a64.c | 74 ++++++++++++++++++++++------------
> 2 files changed, 53 insertions(+), 25 deletions(-)
>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 48/67] target/arm: Convert SQRSHL and UQRSHL (register) to gvec
2024-05-24 23:21 ` [PATCH v2 48/67] target/arm: Convert SQRSHL and UQRSHL (register) to gvec Richard Henderson
@ 2024-05-28 15:54 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:54 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:27, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 49/67] target/arm: Convert SQRSHL, UQRSHL to decodetree
2024-05-24 23:21 ` [PATCH v2 49/67] target/arm: Convert SQRSHL, UQRSHL to decodetree Richard Henderson
@ 2024-05-28 15:55 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:55 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:27, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 4 +++
> target/arm/tcg/translate-a64.c | 48 ++++++++++++++++------------------
> 2 files changed, 26 insertions(+), 26 deletions(-)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 50/67] target/arm: Convert ADD, SUB (vector) to decodetree
2024-05-24 23:21 ` [PATCH v2 50/67] target/arm: Convert ADD, SUB (vector) " Richard Henderson
@ 2024-05-28 15:56 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:56 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:29, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 6 ++++++
> target/arm/tcg/translate-a64.c | 34 +++++++++++-----------------------
> 2 files changed, 17 insertions(+), 23 deletions(-)
> @@ -10958,6 +10956,11 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
>
> case 0x01: /* SQADD, UQADD */
> case 0x05: /* SQSUB, UQSUB */
> + case 0x08: /* SSHL, USHL */
> + case 0x09: /* SQSHL, UQSHL */
> + case 0x0a: /* SRSHL, URSHL */
> + case 0x0b: /* SQRSHL, UQRSHL */
> + case 0x10: /* ADD, SUB */
> unallocated_encoding(s);
> return;
> }
> @@ -11044,14 +11040,6 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
> vec_full_reg_offset(s, rm),
> is_q ? 16 : 8, vec_full_reg_size(s));
> return;
> -
> - case 0x01: /* SQADD, UQADD */
> - case 0x05: /* SQSUB, UQSUB */
> - case 0x08: /* SSHL, USHL */
> - case 0x09: /* SQSHL, UQSHL */
> - case 0x0a: /* SRSHL, URSHL */
> - case 0x0b: /* SQRSHL, UQRSHL */
> - g_assert_not_reached();
> }
Shouldn't the parts of these that aren't ADD,SUB have been in
some previous patches rather than this one?
Otherwise
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 51/67] target/arm: Convert CMGT, CMHI, CMGE, CMHS, CMTST, CMEQ to decodetree
2024-05-24 23:21 ` [PATCH v2 51/67] target/arm: Convert CMGT, CMHI, CMGE, CMHS, CMTST, CMEQ " Richard Henderson
@ 2024-05-28 15:57 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:57 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:26, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 12 +++
> target/arm/tcg/translate-a64.c | 132 ++++++++++++---------------------
> 2 files changed, 60 insertions(+), 84 deletions(-)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 52/67] target/arm: Use TCG_COND_TSTNE in gen_cmtst_{i32, i64}
2024-05-24 23:21 ` [PATCH v2 52/67] target/arm: Use TCG_COND_TSTNE in gen_cmtst_{i32, i64} Richard Henderson
@ 2024-05-28 15:58 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:58 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:30, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/gengvec.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 53/67] target/arm: Use TCG_COND_TSTNE in gen_cmtst_vec
2024-05-24 23:21 ` [PATCH v2 53/67] target/arm: Use TCG_COND_TSTNE in gen_cmtst_vec Richard Henderson
@ 2024-05-28 15:59 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 15:59 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:29, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/gengvec.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 54/67] target/arm: Convert SHADD, UHADD to gvec
2024-05-24 23:21 ` [PATCH v2 54/67] target/arm: Convert SHADD, UHADD to gvec Richard Henderson
@ 2024-05-28 16:01 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:01 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:26, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/helper.h | 6 --
> target/arm/tcg/translate.h | 5 ++
> target/arm/tcg/gengvec.c | 144 ++++++++++++++++++++++++++++++++
> target/arm/tcg/neon_helper.c | 27 ------
> target/arm/tcg/translate-a64.c | 17 ++--
> target/arm/tcg/translate-neon.c | 4 +-
> 6 files changed, 158 insertions(+), 45 deletions(-)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 55/67] target/arm: Convert SHADD, UHADD to decodetree
2024-05-24 23:21 ` [PATCH v2 55/67] target/arm: Convert SHADD, UHADD to decodetree Richard Henderson
@ 2024-05-28 16:01 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:01 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:27, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 2 ++
> target/arm/tcg/translate-a64.c | 11 +++--------
> 2 files changed, 5 insertions(+), 8 deletions(-)
>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 56/67] target/arm: Convert SHSUB, UHSUB to gvec
2024-05-24 23:21 ` [PATCH v2 56/67] target/arm: Convert SHSUB, UHSUB to gvec Richard Henderson
@ 2024-05-28 16:02 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:02 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:29, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/helper.h | 6 --
> target/arm/tcg/translate.h | 4 +
> target/arm/tcg/gengvec.c | 144 ++++++++++++++++++++++++++++++++
> target/arm/tcg/neon_helper.c | 27 ------
> target/arm/tcg/translate-a64.c | 17 ++--
> target/arm/tcg/translate-neon.c | 4 +-
> 6 files changed, 157 insertions(+), 45 deletions(-)
>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 57/67] target/arm: Convert SHSUB, UHSUB to decodetree
2024-05-24 23:21 ` [PATCH v2 57/67] target/arm: Convert SHSUB, UHSUB to decodetree Richard Henderson
@ 2024-05-28 16:02 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:02 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:27, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 2 ++
> target/arm/tcg/translate-a64.c | 11 +++--------
> 2 files changed, 5 insertions(+), 8 deletions(-)
>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 58/67] target/arm: Convert SRHADD, URHADD to gvec
2024-05-24 23:21 ` [PATCH v2 58/67] target/arm: Convert SRHADD, URHADD to gvec Richard Henderson
@ 2024-05-28 16:03 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:03 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:27, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 59/67] target/arm: Convert SRHADD, URHADD to decodetree
2024-05-24 23:21 ` [PATCH v2 59/67] target/arm: Convert SRHADD, URHADD to decodetree Richard Henderson
@ 2024-05-28 16:04 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:04 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:31, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 60/67] target/arm: Convert SMAX, SMIN, UMAX, UMIN to decodetree
2024-05-24 23:21 ` [PATCH v2 60/67] target/arm: Convert SMAX, SMIN, UMAX, UMIN " Richard Henderson
@ 2024-05-28 16:04 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:04 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:27, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 4 ++++
> target/arm/tcg/translate-a64.c | 22 ++++++----------------
> 2 files changed, 10 insertions(+), 16 deletions(-)
>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 61/67] target/arm: Convert SABA, SABD, UABA, UABD to decodetree
2024-05-24 23:21 ` [PATCH v2 61/67] target/arm: Convert SABA, SABD, UABA, UABD " Richard Henderson
@ 2024-05-28 16:05 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:05 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:29, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 4 ++++
> target/arm/tcg/translate-a64.c | 22 ++++++----------------
> 2 files changed, 10 insertions(+), 16 deletions(-)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 62/67] target/arm: Convert MUL, PMUL to decodetree
2024-05-24 23:21 ` [PATCH v2 62/67] target/arm: Convert MUL, PMUL " Richard Henderson
@ 2024-05-28 16:06 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:06 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:26, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 5 ++++
> target/arm/tcg/translate-a64.c | 51 +++++++++++++---------------------
> 2 files changed, 25 insertions(+), 31 deletions(-)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 63/67] target/arm: Convert MLA, MLS to decodetree
2024-05-24 23:21 ` [PATCH v2 63/67] target/arm: Convert MLA, MLS " Richard Henderson
@ 2024-05-28 16:07 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:07 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:27, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 8 ++++
> target/arm/tcg/translate-a64.c | 77 ++++++++++------------------------
> 2 files changed, 31 insertions(+), 54 deletions(-)
>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 64/67] target/arm: Tidy SQDMULH, SQRDMULH (vector)
2024-05-24 23:21 ` [PATCH v2 64/67] target/arm: Tidy SQDMULH, SQRDMULH (vector) Richard Henderson
@ 2024-05-28 16:08 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:08 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:30, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> We already have a gvec helper for the operations, but we aren't
> using it on the aa32 neon side. Create a unified expander for
> use by both aa32 and aa64 translators.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 65/67] target/arm: Convert SQDMULH, SQRDMULH to decodetree
2024-05-24 23:21 ` [PATCH v2 65/67] target/arm: Convert SQDMULH, SQRDMULH to decodetree Richard Henderson
@ 2024-05-28 16:10 ` Peter Maydell
2024-05-28 17:27 ` Richard Henderson
0 siblings, 1 reply; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:10 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:27, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> These are the last instructions within disas_simd_three_reg_same
> and disas_simd_scalar_three_reg_same, so remove them.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
> index d8e96386be..b05922b425 100644
> --- a/target/arm/tcg/vec_helper.c
> +++ b/target/arm/tcg/vec_helper.c
> @@ -311,6 +311,38 @@ void HELPER(neon_sqrdmulh_h)(void *vd, void *vn, void *vm,
> clear_tail(d, opr_sz, simd_maxsz(desc));
> }
>
> +void HELPER(neon_sqdmulh_idx_h)(void *vd, void *vn, void *vm,
> + void *vq, uint32_t desc)
> +{
> + intptr_t i, j, opr_sz = simd_oprsz(desc);
> + int idx = simd_data(desc);
> + int16_t *d = vd, *n = vn, *m = (int16_t *)vm + H2(idx);
> +
> + for (i = 0; i < opr_sz / 2; i += 16 / 2) {
> + int16_t mm = m[i];
> + for (j = 0; j < 16 / 2; ++j) {
> + d[i + j] = do_sqrdmlah_h(n[i + j], mm, 0, false, false, vq);
> + }
> + }
> + clear_tail(d, opr_sz, simd_maxsz(desc));
> +}
> +
> +void HELPER(neon_sqrdmulh_idx_h)(void *vd, void *vn, void *vm,
> + void *vq, uint32_t desc)
> +{
> + intptr_t i, j, opr_sz = simd_oprsz(desc);
> + int idx = simd_data(desc);
> + int16_t *d = vd, *n = vn, *m = (int16_t *)vm + H2(idx);
> +
> + for (i = 0; i < opr_sz / 2; i += 16 / 2) {
> + int16_t mm = m[i];
> + for (j = 0; j < 16 / 2; ++j) {
> + d[i + j] = do_sqrdmlah_h(n[i + j], mm, 0, false, true, vq);
> + }
> + }
> + clear_tail(d, opr_sz, simd_maxsz(desc));
> +}
> +
> void HELPER(sve2_sqrdmlah_h)(void *vd, void *vn, void *vm,
> void *va, uint32_t desc)
> {
> @@ -474,6 +506,38 @@ void HELPER(neon_sqrdmulh_s)(void *vd, void *vn, void *vm,
> clear_tail(d, opr_sz, simd_maxsz(desc));
> }
>
> +void HELPER(neon_sqdmulh_idx_s)(void *vd, void *vn, void *vm,
> + void *vq, uint32_t desc)
> +{
> + intptr_t i, j, opr_sz = simd_oprsz(desc);
> + int idx = simd_data(desc);
> + int32_t *d = vd, *n = vn, *m = (int32_t *)vm + H4(idx);
> +
> + for (i = 0; i < opr_sz / 4; i += 16 / 4) {
> + int32_t mm = m[i];
> + for (j = 0; j < 16 / 4; ++j) {
> + d[i + j] = do_sqrdmlah_s(n[i + j], mm, 0, false, false, vq);
> + }
> + }
> + clear_tail(d, opr_sz, simd_maxsz(desc));
> +}
> +
> +void HELPER(neon_sqrdmulh_idx_s)(void *vd, void *vn, void *vm,
> + void *vq, uint32_t desc)
> +{
> + intptr_t i, j, opr_sz = simd_oprsz(desc);
> + int idx = simd_data(desc);
> + int32_t *d = vd, *n = vn, *m = (int32_t *)vm + H4(idx);
> +
> + for (i = 0; i < opr_sz / 4; i += 16 / 4) {
> + int32_t mm = m[i];
> + for (j = 0; j < 16 / 4; ++j) {
> + d[i + j] = do_sqrdmlah_s(n[i + j], mm, 0, false, true, vq);
> + }
> + }
> + clear_tail(d, opr_sz, simd_maxsz(desc));
> +}
Missing H macros in these helpers?
thanks
-- PMM
* Re: [PATCH v2 66/67] target/arm: Convert FMADD, FMSUB, FNMADD, FNMSUB to decodetree
2024-05-24 23:21 ` [PATCH v2 66/67] target/arm: Convert FMADD, FMSUB, FNMADD, FNMSUB " Richard Henderson
@ 2024-05-28 16:15 ` Peter Maydell
2024-05-28 17:31 ` Richard Henderson
0 siblings, 1 reply; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:15 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:27, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> These are the only instructions in the 3 source scalar class.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> target/arm/tcg/a64.decode | 10 ++
> target/arm/tcg/translate-a64.c | 233 ++++++++++++---------------------
> 2 files changed, 93 insertions(+), 150 deletions(-)
>
> static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
> {
> - if (extract32(insn, 24, 1)) {
> - /* Floating point data-processing (3 source) */
> - disas_fp_3src(s, insn);
> - } else if (extract32(insn, 21, 1) == 0) {
> + if (extract32(insn, 21, 1) == 0) {
> /* Floating point to fixed point conversions */
> disas_fp_fixed_conv(s, insn);
> } else {
Doesn't this result in the unallocated-encodings in the
fp-3src class now falling into the "else" clause and
being mis-decoded as the wrong thing? I think this
needs to be
if (extract32(insn, 24, 1)) {
unallocated_encoding(s);
} else if (extract32(insn, 21, 1) == 0) {
[etc]
thanks
-- PMM
* Re: [PATCH v2 67/67] target/arm: Convert FCSEL to decodetree
2024-05-24 23:21 ` [PATCH v2 67/67] target/arm: Convert FCSEL " Richard Henderson
@ 2024-05-28 16:16 ` Peter Maydell
0 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:16 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:32, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* Re: [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1)
2024-05-24 23:20 [PATCH v2 00/67] target/arm: Convert a64 advsimd to decodetree (part 1) Richard Henderson
` (66 preceding siblings ...)
2024-05-24 23:21 ` [PATCH v2 67/67] target/arm: Convert FCSEL " Richard Henderson
@ 2024-05-28 16:20 ` Peter Maydell
67 siblings, 0 replies; 114+ messages in thread
From: Peter Maydell @ 2024-05-28 16:20 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Sat, 25 May 2024 at 00:22, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> In the process, convert more code to gvec as well -- I will need
> the gvec code for implementing SME2. I guess this is about 1/3
> of the job done, but there's no reason to wait until the patch
> set is completely unwieldy.
>
> Changes for v2:
> * Fix existing RISU failures vs neoverse-n1.
> * Introduce vfp_load_reg16, fixing a regression wrt VNEG (scalar, hp).
> * Fix typo in SUQADD vectorization.
> * Two more conversions.
I've now finished review for this. I pulled 2 and 4-36 into my
target-arm pullreq currently on list. To save you having to
go through a lot of replies that are just me giving r-by tags,
the patches I had comments/questions on are:
3, 38, 44, 46, 50, 65, 66.
thanks
-- PMM
* Re: [PATCH v2 65/67] target/arm: Convert SQDMULH, SQRDMULH to decodetree
2024-05-28 16:10 ` Peter Maydell
@ 2024-05-28 17:27 ` Richard Henderson
0 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-28 17:27 UTC (permalink / raw)
To: Peter Maydell; +Cc: qemu-devel, qemu-arm
On 5/28/24 09:10, Peter Maydell wrote:
>> +void HELPER(neon_sqdmulh_idx_s)(void *vd, void *vn, void *vm,
>> + void *vq, uint32_t desc)
>> +{
>> + intptr_t i, j, opr_sz = simd_oprsz(desc);
>> + int idx = simd_data(desc);
>> + int32_t *d = vd, *n = vn, *m = (int32_t *)vm + H4(idx);
>> +
>> + for (i = 0; i < opr_sz / 4; i += 16 / 4) {
>> + int32_t mm = m[i];
>> + for (j = 0; j < 16 / 4; ++j) {
>> + d[i + j] = do_sqrdmlah_s(n[i + j], mm, 0, false, false, vq);
>> + }
>> + }
>> + clear_tail(d, opr_sz, simd_maxsz(desc));
>> +}
>> +
>> +void HELPER(neon_sqrdmulh_idx_s)(void *vd, void *vn, void *vm,
>> + void *vq, uint32_t desc)
>> +{
>> + intptr_t i, j, opr_sz = simd_oprsz(desc);
>> + int idx = simd_data(desc);
>> + int32_t *d = vd, *n = vn, *m = (int32_t *)vm + H4(idx);
>> +
>> + for (i = 0; i < opr_sz / 4; i += 16 / 4) {
>> + int32_t mm = m[i];
>> + for (j = 0; j < 16 / 4; ++j) {
>> + d[i + j] = do_sqrdmlah_s(n[i + j], mm, 0, false, true, vq);
>> + }
>> + }
>> + clear_tail(d, opr_sz, simd_maxsz(desc));
>> +}
>
> Missing H macros in these helpers?
No. The only index that's not identical across the vector is M's, and H is handled once
at the beginning. Otherwise n[] and d[] have the same operation applied to all indices,
and it doesn't matter in which order these happen.
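To illustrate with a minimal sketch, reduced to one 128-bit group (H2 is the
usual index-fixup macro; it is an identity on little-endian hosts):

    /* The single by-element operand is located in guest lane order, once: */
    int16_t *m = (int16_t *)vm + H2(idx);
    int16_t mm = m[0];
    /* The loop then applies the same operation to lane j of n[] and d[];
     * since both sides use the same j, the host-to-guest lane mapping and
     * the iteration order are both immaterial.
     */
    for (j = 0; j < 16 / 2; ++j) {
        d[j] = do_sqrdmlah_h(n[j], mm, 0, false, false, vq);
    }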
r~
* Re: [PATCH v2 44/67] target/arm: Convert SRSHL and URSHL (register) to gvec
2024-05-28 15:51 ` Peter Maydell
@ 2024-05-28 17:30 ` Richard Henderson
0 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-28 17:30 UTC (permalink / raw)
To: Peter Maydell; +Cc: qemu-devel, qemu-arm
On 5/28/24 08:51, Peter Maydell wrote:
> On Sat, 25 May 2024 at 00:27, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>> target/arm/helper.h | 10 +++++++++
>> target/arm/tcg/translate.h | 4 ++++
>> target/arm/tcg/neon-dp.decode | 10 ++-------
>> target/arm/tcg/gengvec.c | 22 +++++++++++++++++++
>> target/arm/tcg/neon_helper.c | 38 ++++++++++++++++++++++++++++++++-
>> target/arm/tcg/translate-a64.c | 17 ++++++---------
>> target/arm/tcg/translate-neon.c | 6 ++----
>> 7 files changed, 84 insertions(+), 23 deletions(-)
>
>
>> uint32_t HELPER(glue(neon_,name))(CPUARMState *env, uint32_t arg1, uint32_t arg2) \
>> NEON_VOP_BODY(vtype, n)
>>
>> +#define NEON_GVEC_VOP2(name, vtype) \
>> +void HELPER(name)(void *vd, void *vn, void *vm, uint32_t desc) \
>> +{ \
>> + intptr_t i, opr_sz = simd_oprsz(desc); \
>> + vtype *d = vd, *n = vn, *m = vm; \
>> + for (i = 0; i < opr_sz / sizeof(vtype); i++) { \
>> + NEON_FN(d[i], n[i], m[i]); \
>
> Does this need H macros for the big-endian case? It looks
> like we use it for smaller-than-64-bit element cases.
The same operation happens in each lane, and order of evaluation does not matter. So, no
H macros needed.
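For contrast, a hypothetical op that *would* need them is anything that derives
a result lane from different source lanes -- e.g. a pairwise add over 16-bit
elements (sketch only; operand aliasing ignored):

    /* Guest lane numbering matters here, so every index must be remapped: */
    for (i = 0; i < opr_sz / 2; i += 2) {
        d[H2(i / 2)] = n[H2(i)] + n[H2(i + 1)];
    }

Without the H2 fixups, a big-endian host would pair the wrong neighbours.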
r~
* Re: [PATCH v2 46/67] target/arm: Convert SQSHL and UQSHL (register) to gvec
2024-05-28 15:53 ` Peter Maydell
@ 2024-05-28 17:31 ` Richard Henderson
0 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-28 17:31 UTC (permalink / raw)
To: Peter Maydell; +Cc: qemu-devel, qemu-arm
On 5/28/24 08:53, Peter Maydell wrote:
> On Sat, 25 May 2024 at 00:28, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>> target/arm/helper.h | 8 ++++++++
>> target/arm/tcg/translate.h | 4 ++++
>> target/arm/tcg/neon-dp.decode | 10 ++-------
>> target/arm/tcg/gengvec.c | 24 ++++++++++++++++++++++
>> target/arm/tcg/neon_helper.c | 36 +++++++++++++++++++++++++++++++++
>> target/arm/tcg/translate-a64.c | 17 +++++++---------
>> target/arm/tcg/translate-neon.c | 6 ++----
>> 7 files changed, 83 insertions(+), 22 deletions(-)
>
>
>> +#define NEON_GVEC_VOP2_ENV(name, vtype) \
>> +void HELPER(name)(void *vd, void *vn, void *vm, void *venv, uint32_t desc) \
>> +{ \
>> + intptr_t i, opr_sz = simd_oprsz(desc); \
>> + vtype *d = vd, *n = vn, *m = vm; \
>> + CPUARMState *env = venv; \
>> + for (i = 0; i < opr_sz / sizeof(vtype); i++) { \
>> + NEON_FN(d[i], n[i], m[i]); \
>> + } \
>> + clear_tail(d, opr_sz, simd_maxsz(desc)); \
>> +}
>> +
>
> Same question about H macros as for patch 44.
No H macros needed.
r~
* Re: [PATCH v2 66/67] target/arm: Convert FMADD, FMSUB, FNMADD, FNMSUB to decodetree
2024-05-28 16:15 ` Peter Maydell
@ 2024-05-28 17:31 ` Richard Henderson
0 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-28 17:31 UTC (permalink / raw)
To: Peter Maydell; +Cc: qemu-devel, qemu-arm
On 5/28/24 09:15, Peter Maydell wrote:
> On Sat, 25 May 2024 at 00:27, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> These are the only instructions in the 3 source scalar class.
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>> target/arm/tcg/a64.decode | 10 ++
>> target/arm/tcg/translate-a64.c | 233 ++++++++++++---------------------
>> 2 files changed, 93 insertions(+), 150 deletions(-)
>>
>
>> static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
>> {
>> - if (extract32(insn, 24, 1)) {
>> - /* Floating point data-processing (3 source) */
>> - disas_fp_3src(s, insn);
>> - } else if (extract32(insn, 21, 1) == 0) {
>> + if (extract32(insn, 21, 1) == 0) {
>> /* Floating point to fixed point conversions */
>> disas_fp_fixed_conv(s, insn);
>> } else {
>
> Doesn't this result in the unallocated-encodings in the
> fp-3src class now falling into the "else" clause and
> being mis-decoded as the wrong thing? I think this
> needs to be
>
> if (extract32(insn, 24, 1)) {
> unallocated_encoding(s);
> } else if (extract32(insn, 21, 1) == 0) {
> [etc]
Agreed.
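Something like this, then (untested sketch):

    static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
    {
        if (extract32(insn, 24, 1)) {
            /* Floating point data-processing (3 source), now in decodetree */
            unallocated_encoding(s);
        } else if (extract32(insn, 21, 1) == 0) {
            /* Floating point to fixed point conversions */
            disas_fp_fixed_conv(s, insn);
        } else {
            /* ... as before ... */
        }
    }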
r~
* Re: [PATCH v2 03/67] target/arm: Reject incorrect operands to PLD, PLDW, PLI
2024-05-28 13:18 ` Peter Maydell
@ 2024-05-28 17:36 ` Richard Henderson
2024-05-29 13:32 ` Peter Maydell
0 siblings, 1 reply; 114+ messages in thread
From: Richard Henderson @ 2024-05-28 17:36 UTC (permalink / raw)
To: Peter Maydell; +Cc: qemu-devel, qemu-arm
On 5/28/24 06:18, Peter Maydell wrote:
> On Sat, 25 May 2024 at 00:25, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> For all, rm == 15 is invalid.
>> Prior to v8, thumb with rm == 13 is invalid.
>> For PLDW, rn == 15 is invalid.
>
>> Fixes a RISU mismatch for the HINTSPACE pattern in t32.risu
>> compared to a neoverse-n1 host.
>
> These are UNPREDICTABLE cases, not invalid. In general
> we don't try to match a specific implementation's
> UNPREDICTABLE choices.
>
> I think we're better off avoiding the mismatch by improving
> the risu patterns to avoid the UNPREDICTABLE cases.
We treat plenty of other UNPREDICTABLE cases as UNDEF (e.g. STREX). Why is this
case any different?
r~
* Re: [PATCH v2 38/67] target/arm: Convert SUQADD and USQADD to gvec
2024-05-28 15:37 ` Peter Maydell
@ 2024-05-28 17:41 ` Richard Henderson
0 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-28 17:41 UTC (permalink / raw)
To: Peter Maydell; +Cc: qemu-devel, qemu-arm
On 5/28/24 08:37, Peter Maydell wrote:
> On Sat, 25 May 2024 at 00:32, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>> target/arm/helper.h | 16 +++++
>> target/arm/tcg/translate-a64.h | 6 ++
>> target/arm/tcg/gengvec64.c | 106 +++++++++++++++++++++++++++++++
>> target/arm/tcg/translate-a64.c | 113 ++++++++++++++-------------------
>> target/arm/tcg/vec_helper.c | 64 +++++++++++++++++++
>> 5 files changed, 241 insertions(+), 64 deletions(-)
>
>> diff --git a/target/arm/tcg/gengvec64.c b/target/arm/tcg/gengvec64.c
>> index 093b498b13..4b76e476a0 100644
>> --- a/target/arm/tcg/gengvec64.c
>> +++ b/target/arm/tcg/gengvec64.c
>> @@ -188,3 +188,109 @@ void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
>> tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
>> }
>>
>> +static void gen_suqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
>> + TCGv_vec a, TCGv_vec b)
>> +{
>> + TCGv_vec max =
>> + tcg_constant_vec_matching(t, vece, (1ull << ((8 << vece) - 1)) - 1);
>> + TCGv_vec u = tcg_temp_new_vec_matching(t);
>> +
>> + /* Maximum value that can be added to @a without overflow. */
>> + tcg_gen_sub_vec(vece, u, max, a);
>> +
>> + /* Constrain addend so that the next addition never overflows. */
>> + tcg_gen_umin_vec(vece, u, u, b);
>> + tcg_gen_add_vec(vece, t, u, a);
>> +
>> + /* Compute QC by comparing the adjusted @b. */
>> + tcg_gen_xor_vec(vece, u, u, b);
>> + tcg_gen_or_vec(vece, qc, qc, u);
>
> With this kind of code where we wind up doing a vector op
> into vfp.qc, is there anything somewhere that asserts that
> we don't try to do it with a vector length bigger than
> sizeof(vfp.qc) (i.e. 128 bits)?
No, but I could add an assert to the top-level expander below.
(In this case gen_gvec_usqadd_qc.)
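Perhaps something like this (sketch; signature as for the other gengvec64.c
expanders, body elided):

    void gen_gvec_usqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
                            uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
    {
        /* vfp.qc is only 128 bits wide; anything larger would overrun it. */
        tcg_debug_assert(opr_sz <= sizeof_field(CPUARMState, vfp.qc));
        /* ... existing expansion ... */
    }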
r~
* Re: [PATCH v2 03/67] target/arm: Reject incorrect operands to PLD, PLDW, PLI
2024-05-28 17:36 ` Richard Henderson
@ 2024-05-29 13:32 ` Peter Maydell
2024-05-29 17:39 ` Richard Henderson
0 siblings, 1 reply; 114+ messages in thread
From: Peter Maydell @ 2024-05-29 13:32 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, qemu-arm
On Tue, 28 May 2024 at 18:36, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 5/28/24 06:18, Peter Maydell wrote:
> > On Sat, 25 May 2024 at 00:25, Richard Henderson
> > <richard.henderson@linaro.org> wrote:
> >>
> >> For all, rm == 15 is invalid.
> >> Prior to v8, thumb with rm == 13 is invalid.
> >> For PLDW, rn == 15 is invalid.
> >
> >> Fixes a RISU mismatch for the HINTSPACE pattern in t32.risu
> >> compared to a neoverse-n1 host.
> >
> > These are UNPREDICTABLE cases, not invalid. In general
> > we don't try to match a specific implementation's
> > UNPREDICTABLE choices.
> >
> > I think we're better off avoiding the mismatch by improving
> > the risu patterns to avoid the UNPREDICTABLE cases.
>
> We do plenty of other treatments of UNPREDICTABLE as UNDEF (e.g. STREX). Why is this case
> any different?
It just seems like a lot of effort to go to. Sometimes we
UNDEF for UNPREDICTABLEs, but quite often we say "the
behaviour we get for free is fine, so no need to write
extra code".
In this particular case, also:
* we'd need to go back and cross-check against older
architecture manuals and also look at whether M-profile
and A-profile are doing the same thing here
* the "v8 loosens the UNPREDICTABLE case for m = 13 T1
encoding" looks suspiciously to me like a "nobody ever
actually made this do anything except behave like the A1
encoding, so make T1 and A1 the same" kind of relaxation
Basically, it would take me probably 15+ minutes to review
the changes against the various versions of the docs and
I don't think it's worth the effort :-)
thanks
-- PMM
* Re: [PATCH v2 03/67] target/arm: Reject incorrect operands to PLD, PLDW, PLI
2024-05-29 13:32 ` Peter Maydell
@ 2024-05-29 17:39 ` Richard Henderson
0 siblings, 0 replies; 114+ messages in thread
From: Richard Henderson @ 2024-05-29 17:39 UTC (permalink / raw)
To: Peter Maydell; +Cc: qemu-devel, qemu-arm
On 5/29/24 06:32, Peter Maydell wrote:
>> We do plenty of other treatments of UNPREDICTABLE as UNDEF (e.g. STREX). Why is this case
>> any different?
>
> It just seems like a lot of effort to go to. Sometimes we
> UNDEF for UNPREDICTABLEs, but quite often we say "the
> behaviour we get for free is fine, so no need to write
> extra code".
>
> In this particular case, also:
> * we'd need to go back and cross-check against older
> architecture manuals and also look at whether M-profile
> and A-profile are doing the same thing here
> * the "v8 loosens the UNPREDICTABLE case for m = 13 T1
> encoding" looks suspiciously to me like a "nobody ever
> actually made this do anything except behave like the A1
> encoding, so make T1 and A1 the same" kind of relaxation
>
> Basically, it would take me probably 15+ minutes to review
> the changes against the various versions of the docs and
> I don't think it's worth the effort :-)
Fair enough. I've sent a patch to update thumb.risu to avoid the problematic patterns.
r~