* [PULL v2 00/42] target-arm queue
@ 2024-05-28 14:07 Peter Maydell
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
Hi; most of this is the first half of the A64 SIMD decodetree
conversion; the rest is a mix of fixes from the last couple of weeks.
v2 uses patches from the v2 decodetree series to avoid a few
regressions in some A32 insns.
(Richard: I'm still planning to review the second half of the
v2 decodetree series; I just wanted to get the respin of this
pullreq out today...)
thanks
-- PMM
The following changes since commit ad10b4badc1dd5b28305f9b9f1168cf0aa3ae946:
Merge tag 'pull-error-2024-05-27' of https://repo.or.cz/qemu/armbru into staging (2024-05-27 06:40:42 -0700)
are available in the Git repository at:
https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20240528
for you to fetch changes up to f240df3c31b40e4cf1af1f156a88efc1a1df406c:
target/arm: Convert disas_simd_3same_logic to decodetree (2024-05-28 14:29:01 +0100)
----------------------------------------------------------------
target-arm queue:
* xlnx_dpdma: fix descriptor endianness bug
* hvf: arm: Fix encodings for ID_AA64PFR1_EL1 and debug System registers
* hw/arm/npcm7xx: remove setting of mp-affinity
* hw/char: Correct STM32L4x5 usart register CR2 field ADD_0 size
* hw/intc/arm_gic: Fix handling of NS view of GICC_APR<n>
* hw/input/tsc2005: Fix -Wchar-subscripts warning in tsc2005_txrx()
* hw: arm: Remove use of tabs in some source files
* docs/system: Remove ADC from raspi documentation
* target/arm: Start of the conversion of A64 SIMD to decodetree
----------------------------------------------------------------
Alexandra Diupina (1):
xlnx_dpdma: fix descriptor endianness bug
Andrey Shumilin (1):
hw/intc/arm_gic: Fix handling of NS view of GICC_APR<n>
Dorjoy Chowdhury (1):
hw/arm/npcm7xx: remove setting of mp-affinity
Inès Varhol (1):
hw/char: Correct STM32L4x5 usart register CR2 field ADD_0 size
Philippe Mathieu-Daudé (1):
hw/input/tsc2005: Fix -Wchar-subscripts warning in tsc2005_txrx()
Rayhan Faizel (1):
docs/system: Remove ADC from raspi documentation
Richard Henderson (34):
target/arm: Use PLD, PLDW, PLI not NOP for t32
target/arm: Zero-extend writeback for fp16 FCVTZS (scalar, integer)
target/arm: Fix decode of FMOV (hp) vs MOVI
target/arm: Verify sz=0 for Advanced SIMD scalar pairwise (fp16)
target/arm: Split out gengvec.c
target/arm: Split out gengvec64.c
target/arm: Convert Cryptographic AES to decodetree
target/arm: Convert Cryptographic 3-register SHA to decodetree
target/arm: Convert Cryptographic 2-register SHA to decodetree
target/arm: Convert Cryptographic 3-register SHA512 to decodetree
target/arm: Convert Cryptographic 2-register SHA512 to decodetree
target/arm: Convert Cryptographic 4-register to decodetree
target/arm: Convert Cryptographic 3-register, imm2 to decodetree
target/arm: Convert XAR to decodetree
target/arm: Convert Advanced SIMD copy to decodetree
target/arm: Convert FMULX to decodetree
target/arm: Convert FADD, FSUB, FDIV, FMUL to decodetree
target/arm: Convert FMAX, FMIN, FMAXNM, FMINNM to decodetree
target/arm: Introduce vfp_load_reg16
target/arm: Expand vfp neg and abs inline
target/arm: Convert FNMUL to decodetree
target/arm: Convert FMLA, FMLS to decodetree
target/arm: Convert FCMEQ, FCMGE, FCMGT, FACGE, FACGT to decodetree
target/arm: Convert FABD to decodetree
target/arm: Convert FRECPS, FRSQRTS to decodetree
target/arm: Convert FADDP to decodetree
target/arm: Convert FMAXP, FMINP, FMAXNMP, FMINNMP to decodetree
target/arm: Use gvec for neon faddp, fmaxp, fminp
target/arm: Convert ADDP to decodetree
target/arm: Use gvec for neon padd
target/arm: Convert SMAXP, SMINP, UMAXP, UMINP to decodetree
target/arm: Use gvec for neon pmax, pmin
target/arm: Convert FMLAL, FMLSL to decodetree
target/arm: Convert disas_simd_3same_logic to decodetree
Tanmay Patil (1):
hw: arm: Remove use of tabs in some source files
Zenghui Yu (1):
hvf: arm: Fix encodings for ID_AA64PFR1_EL1 and debug System registers
docs/system/arm/raspi.rst | 1 -
target/arm/helper.h | 68 +-
target/arm/tcg/helper-a64.h | 12 +
target/arm/tcg/translate-a64.h | 4 +
target/arm/tcg/translate.h | 51 +
target/arm/tcg/a64.decode | 315 +++-
target/arm/tcg/t32.decode | 25 +-
hw/arm/boot.c | 8 +-
hw/arm/npcm7xx.c | 3 -
hw/char/omap_uart.c | 49 +-
hw/char/stm32l4x5_usart.c | 2 +-
hw/dma/xlnx_dpdma.c | 68 +-
hw/gpio/zaurus.c | 59 +-
hw/input/tsc2005.c | 135 +-
hw/intc/arm_gic.c | 4 +-
target/arm/hvf/hvf.c | 130 +-
target/arm/tcg/gengvec.c | 1672 +++++++++++++++++++++
target/arm/tcg/gengvec64.c | 190 +++
target/arm/tcg/neon_helper.c | 5 -
target/arm/tcg/translate-a64.c | 3137 +++++++++++++--------------------------
target/arm/tcg/translate-neon.c | 136 +-
target/arm/tcg/translate-sve.c | 145 +-
target/arm/tcg/translate-vfp.c | 93 +-
target/arm/tcg/translate.c | 1592 +-------------------
target/arm/tcg/vec_helper.c | 221 ++-
target/arm/vfp_helper.c | 30 -
target/arm/tcg/meson.build | 2 +
27 files changed, 3860 insertions(+), 4297 deletions(-)
create mode 100644 target/arm/tcg/gengvec.c
create mode 100644 target/arm/tcg/gengvec64.c
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PULL 01/42] xlnx_dpdma: fix descriptor endianness bug
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Alexandra Diupina <adiupina@astralinux.ru>
Add xlnx_dpdma_read_descriptor() and
xlnx_dpdma_write_descriptor() functions.
xlnx_dpdma_read_descriptor() reads a descriptor
from desc_addr by calling dma_memory_read()
and swaps the descriptor fields from guest memory order
to host memory order. xlnx_dpdma_write_descriptor()
performs the same conversion in reverse when writing a descriptor.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Fixes: d3c6369a96 ("introduce xlnx-dpdma")
Signed-off-by: Alexandra Diupina <adiupina@astralinux.ru>
[PMM: tweaked indent, dropped behaviour change for write-failure case]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/dma/xlnx_dpdma.c | 68 ++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 64 insertions(+), 4 deletions(-)
diff --git a/hw/dma/xlnx_dpdma.c b/hw/dma/xlnx_dpdma.c
index 530717d1885..dde4aeca401 100644
--- a/hw/dma/xlnx_dpdma.c
+++ b/hw/dma/xlnx_dpdma.c
@@ -614,6 +614,65 @@ static void xlnx_dpdma_register_types(void)
type_register_static(&xlnx_dpdma_info);
}
+static MemTxResult xlnx_dpdma_read_descriptor(XlnxDPDMAState *s,
+ uint64_t desc_addr,
+ DPDMADescriptor *desc)
+{
+ MemTxResult res = dma_memory_read(&address_space_memory, desc_addr,
+ desc, sizeof(DPDMADescriptor),
+ MEMTXATTRS_UNSPECIFIED);
+ if (res) {
+ return res;
+ }
+
+ /* Convert from LE into host endianness. */
+ desc->control = le32_to_cpu(desc->control);
+ desc->descriptor_id = le32_to_cpu(desc->descriptor_id);
+ desc->xfer_size = le32_to_cpu(desc->xfer_size);
+ desc->line_size_stride = le32_to_cpu(desc->line_size_stride);
+ desc->timestamp_lsb = le32_to_cpu(desc->timestamp_lsb);
+ desc->timestamp_msb = le32_to_cpu(desc->timestamp_msb);
+ desc->address_extension = le32_to_cpu(desc->address_extension);
+ desc->next_descriptor = le32_to_cpu(desc->next_descriptor);
+ desc->source_address = le32_to_cpu(desc->source_address);
+ desc->address_extension_23 = le32_to_cpu(desc->address_extension_23);
+ desc->address_extension_45 = le32_to_cpu(desc->address_extension_45);
+ desc->source_address2 = le32_to_cpu(desc->source_address2);
+ desc->source_address3 = le32_to_cpu(desc->source_address3);
+ desc->source_address4 = le32_to_cpu(desc->source_address4);
+ desc->source_address5 = le32_to_cpu(desc->source_address5);
+ desc->crc = le32_to_cpu(desc->crc);
+
+ return res;
+}
+
+static MemTxResult xlnx_dpdma_write_descriptor(uint64_t desc_addr,
+ DPDMADescriptor *desc)
+{
+ DPDMADescriptor tmp_desc = *desc;
+
+ /* Convert from host endianness into LE. */
+ tmp_desc.control = cpu_to_le32(tmp_desc.control);
+ tmp_desc.descriptor_id = cpu_to_le32(tmp_desc.descriptor_id);
+ tmp_desc.xfer_size = cpu_to_le32(tmp_desc.xfer_size);
+ tmp_desc.line_size_stride = cpu_to_le32(tmp_desc.line_size_stride);
+ tmp_desc.timestamp_lsb = cpu_to_le32(tmp_desc.timestamp_lsb);
+ tmp_desc.timestamp_msb = cpu_to_le32(tmp_desc.timestamp_msb);
+ tmp_desc.address_extension = cpu_to_le32(tmp_desc.address_extension);
+ tmp_desc.next_descriptor = cpu_to_le32(tmp_desc.next_descriptor);
+ tmp_desc.source_address = cpu_to_le32(tmp_desc.source_address);
+ tmp_desc.address_extension_23 = cpu_to_le32(tmp_desc.address_extension_23);
+ tmp_desc.address_extension_45 = cpu_to_le32(tmp_desc.address_extension_45);
+ tmp_desc.source_address2 = cpu_to_le32(tmp_desc.source_address2);
+ tmp_desc.source_address3 = cpu_to_le32(tmp_desc.source_address3);
+ tmp_desc.source_address4 = cpu_to_le32(tmp_desc.source_address4);
+ tmp_desc.source_address5 = cpu_to_le32(tmp_desc.source_address5);
+ tmp_desc.crc = cpu_to_le32(tmp_desc.crc);
+
+ return dma_memory_write(&address_space_memory, desc_addr, &tmp_desc,
+ sizeof(DPDMADescriptor), MEMTXATTRS_UNSPECIFIED);
+}
+
size_t xlnx_dpdma_start_operation(XlnxDPDMAState *s, uint8_t channel,
bool one_desc)
{
@@ -651,8 +710,7 @@ size_t xlnx_dpdma_start_operation(XlnxDPDMAState *s, uint8_t channel,
desc_addr = xlnx_dpdma_descriptor_next_address(s, channel);
}
- if (dma_memory_read(&address_space_memory, desc_addr, &desc,
- sizeof(DPDMADescriptor), MEMTXATTRS_UNSPECIFIED)) {
+ if (xlnx_dpdma_read_descriptor(s, desc_addr, &desc)) {
s->registers[DPDMA_EISR] |= ((1 << 1) << channel);
xlnx_dpdma_update_irq(s);
s->operation_finished[channel] = true;
@@ -755,8 +813,10 @@ size_t xlnx_dpdma_start_operation(XlnxDPDMAState *s, uint8_t channel,
/* The descriptor need to be updated when it's completed. */
DPRINTF("update the descriptor with the done flag set.\n");
xlnx_dpdma_desc_set_done(&desc);
- dma_memory_write(&address_space_memory, desc_addr, &desc,
- sizeof(DPDMADescriptor), MEMTXATTRS_UNSPECIFIED);
+ if (xlnx_dpdma_write_descriptor(desc_addr, &desc)) {
+ DPRINTF("Can't write the descriptor.\n");
+ /* TODO: check hardware behaviour for memory write failure */
+ }
}
if (xlnx_dpdma_desc_completion_interrupt(&desc)) {
--
2.34.1
* [PULL 02/42] hvf: arm: Fix encodings for ID_AA64PFR1_EL1 and debug System registers
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Zenghui Yu <zenghui.yu@linux.dev>
We wrongly encoded ID_AA64PFR1_EL1 using {3,0,0,4,2} in hvf_sreg_match[], so
we fail to find the expected ARMCPRegInfo in the cp_regs hash table because
the lookup key is wrong.
Fix it with the correct encoding {3,0,0,4,1}. With that fixed, the Linux
guest can properly detect FEAT_SSBS2 on my M1 HW.
All DBG{B,W}{V,C}R_EL1 registers are also wrongly encoded with op0 == 14.
It happens to work because HVF_SYSREG(CRn, CRm, 14, op1, op2) equals
HVF_SYSREG(CRn, CRm, 2, op1, op2) by definition, but we shouldn't rely
on that.
Cc: qemu-stable@nongnu.org
Fixes: a1477da3ddeb ("hvf: Add Apple Silicon support")
Signed-off-by: Zenghui Yu <zenghui.yu@linux.dev>
Reviewed-by: Alexander Graf <agraf@csgraf.de>
Message-id: 20240503153453.54389-1-zenghui.yu@linux.dev
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/hvf/hvf.c | 130 +++++++++++++++++++++----------------------
1 file changed, 65 insertions(+), 65 deletions(-)
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index 08d0757438c..45e2218be58 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -396,85 +396,85 @@ struct hvf_sreg_match {
};
static struct hvf_sreg_match hvf_sreg_match[] = {
- { HV_SYS_REG_DBGBVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR0_EL1, HVF_SYSREG(0, 0, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR0_EL1, HVF_SYSREG(0, 0, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR0_EL1, HVF_SYSREG(0, 0, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR0_EL1, HVF_SYSREG(0, 0, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR1_EL1, HVF_SYSREG(0, 1, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR1_EL1, HVF_SYSREG(0, 1, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR1_EL1, HVF_SYSREG(0, 1, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR1_EL1, HVF_SYSREG(0, 1, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR2_EL1, HVF_SYSREG(0, 2, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR2_EL1, HVF_SYSREG(0, 2, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR2_EL1, HVF_SYSREG(0, 2, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR2_EL1, HVF_SYSREG(0, 2, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR3_EL1, HVF_SYSREG(0, 3, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR3_EL1, HVF_SYSREG(0, 3, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR3_EL1, HVF_SYSREG(0, 3, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR3_EL1, HVF_SYSREG(0, 3, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR4_EL1, HVF_SYSREG(0, 4, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR4_EL1, HVF_SYSREG(0, 4, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR4_EL1, HVF_SYSREG(0, 4, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR4_EL1, HVF_SYSREG(0, 4, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR5_EL1, HVF_SYSREG(0, 5, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR5_EL1, HVF_SYSREG(0, 5, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR5_EL1, HVF_SYSREG(0, 5, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR5_EL1, HVF_SYSREG(0, 5, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR6_EL1, HVF_SYSREG(0, 6, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR6_EL1, HVF_SYSREG(0, 6, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR6_EL1, HVF_SYSREG(0, 6, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR6_EL1, HVF_SYSREG(0, 6, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR7_EL1, HVF_SYSREG(0, 7, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR7_EL1, HVF_SYSREG(0, 7, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR7_EL1, HVF_SYSREG(0, 7, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR7_EL1, HVF_SYSREG(0, 7, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR8_EL1, HVF_SYSREG(0, 8, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR8_EL1, HVF_SYSREG(0, 8, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR8_EL1, HVF_SYSREG(0, 8, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR8_EL1, HVF_SYSREG(0, 8, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR9_EL1, HVF_SYSREG(0, 9, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR9_EL1, HVF_SYSREG(0, 9, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR9_EL1, HVF_SYSREG(0, 9, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR9_EL1, HVF_SYSREG(0, 9, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR10_EL1, HVF_SYSREG(0, 10, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR10_EL1, HVF_SYSREG(0, 10, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR10_EL1, HVF_SYSREG(0, 10, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR10_EL1, HVF_SYSREG(0, 10, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR11_EL1, HVF_SYSREG(0, 11, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR11_EL1, HVF_SYSREG(0, 11, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR11_EL1, HVF_SYSREG(0, 11, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR11_EL1, HVF_SYSREG(0, 11, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR12_EL1, HVF_SYSREG(0, 12, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR12_EL1, HVF_SYSREG(0, 12, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR12_EL1, HVF_SYSREG(0, 12, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR12_EL1, HVF_SYSREG(0, 12, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR13_EL1, HVF_SYSREG(0, 13, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR13_EL1, HVF_SYSREG(0, 13, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR13_EL1, HVF_SYSREG(0, 13, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR13_EL1, HVF_SYSREG(0, 13, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR14_EL1, HVF_SYSREG(0, 14, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR14_EL1, HVF_SYSREG(0, 14, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR14_EL1, HVF_SYSREG(0, 14, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR14_EL1, HVF_SYSREG(0, 14, 2, 0, 7) },
- { HV_SYS_REG_DBGBVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 4) },
- { HV_SYS_REG_DBGBCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 5) },
- { HV_SYS_REG_DBGWVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 6) },
- { HV_SYS_REG_DBGWCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 7) },
+ { HV_SYS_REG_DBGBVR15_EL1, HVF_SYSREG(0, 15, 2, 0, 4) },
+ { HV_SYS_REG_DBGBCR15_EL1, HVF_SYSREG(0, 15, 2, 0, 5) },
+ { HV_SYS_REG_DBGWVR15_EL1, HVF_SYSREG(0, 15, 2, 0, 6) },
+ { HV_SYS_REG_DBGWCR15_EL1, HVF_SYSREG(0, 15, 2, 0, 7) },
#ifdef SYNC_NO_RAW_REGS
/*
@@ -486,7 +486,7 @@ static struct hvf_sreg_match hvf_sreg_match[] = {
{ HV_SYS_REG_MPIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 5) },
{ HV_SYS_REG_ID_AA64PFR0_EL1, HVF_SYSREG(0, 4, 3, 0, 0) },
#endif
- { HV_SYS_REG_ID_AA64PFR1_EL1, HVF_SYSREG(0, 4, 3, 0, 2) },
+ { HV_SYS_REG_ID_AA64PFR1_EL1, HVF_SYSREG(0, 4, 3, 0, 1) },
{ HV_SYS_REG_ID_AA64DFR0_EL1, HVF_SYSREG(0, 5, 3, 0, 0) },
{ HV_SYS_REG_ID_AA64DFR1_EL1, HVF_SYSREG(0, 5, 3, 0, 1) },
{ HV_SYS_REG_ID_AA64ISAR0_EL1, HVF_SYSREG(0, 6, 3, 0, 0) },
--
2.34.1
* [PULL 03/42] hw/arm/npcm7xx: remove setting of mp-affinity
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Dorjoy Chowdhury <dorjoychy111@gmail.com>
The value of the mp-affinity property set in npcm7xx_realize is always
the same as the default value it would get when arm_cpu_realizefn is
called without the property being set here. So there is no need to set
the property value in the npcm7xx_realize function.
Signed-off-by: Dorjoy Chowdhury <dorjoychy111@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240504141733.14813-1-dorjoychy111@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/npcm7xx.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
index 9f2d96c733a..cb7791301b4 100644
--- a/hw/arm/npcm7xx.c
+++ b/hw/arm/npcm7xx.c
@@ -487,9 +487,6 @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
/* CPUs */
for (i = 0; i < nc->num_cpus; i++) {
- object_property_set_int(OBJECT(&s->cpu[i]), "mp-affinity",
- arm_build_mp_affinity(i, NPCM7XX_MAX_NUM_CPUS),
- &error_abort);
object_property_set_int(OBJECT(&s->cpu[i]), "reset-cbar",
NPCM7XX_GIC_CPU_IF_ADDR, &error_abort);
object_property_set_bool(OBJECT(&s->cpu[i]), "reset-hivecs", true,
--
2.34.1
* [PULL 04/42] hw/char: Correct STM32L4x5 usart register CR2 field ADD_0 size
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Inès Varhol <ines.varhol@telecom-paris.fr>
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
Message-id: 20240505141613.387508-1-ines.varhol@telecom-paris.fr
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/char/stm32l4x5_usart.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/char/stm32l4x5_usart.c b/hw/char/stm32l4x5_usart.c
index 02f666308c0..fc5dcac0c45 100644
--- a/hw/char/stm32l4x5_usart.c
+++ b/hw/char/stm32l4x5_usart.c
@@ -56,7 +56,7 @@ REG32(CR1, 0x00)
FIELD(CR1, UE, 0, 1) /* USART enable */
REG32(CR2, 0x04)
FIELD(CR2, ADD_1, 28, 4) /* ADD[7:4] */
- FIELD(CR2, ADD_0, 24, 1) /* ADD[3:0] */
+ FIELD(CR2, ADD_0, 24, 4) /* ADD[3:0] */
FIELD(CR2, RTOEN, 23, 1) /* Receiver timeout enable */
FIELD(CR2, ABRMOD, 21, 2) /* Auto baud rate mode */
FIELD(CR2, ABREN, 20, 1) /* Auto baud rate enable */
--
2.34.1
* [PULL 05/42] hw/intc/arm_gic: Fix handling of NS view of GICC_APR<n>
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Andrey Shumilin <shum.sdl@nppct.ru>
In gic_cpu_read() and gic_cpu_write(), we delegate the handling of
reading and writing the Non-Secure view of the GICC_APR<n> registers
to functions gic_apr_ns_view() and gic_apr_write_ns_view().
Unfortunately we got the order of the arguments wrong, swapping the
CPU number and the register number (which the compiler doesn't catch
because they're both integers).
Most guests probably didn't notice this bug because directly
accessing the APR registers is typically something only done by
firmware when it is doing state save for going into a sleep mode.
Correct the mismatched call arguments.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Cc: qemu-stable@nongnu.org
Fixes: 51fd06e0ee ("hw/intc/arm_gic: Fix handling of GICC_APR<n>, GICC_NSAPR<n> registers")
Signed-off-by: Andrey Shumilin <shum.sdl@nppct.ru>
[PMM: Rewrote commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
hw/intc/arm_gic.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index 074cf50af25..e4b8437f8b8 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -1658,7 +1658,7 @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
*data = s->h_apr[gic_get_vcpu_real_id(cpu)];
} else if (gic_cpu_ns_access(s, cpu, attrs)) {
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
- *data = gic_apr_ns_view(s, regno, cpu);
+ *data = gic_apr_ns_view(s, cpu, regno);
} else {
*data = s->apr[regno][cpu];
}
@@ -1746,7 +1746,7 @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
s->h_apr[gic_get_vcpu_real_id(cpu)] = value;
} else if (gic_cpu_ns_access(s, cpu, attrs)) {
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
- gic_apr_write_ns_view(s, regno, cpu, value);
+ gic_apr_write_ns_view(s, cpu, regno, value);
} else {
s->apr[regno][cpu] = value;
}
--
2.34.1
* [PULL 06/42] hw/input/tsc2005: Fix -Wchar-subscripts warning in tsc2005_txrx()
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Philippe Mathieu-Daudé <philmd@linaro.org>
Check the function index is in range and use an unsigned
variable to avoid the following warning with GCC 13.2.0:
[666/5358] Compiling C object libcommon.fa.p/hw_input_tsc2005.c.o
hw/input/tsc2005.c: In function 'tsc2005_timer_tick':
hw/input/tsc2005.c:416:26: warning: array subscript has type 'char' [-Wchar-subscripts]
416 | s->dav |= mode_regs[s->function];
| ~^~~~~~~~~~
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240508143513.44996-1-philmd@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: fixed missing ')']
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/input/tsc2005.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
index 941f163d364..ac7f54eeafb 100644
--- a/hw/input/tsc2005.c
+++ b/hw/input/tsc2005.c
@@ -406,6 +406,9 @@ uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len)
static void tsc2005_timer_tick(void *opaque)
{
TSC2005State *s = opaque;
+ unsigned int function = s->function;
+
+ assert(function < ARRAY_SIZE(mode_regs));
/* Timer ticked -- a set of conversions has been finished. */
@@ -413,7 +416,7 @@ static void tsc2005_timer_tick(void *opaque)
return;
s->busy = false;
- s->dav |= mode_regs[s->function];
+ s->dav |= mode_regs[function];
s->function = -1;
tsc2005_pin_update(s);
}
--
2.34.1
* [PULL 07/42] hw: arm: Remove use of tabs in some source files
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Tanmay Patil <tanmaynpatil105@gmail.com>
Some of the source files for older devices use hardcoded tabs
instead of our current coding standard's required spaces.
Fix these in the following files:
- hw/arm/boot.c
- hw/char/omap_uart.c
- hw/gpio/zaurus.c
- hw/input/tsc2005.c
This commit consists mostly of whitespace-only changes; it also
adds curly braces to some 'if' statements.
This addresses part of https://gitlab.com/qemu-project/qemu/-/issues/373
but some other files remain to be handled.
Signed-off-by: Tanmay Patil <tanmaynpatil105@gmail.com>
Message-id: 20240508081502.88375-1-tanmaynpatil105@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tweaked commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/boot.c | 8 +--
hw/char/omap_uart.c | 49 +++++++++--------
hw/gpio/zaurus.c | 59 ++++++++++----------
hw/input/tsc2005.c | 130 ++++++++++++++++++++++++--------------------
4 files changed, 130 insertions(+), 116 deletions(-)
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index 84ea6a807a4..d480a7da02c 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -347,13 +347,13 @@ static void set_kernel_args_old(const struct arm_boot_info *info,
WRITE_WORD(p, info->ram_size / 4096);
/* ramdisk_size */
WRITE_WORD(p, 0);
-#define FLAG_READONLY 1
-#define FLAG_RDLOAD 4
-#define FLAG_RDPROMPT 8
+#define FLAG_READONLY 1
+#define FLAG_RDLOAD 4
+#define FLAG_RDPROMPT 8
/* flags */
WRITE_WORD(p, FLAG_READONLY | FLAG_RDLOAD | FLAG_RDPROMPT);
/* rootdev */
- WRITE_WORD(p, (31 << 8) | 0); /* /dev/mtdblock0 */
+ WRITE_WORD(p, (31 << 8) | 0); /* /dev/mtdblock0 */
/* video_num_cols */
WRITE_WORD(p, 0);
/* video_num_rows */
diff --git a/hw/char/omap_uart.c b/hw/char/omap_uart.c
index 6848bddb4e2..c2ef4c137e1 100644
--- a/hw/char/omap_uart.c
+++ b/hw/char/omap_uart.c
@@ -61,7 +61,7 @@ struct omap_uart_s *omap_uart_init(hwaddr base,
s->fclk = fclk;
s->irq = irq;
s->serial = serial_mm_init(get_system_memory(), base, 2, irq,
- omap_clk_getrate(fclk)/16,
+ omap_clk_getrate(fclk) / 16,
chr ?: qemu_chr_new(label, "null", NULL),
DEVICE_NATIVE_ENDIAN);
return s;
@@ -76,27 +76,27 @@ static uint64_t omap_uart_read(void *opaque, hwaddr addr, unsigned size)
}
switch (addr) {
- case 0x20: /* MDR1 */
+ case 0x20: /* MDR1 */
return s->mdr[0];
- case 0x24: /* MDR2 */
+ case 0x24: /* MDR2 */
return s->mdr[1];
- case 0x40: /* SCR */
+ case 0x40: /* SCR */
return s->scr;
- case 0x44: /* SSR */
+ case 0x44: /* SSR */
return 0x0;
- case 0x48: /* EBLR (OMAP2) */
+ case 0x48: /* EBLR (OMAP2) */
return s->eblr;
- case 0x4C: /* OSC_12M_SEL (OMAP1) */
+ case 0x4C: /* OSC_12M_SEL (OMAP1) */
return s->clksel;
- case 0x50: /* MVR */
+ case 0x50: /* MVR */
return 0x30;
- case 0x54: /* SYSC (OMAP2) */
+ case 0x54: /* SYSC (OMAP2) */
return s->syscontrol;
- case 0x58: /* SYSS (OMAP2) */
+ case 0x58: /* SYSS (OMAP2) */
return 1;
- case 0x5c: /* WER (OMAP2) */
+ case 0x5c: /* WER (OMAP2) */
return s->wkup;
- case 0x60: /* CFPS (OMAP2) */
+ case 0x60: /* CFPS (OMAP2) */
return s->cfps;
}
@@ -115,35 +115,36 @@ static void omap_uart_write(void *opaque, hwaddr addr,
}
switch (addr) {
- case 0x20: /* MDR1 */
+ case 0x20: /* MDR1 */
s->mdr[0] = value & 0x7f;
break;
- case 0x24: /* MDR2 */
+ case 0x24: /* MDR2 */
s->mdr[1] = value & 0xff;
break;
- case 0x40: /* SCR */
+ case 0x40: /* SCR */
s->scr = value & 0xff;
break;
- case 0x48: /* EBLR (OMAP2) */
+ case 0x48: /* EBLR (OMAP2) */
s->eblr = value & 0xff;
break;
- case 0x4C: /* OSC_12M_SEL (OMAP1) */
+ case 0x4C: /* OSC_12M_SEL (OMAP1) */
s->clksel = value & 1;
break;
- case 0x44: /* SSR */
- case 0x50: /* MVR */
- case 0x58: /* SYSS (OMAP2) */
+ case 0x44: /* SSR */
+ case 0x50: /* MVR */
+ case 0x58: /* SYSS (OMAP2) */
OMAP_RO_REG(addr);
break;
- case 0x54: /* SYSC (OMAP2) */
+ case 0x54: /* SYSC (OMAP2) */
s->syscontrol = value & 0x1d;
- if (value & 2)
+ if (value & 2) {
omap_uart_reset(s);
+ }
break;
- case 0x5c: /* WER (OMAP2) */
+ case 0x5c: /* WER (OMAP2) */
s->wkup = value & 0x7f;
break;
- case 0x60: /* CFPS (OMAP2) */
+ case 0x60: /* CFPS (OMAP2) */
s->cfps = value & 0xff;
break;
default:
diff --git a/hw/gpio/zaurus.c b/hw/gpio/zaurus.c
index 5884804c589..7342440b958 100644
--- a/hw/gpio/zaurus.c
+++ b/hw/gpio/zaurus.c
@@ -49,19 +49,20 @@ struct ScoopInfo {
uint16_t isr;
};
-#define SCOOP_MCR 0x00
-#define SCOOP_CDR 0x04
-#define SCOOP_CSR 0x08
-#define SCOOP_CPR 0x0c
-#define SCOOP_CCR 0x10
-#define SCOOP_IRR_IRM 0x14
-#define SCOOP_IMR 0x18
-#define SCOOP_ISR 0x1c
-#define SCOOP_GPCR 0x20
-#define SCOOP_GPWR 0x24
-#define SCOOP_GPRR 0x28
+#define SCOOP_MCR 0x00
+#define SCOOP_CDR 0x04
+#define SCOOP_CSR 0x08
+#define SCOOP_CPR 0x0c
+#define SCOOP_CCR 0x10
+#define SCOOP_IRR_IRM 0x14
+#define SCOOP_IMR 0x18
+#define SCOOP_ISR 0x1c
+#define SCOOP_GPCR 0x20
+#define SCOOP_GPWR 0x24
+#define SCOOP_GPRR 0x28
-static inline void scoop_gpio_handler_update(ScoopInfo *s) {
+static inline void scoop_gpio_handler_update(ScoopInfo *s)
+{
uint32_t level, diff;
int bit;
level = s->gpio_level & s->gpio_dir;
@@ -125,8 +126,9 @@ static void scoop_write(void *opaque, hwaddr addr,
break;
case SCOOP_CPR:
s->power = value;
- if (value & 0x80)
+ if (value & 0x80) {
s->power |= 0x8040;
+ }
break;
case SCOOP_CCR:
s->ccr = value;
@@ -145,7 +147,7 @@ static void scoop_write(void *opaque, hwaddr addr,
scoop_gpio_handler_update(s);
break;
case SCOOP_GPWR:
- case SCOOP_GPRR: /* GPRR is probably R/O in real HW */
+ case SCOOP_GPRR: /* GPRR is probably R/O in real HW */
s->gpio_level = value & s->gpio_dir;
scoop_gpio_handler_update(s);
break;
@@ -166,10 +168,11 @@ static void scoop_gpio_set(void *opaque, int line, int level)
{
ScoopInfo *s = (ScoopInfo *) opaque;
- if (level)
+ if (level) {
s->gpio_level |= (1 << line);
- else
+ } else {
s->gpio_level &= ~(1 << line);
+ }
}
static void scoop_init(Object *obj)
@@ -203,7 +206,7 @@ static int scoop_post_load(void *opaque, int version_id)
return 0;
}
-static bool is_version_0 (void *opaque, int version_id)
+static bool is_version_0(void *opaque, int version_id)
{
return version_id == 0;
}
@@ -265,7 +268,7 @@ type_init(scoop_register_types)
/* Write the bootloader parameters memory area. */
-#define MAGIC_CHG(a, b, c, d) ((d << 24) | (c << 16) | (b << 8) | a)
+#define MAGIC_CHG(a, b, c, d) ((d << 24) | (c << 16) | (b << 8) | a)
static struct QEMU_PACKED sl_param_info {
uint32_t comadj_keyword;
@@ -286,16 +289,16 @@ static struct QEMU_PACKED sl_param_info {
uint32_t phad_keyword;
int32_t phadadj;
} zaurus_bootparam = {
- .comadj_keyword = MAGIC_CHG('C', 'M', 'A', 'D'),
- .comadj = 125,
- .uuid_keyword = MAGIC_CHG('U', 'U', 'I', 'D'),
- .uuid = { -1 },
- .touch_keyword = MAGIC_CHG('T', 'U', 'C', 'H'),
- .touch_xp = -1,
- .adadj_keyword = MAGIC_CHG('B', 'V', 'A', 'D'),
- .adadj = -1,
- .phad_keyword = MAGIC_CHG('P', 'H', 'A', 'D'),
- .phadadj = 0x01,
+ .comadj_keyword = MAGIC_CHG('C', 'M', 'A', 'D'),
+ .comadj = 125,
+ .uuid_keyword = MAGIC_CHG('U', 'U', 'I', 'D'),
+ .uuid = { -1 },
+ .touch_keyword = MAGIC_CHG('T', 'U', 'C', 'H'),
+ .touch_xp = -1,
+ .adadj_keyword = MAGIC_CHG('B', 'V', 'A', 'D'),
+ .adadj = -1,
+ .phad_keyword = MAGIC_CHG('P', 'H', 'A', 'D'),
+ .phadadj = 0x01,
};
void sl_bootparam_write(hwaddr ptr)
diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
index ac7f54eeafb..54a15d24410 100644
--- a/hw/input/tsc2005.c
+++ b/hw/input/tsc2005.c
@@ -28,10 +28,10 @@
#include "migration/vmstate.h"
#include "trace.h"
-#define TSC_CUT_RESOLUTION(value, p) ((value) >> (16 - (p ? 12 : 10)))
+#define TSC_CUT_RESOLUTION(value, p) ((value) >> (16 - (p ? 12 : 10)))
typedef struct {
- qemu_irq pint; /* Combination of the nPENIRQ and DAV signals */
+ qemu_irq pint; /* Combination of the nPENIRQ and DAV signals */
QEMUTimer *timer;
uint16_t model;
@@ -63,7 +63,7 @@ typedef struct {
} TSC2005State;
enum {
- TSC_MODE_XYZ_SCAN = 0x0,
+ TSC_MODE_XYZ_SCAN = 0x0,
TSC_MODE_XY_SCAN,
TSC_MODE_X,
TSC_MODE_Y,
@@ -82,100 +82,100 @@ enum {
};
static const uint16_t mode_regs[16] = {
- 0xf000, /* X, Y, Z scan */
- 0xc000, /* X, Y scan */
- 0x8000, /* X */
- 0x4000, /* Y */
- 0x3000, /* Z */
- 0x0800, /* AUX */
- 0x0400, /* TEMP1 */
- 0x0200, /* TEMP2 */
- 0x0800, /* AUX scan */
- 0x0040, /* X test */
- 0x0020, /* Y test */
- 0x0080, /* Short-circuit test */
- 0x0000, /* Reserved */
- 0x0000, /* X+, X- drivers */
- 0x0000, /* Y+, Y- drivers */
- 0x0000, /* Y+, X- drivers */
+ 0xf000, /* X, Y, Z scan */
+ 0xc000, /* X, Y scan */
+ 0x8000, /* X */
+ 0x4000, /* Y */
+ 0x3000, /* Z */
+ 0x0800, /* AUX */
+ 0x0400, /* TEMP1 */
+ 0x0200, /* TEMP2 */
+ 0x0800, /* AUX scan */
+ 0x0040, /* X test */
+ 0x0020, /* Y test */
+ 0x0080, /* Short-circuit test */
+ 0x0000, /* Reserved */
+ 0x0000, /* X+, X- drivers */
+ 0x0000, /* Y+, Y- drivers */
+ 0x0000, /* Y+, X- drivers */
};
-#define X_TRANSFORM(s) \
+#define X_TRANSFORM(s) \
((s->y * s->tr[0] - s->x * s->tr[1]) / s->tr[2] + s->tr[3])
-#define Y_TRANSFORM(s) \
+#define Y_TRANSFORM(s) \
((s->y * s->tr[4] - s->x * s->tr[5]) / s->tr[6] + s->tr[7])
-#define Z1_TRANSFORM(s) \
+#define Z1_TRANSFORM(s) \
((400 - ((s)->x >> 7) + ((s)->pressure << 10)) << 4)
-#define Z2_TRANSFORM(s) \
+#define Z2_TRANSFORM(s) \
((4000 + ((s)->y >> 7) - ((s)->pressure << 10)) << 4)
-#define AUX_VAL (700 << 4) /* +/- 3 at 12-bit */
-#define TEMP1_VAL (1264 << 4) /* +/- 5 at 12-bit */
-#define TEMP2_VAL (1531 << 4) /* +/- 5 at 12-bit */
+#define AUX_VAL (700 << 4) /* +/- 3 at 12-bit */
+#define TEMP1_VAL (1264 << 4) /* +/- 5 at 12-bit */
+#define TEMP2_VAL (1531 << 4) /* +/- 5 at 12-bit */
static uint16_t tsc2005_read(TSC2005State *s, int reg)
{
uint16_t ret;
switch (reg) {
- case 0x0: /* X */
+ case 0x0: /* X */
s->dav &= ~mode_regs[TSC_MODE_X];
return TSC_CUT_RESOLUTION(X_TRANSFORM(s), s->precision) +
(s->noise & 3);
- case 0x1: /* Y */
+ case 0x1: /* Y */
s->dav &= ~mode_regs[TSC_MODE_Y];
- s->noise ++;
+ s->noise++;
return TSC_CUT_RESOLUTION(Y_TRANSFORM(s), s->precision) ^
(s->noise & 3);
- case 0x2: /* Z1 */
+ case 0x2: /* Z1 */
s->dav &= 0xdfff;
return TSC_CUT_RESOLUTION(Z1_TRANSFORM(s), s->precision) -
(s->noise & 3);
- case 0x3: /* Z2 */
+ case 0x3: /* Z2 */
s->dav &= 0xefff;
return TSC_CUT_RESOLUTION(Z2_TRANSFORM(s), s->precision) |
(s->noise & 3);
- case 0x4: /* AUX */
+ case 0x4: /* AUX */
s->dav &= ~mode_regs[TSC_MODE_AUX];
return TSC_CUT_RESOLUTION(AUX_VAL, s->precision);
- case 0x5: /* TEMP1 */
+ case 0x5: /* TEMP1 */
s->dav &= ~mode_regs[TSC_MODE_TEMP1];
return TSC_CUT_RESOLUTION(TEMP1_VAL, s->precision) -
(s->noise & 5);
- case 0x6: /* TEMP2 */
+ case 0x6: /* TEMP2 */
s->dav &= 0xdfff;
s->dav &= ~mode_regs[TSC_MODE_TEMP2];
return TSC_CUT_RESOLUTION(TEMP2_VAL, s->precision) ^
(s->noise & 3);
- case 0x7: /* Status */
+ case 0x7: /* Status */
ret = s->dav | (s->reset << 7) | (s->pdst << 2) | 0x0;
s->dav &= ~(mode_regs[TSC_MODE_X_TEST] | mode_regs[TSC_MODE_Y_TEST] |
mode_regs[TSC_MODE_TS_TEST]);
s->reset = true;
return ret;
- case 0x8: /* AUX high threshold */
+ case 0x8: /* AUX high threshold */
return s->aux_thr[1];
- case 0x9: /* AUX low threshold */
+ case 0x9: /* AUX low threshold */
return s->aux_thr[0];
- case 0xa: /* TEMP high threshold */
+ case 0xa: /* TEMP high threshold */
return s->temp_thr[1];
- case 0xb: /* TEMP low threshold */
+ case 0xb: /* TEMP low threshold */
return s->temp_thr[0];
- case 0xc: /* CFR0 */
+ case 0xc: /* CFR0 */
return (s->pressure << 15) | ((!s->busy) << 14) |
- (s->nextprecision << 13) | s->timing[0];
- case 0xd: /* CFR1 */
+ (s->nextprecision << 13) | s->timing[0];
+ case 0xd: /* CFR1 */
return s->timing[1];
- case 0xe: /* CFR2 */
+ case 0xe: /* CFR2 */
return (s->pin_func << 14) | s->filter;
- case 0xf: /* Function select status */
+ case 0xf: /* Function select status */
return s->function >= 0 ? 1 << s->function : 0;
}
@@ -200,13 +200,14 @@ static void tsc2005_write(TSC2005State *s, int reg, uint16_t data)
s->temp_thr[0] = data;
break;
- case 0xc: /* CFR0 */
+ case 0xc: /* CFR0 */
s->host_mode = (data >> 15) != 0;
if (s->enabled != !(data & 0x4000)) {
s->enabled = !(data & 0x4000);
trace_tsc2005_sense(s->enabled ? "enabled" : "disabled");
- if (s->busy && !s->enabled)
+ if (s->busy && !s->enabled) {
timer_del(s->timer);
+ }
s->busy = s->busy && s->enabled;
}
s->nextprecision = (data >> 13) & 1;
@@ -216,10 +217,10 @@ static void tsc2005_write(TSC2005State *s, int reg, uint16_t data)
"tsc2005_write: illegal conversion clock setting\n");
}
break;
- case 0xd: /* CFR1 */
+ case 0xd: /* CFR1 */
s->timing[1] = data & 0xf07;
break;
- case 0xe: /* CFR2 */
+ case 0xe: /* CFR2 */
s->pin_func = (data >> 14) & 3;
s->filter = data & 0x3fff;
break;
@@ -258,10 +259,12 @@ static void tsc2005_pin_update(TSC2005State *s)
switch (s->nextfunction) {
case TSC_MODE_XYZ_SCAN:
case TSC_MODE_XY_SCAN:
- if (!s->host_mode && s->dav)
+ if (!s->host_mode && s->dav) {
s->enabled = false;
- if (!s->pressure)
+ }
+ if (!s->pressure) {
return;
+ }
/* Fall through */
case TSC_MODE_AUX_SCAN:
break;
@@ -269,8 +272,9 @@ static void tsc2005_pin_update(TSC2005State *s)
case TSC_MODE_X:
case TSC_MODE_Y:
case TSC_MODE_Z:
- if (!s->pressure)
+ if (!s->pressure) {
return;
+ }
/* Fall through */
case TSC_MODE_AUX:
case TSC_MODE_TEMP1:
@@ -278,8 +282,9 @@ static void tsc2005_pin_update(TSC2005State *s)
case TSC_MODE_X_TEST:
case TSC_MODE_Y_TEST:
case TSC_MODE_TS_TEST:
- if (s->dav)
+ if (s->dav) {
s->enabled = false;
+ }
break;
case TSC_MODE_RESERVED:
@@ -290,13 +295,14 @@ static void tsc2005_pin_update(TSC2005State *s)
return;
}
- if (!s->enabled || s->busy)
+ if (!s->enabled || s->busy) {
return;
+ }
s->busy = true;
s->precision = s->nextprecision;
s->function = s->nextfunction;
- s->pdst = !s->pnd0; /* Synchronised on internal clock */
+ s->pdst = !s->pnd0; /* Synchronised on internal clock */
expires = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
(NANOSECONDS_PER_SECOND >> 7);
timer_mod(s->timer, expires);
@@ -331,7 +337,7 @@ static uint8_t tsc2005_txrx_word(void *opaque, uint8_t value)
TSC2005State *s = opaque;
uint32_t ret = 0;
- switch (s->state ++) {
+ switch (s->state++) {
case 0:
if (value & 0x80) {
/* Command */
@@ -343,8 +349,9 @@ static uint8_t tsc2005_txrx_word(void *opaque, uint8_t value)
if (s->enabled != !(value & 1)) {
s->enabled = !(value & 1);
trace_tsc2005_sense(s->enabled ? "enabled" : "disabled");
- if (s->busy && !s->enabled)
+ if (s->busy && !s->enabled) {
timer_del(s->timer);
+ }
s->busy = s->busy && s->enabled;
}
tsc2005_pin_update(s);
@@ -368,10 +375,11 @@ static uint8_t tsc2005_txrx_word(void *opaque, uint8_t value)
break;
case 1:
- if (s->command)
+ if (s->command) {
ret = (s->data >> 8) & 0xff;
- else
+ } else {
s->data |= value << 8;
+ }
break;
case 2:
@@ -412,8 +420,9 @@ static void tsc2005_timer_tick(void *opaque)
/* Timer ticked -- a set of conversions has been finished. */
- if (!s->busy)
+ if (!s->busy) {
return;
+ }
s->busy = false;
s->dav |= mode_regs[function];
@@ -438,8 +447,9 @@ static void tsc2005_touchscreen_event(void *opaque,
* signaling TS events immediately, but for now we simulate
* the first conversion delay for sake of correctness.
*/
- if (p != s->pressure)
+ if (p != s->pressure) {
tsc2005_pin_update(s);
+ }
}
static int tsc2005_post_load(void *opaque, int version_id)
--
2.34.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* [PULL 08/42] docs/system: Remove ADC from raspi documentation
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Rayhan Faizel <rayhan.faizel@gmail.com>
None of the RPi boards has an on-board ADC; on real hardware, an external ADC
chip is required to operate on analog signals.
Signed-off-by: Rayhan Faizel <rayhan.faizel@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240512085716.222326-1-rayhan.faizel@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/raspi.rst | 1 -
1 file changed, 1 deletion(-)
diff --git a/docs/system/arm/raspi.rst b/docs/system/arm/raspi.rst
index fbec1da6a1e..44eec3f1c33 100644
--- a/docs/system/arm/raspi.rst
+++ b/docs/system/arm/raspi.rst
@@ -40,7 +40,6 @@ Implemented devices
Missing devices
---------------
- * Analog to Digital Converter (ADC)
* Pulse Width Modulation (PWM)
* PCIE Root Port (raspi4b)
* GENET Ethernet Controller (raspi4b)
--
2.34.1
* [PULL 09/42] target/arm: Use PLD, PLDW, PLI not NOP for t32
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
This fixes a bug: neither PLI nor PLDW is present in ARMv6T2;
they are introduced with ARMv7 and ARMv7MP respectively.
For clarity, do not use NOP for PLD.
Note that there is no PLDW (literal). Architecturally in the
T1 encoding of "PLD (literal)" bit 5 is "(0)", which means
that it should be zero and if it is not then the behaviour
is CONSTRAINED UNPREDICTABLE (might UNDEF, NOP, or ignore the
value of the bit).
In our implementation we have patterns for both:
+ PLD 1111 1000 -001 1111 1111 ------------ # (literal)
+ PLD 1111 1000 -011 1111 1111 ------------ # (literal)
and so we effectively ignore the value of bit 5. (This is a
permitted option for this CONSTRAINED UNPREDICTABLE.) This isn't a
behaviour change in this commit, since we previously had NOP lines
for both those patterns.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240524232121.284515-3-richard.henderson@linaro.org
[PMM: adjusted commit message to note that PLD (lit) T1 bit 5
being 1 is an UNPREDICTABLE case.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/t32.decode | 25 ++++++++++++-------------
target/arm/tcg/translate.c | 4 ++--
2 files changed, 14 insertions(+), 15 deletions(-)
diff --git a/target/arm/tcg/t32.decode b/target/arm/tcg/t32.decode
index f21ad0167ab..d327178829d 100644
--- a/target/arm/tcg/t32.decode
+++ b/target/arm/tcg/t32.decode
@@ -458,41 +458,41 @@ STR_ri 1111 1000 1100 .... .... ............ @ldst_ri_pos
# Note that Load, unsigned (literal) overlaps all other load encodings.
{
{
- NOP 1111 1000 -001 1111 1111 ------------ # PLD
+ PLD 1111 1000 -001 1111 1111 ------------ # (literal)
LDRB_ri 1111 1000 .001 1111 .... ............ @ldst_ri_lit
}
{
- NOP 1111 1000 1001 ---- 1111 ------------ # PLD
+ PLD 1111 1000 1001 ---- 1111 ------------ # (immediate T1)
LDRB_ri 1111 1000 1001 .... .... ............ @ldst_ri_pos
}
LDRB_ri 1111 1000 0001 .... .... 1..1 ........ @ldst_ri_idx
{
- NOP 1111 1000 0001 ---- 1111 1100 -------- # PLD
+ PLD 1111 1000 0001 ---- 1111 1100 -------- # (immediate T2)
LDRB_ri 1111 1000 0001 .... .... 1100 ........ @ldst_ri_neg
}
LDRBT_ri 1111 1000 0001 .... .... 1110 ........ @ldst_ri_unp
{
- NOP 1111 1000 0001 ---- 1111 000000 -- ---- # PLD
+ PLD 1111 1000 0001 ---- 1111 000000 -- ---- # (register)
LDRB_rr 1111 1000 0001 .... .... 000000 .. .... @ldst_rr
}
}
{
{
- NOP 1111 1000 -011 1111 1111 ------------ # PLD
+ PLD 1111 1000 -011 1111 1111 ------------ # (literal)
LDRH_ri 1111 1000 .011 1111 .... ............ @ldst_ri_lit
}
{
- NOP 1111 1000 1011 ---- 1111 ------------ # PLDW
+ PLDW 1111 1000 1011 ---- 1111 ------------ # (immediate T1)
LDRH_ri 1111 1000 1011 .... .... ............ @ldst_ri_pos
}
LDRH_ri 1111 1000 0011 .... .... 1..1 ........ @ldst_ri_idx
{
- NOP 1111 1000 0011 ---- 1111 1100 -------- # PLDW
+ PLDW 1111 1000 0011 ---- 1111 1100 -------- # (immediate T2)
LDRH_ri 1111 1000 0011 .... .... 1100 ........ @ldst_ri_neg
}
LDRHT_ri 1111 1000 0011 .... .... 1110 ........ @ldst_ri_unp
{
- NOP 1111 1000 0011 ---- 1111 000000 -- ---- # PLDW
+ PLDW 1111 1000 0011 ---- 1111 000000 -- ---- # (register)
LDRH_rr 1111 1000 0011 .... .... 000000 .. .... @ldst_rr
}
}
@@ -504,24 +504,23 @@ STR_ri 1111 1000 1100 .... .... ............ @ldst_ri_pos
LDRT_ri 1111 1000 0101 .... .... 1110 ........ @ldst_ri_unp
LDR_rr 1111 1000 0101 .... .... 000000 .. .... @ldst_rr
}
-# NOPs here are PLI.
{
{
- NOP 1111 1001 -001 1111 1111 ------------
+ PLI 1111 1001 -001 1111 1111 ------------ # (literal T3)
LDRSB_ri 1111 1001 .001 1111 .... ............ @ldst_ri_lit
}
{
- NOP 1111 1001 1001 ---- 1111 ------------
+ PLI 1111 1001 1001 ---- 1111 ------------ # (immediate T1)
LDRSB_ri 1111 1001 1001 .... .... ............ @ldst_ri_pos
}
LDRSB_ri 1111 1001 0001 .... .... 1..1 ........ @ldst_ri_idx
{
- NOP 1111 1001 0001 ---- 1111 1100 --------
+ PLI 1111 1001 0001 ---- 1111 1100 -------- # (immediate T2)
LDRSB_ri 1111 1001 0001 .... .... 1100 ........ @ldst_ri_neg
}
LDRSBT_ri 1111 1001 0001 .... .... 1110 ........ @ldst_ri_unp
{
- NOP 1111 1001 0001 ---- 1111 000000 -- ----
+ PLI 1111 1001 0001 ---- 1111 000000 -- ---- # (register)
LDRSB_rr 1111 1001 0001 .... .... 000000 .. .... @ldst_rr
}
}
diff --git a/target/arm/tcg/translate.c b/target/arm/tcg/translate.c
index d605e10f110..187eacffd96 100644
--- a/target/arm/tcg/translate.c
+++ b/target/arm/tcg/translate.c
@@ -8765,12 +8765,12 @@ static bool trans_PLD(DisasContext *s, arg_PLD *a)
return ENABLE_ARCH_5TE;
}
-static bool trans_PLDW(DisasContext *s, arg_PLD *a)
+static bool trans_PLDW(DisasContext *s, arg_PLDW *a)
{
return arm_dc_feature(s, ARM_FEATURE_V7MP);
}
-static bool trans_PLI(DisasContext *s, arg_PLD *a)
+static bool trans_PLI(DisasContext *s, arg_PLI *a)
{
return ENABLE_ARCH_7;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* [PULL 10/42] target/arm: Zero-extend writeback for fp16 FCVTZS (scalar, integer)
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (8 preceding siblings ...)
2024-05-28 14:07 ` [PULL 09/42] target/arm: Use PLD, PLDW, PLI not NOP for t32 Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 11/42] target/arm: Fix decode of FMOV (hp) vs MOVI Peter Maydell
` (32 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Fixes RISU mismatch for "fcvtzs h31, h0, #14".
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240524232121.284515-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 4126aaa27e6..d97acdbaf9a 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -8707,6 +8707,9 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
read_vec_element_i32(s, tcg_op, rn, pass, size);
fn(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
if (is_scalar) {
+ if (size == MO_16 && !is_u) {
+ tcg_gen_ext16u_i32(tcg_op, tcg_op);
+ }
write_fp_sreg(s, rd, tcg_op);
} else {
write_vec_element_i32(s, tcg_op, rd, pass, size);
--
2.34.1
* [PULL 11/42] target/arm: Fix decode of FMOV (hp) vs MOVI
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
The decode of FMOV (vector, immediate, half-precision) vs
the invalid cases of MOVI is incorrect.
Fixes RISU mismatch for invalid insn 0x2f01fd31.
Fixes: 70b4e6a4457 ("arm/translate-a64: add FP16 FMOV to simd_mod_imm")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240524232121.284515-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.c | 24 ++++++++++++++----------
1 file changed, 14 insertions(+), 10 deletions(-)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index d97acdbaf9a..5455ae36850 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -7904,27 +7904,31 @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
bool is_q = extract32(insn, 30, 1);
uint64_t imm = 0;
- if (o2 != 0 || ((cmode == 0xf) && is_neg && !is_q)) {
- /* Check for FMOV (vector, immediate) - half-precision */
- if (!(dc_isar_feature(aa64_fp16, s) && o2 && cmode == 0xf)) {
+ if (o2) {
+ if (cmode != 0xf || is_neg) {
unallocated_encoding(s);
return;
}
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- if (cmode == 15 && o2 && !is_neg) {
/* FMOV (vector, immediate) - half-precision */
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ unallocated_encoding(s);
+ return;
+ }
imm = vfp_expand_imm(MO_16, abcdefgh);
/* now duplicate across the lanes */
imm = dup_const(MO_16, imm);
} else {
+ if (cmode == 0xf && is_neg && !is_q) {
+ unallocated_encoding(s);
+ return;
+ }
imm = asimd_imm_const(abcdefgh, cmode, is_neg);
}
+ if (!fp_access_check(s)) {
+ return;
+ }
+
if (!((cmode & 0x9) == 0x1 || (cmode & 0xd) == 0x9)) {
/* MOVI or MVNI, with MVNI negation handled above. */
tcg_gen_gvec_dup_imm(MO_64, vec_full_reg_offset(s, rd), is_q ? 16 : 8,
--
2.34.1
* [PULL 12/42] target/arm: Verify sz=0 for Advanced SIMD scalar pairwise (fp16)
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
All of these insns have "if sz == '1' then UNDEFINED" in their pseudocode.
Fixes a RISU miscompare for invalid insn 0x5ef0c87a.
Fixes: 5c36d89567c ("arm/translate-a64: add all FP16 ops in simd_scalar_pairwise")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240524232121.284515-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 5455ae36850..0bdddb8517a 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -8006,7 +8006,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
case 0x2f: /* FMINP */
/* FP op, size[0] is 32 or 64 bit*/
if (!u) {
- if (!dc_isar_feature(aa64_fp16, s)) {
+ if ((size & 1) || !dc_isar_feature(aa64_fp16, s)) {
unallocated_encoding(s);
return;
} else {
--
2.34.1
* [PULL 13/42] target/arm: Split out gengvec.c
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate.h | 5 +
target/arm/tcg/gengvec.c | 1612 ++++++++++++++++++++++++++++++++++++
target/arm/tcg/translate.c | 1588 -----------------------------------
target/arm/tcg/meson.build | 1 +
4 files changed, 1618 insertions(+), 1588 deletions(-)
create mode 100644 target/arm/tcg/gengvec.c
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index dc66ff21908..80e85096a83 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -445,6 +445,11 @@ void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
int64_t shift, uint32_t opr_sz, uint32_t max_sz);
+void gen_srshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh);
+void gen_srshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh);
+void gen_urshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh);
+void gen_urshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh);
+
void gen_gvec_srshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
int64_t shift, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_urshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
new file mode 100644
index 00000000000..7a1856253ff
--- /dev/null
+++ b/target/arm/tcg/gengvec.c
@@ -0,0 +1,1612 @@
+/*
+ * ARM generic vector expansion
+ *
+ * Copyright (c) 2003 Fabrice Bellard
+ * Copyright (c) 2005-2007 CodeSourcery
+ * Copyright (c) 2007 OpenedHand, Ltd.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "translate.h"
+
+
+static void gen_gvec_fn3_qc(uint32_t rd_ofs, uint32_t rn_ofs, uint32_t rm_ofs,
+ uint32_t opr_sz, uint32_t max_sz,
+ gen_helper_gvec_3_ptr *fn)
+{
+ TCGv_ptr qc_ptr = tcg_temp_new_ptr();
+
+ tcg_gen_addi_ptr(qc_ptr, tcg_env, offsetof(CPUARMState, vfp.qc));
+ tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, qc_ptr,
+ opr_sz, max_sz, 0, fn);
+}
+
+void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3_ptr * const fns[2] = {
+ gen_helper_gvec_qrdmlah_s16, gen_helper_gvec_qrdmlah_s32
+ };
+ tcg_debug_assert(vece >= 1 && vece <= 2);
+ gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
+}
+
+void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3_ptr * const fns[2] = {
+ gen_helper_gvec_qrdmlsh_s16, gen_helper_gvec_qrdmlsh_s32
+ };
+ tcg_debug_assert(vece >= 1 && vece <= 2);
+ gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
+}
+
+#define GEN_CMP0(NAME, COND) \
+ void NAME(unsigned vece, uint32_t d, uint32_t m, \
+ uint32_t opr_sz, uint32_t max_sz) \
+ { tcg_gen_gvec_cmpi(COND, vece, d, m, 0, opr_sz, max_sz); }
+
+GEN_CMP0(gen_gvec_ceq0, TCG_COND_EQ)
+GEN_CMP0(gen_gvec_cle0, TCG_COND_LE)
+GEN_CMP0(gen_gvec_cge0, TCG_COND_GE)
+GEN_CMP0(gen_gvec_clt0, TCG_COND_LT)
+GEN_CMP0(gen_gvec_cgt0, TCG_COND_GT)
+
+#undef GEN_CMP0
+
+static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_vec_sar8i_i64(a, a, shift);
+ tcg_gen_vec_add8_i64(d, d, a);
+}
+
+static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_vec_sar16i_i64(a, a, shift);
+ tcg_gen_vec_add16_i64(d, d, a);
+}
+
+static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+ tcg_gen_sari_i32(a, a, shift);
+ tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_sari_i64(a, a, shift);
+ tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ tcg_gen_sari_vec(vece, a, a, sh);
+ tcg_gen_add_vec(vece, d, d, a);
+}
+
+void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sari_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen2i ops[4] = {
+ { .fni8 = gen_ssra8_i64,
+ .fniv = gen_ssra_vec,
+ .fno = gen_helper_gvec_ssra_b,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_ssra16_i64,
+ .fniv = gen_ssra_vec,
+ .fno = gen_helper_gvec_ssra_h,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_ssra32_i32,
+ .fniv = gen_ssra_vec,
+ .fno = gen_helper_gvec_ssra_s,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_ssra64_i64,
+ .fniv = gen_ssra_vec,
+ .fno = gen_helper_gvec_ssra_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize]. */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ /*
+ * Shifts larger than the element size are architecturally valid.
+ * Signed results in all sign bits.
+ */
+ shift = MIN(shift, (8 << vece) - 1);
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+}
+
+static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_vec_shr8i_i64(a, a, shift);
+ tcg_gen_vec_add8_i64(d, d, a);
+}
+
+static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_vec_shr16i_i64(a, a, shift);
+ tcg_gen_vec_add16_i64(d, d, a);
+}
+
+static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+ tcg_gen_shri_i32(a, a, shift);
+ tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_shri_i64(a, a, shift);
+ tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ tcg_gen_shri_vec(vece, a, a, sh);
+ tcg_gen_add_vec(vece, d, d, a);
+}
+
+void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen2i ops[4] = {
+ { .fni8 = gen_usra8_i64,
+ .fniv = gen_usra_vec,
+ .fno = gen_helper_gvec_usra_b,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8, },
+ { .fni8 = gen_usra16_i64,
+ .fniv = gen_usra_vec,
+ .fno = gen_helper_gvec_usra_h,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16, },
+ { .fni4 = gen_usra32_i32,
+ .fniv = gen_usra_vec,
+ .fno = gen_helper_gvec_usra_s,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32, },
+ { .fni8 = gen_usra64_i64,
+ .fniv = gen_usra_vec,
+ .fno = gen_helper_gvec_usra_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_64, },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize]. */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ /*
+ * Shifts larger than the element size are architecturally valid.
+ * An unsigned shift of that size yields all zeros as the input to
+ * the accumulate, so the operation is a nop.
+ */
+ if (shift < (8 << vece)) {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+ } else {
+ /* Nop, but we do need to clear the tail. */
+ tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
+ }
+}
+
+/*
+ * Shift one less than the requested amount, and the low bit is
+ * the rounding bit. For the 8 and 16-bit operations, because we
+ * mask the low bit, we can perform a normal integer shift instead
+ * of a vector shift.
+ */
+static void gen_srshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shri_i64(t, a, sh - 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
+ tcg_gen_vec_sar8i_i64(d, a, sh);
+ tcg_gen_vec_add8_i64(d, d, t);
+}
+
+static void gen_srshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shri_i64(t, a, sh - 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
+ tcg_gen_vec_sar16i_i64(d, a, sh);
+ tcg_gen_vec_add16_i64(d, d, t);
+}
+
+void gen_srshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
+{
+ TCGv_i32 t;
+
+ /* Handle shift by the input size for the benefit of trans_SRSHR_ri */
+ if (sh == 32) {
+ tcg_gen_movi_i32(d, 0);
+ return;
+ }
+ t = tcg_temp_new_i32();
+ tcg_gen_extract_i32(t, a, sh - 1, 1);
+ tcg_gen_sari_i32(d, a, sh);
+ tcg_gen_add_i32(d, d, t);
+}
+
+void gen_srshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_extract_i64(t, a, sh - 1, 1);
+ tcg_gen_sari_i64(d, a, sh);
+ tcg_gen_add_i64(d, d, t);
+}
+
+static void gen_srshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+ TCGv_vec ones = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_shri_vec(vece, t, a, sh - 1);
+ tcg_gen_dupi_vec(vece, ones, 1);
+ tcg_gen_and_vec(vece, t, t, ones);
+ tcg_gen_sari_vec(vece, d, a, sh);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_srshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen2i ops[4] = {
+ { .fni8 = gen_srshr8_i64,
+ .fniv = gen_srshr_vec,
+ .fno = gen_helper_gvec_srshr_b,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_srshr16_i64,
+ .fniv = gen_srshr_vec,
+ .fno = gen_helper_gvec_srshr_h,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_srshr32_i32,
+ .fniv = gen_srshr_vec,
+ .fno = gen_helper_gvec_srshr_s,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_srshr64_i64,
+ .fniv = gen_srshr_vec,
+ .fno = gen_helper_gvec_srshr_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize] */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ if (shift == (8 << vece)) {
+ /*
+ * Shifts larger than the element size are architecturally valid.
+ * Signed results in all sign bits. With rounding, this produces
+ * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
+ * I.e. always zero.
+ */
+ tcg_gen_gvec_dup_imm(vece, rd_ofs, opr_sz, max_sz, 0);
+ } else {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+ }
+}
+
+static void gen_srsra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ gen_srshr8_i64(t, a, sh);
+ tcg_gen_vec_add8_i64(d, d, t);
+}
+
+static void gen_srsra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ gen_srshr16_i64(t, a, sh);
+ tcg_gen_vec_add16_i64(d, d, t);
+}
+
+static void gen_srsra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ gen_srshr32_i32(t, a, sh);
+ tcg_gen_add_i32(d, d, t);
+}
+
+static void gen_srsra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ gen_srshr64_i64(t, a, sh);
+ tcg_gen_add_i64(d, d, t);
+}
+
+static void gen_srsra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ gen_srshr_vec(vece, t, a, sh);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_srsra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen2i ops[4] = {
+ { .fni8 = gen_srsra8_i64,
+ .fniv = gen_srsra_vec,
+ .fno = gen_helper_gvec_srsra_b,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fni8 = gen_srsra16_i64,
+ .fniv = gen_srsra_vec,
+ .fno = gen_helper_gvec_srsra_h,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_srsra32_i32,
+ .fniv = gen_srsra_vec,
+ .fno = gen_helper_gvec_srsra_s,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_srsra64_i64,
+ .fniv = gen_srsra_vec,
+ .fno = gen_helper_gvec_srsra_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize] */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ /*
+ * Shifts larger than the element size are architecturally valid.
+ * Signed results in all sign bits. With rounding, this produces
+ * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
+ * I.e. always zero. With accumulation, this leaves D unchanged.
+ */
+ if (shift == (8 << vece)) {
+ /* Nop, but we do need to clear the tail. */
+ tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
+ } else {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+ }
+}
+
+static void gen_urshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shri_i64(t, a, sh - 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
+ tcg_gen_vec_shr8i_i64(d, a, sh);
+ tcg_gen_vec_add8_i64(d, d, t);
+}
+
+static void gen_urshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shri_i64(t, a, sh - 1);
+ tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
+ tcg_gen_vec_shr16i_i64(d, a, sh);
+ tcg_gen_vec_add16_i64(d, d, t);
+}
+
+void gen_urshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
+{
+ TCGv_i32 t;
+
+ /* Handle shift by the input size for the benefit of trans_URSHR_ri */
+ if (sh == 32) {
+ tcg_gen_extract_i32(d, a, sh - 1, 1);
+ return;
+ }
+ t = tcg_temp_new_i32();
+ tcg_gen_extract_i32(t, a, sh - 1, 1);
+ tcg_gen_shri_i32(d, a, sh);
+ tcg_gen_add_i32(d, d, t);
+}
+
+void gen_urshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_extract_i64(t, a, sh - 1, 1);
+ tcg_gen_shri_i64(d, a, sh);
+ tcg_gen_add_i64(d, d, t);
+}
+
+static void gen_urshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t shift)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+ TCGv_vec ones = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_shri_vec(vece, t, a, shift - 1);
+ tcg_gen_dupi_vec(vece, ones, 1);
+ tcg_gen_and_vec(vece, t, t, ones);
+ tcg_gen_shri_vec(vece, d, a, shift);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_urshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen2i ops[4] = {
+ { .fni8 = gen_urshr8_i64,
+ .fniv = gen_urshr_vec,
+ .fno = gen_helper_gvec_urshr_b,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_urshr16_i64,
+ .fniv = gen_urshr_vec,
+ .fno = gen_helper_gvec_urshr_h,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_urshr32_i32,
+ .fniv = gen_urshr_vec,
+ .fno = gen_helper_gvec_urshr_s,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_urshr64_i64,
+ .fniv = gen_urshr_vec,
+ .fno = gen_helper_gvec_urshr_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize] */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ if (shift == (8 << vece)) {
+ /*
+ * Shifts larger than the element size are architecturally valid.
+ * Unsigned results in zero. With rounding, this produces a
+ * copy of the most significant bit.
+ */
+ tcg_gen_gvec_shri(vece, rd_ofs, rm_ofs, shift - 1, opr_sz, max_sz);
+ } else {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+ }
+}
+
+static void gen_ursra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ if (sh == 8) {
+ tcg_gen_vec_shr8i_i64(t, a, 7);
+ } else {
+ gen_urshr8_i64(t, a, sh);
+ }
+ tcg_gen_vec_add8_i64(d, d, t);
+}
+
+static void gen_ursra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ if (sh == 16) {
+ tcg_gen_vec_shr16i_i64(t, a, 15);
+ } else {
+ gen_urshr16_i64(t, a, sh);
+ }
+ tcg_gen_vec_add16_i64(d, d, t);
+}
+
+static void gen_ursra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ if (sh == 32) {
+ tcg_gen_shri_i32(t, a, 31);
+ } else {
+ gen_urshr32_i32(t, a, sh);
+ }
+ tcg_gen_add_i32(d, d, t);
+}
+
+static void gen_ursra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ if (sh == 64) {
+ tcg_gen_shri_i64(t, a, 63);
+ } else {
+ gen_urshr64_i64(t, a, sh);
+ }
+ tcg_gen_add_i64(d, d, t);
+}
+
+static void gen_ursra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ if (sh == (8 << vece)) {
+ tcg_gen_shri_vec(vece, t, a, sh - 1);
+ } else {
+ gen_urshr_vec(vece, t, a, sh);
+ }
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_ursra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_shri_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen2i ops[4] = {
+ { .fni8 = gen_ursra8_i64,
+ .fniv = gen_ursra_vec,
+ .fno = gen_helper_gvec_ursra_b,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fni8 = gen_ursra16_i64,
+ .fniv = gen_ursra_vec,
+ .fno = gen_helper_gvec_ursra_h,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_ursra32_i32,
+ .fniv = gen_ursra_vec,
+ .fno = gen_helper_gvec_ursra_s,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_ursra64_i64,
+ .fniv = gen_ursra_vec,
+ .fno = gen_helper_gvec_ursra_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize] */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+}
+
+static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ uint64_t mask = dup_const(MO_8, 0xff >> shift);
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shri_i64(t, a, shift);
+ tcg_gen_andi_i64(t, t, mask);
+ tcg_gen_andi_i64(d, d, ~mask);
+ tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ uint64_t mask = dup_const(MO_16, 0xffff >> shift);
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shri_i64(t, a, shift);
+ tcg_gen_andi_i64(t, t, mask);
+ tcg_gen_andi_i64(d, d, ~mask);
+ tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+ tcg_gen_shri_i32(a, a, shift);
+ tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
+}
+
+static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_shri_i64(a, a, shift);
+ tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
+}
+
+static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
+ tcg_gen_shri_vec(vece, t, a, sh);
+ tcg_gen_and_vec(vece, d, d, m);
+ tcg_gen_or_vec(vece, d, d, t);
+}
+
+void gen_gvec_sri(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = { INDEX_op_shri_vec, 0 };
+ const GVecGen2i ops[4] = {
+ { .fni8 = gen_shr8_ins_i64,
+ .fniv = gen_shr_ins_vec,
+ .fno = gen_helper_gvec_sri_b,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_shr16_ins_i64,
+ .fniv = gen_shr_ins_vec,
+ .fno = gen_helper_gvec_sri_h,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_shr32_ins_i32,
+ .fniv = gen_shr_ins_vec,
+ .fno = gen_helper_gvec_sri_s,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_shr64_ins_i64,
+ .fniv = gen_shr_ins_vec,
+ .fno = gen_helper_gvec_sri_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [1..esize]. */
+ tcg_debug_assert(shift > 0);
+ tcg_debug_assert(shift <= (8 << vece));
+
+ /* Shift of esize leaves destination unchanged. */
+ if (shift < (8 << vece)) {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+ } else {
+ /* Nop, but we do need to clear the tail. */
+ tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
+ }
+}
+
+static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ uint64_t mask = dup_const(MO_8, 0xff << shift);
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shli_i64(t, a, shift);
+ tcg_gen_andi_i64(t, t, mask);
+ tcg_gen_andi_i64(d, d, ~mask);
+ tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ uint64_t mask = dup_const(MO_16, 0xffff << shift);
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_shli_i64(t, a, shift);
+ tcg_gen_andi_i64(t, t, mask);
+ tcg_gen_andi_i64(d, d, ~mask);
+ tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+ tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
+}
+
+static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+ tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
+}
+
+static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_shli_vec(vece, t, a, sh);
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
+ tcg_gen_and_vec(vece, d, d, m);
+ tcg_gen_or_vec(vece, d, d, t);
+}
+
+void gen_gvec_sli(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+ int64_t shift, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = { INDEX_op_shli_vec, 0 };
+ const GVecGen2i ops[4] = {
+ { .fni8 = gen_shl8_ins_i64,
+ .fniv = gen_shl_ins_vec,
+ .fno = gen_helper_gvec_sli_b,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni8 = gen_shl16_ins_i64,
+ .fniv = gen_shl_ins_vec,
+ .fno = gen_helper_gvec_sli_h,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_shl32_ins_i32,
+ .fniv = gen_shl_ins_vec,
+ .fno = gen_helper_gvec_sli_s,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_shl64_ins_i64,
+ .fniv = gen_shl_ins_vec,
+ .fno = gen_helper_gvec_sli_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+
+ /* tszimm encoding produces immediates in the range [0..esize-1]. */
+ tcg_debug_assert(shift >= 0);
+ tcg_debug_assert(shift < (8 << vece));
+
+ if (shift == 0) {
+ tcg_gen_gvec_mov(vece, rd_ofs, rm_ofs, opr_sz, max_sz);
+ } else {
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
+ }
+}
+
+static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u8(a, a, b);
+ gen_helper_neon_add_u8(d, d, a);
+}
+
+static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u8(a, a, b);
+ gen_helper_neon_sub_u8(d, d, a);
+}
+
+static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u16(a, a, b);
+ gen_helper_neon_add_u16(d, d, a);
+}
+
+static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u16(a, a, b);
+ gen_helper_neon_sub_u16(d, d, a);
+}
+
+static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ tcg_gen_mul_i32(a, a, b);
+ tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ tcg_gen_mul_i32(a, a, b);
+ tcg_gen_sub_i32(d, d, a);
+}
+
+static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ tcg_gen_mul_i64(a, a, b);
+ tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ tcg_gen_mul_i64(a, a, b);
+ tcg_gen_sub_i64(d, d, a);
+}
+
+static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ tcg_gen_mul_vec(vece, a, a, b);
+ tcg_gen_add_vec(vece, d, d, a);
+}
+
+static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ tcg_gen_mul_vec(vece, a, a, b);
+ tcg_gen_sub_vec(vece, d, d, a);
+}
+
+/*
+ * Note that while NEON does not support VMLA and VMLS as 64-bit ops,
+ * these tables are shared with AArch64 which does support them.
+ */
+void gen_gvec_mla(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_mul_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fni4 = gen_mla8_i32,
+ .fniv = gen_mla_vec,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni4 = gen_mla16_i32,
+ .fniv = gen_mla_vec,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_mla32_i32,
+ .fniv = gen_mla_vec,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_mla64_i64,
+ .fniv = gen_mla_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+void gen_gvec_mls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_mul_vec, INDEX_op_sub_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fni4 = gen_mls8_i32,
+ .fniv = gen_mls_vec,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni4 = gen_mls16_i32,
+ .fniv = gen_mls_vec,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_mls32_i32,
+ .fniv = gen_mls_vec,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_mls64_i64,
+ .fniv = gen_mls_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+/* CMTST : test is "if ((X & Y) != 0)". */
+static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ tcg_gen_and_i32(d, a, b);
+ tcg_gen_negsetcond_i32(TCG_COND_NE, d, d, tcg_constant_i32(0));
+}
+
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ tcg_gen_and_i64(d, a, b);
+ tcg_gen_negsetcond_i64(TCG_COND_NE, d, d, tcg_constant_i64(0));
+}
+
+static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ tcg_gen_and_vec(vece, d, a, b);
+ tcg_gen_dupi_vec(vece, a, 0);
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
+}
+
+void gen_gvec_cmtst(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = { INDEX_op_cmp_vec, 0 };
+ static const GVecGen3 ops[4] = {
+ { .fni4 = gen_helper_neon_tst_u8,
+ .fniv = gen_cmtst_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fni4 = gen_helper_neon_tst_u16,
+ .fniv = gen_cmtst_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_cmtst_i32,
+ .fniv = gen_cmtst_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_cmtst_i64,
+ .fniv = gen_cmtst_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+void gen_ushl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
+{
+ TCGv_i32 lval = tcg_temp_new_i32();
+ TCGv_i32 rval = tcg_temp_new_i32();
+ TCGv_i32 lsh = tcg_temp_new_i32();
+ TCGv_i32 rsh = tcg_temp_new_i32();
+ TCGv_i32 zero = tcg_constant_i32(0);
+ TCGv_i32 max = tcg_constant_i32(32);
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_ext8s_i32(lsh, shift);
+ tcg_gen_neg_i32(rsh, lsh);
+ tcg_gen_shl_i32(lval, src, lsh);
+ tcg_gen_shr_i32(rval, src, rsh);
+ tcg_gen_movcond_i32(TCG_COND_LTU, dst, lsh, max, lval, zero);
+ tcg_gen_movcond_i32(TCG_COND_LTU, dst, rsh, max, rval, dst);
+}
+
+void gen_ushl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
+{
+ TCGv_i64 lval = tcg_temp_new_i64();
+ TCGv_i64 rval = tcg_temp_new_i64();
+ TCGv_i64 lsh = tcg_temp_new_i64();
+ TCGv_i64 rsh = tcg_temp_new_i64();
+ TCGv_i64 zero = tcg_constant_i64(0);
+ TCGv_i64 max = tcg_constant_i64(64);
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_ext8s_i64(lsh, shift);
+ tcg_gen_neg_i64(rsh, lsh);
+ tcg_gen_shl_i64(lval, src, lsh);
+ tcg_gen_shr_i64(rval, src, rsh);
+ tcg_gen_movcond_i64(TCG_COND_LTU, dst, lsh, max, lval, zero);
+ tcg_gen_movcond_i64(TCG_COND_LTU, dst, rsh, max, rval, dst);
+}
+
+static void gen_ushl_vec(unsigned vece, TCGv_vec dst,
+ TCGv_vec src, TCGv_vec shift)
+{
+ TCGv_vec lval = tcg_temp_new_vec_matching(dst);
+ TCGv_vec rval = tcg_temp_new_vec_matching(dst);
+ TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
+ TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
+ TCGv_vec msk, max;
+
+ tcg_gen_neg_vec(vece, rsh, shift);
+ if (vece == MO_8) {
+ tcg_gen_mov_vec(lsh, shift);
+ } else {
+ msk = tcg_temp_new_vec_matching(dst);
+ tcg_gen_dupi_vec(vece, msk, 0xff);
+ tcg_gen_and_vec(vece, lsh, shift, msk);
+ tcg_gen_and_vec(vece, rsh, rsh, msk);
+ }
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_shlv_vec(vece, lval, src, lsh);
+ tcg_gen_shrv_vec(vece, rval, src, rsh);
+
+ max = tcg_temp_new_vec_matching(dst);
+ tcg_gen_dupi_vec(vece, max, 8 << vece);
+
+ /*
+ * The choice of LT (signed) and GEU (unsigned) is biased toward
+ * the instructions of the x86_64 host. For MO_8, the whole byte
+ * is significant so we must use an unsigned compare; otherwise we
+ * have already masked to a byte and so a signed compare works.
+ * Other tcg hosts have a full set of comparisons and do not care.
+ */
+ if (vece == MO_8) {
+ tcg_gen_cmp_vec(TCG_COND_GEU, vece, lsh, lsh, max);
+ tcg_gen_cmp_vec(TCG_COND_GEU, vece, rsh, rsh, max);
+ tcg_gen_andc_vec(vece, lval, lval, lsh);
+ tcg_gen_andc_vec(vece, rval, rval, rsh);
+ } else {
+ tcg_gen_cmp_vec(TCG_COND_LT, vece, lsh, lsh, max);
+ tcg_gen_cmp_vec(TCG_COND_LT, vece, rsh, rsh, max);
+ tcg_gen_and_vec(vece, lval, lval, lsh);
+ tcg_gen_and_vec(vece, rval, rval, rsh);
+ }
+ tcg_gen_or_vec(vece, dst, lval, rval);
+}
+
+void gen_gvec_ushl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_neg_vec, INDEX_op_shlv_vec,
+ INDEX_op_shrv_vec, INDEX_op_cmp_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fniv = gen_ushl_vec,
+ .fno = gen_helper_gvec_ushl_b,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fniv = gen_ushl_vec,
+ .fno = gen_helper_gvec_ushl_h,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_ushl_i32,
+ .fniv = gen_ushl_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_ushl_i64,
+ .fniv = gen_ushl_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+void gen_sshl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
+{
+ TCGv_i32 lval = tcg_temp_new_i32();
+ TCGv_i32 rval = tcg_temp_new_i32();
+ TCGv_i32 lsh = tcg_temp_new_i32();
+ TCGv_i32 rsh = tcg_temp_new_i32();
+ TCGv_i32 zero = tcg_constant_i32(0);
+ TCGv_i32 max = tcg_constant_i32(31);
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_ext8s_i32(lsh, shift);
+ tcg_gen_neg_i32(rsh, lsh);
+ tcg_gen_shl_i32(lval, src, lsh);
+ tcg_gen_umin_i32(rsh, rsh, max);
+ tcg_gen_sar_i32(rval, src, rsh);
+ tcg_gen_movcond_i32(TCG_COND_LEU, lval, lsh, max, lval, zero);
+ tcg_gen_movcond_i32(TCG_COND_LT, dst, lsh, zero, rval, lval);
+}
+
+void gen_sshl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
+{
+ TCGv_i64 lval = tcg_temp_new_i64();
+ TCGv_i64 rval = tcg_temp_new_i64();
+ TCGv_i64 lsh = tcg_temp_new_i64();
+ TCGv_i64 rsh = tcg_temp_new_i64();
+ TCGv_i64 zero = tcg_constant_i64(0);
+ TCGv_i64 max = tcg_constant_i64(63);
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_ext8s_i64(lsh, shift);
+ tcg_gen_neg_i64(rsh, lsh);
+ tcg_gen_shl_i64(lval, src, lsh);
+ tcg_gen_umin_i64(rsh, rsh, max);
+ tcg_gen_sar_i64(rval, src, rsh);
+ tcg_gen_movcond_i64(TCG_COND_LEU, lval, lsh, max, lval, zero);
+ tcg_gen_movcond_i64(TCG_COND_LT, dst, lsh, zero, rval, lval);
+}
+
+static void gen_sshl_vec(unsigned vece, TCGv_vec dst,
+ TCGv_vec src, TCGv_vec shift)
+{
+ TCGv_vec lval = tcg_temp_new_vec_matching(dst);
+ TCGv_vec rval = tcg_temp_new_vec_matching(dst);
+ TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
+ TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
+ TCGv_vec tmp = tcg_temp_new_vec_matching(dst);
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_neg_vec(vece, rsh, shift);
+ if (vece == MO_8) {
+ tcg_gen_mov_vec(lsh, shift);
+ } else {
+ tcg_gen_dupi_vec(vece, tmp, 0xff);
+ tcg_gen_and_vec(vece, lsh, shift, tmp);
+ tcg_gen_and_vec(vece, rsh, rsh, tmp);
+ }
+
+ /* Bound rsh so that an out-of-range right shift yields -1. */
+ tcg_gen_dupi_vec(vece, tmp, (8 << vece) - 1);
+ tcg_gen_umin_vec(vece, rsh, rsh, tmp);
+ tcg_gen_cmp_vec(TCG_COND_GT, vece, tmp, lsh, tmp);
+
+ tcg_gen_shlv_vec(vece, lval, src, lsh);
+ tcg_gen_sarv_vec(vece, rval, src, rsh);
+
+ /* Select in-bound left shift. */
+ tcg_gen_andc_vec(vece, lval, lval, tmp);
+
+ /* Select between left and right shift. */
+ if (vece == MO_8) {
+ tcg_gen_dupi_vec(vece, tmp, 0);
+ tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, rval, lval);
+ } else {
+ tcg_gen_dupi_vec(vece, tmp, 0x80);
+ tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, lval, rval);
+ }
+}
+
+void gen_gvec_sshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_neg_vec, INDEX_op_umin_vec, INDEX_op_shlv_vec,
+ INDEX_op_sarv_vec, INDEX_op_cmp_vec, INDEX_op_cmpsel_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fniv = gen_sshl_vec,
+ .fno = gen_helper_gvec_sshl_b,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fniv = gen_sshl_vec,
+ .fno = gen_helper_gvec_sshl_h,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_sshl_i32,
+ .fniv = gen_sshl_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_sshl_i64,
+ .fniv = gen_sshl_vec,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
+ TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
+ tcg_gen_add_vec(vece, x, a, b);
+ tcg_gen_usadd_vec(vece, t, a, b);
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
+ tcg_gen_or_vec(vece, sat, sat, x);
+}
+
+void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_usadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen4 ops[4] = {
+ { .fniv = gen_uqadd_vec,
+ .fno = gen_helper_gvec_uqadd_b,
+ .write_aofs = true,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fniv = gen_uqadd_vec,
+ .fno = gen_helper_gvec_uqadd_h,
+ .write_aofs = true,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fniv = gen_uqadd_vec,
+ .fno = gen_helper_gvec_uqadd_s,
+ .write_aofs = true,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fniv = gen_uqadd_vec,
+ .fno = gen_helper_gvec_uqadd_d,
+ .write_aofs = true,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
+ TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
+ tcg_gen_add_vec(vece, x, a, b);
+ tcg_gen_ssadd_vec(vece, t, a, b);
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
+ tcg_gen_or_vec(vece, sat, sat, x);
+}
+
+void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_ssadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
+ };
+ static const GVecGen4 ops[4] = {
+ { .fniv = gen_sqadd_vec,
+ .fno = gen_helper_gvec_sqadd_b,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_8 },
+ { .fniv = gen_sqadd_vec,
+ .fno = gen_helper_gvec_sqadd_h,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_16 },
+ { .fniv = gen_sqadd_vec,
+ .fno = gen_helper_gvec_sqadd_s,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_32 },
+ { .fniv = gen_sqadd_vec,
+ .fno = gen_helper_gvec_sqadd_d,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
+ TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
+ tcg_gen_sub_vec(vece, x, a, b);
+ tcg_gen_ussub_vec(vece, t, a, b);
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
+ tcg_gen_or_vec(vece, sat, sat, x);
+}
+
+void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_ussub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
+ };
+ static const GVecGen4 ops[4] = {
+ { .fniv = gen_uqsub_vec,
+ .fno = gen_helper_gvec_uqsub_b,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_8 },
+ { .fniv = gen_uqsub_vec,
+ .fno = gen_helper_gvec_uqsub_h,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_16 },
+ { .fniv = gen_uqsub_vec,
+ .fno = gen_helper_gvec_uqsub_s,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_32 },
+ { .fniv = gen_uqsub_vec,
+ .fno = gen_helper_gvec_uqsub_d,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
+ TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
+ tcg_gen_sub_vec(vece, x, a, b);
+ tcg_gen_sssub_vec(vece, t, a, b);
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
+ tcg_gen_or_vec(vece, sat, sat, x);
+}
+
+void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sssub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
+ };
+ static const GVecGen4 ops[4] = {
+ { .fniv = gen_sqsub_vec,
+ .fno = gen_helper_gvec_sqsub_b,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_8 },
+ { .fniv = gen_sqsub_vec,
+ .fno = gen_helper_gvec_sqsub_h,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_16 },
+ { .fniv = gen_sqsub_vec,
+ .fno = gen_helper_gvec_sqsub_s,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_32 },
+ { .fniv = gen_sqsub_vec,
+ .fno = gen_helper_gvec_sqsub_d,
+ .opt_opc = vecop_list,
+ .write_aofs = true,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
+ rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+static void gen_sabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ tcg_gen_sub_i32(t, a, b);
+ tcg_gen_sub_i32(d, b, a);
+ tcg_gen_movcond_i32(TCG_COND_LT, d, a, b, d, t);
+}
+
+static void gen_sabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_sub_i64(t, a, b);
+ tcg_gen_sub_i64(d, b, a);
+ tcg_gen_movcond_i64(TCG_COND_LT, d, a, b, d, t);
+}
+
+static void gen_sabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_smin_vec(vece, t, a, b);
+ tcg_gen_smax_vec(vece, d, a, b);
+ tcg_gen_sub_vec(vece, d, d, t);
+}
+
+void gen_gvec_sabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sub_vec, INDEX_op_smin_vec, INDEX_op_smax_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fniv = gen_sabd_vec,
+ .fno = gen_helper_gvec_sabd_b,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fniv = gen_sabd_vec,
+ .fno = gen_helper_gvec_sabd_h,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_sabd_i32,
+ .fniv = gen_sabd_vec,
+ .fno = gen_helper_gvec_sabd_s,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_sabd_i64,
+ .fniv = gen_sabd_vec,
+ .fno = gen_helper_gvec_sabd_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+static void gen_uabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+
+ tcg_gen_sub_i32(t, a, b);
+ tcg_gen_sub_i32(d, b, a);
+ tcg_gen_movcond_i32(TCG_COND_LTU, d, a, b, d, t);
+}
+
+static void gen_uabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+
+ tcg_gen_sub_i64(t, a, b);
+ tcg_gen_sub_i64(d, b, a);
+ tcg_gen_movcond_i64(TCG_COND_LTU, d, a, b, d, t);
+}
+
+static void gen_uabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+
+ tcg_gen_umin_vec(vece, t, a, b);
+ tcg_gen_umax_vec(vece, d, a, b);
+ tcg_gen_sub_vec(vece, d, d, t);
+}
+
+void gen_gvec_uabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sub_vec, INDEX_op_umin_vec, INDEX_op_umax_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fniv = gen_uabd_vec,
+ .fno = gen_helper_gvec_uabd_b,
+ .opt_opc = vecop_list,
+ .vece = MO_8 },
+ { .fniv = gen_uabd_vec,
+ .fno = gen_helper_gvec_uabd_h,
+ .opt_opc = vecop_list,
+ .vece = MO_16 },
+ { .fni4 = gen_uabd_i32,
+ .fniv = gen_uabd_vec,
+ .fno = gen_helper_gvec_uabd_s,
+ .opt_opc = vecop_list,
+ .vece = MO_32 },
+ { .fni8 = gen_uabd_i64,
+ .fniv = gen_uabd_vec,
+ .fno = gen_helper_gvec_uabd_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+static void gen_saba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+ gen_sabd_i32(t, a, b);
+ tcg_gen_add_i32(d, d, t);
+}
+
+static void gen_saba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+ gen_sabd_i64(t, a, b);
+ tcg_gen_add_i64(d, d, t);
+}
+
+static void gen_saba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+ gen_sabd_vec(vece, t, a, b);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_saba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sub_vec, INDEX_op_add_vec,
+ INDEX_op_smin_vec, INDEX_op_smax_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fniv = gen_saba_vec,
+ .fno = gen_helper_gvec_saba_b,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fniv = gen_saba_vec,
+ .fno = gen_helper_gvec_saba_h,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_saba_i32,
+ .fniv = gen_saba_vec,
+ .fno = gen_helper_gvec_saba_s,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_saba_i64,
+ .fniv = gen_saba_vec,
+ .fno = gen_helper_gvec_saba_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
+
+static void gen_uaba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ TCGv_i32 t = tcg_temp_new_i32();
+ gen_uabd_i32(t, a, b);
+ tcg_gen_add_i32(d, d, t);
+}
+
+static void gen_uaba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+ gen_uabd_i64(t, a, b);
+ tcg_gen_add_i64(d, d, t);
+}
+
+static void gen_uaba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
+ gen_uabd_vec(vece, t, a, b);
+ tcg_gen_add_vec(vece, d, d, t);
+}
+
+void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = {
+ INDEX_op_sub_vec, INDEX_op_add_vec,
+ INDEX_op_umin_vec, INDEX_op_umax_vec, 0
+ };
+ static const GVecGen3 ops[4] = {
+ { .fniv = gen_uaba_vec,
+ .fno = gen_helper_gvec_uaba_b,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fniv = gen_uaba_vec,
+ .fno = gen_helper_gvec_uaba_h,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_uaba_i32,
+ .fniv = gen_uaba_vec,
+ .fno = gen_helper_gvec_uaba_s,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_uaba_i64,
+ .fniv = gen_uaba_vec,
+ .fno = gen_helper_gvec_uaba_d,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .opt_opc = vecop_list,
+ .load_dest = true,
+ .vece = MO_64 },
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
+}
diff --git a/target/arm/tcg/translate.c b/target/arm/tcg/translate.c
index 187eacffd96..c5bc691d92b 100644
--- a/target/arm/tcg/translate.c
+++ b/target/arm/tcg/translate.c
@@ -2912,1594 +2912,6 @@ static void gen_exception_return(DisasContext *s, TCGv_i32 pc)
gen_rfe(s, pc, load_cpu_field(spsr));
}
-static void gen_gvec_fn3_qc(uint32_t rd_ofs, uint32_t rn_ofs, uint32_t rm_ofs,
- uint32_t opr_sz, uint32_t max_sz,
- gen_helper_gvec_3_ptr *fn)
-{
- TCGv_ptr qc_ptr = tcg_temp_new_ptr();
-
- tcg_gen_addi_ptr(qc_ptr, tcg_env, offsetof(CPUARMState, vfp.qc));
- tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, qc_ptr,
- opr_sz, max_sz, 0, fn);
-}
-
-void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static gen_helper_gvec_3_ptr * const fns[2] = {
- gen_helper_gvec_qrdmlah_s16, gen_helper_gvec_qrdmlah_s32
- };
- tcg_debug_assert(vece >= 1 && vece <= 2);
- gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
-}
-
-void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static gen_helper_gvec_3_ptr * const fns[2] = {
- gen_helper_gvec_qrdmlsh_s16, gen_helper_gvec_qrdmlsh_s32
- };
- tcg_debug_assert(vece >= 1 && vece <= 2);
- gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
-}
-
-#define GEN_CMP0(NAME, COND) \
- void NAME(unsigned vece, uint32_t d, uint32_t m, \
- uint32_t opr_sz, uint32_t max_sz) \
- { tcg_gen_gvec_cmpi(COND, vece, d, m, 0, opr_sz, max_sz); }
-
-GEN_CMP0(gen_gvec_ceq0, TCG_COND_EQ)
-GEN_CMP0(gen_gvec_cle0, TCG_COND_LE)
-GEN_CMP0(gen_gvec_cge0, TCG_COND_GE)
-GEN_CMP0(gen_gvec_clt0, TCG_COND_LT)
-GEN_CMP0(gen_gvec_cgt0, TCG_COND_GT)
-
-#undef GEN_CMP0
-
-static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_sar8i_i64(a, a, shift);
- tcg_gen_vec_add8_i64(d, d, a);
-}
-
-static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_sar16i_i64(a, a, shift);
- tcg_gen_vec_add16_i64(d, d, a);
-}
-
-static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
- tcg_gen_sari_i32(a, a, shift);
- tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_sari_i64(a, a, shift);
- tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- tcg_gen_sari_vec(vece, a, a, sh);
- tcg_gen_add_vec(vece, d, d, a);
-}
-
-void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_sari_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen2i ops[4] = {
- { .fni8 = gen_ssra8_i64,
- .fniv = gen_ssra_vec,
- .fno = gen_helper_gvec_ssra_b,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni8 = gen_ssra16_i64,
- .fniv = gen_ssra_vec,
- .fno = gen_helper_gvec_ssra_h,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_ssra32_i32,
- .fniv = gen_ssra_vec,
- .fno = gen_helper_gvec_ssra_s,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_ssra64_i64,
- .fniv = gen_ssra_vec,
- .fno = gen_helper_gvec_ssra_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize]. */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- /*
- * Shifts larger than the element size are architecturally valid.
- * Signed results in all sign bits.
- */
- shift = MIN(shift, (8 << vece) - 1);
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
-}
-
-static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_shr8i_i64(a, a, shift);
- tcg_gen_vec_add8_i64(d, d, a);
-}
-
-static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_shr16i_i64(a, a, shift);
- tcg_gen_vec_add16_i64(d, d, a);
-}
-
-static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
- tcg_gen_shri_i32(a, a, shift);
- tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_shri_i64(a, a, shift);
- tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- tcg_gen_shri_vec(vece, a, a, sh);
- tcg_gen_add_vec(vece, d, d, a);
-}
-
-void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_shri_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen2i ops[4] = {
- { .fni8 = gen_usra8_i64,
- .fniv = gen_usra_vec,
- .fno = gen_helper_gvec_usra_b,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_8, },
- { .fni8 = gen_usra16_i64,
- .fniv = gen_usra_vec,
- .fno = gen_helper_gvec_usra_h,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_16, },
- { .fni4 = gen_usra32_i32,
- .fniv = gen_usra_vec,
- .fno = gen_helper_gvec_usra_s,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_32, },
- { .fni8 = gen_usra64_i64,
- .fniv = gen_usra_vec,
- .fno = gen_helper_gvec_usra_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_64, },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize]. */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- /*
- * Shifts larger than the element size are architecturally valid.
- * Unsigned results in all zeros as input to accumulate: nop.
- */
- if (shift < (8 << vece)) {
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
- } else {
- /* Nop, but we do need to clear the tail. */
- tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
- }
-}
-
-/*
- * Shift one less than the requested amount, and the low bit is
- * the rounding bit. For the 8 and 16-bit operations, because we
- * mask the low bit, we can perform a normal integer shift instead
- * of a vector shift.
- */
-static void gen_srshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shri_i64(t, a, sh - 1);
- tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
- tcg_gen_vec_sar8i_i64(d, a, sh);
- tcg_gen_vec_add8_i64(d, d, t);
-}
-
-static void gen_srshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shri_i64(t, a, sh - 1);
- tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
- tcg_gen_vec_sar16i_i64(d, a, sh);
- tcg_gen_vec_add16_i64(d, d, t);
-}
-
-static void gen_srshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
-{
- TCGv_i32 t;
-
- /* Handle shift by the input size for the benefit of trans_SRSHR_ri */
- if (sh == 32) {
- tcg_gen_movi_i32(d, 0);
- return;
- }
- t = tcg_temp_new_i32();
- tcg_gen_extract_i32(t, a, sh - 1, 1);
- tcg_gen_sari_i32(d, a, sh);
- tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_srshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_extract_i64(t, a, sh - 1, 1);
- tcg_gen_sari_i64(d, a, sh);
- tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_srshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
- TCGv_vec ones = tcg_temp_new_vec_matching(d);
-
- tcg_gen_shri_vec(vece, t, a, sh - 1);
- tcg_gen_dupi_vec(vece, ones, 1);
- tcg_gen_and_vec(vece, t, t, ones);
- tcg_gen_sari_vec(vece, d, a, sh);
- tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_srshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen2i ops[4] = {
- { .fni8 = gen_srshr8_i64,
- .fniv = gen_srshr_vec,
- .fno = gen_helper_gvec_srshr_b,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni8 = gen_srshr16_i64,
- .fniv = gen_srshr_vec,
- .fno = gen_helper_gvec_srshr_h,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_srshr32_i32,
- .fniv = gen_srshr_vec,
- .fno = gen_helper_gvec_srshr_s,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_srshr64_i64,
- .fniv = gen_srshr_vec,
- .fno = gen_helper_gvec_srshr_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize] */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- if (shift == (8 << vece)) {
- /*
- * Shifts larger than the element size are architecturally valid.
- * Signed results in all sign bits. With rounding, this produces
- * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
- * I.e. always zero.
- */
- tcg_gen_gvec_dup_imm(vece, rd_ofs, opr_sz, max_sz, 0);
- } else {
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
- }
-}
-
-static void gen_srsra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- gen_srshr8_i64(t, a, sh);
- tcg_gen_vec_add8_i64(d, d, t);
-}
-
-static void gen_srsra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- gen_srshr16_i64(t, a, sh);
- tcg_gen_vec_add16_i64(d, d, t);
-}
-
-static void gen_srsra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
-{
- TCGv_i32 t = tcg_temp_new_i32();
-
- gen_srshr32_i32(t, a, sh);
- tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_srsra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- gen_srshr64_i64(t, a, sh);
- tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_srsra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
-
- gen_srshr_vec(vece, t, a, sh);
- tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_srsra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_shri_vec, INDEX_op_sari_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen2i ops[4] = {
- { .fni8 = gen_srsra8_i64,
- .fniv = gen_srsra_vec,
- .fno = gen_helper_gvec_srsra_b,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_8 },
- { .fni8 = gen_srsra16_i64,
- .fniv = gen_srsra_vec,
- .fno = gen_helper_gvec_srsra_h,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_16 },
- { .fni4 = gen_srsra32_i32,
- .fniv = gen_srsra_vec,
- .fno = gen_helper_gvec_srsra_s,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_32 },
- { .fni8 = gen_srsra64_i64,
- .fniv = gen_srsra_vec,
- .fno = gen_helper_gvec_srsra_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize] */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- /*
- * Shifts larger than the element size are architecturally valid.
- * Signed results in all sign bits. With rounding, this produces
- * (-1 + 1) >> 1 == 0, or (0 + 1) >> 1 == 0.
- * I.e. always zero. With accumulation, this leaves D unchanged.
- */
- if (shift == (8 << vece)) {
- /* Nop, but we do need to clear the tail. */
- tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
- } else {
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
- }
-}
-
-static void gen_urshr8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shri_i64(t, a, sh - 1);
- tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
- tcg_gen_vec_shr8i_i64(d, a, sh);
- tcg_gen_vec_add8_i64(d, d, t);
-}
-
-static void gen_urshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shri_i64(t, a, sh - 1);
- tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
- tcg_gen_vec_shr16i_i64(d, a, sh);
- tcg_gen_vec_add16_i64(d, d, t);
-}
-
-static void gen_urshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
-{
- TCGv_i32 t;
-
- /* Handle shift by the input size for the benefit of trans_URSHR_ri */
- if (sh == 32) {
- tcg_gen_extract_i32(d, a, sh - 1, 1);
- return;
- }
- t = tcg_temp_new_i32();
- tcg_gen_extract_i32(t, a, sh - 1, 1);
- tcg_gen_shri_i32(d, a, sh);
- tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_urshr64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_extract_i64(t, a, sh - 1, 1);
- tcg_gen_shri_i64(d, a, sh);
- tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_urshr_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t shift)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
- TCGv_vec ones = tcg_temp_new_vec_matching(d);
-
- tcg_gen_shri_vec(vece, t, a, shift - 1);
- tcg_gen_dupi_vec(vece, ones, 1);
- tcg_gen_and_vec(vece, t, t, ones);
- tcg_gen_shri_vec(vece, d, a, shift);
- tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_urshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_shri_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen2i ops[4] = {
- { .fni8 = gen_urshr8_i64,
- .fniv = gen_urshr_vec,
- .fno = gen_helper_gvec_urshr_b,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni8 = gen_urshr16_i64,
- .fniv = gen_urshr_vec,
- .fno = gen_helper_gvec_urshr_h,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_urshr32_i32,
- .fniv = gen_urshr_vec,
- .fno = gen_helper_gvec_urshr_s,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_urshr64_i64,
- .fniv = gen_urshr_vec,
- .fno = gen_helper_gvec_urshr_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize] */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- if (shift == (8 << vece)) {
- /*
- * Shifts larger than the element size are architecturally valid.
- * Unsigned results in zero. With rounding, this produces a
- * copy of the most significant bit.
- */
- tcg_gen_gvec_shri(vece, rd_ofs, rm_ofs, shift - 1, opr_sz, max_sz);
- } else {
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
- }
-}
-
-static void gen_ursra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- if (sh == 8) {
- tcg_gen_vec_shr8i_i64(t, a, 7);
- } else {
- gen_urshr8_i64(t, a, sh);
- }
- tcg_gen_vec_add8_i64(d, d, t);
-}
-
-static void gen_ursra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- if (sh == 16) {
- tcg_gen_vec_shr16i_i64(t, a, 15);
- } else {
- gen_urshr16_i64(t, a, sh);
- }
- tcg_gen_vec_add16_i64(d, d, t);
-}
-
-static void gen_ursra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
-{
- TCGv_i32 t = tcg_temp_new_i32();
-
- if (sh == 32) {
- tcg_gen_shri_i32(t, a, 31);
- } else {
- gen_urshr32_i32(t, a, sh);
- }
- tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_ursra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- if (sh == 64) {
- tcg_gen_shri_i64(t, a, 63);
- } else {
- gen_urshr64_i64(t, a, sh);
- }
- tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_ursra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
-
- if (sh == (8 << vece)) {
- tcg_gen_shri_vec(vece, t, a, sh - 1);
- } else {
- gen_urshr_vec(vece, t, a, sh);
- }
- tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_ursra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_shri_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen2i ops[4] = {
- { .fni8 = gen_ursra8_i64,
- .fniv = gen_ursra_vec,
- .fno = gen_helper_gvec_ursra_b,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_8 },
- { .fni8 = gen_ursra16_i64,
- .fniv = gen_ursra_vec,
- .fno = gen_helper_gvec_ursra_h,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_16 },
- { .fni4 = gen_ursra32_i32,
- .fniv = gen_ursra_vec,
- .fno = gen_helper_gvec_ursra_s,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_32 },
- { .fni8 = gen_ursra64_i64,
- .fniv = gen_ursra_vec,
- .fno = gen_helper_gvec_ursra_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize] */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
-}
-
-static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- uint64_t mask = dup_const(MO_8, 0xff >> shift);
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shri_i64(t, a, shift);
- tcg_gen_andi_i64(t, t, mask);
- tcg_gen_andi_i64(d, d, ~mask);
- tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- uint64_t mask = dup_const(MO_16, 0xffff >> shift);
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shri_i64(t, a, shift);
- tcg_gen_andi_i64(t, t, mask);
- tcg_gen_andi_i64(d, d, ~mask);
- tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
- tcg_gen_shri_i32(a, a, shift);
- tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
-}
-
-static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_shri_i64(a, a, shift);
- tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
-}
-
-static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
- TCGv_vec m = tcg_temp_new_vec_matching(d);
-
- tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
- tcg_gen_shri_vec(vece, t, a, sh);
- tcg_gen_and_vec(vece, d, d, m);
- tcg_gen_or_vec(vece, d, d, t);
-}
-
-void gen_gvec_sri(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = { INDEX_op_shri_vec, 0 };
- const GVecGen2i ops[4] = {
- { .fni8 = gen_shr8_ins_i64,
- .fniv = gen_shr_ins_vec,
- .fno = gen_helper_gvec_sri_b,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni8 = gen_shr16_ins_i64,
- .fniv = gen_shr_ins_vec,
- .fno = gen_helper_gvec_sri_h,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_shr32_ins_i32,
- .fniv = gen_shr_ins_vec,
- .fno = gen_helper_gvec_sri_s,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_shr64_ins_i64,
- .fniv = gen_shr_ins_vec,
- .fno = gen_helper_gvec_sri_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [1..esize]. */
- tcg_debug_assert(shift > 0);
- tcg_debug_assert(shift <= (8 << vece));
-
- /* Shift of esize leaves destination unchanged. */
- if (shift < (8 << vece)) {
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
- } else {
- /* Nop, but we do need to clear the tail. */
- tcg_gen_gvec_mov(vece, rd_ofs, rd_ofs, opr_sz, max_sz);
- }
-}
-
-static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- uint64_t mask = dup_const(MO_8, 0xff << shift);
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shli_i64(t, a, shift);
- tcg_gen_andi_i64(t, t, mask);
- tcg_gen_andi_i64(d, d, ~mask);
- tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- uint64_t mask = dup_const(MO_16, 0xffff << shift);
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_shli_i64(t, a, shift);
- tcg_gen_andi_i64(t, t, mask);
- tcg_gen_andi_i64(d, d, ~mask);
- tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
- tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
-}
-
-static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
-}
-
-static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
- TCGv_vec m = tcg_temp_new_vec_matching(d);
-
- tcg_gen_shli_vec(vece, t, a, sh);
- tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
- tcg_gen_and_vec(vece, d, d, m);
- tcg_gen_or_vec(vece, d, d, t);
-}
-
-void gen_gvec_sli(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
- int64_t shift, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = { INDEX_op_shli_vec, 0 };
- const GVecGen2i ops[4] = {
- { .fni8 = gen_shl8_ins_i64,
- .fniv = gen_shl_ins_vec,
- .fno = gen_helper_gvec_sli_b,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni8 = gen_shl16_ins_i64,
- .fniv = gen_shl_ins_vec,
- .fno = gen_helper_gvec_sli_h,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_shl32_ins_i32,
- .fniv = gen_shl_ins_vec,
- .fno = gen_helper_gvec_sli_s,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_shl64_ins_i64,
- .fniv = gen_shl_ins_vec,
- .fno = gen_helper_gvec_sli_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
-
- /* tszimm encoding produces immediates in the range [0..esize-1]. */
- tcg_debug_assert(shift >= 0);
- tcg_debug_assert(shift < (8 << vece));
-
- if (shift == 0) {
- tcg_gen_gvec_mov(vece, rd_ofs, rm_ofs, opr_sz, max_sz);
- } else {
- tcg_gen_gvec_2i(rd_ofs, rm_ofs, opr_sz, max_sz, shift, &ops[vece]);
- }
-}
-
-static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u8(a, a, b);
- gen_helper_neon_add_u8(d, d, a);
-}
-
-static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u8(a, a, b);
- gen_helper_neon_sub_u8(d, d, a);
-}
-
-static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u16(a, a, b);
- gen_helper_neon_add_u16(d, d, a);
-}
-
-static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- gen_helper_neon_mul_u16(a, a, b);
- gen_helper_neon_sub_u16(d, d, a);
-}
-
-static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- tcg_gen_mul_i32(a, a, b);
- tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- tcg_gen_mul_i32(a, a, b);
- tcg_gen_sub_i32(d, d, a);
-}
-
-static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- tcg_gen_mul_i64(a, a, b);
- tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- tcg_gen_mul_i64(a, a, b);
- tcg_gen_sub_i64(d, d, a);
-}
-
-static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- tcg_gen_mul_vec(vece, a, a, b);
- tcg_gen_add_vec(vece, d, d, a);
-}
-
-static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- tcg_gen_mul_vec(vece, a, a, b);
- tcg_gen_sub_vec(vece, d, d, a);
-}
-
-/* Note that while NEON does not support VMLA and VMLS as 64-bit ops,
- * these tables are shared with AArch64 which does support them.
- */
-void gen_gvec_mla(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_mul_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fni4 = gen_mla8_i32,
- .fniv = gen_mla_vec,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni4 = gen_mla16_i32,
- .fniv = gen_mla_vec,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_mla32_i32,
- .fniv = gen_mla_vec,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_mla64_i64,
- .fniv = gen_mla_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-void gen_gvec_mls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_mul_vec, INDEX_op_sub_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fni4 = gen_mls8_i32,
- .fniv = gen_mls_vec,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni4 = gen_mls16_i32,
- .fniv = gen_mls_vec,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_mls32_i32,
- .fniv = gen_mls_vec,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_mls64_i64,
- .fniv = gen_mls_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-/* CMTST : test is "if (X & Y != 0)". */
-static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- tcg_gen_and_i32(d, a, b);
- tcg_gen_negsetcond_i32(TCG_COND_NE, d, d, tcg_constant_i32(0));
-}
-
-void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- tcg_gen_and_i64(d, a, b);
- tcg_gen_negsetcond_i64(TCG_COND_NE, d, d, tcg_constant_i64(0));
-}
-
-static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- tcg_gen_and_vec(vece, d, a, b);
- tcg_gen_dupi_vec(vece, a, 0);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
-}
-
-void gen_gvec_cmtst(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = { INDEX_op_cmp_vec, 0 };
- static const GVecGen3 ops[4] = {
- { .fni4 = gen_helper_neon_tst_u8,
- .fniv = gen_cmtst_vec,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fni4 = gen_helper_neon_tst_u16,
- .fniv = gen_cmtst_vec,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_cmtst_i32,
- .fniv = gen_cmtst_vec,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_cmtst_i64,
- .fniv = gen_cmtst_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-void gen_ushl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
-{
- TCGv_i32 lval = tcg_temp_new_i32();
- TCGv_i32 rval = tcg_temp_new_i32();
- TCGv_i32 lsh = tcg_temp_new_i32();
- TCGv_i32 rsh = tcg_temp_new_i32();
- TCGv_i32 zero = tcg_constant_i32(0);
- TCGv_i32 max = tcg_constant_i32(32);
-
- /*
- * Rely on the TCG guarantee that out of range shifts produce
- * unspecified results, not undefined behaviour (i.e. no trap).
- * Discard out-of-range results after the fact.
- */
- tcg_gen_ext8s_i32(lsh, shift);
- tcg_gen_neg_i32(rsh, lsh);
- tcg_gen_shl_i32(lval, src, lsh);
- tcg_gen_shr_i32(rval, src, rsh);
- tcg_gen_movcond_i32(TCG_COND_LTU, dst, lsh, max, lval, zero);
- tcg_gen_movcond_i32(TCG_COND_LTU, dst, rsh, max, rval, dst);
-}
-
-void gen_ushl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
-{
- TCGv_i64 lval = tcg_temp_new_i64();
- TCGv_i64 rval = tcg_temp_new_i64();
- TCGv_i64 lsh = tcg_temp_new_i64();
- TCGv_i64 rsh = tcg_temp_new_i64();
- TCGv_i64 zero = tcg_constant_i64(0);
- TCGv_i64 max = tcg_constant_i64(64);
-
- /*
- * Rely on the TCG guarantee that out of range shifts produce
- * unspecified results, not undefined behaviour (i.e. no trap).
- * Discard out-of-range results after the fact.
- */
- tcg_gen_ext8s_i64(lsh, shift);
- tcg_gen_neg_i64(rsh, lsh);
- tcg_gen_shl_i64(lval, src, lsh);
- tcg_gen_shr_i64(rval, src, rsh);
- tcg_gen_movcond_i64(TCG_COND_LTU, dst, lsh, max, lval, zero);
- tcg_gen_movcond_i64(TCG_COND_LTU, dst, rsh, max, rval, dst);
-}
-
-static void gen_ushl_vec(unsigned vece, TCGv_vec dst,
- TCGv_vec src, TCGv_vec shift)
-{
- TCGv_vec lval = tcg_temp_new_vec_matching(dst);
- TCGv_vec rval = tcg_temp_new_vec_matching(dst);
- TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
- TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
- TCGv_vec msk, max;
-
- tcg_gen_neg_vec(vece, rsh, shift);
- if (vece == MO_8) {
- tcg_gen_mov_vec(lsh, shift);
- } else {
- msk = tcg_temp_new_vec_matching(dst);
- tcg_gen_dupi_vec(vece, msk, 0xff);
- tcg_gen_and_vec(vece, lsh, shift, msk);
- tcg_gen_and_vec(vece, rsh, rsh, msk);
- }
-
- /*
- * Rely on the TCG guarantee that out of range shifts produce
- * unspecified results, not undefined behaviour (i.e. no trap).
- * Discard out-of-range results after the fact.
- */
- tcg_gen_shlv_vec(vece, lval, src, lsh);
- tcg_gen_shrv_vec(vece, rval, src, rsh);
-
- max = tcg_temp_new_vec_matching(dst);
- tcg_gen_dupi_vec(vece, max, 8 << vece);
-
- /*
- * The choice of LT (signed) and GEU (unsigned) are biased toward
- * the instructions of the x86_64 host. For MO_8, the whole byte
- * is significant so we must use an unsigned compare; otherwise we
- * have already masked to a byte and so a signed compare works.
- * Other tcg hosts have a full set of comparisons and do not care.
- */
- if (vece == MO_8) {
- tcg_gen_cmp_vec(TCG_COND_GEU, vece, lsh, lsh, max);
- tcg_gen_cmp_vec(TCG_COND_GEU, vece, rsh, rsh, max);
- tcg_gen_andc_vec(vece, lval, lval, lsh);
- tcg_gen_andc_vec(vece, rval, rval, rsh);
- } else {
- tcg_gen_cmp_vec(TCG_COND_LT, vece, lsh, lsh, max);
- tcg_gen_cmp_vec(TCG_COND_LT, vece, rsh, rsh, max);
- tcg_gen_and_vec(vece, lval, lval, lsh);
- tcg_gen_and_vec(vece, rval, rval, rsh);
- }
- tcg_gen_or_vec(vece, dst, lval, rval);
-}
-
-void gen_gvec_ushl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_neg_vec, INDEX_op_shlv_vec,
- INDEX_op_shrv_vec, INDEX_op_cmp_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fniv = gen_ushl_vec,
- .fno = gen_helper_gvec_ushl_b,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fniv = gen_ushl_vec,
- .fno = gen_helper_gvec_ushl_h,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_ushl_i32,
- .fniv = gen_ushl_vec,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_ushl_i64,
- .fniv = gen_ushl_vec,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-void gen_sshl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
-{
- TCGv_i32 lval = tcg_temp_new_i32();
- TCGv_i32 rval = tcg_temp_new_i32();
- TCGv_i32 lsh = tcg_temp_new_i32();
- TCGv_i32 rsh = tcg_temp_new_i32();
- TCGv_i32 zero = tcg_constant_i32(0);
- TCGv_i32 max = tcg_constant_i32(31);
-
- /*
- * Rely on the TCG guarantee that out of range shifts produce
- * unspecified results, not undefined behaviour (i.e. no trap).
- * Discard out-of-range results after the fact.
- */
- tcg_gen_ext8s_i32(lsh, shift);
- tcg_gen_neg_i32(rsh, lsh);
- tcg_gen_shl_i32(lval, src, lsh);
- tcg_gen_umin_i32(rsh, rsh, max);
- tcg_gen_sar_i32(rval, src, rsh);
- tcg_gen_movcond_i32(TCG_COND_LEU, lval, lsh, max, lval, zero);
- tcg_gen_movcond_i32(TCG_COND_LT, dst, lsh, zero, rval, lval);
-}
-
-void gen_sshl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
-{
- TCGv_i64 lval = tcg_temp_new_i64();
- TCGv_i64 rval = tcg_temp_new_i64();
- TCGv_i64 lsh = tcg_temp_new_i64();
- TCGv_i64 rsh = tcg_temp_new_i64();
- TCGv_i64 zero = tcg_constant_i64(0);
- TCGv_i64 max = tcg_constant_i64(63);
-
- /*
- * Rely on the TCG guarantee that out of range shifts produce
- * unspecified results, not undefined behaviour (i.e. no trap).
- * Discard out-of-range results after the fact.
- */
- tcg_gen_ext8s_i64(lsh, shift);
- tcg_gen_neg_i64(rsh, lsh);
- tcg_gen_shl_i64(lval, src, lsh);
- tcg_gen_umin_i64(rsh, rsh, max);
- tcg_gen_sar_i64(rval, src, rsh);
- tcg_gen_movcond_i64(TCG_COND_LEU, lval, lsh, max, lval, zero);
- tcg_gen_movcond_i64(TCG_COND_LT, dst, lsh, zero, rval, lval);
-}
-
-static void gen_sshl_vec(unsigned vece, TCGv_vec dst,
- TCGv_vec src, TCGv_vec shift)
-{
- TCGv_vec lval = tcg_temp_new_vec_matching(dst);
- TCGv_vec rval = tcg_temp_new_vec_matching(dst);
- TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
- TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
- TCGv_vec tmp = tcg_temp_new_vec_matching(dst);
-
- /*
- * Rely on the TCG guarantee that out of range shifts produce
- * unspecified results, not undefined behaviour (i.e. no trap).
- * Discard out-of-range results after the fact.
- */
- tcg_gen_neg_vec(vece, rsh, shift);
- if (vece == MO_8) {
- tcg_gen_mov_vec(lsh, shift);
- } else {
- tcg_gen_dupi_vec(vece, tmp, 0xff);
- tcg_gen_and_vec(vece, lsh, shift, tmp);
- tcg_gen_and_vec(vece, rsh, rsh, tmp);
- }
-
- /* Bound rsh so out of bound right shift gets -1. */
- tcg_gen_dupi_vec(vece, tmp, (8 << vece) - 1);
- tcg_gen_umin_vec(vece, rsh, rsh, tmp);
- tcg_gen_cmp_vec(TCG_COND_GT, vece, tmp, lsh, tmp);
-
- tcg_gen_shlv_vec(vece, lval, src, lsh);
- tcg_gen_sarv_vec(vece, rval, src, rsh);
-
- /* Select in-bound left shift. */
- tcg_gen_andc_vec(vece, lval, lval, tmp);
-
- /* Select between left and right shift. */
- if (vece == MO_8) {
- tcg_gen_dupi_vec(vece, tmp, 0);
- tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, rval, lval);
- } else {
- tcg_gen_dupi_vec(vece, tmp, 0x80);
- tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, lval, rval);
- }
-}
-
-void gen_gvec_sshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_neg_vec, INDEX_op_umin_vec, INDEX_op_shlv_vec,
- INDEX_op_sarv_vec, INDEX_op_cmp_vec, INDEX_op_cmpsel_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fniv = gen_sshl_vec,
- .fno = gen_helper_gvec_sshl_b,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fniv = gen_sshl_vec,
- .fno = gen_helper_gvec_sshl_h,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_sshl_i32,
- .fniv = gen_sshl_vec,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_sshl_i64,
- .fniv = gen_sshl_vec,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
- TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec x = tcg_temp_new_vec_matching(t);
- tcg_gen_add_vec(vece, x, a, b);
- tcg_gen_usadd_vec(vece, t, a, b);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
- tcg_gen_or_vec(vece, sat, sat, x);
-}
-
-void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_usadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen4 ops[4] = {
- { .fniv = gen_uqadd_vec,
- .fno = gen_helper_gvec_uqadd_b,
- .write_aofs = true,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fniv = gen_uqadd_vec,
- .fno = gen_helper_gvec_uqadd_h,
- .write_aofs = true,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fniv = gen_uqadd_vec,
- .fno = gen_helper_gvec_uqadd_s,
- .write_aofs = true,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fniv = gen_uqadd_vec,
- .fno = gen_helper_gvec_uqadd_d,
- .write_aofs = true,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
- rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
- TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec x = tcg_temp_new_vec_matching(t);
- tcg_gen_add_vec(vece, x, a, b);
- tcg_gen_ssadd_vec(vece, t, a, b);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
- tcg_gen_or_vec(vece, sat, sat, x);
-}
-
-void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_ssadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
- };
- static const GVecGen4 ops[4] = {
- { .fniv = gen_sqadd_vec,
- .fno = gen_helper_gvec_sqadd_b,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_8 },
- { .fniv = gen_sqadd_vec,
- .fno = gen_helper_gvec_sqadd_h,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_16 },
- { .fniv = gen_sqadd_vec,
- .fno = gen_helper_gvec_sqadd_s,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_32 },
- { .fniv = gen_sqadd_vec,
- .fno = gen_helper_gvec_sqadd_d,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_64 },
- };
- tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
- rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
- TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec x = tcg_temp_new_vec_matching(t);
- tcg_gen_sub_vec(vece, x, a, b);
- tcg_gen_ussub_vec(vece, t, a, b);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
- tcg_gen_or_vec(vece, sat, sat, x);
-}
-
-void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_ussub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
- };
- static const GVecGen4 ops[4] = {
- { .fniv = gen_uqsub_vec,
- .fno = gen_helper_gvec_uqsub_b,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_8 },
- { .fniv = gen_uqsub_vec,
- .fno = gen_helper_gvec_uqsub_h,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_16 },
- { .fniv = gen_uqsub_vec,
- .fno = gen_helper_gvec_uqsub_s,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_32 },
- { .fniv = gen_uqsub_vec,
- .fno = gen_helper_gvec_uqsub_d,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_64 },
- };
- tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
- rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
- TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec x = tcg_temp_new_vec_matching(t);
- tcg_gen_sub_vec(vece, x, a, b);
- tcg_gen_sssub_vec(vece, t, a, b);
- tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
- tcg_gen_or_vec(vece, sat, sat, x);
-}
-
-void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_sssub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
- };
- static const GVecGen4 ops[4] = {
- { .fniv = gen_sqsub_vec,
- .fno = gen_helper_gvec_sqsub_b,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_8 },
- { .fniv = gen_sqsub_vec,
- .fno = gen_helper_gvec_sqsub_h,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_16 },
- { .fniv = gen_sqsub_vec,
- .fno = gen_helper_gvec_sqsub_s,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_32 },
- { .fniv = gen_sqsub_vec,
- .fno = gen_helper_gvec_sqsub_d,
- .opt_opc = vecop_list,
- .write_aofs = true,
- .vece = MO_64 },
- };
- tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
- rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_sabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- TCGv_i32 t = tcg_temp_new_i32();
-
- tcg_gen_sub_i32(t, a, b);
- tcg_gen_sub_i32(d, b, a);
- tcg_gen_movcond_i32(TCG_COND_LT, d, a, b, d, t);
-}
-
-static void gen_sabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_sub_i64(t, a, b);
- tcg_gen_sub_i64(d, b, a);
- tcg_gen_movcond_i64(TCG_COND_LT, d, a, b, d, t);
-}
-
-static void gen_sabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
-
- tcg_gen_smin_vec(vece, t, a, b);
- tcg_gen_smax_vec(vece, d, a, b);
- tcg_gen_sub_vec(vece, d, d, t);
-}
-
-void gen_gvec_sabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_sub_vec, INDEX_op_smin_vec, INDEX_op_smax_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fniv = gen_sabd_vec,
- .fno = gen_helper_gvec_sabd_b,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fniv = gen_sabd_vec,
- .fno = gen_helper_gvec_sabd_h,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_sabd_i32,
- .fniv = gen_sabd_vec,
- .fno = gen_helper_gvec_sabd_s,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_sabd_i64,
- .fniv = gen_sabd_vec,
- .fno = gen_helper_gvec_sabd_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_uabd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- TCGv_i32 t = tcg_temp_new_i32();
-
- tcg_gen_sub_i32(t, a, b);
- tcg_gen_sub_i32(d, b, a);
- tcg_gen_movcond_i32(TCG_COND_LTU, d, a, b, d, t);
-}
-
-static void gen_uabd_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- TCGv_i64 t = tcg_temp_new_i64();
-
- tcg_gen_sub_i64(t, a, b);
- tcg_gen_sub_i64(d, b, a);
- tcg_gen_movcond_i64(TCG_COND_LTU, d, a, b, d, t);
-}
-
-static void gen_uabd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
-
- tcg_gen_umin_vec(vece, t, a, b);
- tcg_gen_umax_vec(vece, d, a, b);
- tcg_gen_sub_vec(vece, d, d, t);
-}
-
-void gen_gvec_uabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_sub_vec, INDEX_op_umin_vec, INDEX_op_umax_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fniv = gen_uabd_vec,
- .fno = gen_helper_gvec_uabd_b,
- .opt_opc = vecop_list,
- .vece = MO_8 },
- { .fniv = gen_uabd_vec,
- .fno = gen_helper_gvec_uabd_h,
- .opt_opc = vecop_list,
- .vece = MO_16 },
- { .fni4 = gen_uabd_i32,
- .fniv = gen_uabd_vec,
- .fno = gen_helper_gvec_uabd_s,
- .opt_opc = vecop_list,
- .vece = MO_32 },
- { .fni8 = gen_uabd_i64,
- .fniv = gen_uabd_vec,
- .fno = gen_helper_gvec_uabd_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_saba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- TCGv_i32 t = tcg_temp_new_i32();
- gen_sabd_i32(t, a, b);
- tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_saba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- TCGv_i64 t = tcg_temp_new_i64();
- gen_sabd_i64(t, a, b);
- tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_saba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
- gen_sabd_vec(vece, t, a, b);
- tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_saba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_sub_vec, INDEX_op_add_vec,
- INDEX_op_smin_vec, INDEX_op_smax_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fniv = gen_saba_vec,
- .fno = gen_helper_gvec_saba_b,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_8 },
- { .fniv = gen_saba_vec,
- .fno = gen_helper_gvec_saba_h,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_16 },
- { .fni4 = gen_saba_i32,
- .fniv = gen_saba_vec,
- .fno = gen_helper_gvec_saba_s,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_32 },
- { .fni8 = gen_saba_i64,
- .fniv = gen_saba_vec,
- .fno = gen_helper_gvec_saba_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
-static void gen_uaba_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
-{
- TCGv_i32 t = tcg_temp_new_i32();
- gen_uabd_i32(t, a, b);
- tcg_gen_add_i32(d, d, t);
-}
-
-static void gen_uaba_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
-{
- TCGv_i64 t = tcg_temp_new_i64();
- gen_uabd_i64(t, a, b);
- tcg_gen_add_i64(d, d, t);
-}
-
-static void gen_uaba_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
-{
- TCGv_vec t = tcg_temp_new_vec_matching(d);
- gen_uabd_vec(vece, t, a, b);
- tcg_gen_add_vec(vece, d, d, t);
-}
-
-void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = {
- INDEX_op_sub_vec, INDEX_op_add_vec,
- INDEX_op_umin_vec, INDEX_op_umax_vec, 0
- };
- static const GVecGen3 ops[4] = {
- { .fniv = gen_uaba_vec,
- .fno = gen_helper_gvec_uaba_b,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_8 },
- { .fniv = gen_uaba_vec,
- .fno = gen_helper_gvec_uaba_h,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_16 },
- { .fni4 = gen_uaba_i32,
- .fniv = gen_uaba_vec,
- .fno = gen_helper_gvec_uaba_s,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_32 },
- { .fni8 = gen_uaba_i64,
- .fniv = gen_uaba_vec,
- .fno = gen_helper_gvec_uaba_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .opt_opc = vecop_list,
- .load_dest = true,
- .vece = MO_64 },
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
-}
-
static bool aa32_cpreg_encoding_in_impdef_space(uint8_t crn, uint8_t crm)
{
static const uint16_t mask[3] = {
diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build
index 3b1a9f0fc5e..bdb5c7352f2 100644
--- a/target/arm/tcg/meson.build
+++ b/target/arm/tcg/meson.build
@@ -24,6 +24,7 @@ arm_ss.add(when: 'TARGET_AARCH64', if_true: gen_a64)
arm_ss.add(files(
'cpu32.c',
+ 'gengvec.c',
'translate.c',
'translate-m-nocp.c',
'translate-mve.c',
--
2.34.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
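The variable-shift expansions in the patch above (gen_ushl_* and gen_sshl_*) implement the AArch64/NEON USHL/SSHL semantics: the shift count is the signed value in each element's low byte, positive counts shift left, negative counts shift right, and out-of-range results are discarded after the fact. A minimal scalar model for 32-bit elements, in plain C — illustrative only, these helper names are not part of QEMU:

```c
#include <stdint.h>

/*
 * Scalar model of AArch64 USHL/SSHL for one 32-bit element.
 * The count is the signed value in the low byte of the shift operand:
 * positive counts shift left, negative counts shift right.
 */
static uint32_t ushl32(uint32_t src, int8_t shift)
{
    /* For USHL, any count with magnitude >= 32 produces zero. */
    if (shift >= 32 || shift <= -32) {
        return 0;
    }
    return shift >= 0 ? src << shift : src >> -shift;
}

static int32_t sshl32(int32_t src, int8_t shift)
{
    if (shift >= 32) {
        return 0;            /* out-of-range left shift gives zero */
    }
    if (shift >= 0) {
        return (int32_t)((uint32_t)src << shift);
    }
    /*
     * Clamp so an out-of-range right shift yields the sign fill
     * (0 or -1), mirroring the tcg_gen_umin(rsh, rsh, max) step
     * in gen_sshl_i32 above.
     */
    int rsh = (-shift > 31) ? 31 : -shift;
    return src >> rsh;       /* arithmetic shift on common compilers */
}
```

Note that right-shifting a negative signed value is implementation-defined in C (arithmetic on gcc/clang); the TCG expansion sidesteps that by emitting an explicit sar operation.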
* [PULL 14/42] target/arm: Split out gengvec64.c
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (12 preceding siblings ...)
2024-05-28 14:07 ` [PULL 13/42] target/arm: Split out gengvec.c Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 15/42] target/arm: Convert Cryptographic AES to decodetree Peter Maydell
` (28 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Split some routines out of translate-a64.c and translate-sve.c
that are used by both.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.h | 4 +
target/arm/tcg/gengvec64.c | 190 +++++++++++++++++++++++++++++++++
target/arm/tcg/translate-a64.c | 26 -----
target/arm/tcg/translate-sve.c | 145 +------------------------
target/arm/tcg/meson.build | 1 +
5 files changed, 197 insertions(+), 169 deletions(-)
create mode 100644 target/arm/tcg/gengvec64.c
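The two routines gaining external linkage in this patch, gen_gvec_eor3 and gen_gvec_bcax, expand the FEAT_SHA3/SVE2 EOR3 (three-way exclusive-OR) and BCAX (bit clear and exclusive-OR) operations. Per 64-bit element they compute the following — a plain-C sketch with illustrative names, not the QEMU API:

```c
#include <stdint.h>

/* EOR3: three-way XOR, as in gen_eor3_i64 below. */
static uint64_t eor3(uint64_t n, uint64_t m, uint64_t k)
{
    return n ^ m ^ k;
}

/* BCAX: bit clear and XOR; matches the andc-then-xor in gen_bcax_i64. */
static uint64_t bcax(uint64_t n, uint64_t m, uint64_t k)
{
    return n ^ (m & ~k);
}
```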
diff --git a/target/arm/tcg/translate-a64.h b/target/arm/tcg/translate-a64.h
index 7b811b8ac51..91750f0ca91 100644
--- a/target/arm/tcg/translate-a64.h
+++ b/target/arm/tcg/translate-a64.h
@@ -193,6 +193,10 @@ void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, int64_t shift,
uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+ uint32_t a, uint32_t oprsz, uint32_t maxsz);
+void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+ uint32_t a, uint32_t oprsz, uint32_t maxsz);
void gen_sve_ldr(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int imm);
void gen_sve_str(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int imm);
diff --git a/target/arm/tcg/gengvec64.c b/target/arm/tcg/gengvec64.c
new file mode 100644
index 00000000000..093b498b13d
--- /dev/null
+++ b/target/arm/tcg/gengvec64.c
@@ -0,0 +1,190 @@
+/*
+ * AArch64 generic vector expansion
+ *
+ * Copyright (c) 2013 Alexander Graf <agraf@suse.de>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "translate.h"
+#include "translate-a64.h"
+
+
+static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
+{
+ tcg_gen_rotli_i64(d, m, 1);
+ tcg_gen_xor_i64(d, d, n);
+}
+
+static void gen_rax1_vec(unsigned vece, TCGv_vec d, TCGv_vec n, TCGv_vec m)
+{
+ tcg_gen_rotli_vec(vece, d, m, 1);
+ tcg_gen_xor_vec(vece, d, d, n);
+}
+
+void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = { INDEX_op_rotli_vec, 0 };
+ static const GVecGen3 op = {
+ .fni8 = gen_rax1_i64,
+ .fniv = gen_rax1_vec,
+ .opt_opc = vecop_list,
+ .fno = gen_helper_crypto_rax1,
+ .vece = MO_64,
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &op);
+}
+
+static void gen_xar8_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+ uint64_t mask = dup_const(MO_8, 0xff >> sh);
+
+ tcg_gen_xor_i64(t, n, m);
+ tcg_gen_shri_i64(d, t, sh);
+ tcg_gen_shli_i64(t, t, 8 - sh);
+ tcg_gen_andi_i64(d, d, mask);
+ tcg_gen_andi_i64(t, t, ~mask);
+ tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_xar16_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
+{
+ TCGv_i64 t = tcg_temp_new_i64();
+ uint64_t mask = dup_const(MO_16, 0xffff >> sh);
+
+ tcg_gen_xor_i64(t, n, m);
+ tcg_gen_shri_i64(d, t, sh);
+ tcg_gen_shli_i64(t, t, 16 - sh);
+ tcg_gen_andi_i64(d, d, mask);
+ tcg_gen_andi_i64(t, t, ~mask);
+ tcg_gen_or_i64(d, d, t);
+}
+
+static void gen_xar_i32(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, int32_t sh)
+{
+ tcg_gen_xor_i32(d, n, m);
+ tcg_gen_rotri_i32(d, d, sh);
+}
+
+static void gen_xar_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
+{
+ tcg_gen_xor_i64(d, n, m);
+ tcg_gen_rotri_i64(d, d, sh);
+}
+
+static void gen_xar_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
+ TCGv_vec m, int64_t sh)
+{
+ tcg_gen_xor_vec(vece, d, n, m);
+ tcg_gen_rotri_vec(vece, d, d, sh);
+}
+
+void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, int64_t shift,
+ uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop[] = { INDEX_op_rotli_vec, 0 };
+ static const GVecGen3i ops[4] = {
+ { .fni8 = gen_xar8_i64,
+ .fniv = gen_xar_vec,
+ .fno = gen_helper_sve2_xar_b,
+ .opt_opc = vecop,
+ .vece = MO_8 },
+ { .fni8 = gen_xar16_i64,
+ .fniv = gen_xar_vec,
+ .fno = gen_helper_sve2_xar_h,
+ .opt_opc = vecop,
+ .vece = MO_16 },
+ { .fni4 = gen_xar_i32,
+ .fniv = gen_xar_vec,
+ .fno = gen_helper_sve2_xar_s,
+ .opt_opc = vecop,
+ .vece = MO_32 },
+ { .fni8 = gen_xar_i64,
+ .fniv = gen_xar_vec,
+ .fno = gen_helper_gvec_xar_d,
+ .opt_opc = vecop,
+ .vece = MO_64 }
+ };
+ int esize = 8 << vece;
+
+ /* The SVE2 range is 1 .. esize; the AdvSIMD range is 0 .. esize-1. */
+ tcg_debug_assert(shift >= 0);
+ tcg_debug_assert(shift <= esize);
+ shift &= esize - 1;
+
+ if (shift == 0) {
+ /* xar with no rotate devolves to xor. */
+ tcg_gen_gvec_xor(vece, rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz);
+ } else {
+ tcg_gen_gvec_3i(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz,
+ shift, &ops[vece]);
+ }
+}
+
+static void gen_eor3_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
+{
+ tcg_gen_xor_i64(d, n, m);
+ tcg_gen_xor_i64(d, d, k);
+}
+
+static void gen_eor3_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
+ TCGv_vec m, TCGv_vec k)
+{
+ tcg_gen_xor_vec(vece, d, n, m);
+ tcg_gen_xor_vec(vece, d, d, k);
+}
+
+void gen_gvec_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+ uint32_t a, uint32_t oprsz, uint32_t maxsz)
+{
+ static const GVecGen4 op = {
+ .fni8 = gen_eor3_i64,
+ .fniv = gen_eor3_vec,
+ .fno = gen_helper_sve2_eor3,
+ .vece = MO_64,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ };
+ tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
+}
+
+static void gen_bcax_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
+{
+ tcg_gen_andc_i64(d, m, k);
+ tcg_gen_xor_i64(d, d, n);
+}
+
+static void gen_bcax_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
+ TCGv_vec m, TCGv_vec k)
+{
+ tcg_gen_andc_vec(vece, d, m, k);
+ tcg_gen_xor_vec(vece, d, d, n);
+}
+
+void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+ uint32_t a, uint32_t oprsz, uint32_t maxsz)
+{
+ static const GVecGen4 op = {
+ .fni8 = gen_bcax_i64,
+ .fniv = gen_bcax_vec,
+ .fno = gen_helper_sve2_bcax,
+ .vece = MO_64,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ };
+ tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
+}
+
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 0bdddb8517a..8842ff634d5 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -13623,32 +13623,6 @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
gen_gvec_op2_ool(s, true, rd, rn, 0, genfn);
}
-static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
-{
- tcg_gen_rotli_i64(d, m, 1);
- tcg_gen_xor_i64(d, d, n);
-}
-
-static void gen_rax1_vec(unsigned vece, TCGv_vec d, TCGv_vec n, TCGv_vec m)
-{
- tcg_gen_rotli_vec(vece, d, m, 1);
- tcg_gen_xor_vec(vece, d, d, n);
-}
-
-void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop_list[] = { INDEX_op_rotli_vec, 0 };
- static const GVecGen3 op = {
- .fni8 = gen_rax1_i64,
- .fniv = gen_rax1_vec,
- .opt_opc = vecop_list,
- .fno = gen_helper_crypto_rax1,
- .vece = MO_64,
- };
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &op);
-}
-
/* Crypto three-reg SHA512
* 31 21 20 16 15 14 13 12 11 10 9 5 4 0
* +-----------------------+------+---+---+-----+--------+------+------+
diff --git a/target/arm/tcg/translate-sve.c b/target/arm/tcg/translate-sve.c
index ada05aa5302..798ab2bfb13 100644
--- a/target/arm/tcg/translate-sve.c
+++ b/target/arm/tcg/translate-sve.c
@@ -527,94 +527,6 @@ TRANS_FEAT(ORR_zzz, aa64_sve, gen_gvec_fn_arg_zzz, tcg_gen_gvec_or, a)
TRANS_FEAT(EOR_zzz, aa64_sve, gen_gvec_fn_arg_zzz, tcg_gen_gvec_xor, a)
TRANS_FEAT(BIC_zzz, aa64_sve, gen_gvec_fn_arg_zzz, tcg_gen_gvec_andc, a)
-static void gen_xar8_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
- uint64_t mask = dup_const(MO_8, 0xff >> sh);
-
- tcg_gen_xor_i64(t, n, m);
- tcg_gen_shri_i64(d, t, sh);
- tcg_gen_shli_i64(t, t, 8 - sh);
- tcg_gen_andi_i64(d, d, mask);
- tcg_gen_andi_i64(t, t, ~mask);
- tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_xar16_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
-{
- TCGv_i64 t = tcg_temp_new_i64();
- uint64_t mask = dup_const(MO_16, 0xffff >> sh);
-
- tcg_gen_xor_i64(t, n, m);
- tcg_gen_shri_i64(d, t, sh);
- tcg_gen_shli_i64(t, t, 16 - sh);
- tcg_gen_andi_i64(d, d, mask);
- tcg_gen_andi_i64(t, t, ~mask);
- tcg_gen_or_i64(d, d, t);
-}
-
-static void gen_xar_i32(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, int32_t sh)
-{
- tcg_gen_xor_i32(d, n, m);
- tcg_gen_rotri_i32(d, d, sh);
-}
-
-static void gen_xar_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, int64_t sh)
-{
- tcg_gen_xor_i64(d, n, m);
- tcg_gen_rotri_i64(d, d, sh);
-}
-
-static void gen_xar_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
- TCGv_vec m, int64_t sh)
-{
- tcg_gen_xor_vec(vece, d, n, m);
- tcg_gen_rotri_vec(vece, d, d, sh);
-}
-
-void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
- uint32_t rm_ofs, int64_t shift,
- uint32_t opr_sz, uint32_t max_sz)
-{
- static const TCGOpcode vecop[] = { INDEX_op_rotli_vec, 0 };
- static const GVecGen3i ops[4] = {
- { .fni8 = gen_xar8_i64,
- .fniv = gen_xar_vec,
- .fno = gen_helper_sve2_xar_b,
- .opt_opc = vecop,
- .vece = MO_8 },
- { .fni8 = gen_xar16_i64,
- .fniv = gen_xar_vec,
- .fno = gen_helper_sve2_xar_h,
- .opt_opc = vecop,
- .vece = MO_16 },
- { .fni4 = gen_xar_i32,
- .fniv = gen_xar_vec,
- .fno = gen_helper_sve2_xar_s,
- .opt_opc = vecop,
- .vece = MO_32 },
- { .fni8 = gen_xar_i64,
- .fniv = gen_xar_vec,
- .fno = gen_helper_gvec_xar_d,
- .opt_opc = vecop,
- .vece = MO_64 }
- };
- int esize = 8 << vece;
-
- /* The SVE2 range is 1 .. esize; the AdvSIMD range is 0 .. esize-1. */
- tcg_debug_assert(shift >= 0);
- tcg_debug_assert(shift <= esize);
- shift &= esize - 1;
-
- if (shift == 0) {
- /* xar with no rotate devolves to xor. */
- tcg_gen_gvec_xor(vece, rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz);
- } else {
- tcg_gen_gvec_3i(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz,
- shift, &ops[vece]);
- }
-}
-
static bool trans_XAR(DisasContext *s, arg_rrri_esz *a)
{
if (a->esz < 0 || !dc_isar_feature(aa64_sve2, s)) {
@@ -629,61 +541,8 @@ static bool trans_XAR(DisasContext *s, arg_rrri_esz *a)
return true;
}
-static void gen_eor3_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
-{
- tcg_gen_xor_i64(d, n, m);
- tcg_gen_xor_i64(d, d, k);
-}
-
-static void gen_eor3_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
- TCGv_vec m, TCGv_vec k)
-{
- tcg_gen_xor_vec(vece, d, n, m);
- tcg_gen_xor_vec(vece, d, d, k);
-}
-
-static void gen_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
- uint32_t a, uint32_t oprsz, uint32_t maxsz)
-{
- static const GVecGen4 op = {
- .fni8 = gen_eor3_i64,
- .fniv = gen_eor3_vec,
- .fno = gen_helper_sve2_eor3,
- .vece = MO_64,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- };
- tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
-}
-
-TRANS_FEAT(EOR3, aa64_sve2, gen_gvec_fn_arg_zzzz, gen_eor3, a)
-
-static void gen_bcax_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 k)
-{
- tcg_gen_andc_i64(d, m, k);
- tcg_gen_xor_i64(d, d, n);
-}
-
-static void gen_bcax_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
- TCGv_vec m, TCGv_vec k)
-{
- tcg_gen_andc_vec(vece, d, m, k);
- tcg_gen_xor_vec(vece, d, d, n);
-}
-
-static void gen_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
- uint32_t a, uint32_t oprsz, uint32_t maxsz)
-{
- static const GVecGen4 op = {
- .fni8 = gen_bcax_i64,
- .fniv = gen_bcax_vec,
- .fno = gen_helper_sve2_bcax,
- .vece = MO_64,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- };
- tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
-}
-
-TRANS_FEAT(BCAX, aa64_sve2, gen_gvec_fn_arg_zzzz, gen_bcax, a)
+TRANS_FEAT(EOR3, aa64_sve2, gen_gvec_fn_arg_zzzz, gen_gvec_eor3, a)
+TRANS_FEAT(BCAX, aa64_sve2, gen_gvec_fn_arg_zzzz, gen_gvec_bcax, a)
static void gen_bsl(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
uint32_t a, uint32_t oprsz, uint32_t maxsz)
diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build
index bdb5c7352f2..508932a249f 100644
--- a/target/arm/tcg/meson.build
+++ b/target/arm/tcg/meson.build
@@ -43,6 +43,7 @@ arm_ss.add(files(
arm_ss.add(when: 'TARGET_AARCH64', if_true: files(
'cpu64.c',
+ 'gengvec64.c',
'translate-a64.c',
'translate-sve.c',
'translate-sme.c',
--
2.34.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
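This patch relocates the XAR, EOR3 and BCAX expanders into gengvec64.c without changing behavior. For quick reference, the element-wise semantics of the deleted TCG expansions can be modeled as below — an illustrative Python sketch, not QEMU code, and the function names here are ours:

```python
MASK64 = (1 << 64) - 1

def xar(n, m, sh, esize=64):
    """gen_gvec_xar: per element, (n XOR m) rotated right by sh."""
    mask = (1 << esize) - 1
    x = (n ^ m) & mask
    sh %= esize
    # A zero rotate devolves to a plain XOR, matching the special case
    # in the deleted gen_gvec_xar.
    return ((x >> sh) | (x << (esize - sh))) & mask

def eor3(n, m, k):
    """gen_eor3_i64: three-way XOR, d = n ^ m ^ k."""
    return (n ^ m ^ k) & MASK64

def bcax(n, m, k):
    """gen_bcax_i64: bit-clear and XOR, d = n ^ (m AND NOT k)."""
    return (n ^ (m & ~k)) & MASK64

assert xar(0xFF00, 0x0F0F, 0, esize=16) == 0xF00F  # zero rotate: plain XOR
assert xar(1, 0, 1, esize=16) == 0x8000            # rotate right by one
assert eor3(0b1100, 0b1010, 0b0110) == 0
assert bcax(0, 0b1111, 0b0101) == 0b1010
```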
* [PULL 15/42] target/arm: Convert Cryptographic AES to decodetree
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (13 preceding siblings ...)
2024-05-28 14:07 ` [PULL 14/42] target/arm: Split out gengvec64.c Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 16/42] target/arm: Convert Cryptographic 3-register SHA " Peter Maydell
` (27 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 21 +++++++--
target/arm/tcg/translate-a64.c | 86 +++++++++++++++-------------------
2 files changed, 54 insertions(+), 53 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 0e7656fd158..1de09903dc4 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -19,11 +19,17 @@
# This file is processed by scripts/decodetree.py
#
-&r rn
-&ri rd imm
-&rri_sf rd rn imm sf
-&i imm
+%rd 0:5
+&r rn
+&ri rd imm
+&rri_sf rd rn imm sf
+&i imm
+&qrr_e q rd rn esz
+&qrrr_e q rd rn rm esz
+
+@rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
+@r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
### Data Processing - Immediate
@@ -590,3 +596,10 @@ CPYFE 00 011 0 01100 ..... .... 01 ..... ..... @cpy
CPYP 00 011 1 01000 ..... .... 01 ..... ..... @cpy
CPYM 00 011 1 01010 ..... .... 01 ..... ..... @cpy
CPYE 00 011 1 01100 ..... .... 01 ..... ..... @cpy
+
+### Cryptographic AES
+
+AESE 01001110 00 10100 00100 10 ..... ..... @r2r_q1e0
+AESD 01001110 00 10100 00101 10 ..... ..... @r2r_q1e0
+AESMC 01001110 00 10100 00110 10 ..... ..... @rr_q1e0
+AESIMC 01001110 00 10100 00111 10 ..... ..... @rr_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 8842ff634d5..3894db4bee2 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -1313,6 +1313,34 @@ bool sme_enabled_check_with_svcr(DisasContext *s, unsigned req)
return true;
}
+/*
+ * Expanders for AdvSIMD translation functions.
+ */
+
+static bool do_gvec_op2_ool(DisasContext *s, arg_qrr_e *a, int data,
+ gen_helper_gvec_2 *fn)
+{
+ if (!a->q && a->esz == MO_64) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_op2_ool(s, a->q, a->rd, a->rn, data, fn);
+ }
+ return true;
+}
+
+static bool do_gvec_op3_ool(DisasContext *s, arg_qrrr_e *a, int data,
+ gen_helper_gvec_3 *fn)
+{
+ if (!a->q && a->esz == MO_64) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_op3_ool(s, a->q, a->rd, a->rn, a->rm, data, fn);
+ }
+ return true;
+}
+
/*
* This utility function is for doing register extension with an
* optional shift. You will likely want to pass a temporary for the
@@ -4560,6 +4588,15 @@ static bool trans_EXTR(DisasContext *s, arg_extract *a)
return true;
}
+/*
+ * Cryptographic AES
+ */
+
+TRANS_FEAT(AESE, aa64_aes, do_gvec_op3_ool, a, 0, gen_helper_crypto_aese)
+TRANS_FEAT(AESD, aa64_aes, do_gvec_op3_ool, a, 0, gen_helper_crypto_aesd)
+TRANS_FEAT(AESMC, aa64_aes, do_gvec_op2_ool, a, 0, gen_helper_crypto_aesmc)
+TRANS_FEAT(AESIMC, aa64_aes, do_gvec_op2_ool, a, 0, gen_helper_crypto_aesimc)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -13460,54 +13497,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto AES
- * 31 24 23 22 21 17 16 12 11 10 9 5 4 0
- * +-----------------+------+-----------+--------+-----+------+------+
- * | 0 1 0 0 1 1 1 0 | size | 1 0 1 0 0 | opcode | 1 0 | Rn | Rd |
- * +-----------------+------+-----------+--------+-----+------+------+
- */
-static void disas_crypto_aes(DisasContext *s, uint32_t insn)
-{
- int size = extract32(insn, 22, 2);
- int opcode = extract32(insn, 12, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- gen_helper_gvec_2 *genfn2 = NULL;
- gen_helper_gvec_3 *genfn3 = NULL;
-
- if (!dc_isar_feature(aa64_aes, s) || size != 0) {
- unallocated_encoding(s);
- return;
- }
-
- switch (opcode) {
- case 0x4: /* AESE */
- genfn3 = gen_helper_crypto_aese;
- break;
- case 0x6: /* AESMC */
- genfn2 = gen_helper_crypto_aesmc;
- break;
- case 0x5: /* AESD */
- genfn3 = gen_helper_crypto_aesd;
- break;
- case 0x7: /* AESIMC */
- genfn2 = gen_helper_crypto_aesimc;
- break;
- default:
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
- if (genfn2) {
- gen_gvec_op2_ool(s, true, rd, rn, 0, genfn2);
- } else {
- gen_gvec_op3_ool(s, true, rd, rd, rn, 0, genfn3);
- }
-}
-
/* Crypto three-reg SHA
* 31 24 23 22 21 20 16 15 14 12 11 10 9 5 4 0
* +-----------------+------+---+------+---+--------+-----+------+------+
@@ -13917,7 +13906,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0x4e280800, 0xff3e0c00, disas_crypto_aes },
{ 0x5e000000, 0xff208c00, disas_crypto_three_reg_sha },
{ 0x5e280800, 0xff3e0c00, disas_crypto_two_reg_sha },
{ 0xce608000, 0xffe0b000, disas_crypto_three_reg_sha512 },
--
2.34.1
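As a cross-check on the encodings above: the fixed bits of the AESE pattern assemble to 0x4e284800, which agrees with the {0x4e280800, 0xff3e0c00} entry removed from the decode table once the AESE opcode (0x4) is filled into bits [16:12]. A hypothetical sketch of the field extraction the generated decoder performs for the @r2r_q1e0 format (function and constant names are ours, not decodetree output):

```python
def extract32(value, start, length):
    # Same semantics as QEMU's extract32().
    return (value >> start) & ((1 << length) - 1)

AESE_FIXED = 0x4E284800  # '01001110 00 10100 00100 10' with both register fields zero

def decode_aese(insn):
    """Sketch of @r2r_q1e0: rm in bits [9:5], rd in [4:0], rn aliased to rd."""
    if (insn & ~0x3FF) != AESE_FIXED:  # every bit outside rm/rd must match
        return None
    rd = extract32(insn, 0, 5)
    rm = extract32(insn, 5, 5)
    return {"q": 1, "esz": 0, "rd": rd, "rn": rd, "rm": rm}

# AESE v7.16B, v3.16B: Rd = 7, Rm field = 3, and rn mirrors rd.
assert decode_aese(0x4E284800 | (3 << 5) | 7) == \
    {"q": 1, "esz": 0, "rd": 7, "rn": 7, "rm": 3}
```

The rn=%rd alias is what lets the three-operand AESE/AESD helpers keep the old gen_gvec_op3_ool(s, true, rd, rd, rn, ...) call shape.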
* [PULL 16/42] target/arm: Convert Cryptographic 3-register SHA to decodetree
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (14 preceding siblings ...)
2024-05-28 14:07 ` [PULL 15/42] target/arm: Convert Cryptographic AES to decodetree Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 17/42] target/arm: Convert Cryptographic 2-register " Peter Maydell
` (26 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 11 +++++
target/arm/tcg/translate-a64.c | 78 +++++-----------------------------
2 files changed, 21 insertions(+), 68 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 1de09903dc4..7590659ee68 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -30,6 +30,7 @@
@rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
@r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
+@rrr_q1e0 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=0
### Data Processing - Immediate
@@ -603,3 +604,13 @@ AESE 01001110 00 10100 00100 10 ..... ..... @r2r_q1e0
AESD 01001110 00 10100 00101 10 ..... ..... @r2r_q1e0
AESMC 01001110 00 10100 00110 10 ..... ..... @rr_q1e0
AESIMC 01001110 00 10100 00111 10 ..... ..... @rr_q1e0
+
+### Cryptographic three-register SHA
+
+SHA1C 0101 1110 000 ..... 000000 ..... ..... @rrr_q1e0
+SHA1P 0101 1110 000 ..... 000100 ..... ..... @rrr_q1e0
+SHA1M 0101 1110 000 ..... 001000 ..... ..... @rrr_q1e0
+SHA1SU0 0101 1110 000 ..... 001100 ..... ..... @rrr_q1e0
+SHA256H 0101 1110 000 ..... 010000 ..... ..... @rrr_q1e0
+SHA256H2 0101 1110 000 ..... 010100 ..... ..... @rrr_q1e0
+SHA256SU1 0101 1110 000 ..... 011000 ..... ..... @rrr_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 3894db4bee2..5bef39d4e7d 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4589,7 +4589,7 @@ static bool trans_EXTR(DisasContext *s, arg_extract *a)
}
/*
- * Cryptographic AES
+ * Cryptographic AES, SHA
*/
TRANS_FEAT(AESE, aa64_aes, do_gvec_op3_ool, a, 0, gen_helper_crypto_aese)
@@ -4597,6 +4597,15 @@ TRANS_FEAT(AESD, aa64_aes, do_gvec_op3_ool, a, 0, gen_helper_crypto_aesd)
TRANS_FEAT(AESMC, aa64_aes, do_gvec_op2_ool, a, 0, gen_helper_crypto_aesmc)
TRANS_FEAT(AESIMC, aa64_aes, do_gvec_op2_ool, a, 0, gen_helper_crypto_aesimc)
+TRANS_FEAT(SHA1C, aa64_sha1, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha1c)
+TRANS_FEAT(SHA1P, aa64_sha1, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha1p)
+TRANS_FEAT(SHA1M, aa64_sha1, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha1m)
+TRANS_FEAT(SHA1SU0, aa64_sha1, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha1su0)
+
+TRANS_FEAT(SHA256H, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256h)
+TRANS_FEAT(SHA256H2, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256h2)
+TRANS_FEAT(SHA256SU1, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256su1)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -13497,72 +13506,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto three-reg SHA
- * 31 24 23 22 21 20 16 15 14 12 11 10 9 5 4 0
- * +-----------------+------+---+------+---+--------+-----+------+------+
- * | 0 1 0 1 1 1 1 0 | size | 0 | Rm | 0 | opcode | 0 0 | Rn | Rd |
- * +-----------------+------+---+------+---+--------+-----+------+------+
- */
-static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
-{
- int size = extract32(insn, 22, 2);
- int opcode = extract32(insn, 12, 3);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- gen_helper_gvec_3 *genfn;
- bool feature;
-
- if (size != 0) {
- unallocated_encoding(s);
- return;
- }
-
- switch (opcode) {
- case 0: /* SHA1C */
- genfn = gen_helper_crypto_sha1c;
- feature = dc_isar_feature(aa64_sha1, s);
- break;
- case 1: /* SHA1P */
- genfn = gen_helper_crypto_sha1p;
- feature = dc_isar_feature(aa64_sha1, s);
- break;
- case 2: /* SHA1M */
- genfn = gen_helper_crypto_sha1m;
- feature = dc_isar_feature(aa64_sha1, s);
- break;
- case 3: /* SHA1SU0 */
- genfn = gen_helper_crypto_sha1su0;
- feature = dc_isar_feature(aa64_sha1, s);
- break;
- case 4: /* SHA256H */
- genfn = gen_helper_crypto_sha256h;
- feature = dc_isar_feature(aa64_sha256, s);
- break;
- case 5: /* SHA256H2 */
- genfn = gen_helper_crypto_sha256h2;
- feature = dc_isar_feature(aa64_sha256, s);
- break;
- case 6: /* SHA256SU1 */
- genfn = gen_helper_crypto_sha256su1;
- feature = dc_isar_feature(aa64_sha256, s);
- break;
- default:
- unallocated_encoding(s);
- return;
- }
-
- if (!feature) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
- gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
-}
-
/* Crypto two-reg SHA
* 31 24 23 22 21 17 16 12 11 10 9 5 4 0
* +-----------------+------+-----------+--------+-----+------+------+
@@ -13906,7 +13849,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0x5e000000, 0xff208c00, disas_crypto_three_reg_sha },
{ 0x5e280800, 0xff3e0c00, disas_crypto_two_reg_sha },
{ 0xce608000, 0xffe0b000, disas_crypto_three_reg_sha512 },
{ 0xcec08000, 0xfffff000, disas_crypto_two_reg_sha512 },
--
2.34.1
* [PULL 17/42] target/arm: Convert Cryptographic 2-register SHA to decodetree
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (15 preceding siblings ...)
2024-05-28 14:07 ` [PULL 16/42] target/arm: Convert Cryptographic 3-register SHA " Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 18/42] target/arm: Convert Cryptographic 3-register SHA512 " Peter Maydell
` (25 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 6 ++++
target/arm/tcg/translate-a64.c | 54 +++-------------------------------
2 files changed, 10 insertions(+), 50 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 7590659ee68..350afabc779 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -614,3 +614,9 @@ SHA1SU0 0101 1110 000 ..... 001100 ..... ..... @rrr_q1e0
SHA256H 0101 1110 000 ..... 010000 ..... ..... @rrr_q1e0
SHA256H2 0101 1110 000 ..... 010100 ..... ..... @rrr_q1e0
SHA256SU1 0101 1110 000 ..... 011000 ..... ..... @rrr_q1e0
+
+### Cryptographic two-register SHA
+
+SHA1H 0101 1110 0010 1000 0000 10 ..... ..... @rr_q1e0
+SHA1SU1 0101 1110 0010 1000 0001 10 ..... ..... @rr_q1e0
+SHA256SU0 0101 1110 0010 1000 0010 10 ..... ..... @rr_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 5bef39d4e7d..1d20bf0c35b 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4606,6 +4606,10 @@ TRANS_FEAT(SHA256H, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256
TRANS_FEAT(SHA256H2, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256h2)
TRANS_FEAT(SHA256SU1, aa64_sha256, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha256su1)
+TRANS_FEAT(SHA1H, aa64_sha1, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha1h)
+TRANS_FEAT(SHA1SU1, aa64_sha1, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha1su1)
+TRANS_FEAT(SHA256SU0, aa64_sha256, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha256su0)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -13506,55 +13510,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto two-reg SHA
- * 31 24 23 22 21 17 16 12 11 10 9 5 4 0
- * +-----------------+------+-----------+--------+-----+------+------+
- * | 0 1 0 1 1 1 1 0 | size | 1 0 1 0 0 | opcode | 1 0 | Rn | Rd |
- * +-----------------+------+-----------+--------+-----+------+------+
- */
-static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
-{
- int size = extract32(insn, 22, 2);
- int opcode = extract32(insn, 12, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- gen_helper_gvec_2 *genfn;
- bool feature;
-
- if (size != 0) {
- unallocated_encoding(s);
- return;
- }
-
- switch (opcode) {
- case 0: /* SHA1H */
- feature = dc_isar_feature(aa64_sha1, s);
- genfn = gen_helper_crypto_sha1h;
- break;
- case 1: /* SHA1SU1 */
- feature = dc_isar_feature(aa64_sha1, s);
- genfn = gen_helper_crypto_sha1su1;
- break;
- case 2: /* SHA256SU0 */
- feature = dc_isar_feature(aa64_sha256, s);
- genfn = gen_helper_crypto_sha256su0;
- break;
- default:
- unallocated_encoding(s);
- return;
- }
-
- if (!feature) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
- gen_gvec_op2_ool(s, true, rd, rn, 0, genfn);
-}
-
/* Crypto three-reg SHA512
* 31 21 20 16 15 14 13 12 11 10 9 5 4 0
* +-----------------------+------+---+---+-----+--------+------+------+
@@ -13849,7 +13804,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0x5e280800, 0xff3e0c00, disas_crypto_two_reg_sha },
{ 0xce608000, 0xffe0b000, disas_crypto_three_reg_sha512 },
{ 0xcec08000, 0xfffff000, disas_crypto_two_reg_sha512 },
{ 0xce000000, 0xff808000, disas_crypto_four_reg },
--
2.34.1
* [PULL 18/42] target/arm: Convert Cryptographic 3-register SHA512 to decodetree
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (16 preceding siblings ...)
2024-05-28 14:07 ` [PULL 17/42] target/arm: Convert Cryptographic 2-register " Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 19/42] target/arm: Convert Cryptographic 2-register " Peter Maydell
` (24 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 11 ++++
target/arm/tcg/translate-a64.c | 97 ++++++++--------------------------
2 files changed, 32 insertions(+), 76 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 350afabc779..c342c276089 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -31,6 +31,7 @@
@rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
@r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
@rrr_q1e0 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=0
+@rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3
### Data Processing - Immediate
@@ -620,3 +621,13 @@ SHA256SU1 0101 1110 000 ..... 011000 ..... ..... @rrr_q1e0
SHA1H 0101 1110 0010 1000 0000 10 ..... ..... @rr_q1e0
SHA1SU1 0101 1110 0010 1000 0001 10 ..... ..... @rr_q1e0
SHA256SU0 0101 1110 0010 1000 0010 10 ..... ..... @rr_q1e0
+
+### Cryptographic three-register SHA512
+
+SHA512H 1100 1110 011 ..... 100000 ..... ..... @rrr_q1e0
+SHA512H2 1100 1110 011 ..... 100001 ..... ..... @rrr_q1e0
+SHA512SU1 1100 1110 011 ..... 100010 ..... ..... @rrr_q1e0
+RAX1 1100 1110 011 ..... 100011 ..... ..... @rrr_q1e3
+SM3PARTW1 1100 1110 011 ..... 110000 ..... ..... @rrr_q1e0
+SM3PARTW2 1100 1110 011 ..... 110001 ..... ..... @rrr_q1e0
+SM4EKEY 1100 1110 011 ..... 110010 ..... ..... @rrr_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 1d20bf0c35b..77b24cd52ed 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -1341,6 +1341,17 @@ static bool do_gvec_op3_ool(DisasContext *s, arg_qrrr_e *a, int data,
return true;
}
+static bool do_gvec_fn3(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
+{
+ if (!a->q && a->esz == MO_64) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_fn3(s, a->q, a->rd, a->rn, a->rm, fn, a->esz);
+ }
+ return true;
+}
+
/*
* This utility function is for doing register extension with an
* optional shift. You will likely want to pass a temporary for the
@@ -4589,7 +4600,7 @@ static bool trans_EXTR(DisasContext *s, arg_extract *a)
}
/*
- * Cryptographic AES, SHA
+ * Cryptographic AES, SHA, SHA512
*/
TRANS_FEAT(AESE, aa64_aes, do_gvec_op3_ool, a, 0, gen_helper_crypto_aese)
@@ -4610,6 +4621,15 @@ TRANS_FEAT(SHA1H, aa64_sha1, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha1h)
TRANS_FEAT(SHA1SU1, aa64_sha1, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha1su1)
TRANS_FEAT(SHA256SU0, aa64_sha256, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha256su0)
+TRANS_FEAT(SHA512H, aa64_sha512, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha512h)
+TRANS_FEAT(SHA512H2, aa64_sha512, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha512h2)
+TRANS_FEAT(SHA512SU1, aa64_sha512, do_gvec_op3_ool, a, 0, gen_helper_crypto_sha512su1)
+TRANS_FEAT(RAX1, aa64_sha3, do_gvec_fn3, a, gen_gvec_rax1)
+TRANS_FEAT(SM3PARTW1, aa64_sm3, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm3partw1)
+TRANS_FEAT(SM3PARTW2, aa64_sm3, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm3partw2)
+TRANS_FEAT(SM4EKEY, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4ekey)
+
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -13510,80 +13530,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto three-reg SHA512
- * 31 21 20 16 15 14 13 12 11 10 9 5 4 0
- * +-----------------------+------+---+---+-----+--------+------+------+
- * | 1 1 0 0 1 1 1 0 0 1 1 | Rm | 1 | O | 0 0 | opcode | Rn | Rd |
- * +-----------------------+------+---+---+-----+--------+------+------+
- */
-static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
-{
- int opcode = extract32(insn, 10, 2);
- int o = extract32(insn, 14, 1);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- bool feature;
- gen_helper_gvec_3 *oolfn = NULL;
- GVecGen3Fn *gvecfn = NULL;
-
- if (o == 0) {
- switch (opcode) {
- case 0: /* SHA512H */
- feature = dc_isar_feature(aa64_sha512, s);
- oolfn = gen_helper_crypto_sha512h;
- break;
- case 1: /* SHA512H2 */
- feature = dc_isar_feature(aa64_sha512, s);
- oolfn = gen_helper_crypto_sha512h2;
- break;
- case 2: /* SHA512SU1 */
- feature = dc_isar_feature(aa64_sha512, s);
- oolfn = gen_helper_crypto_sha512su1;
- break;
- case 3: /* RAX1 */
- feature = dc_isar_feature(aa64_sha3, s);
- gvecfn = gen_gvec_rax1;
- break;
- default:
- g_assert_not_reached();
- }
- } else {
- switch (opcode) {
- case 0: /* SM3PARTW1 */
- feature = dc_isar_feature(aa64_sm3, s);
- oolfn = gen_helper_crypto_sm3partw1;
- break;
- case 1: /* SM3PARTW2 */
- feature = dc_isar_feature(aa64_sm3, s);
- oolfn = gen_helper_crypto_sm3partw2;
- break;
- case 2: /* SM4EKEY */
- feature = dc_isar_feature(aa64_sm4, s);
- oolfn = gen_helper_crypto_sm4ekey;
- break;
- default:
- unallocated_encoding(s);
- return;
- }
- }
-
- if (!feature) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- if (oolfn) {
- gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
- } else {
- gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
- }
-}
-
/* Crypto two-reg SHA512
* 31 12 11 10 9 5 4 0
* +-----------------------------------------+--------+------+------+
@@ -13804,7 +13750,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0xce608000, 0xffe0b000, disas_crypto_three_reg_sha512 },
{ 0xcec08000, 0xfffff000, disas_crypto_two_reg_sha512 },
{ 0xce000000, 0xff808000, disas_crypto_four_reg },
{ 0xce800000, 0xffe00000, disas_crypto_xar },
--
2.34.1
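Note that RAX1 is routed through do_gvec_fn3/gen_gvec_rax1 rather than an out-of-line helper. Its per-lane semantics (from the architecture; the gvec expansion itself is not shown in this diff) can be sketched as:

```python
MASK64 = (1 << 64) - 1

def rol64(x, r):
    x &= MASK64
    return ((x << r) | (x >> (64 - r))) & MASK64

def rax1(n, m):
    """Per 64-bit lane: n XOR (m rotated left by one)."""
    return (n ^ rol64(m, 1)) & MASK64

assert rax1(0, 1 << 63) == 1   # top bit of m rotates into bit 0
assert rax1(0xF, 0) == 0xF     # zero m leaves n unchanged
```

The fixed esz=3 in @rrr_q1e3 reflects that RAX1 only operates on 64-bit elements, unlike the esz=0 crypto patterns around it.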
* [PULL 19/42] target/arm: Convert Cryptographic 2-register SHA512 to decodetree
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (17 preceding siblings ...)
2024-05-28 14:07 ` [PULL 18/42] target/arm: Convert Cryptographic 3-register SHA512 " Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 20/42] target/arm: Convert Cryptographic 4-register " Peter Maydell
` (23 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 5 ++++
target/arm/tcg/translate-a64.c | 50 ++--------------------------------
2 files changed, 8 insertions(+), 47 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index c342c276089..5a46205751c 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -631,3 +631,8 @@ RAX1 1100 1110 011 ..... 100011 ..... ..... @rrr_q1e3
SM3PARTW1 1100 1110 011 ..... 110000 ..... ..... @rrr_q1e0
SM3PARTW2 1100 1110 011 ..... 110001 ..... ..... @rrr_q1e0
SM4EKEY 1100 1110 011 ..... 110010 ..... ..... @rrr_q1e0
+
+### Cryptographic two-register SHA512
+
+SHA512SU0 1100 1110 110 00000 100000 ..... ..... @rr_q1e0
+SM4E 1100 1110 110 00000 100001 ..... ..... @r2r_q1e0
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 77b24cd52ed..eed0abe9121 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4629,6 +4629,9 @@ TRANS_FEAT(SM3PARTW1, aa64_sm3, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm3part
TRANS_FEAT(SM3PARTW2, aa64_sm3, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm3partw2)
TRANS_FEAT(SM4EKEY, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4ekey)
+TRANS_FEAT(SHA512SU0, aa64_sha512, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha512su0)
+TRANS_FEAT(SM4E, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4e)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
@@ -13530,52 +13533,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto two-reg SHA512
- * 31 12 11 10 9 5 4 0
- * +-----------------------------------------+--------+------+------+
- * | 1 1 0 0 1 1 1 0 1 1 0 0 0 0 0 0 1 0 0 0 | opcode | Rn | Rd |
- * +-----------------------------------------+--------+------+------+
- */
-static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
-{
- int opcode = extract32(insn, 10, 2);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- bool feature;
-
- switch (opcode) {
- case 0: /* SHA512SU0 */
- feature = dc_isar_feature(aa64_sha512, s);
- break;
- case 1: /* SM4E */
- feature = dc_isar_feature(aa64_sm4, s);
- break;
- default:
- unallocated_encoding(s);
- return;
- }
-
- if (!feature) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- switch (opcode) {
- case 0: /* SHA512SU0 */
- gen_gvec_op2_ool(s, true, rd, rn, 0, gen_helper_crypto_sha512su0);
- break;
- case 1: /* SM4E */
- gen_gvec_op3_ool(s, true, rd, rd, rn, 0, gen_helper_crypto_sm4e);
- break;
- default:
- g_assert_not_reached();
- }
-}
-
/* Crypto four-register
* 31 23 22 21 20 16 15 14 10 9 5 4 0
* +-------------------+-----+------+---+------+------+------+
@@ -13750,7 +13707,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0xcec08000, 0xfffff000, disas_crypto_two_reg_sha512 },
{ 0xce000000, 0xff808000, disas_crypto_four_reg },
{ 0xce800000, 0xffe00000, disas_crypto_xar },
{ 0xce408000, 0xffe0c000, disas_crypto_three_reg_imm2 },
--
2.34.1
* [PULL 20/42] target/arm: Convert Cryptographic 4-register to decodetree
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (18 preceding siblings ...)
2024-05-28 14:07 ` [PULL 19/42] target/arm: Convert Cryptographic 2-register " Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 21/42] target/arm: Convert Cryptographic 3-register, imm2 " Peter Maydell
` (22 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 8 ++
target/arm/tcg/translate-a64.c | 132 +++++++++++----------------------
2 files changed, 51 insertions(+), 89 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 5a46205751c..ef6902e86a5 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -27,11 +27,13 @@
&i imm
&qrr_e q rd rn esz
&qrrr_e q rd rn rm esz
+&qrrrr_e q rd rn rm ra esz
@rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
@r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
@rrr_q1e0 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=0
@rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3
+@rrrr_q1e3 ........ ... rm:5 . ra:5 rn:5 rd:5 &qrrrr_e q=1 esz=3
### Data Processing - Immediate
@@ -636,3 +638,9 @@ SM4EKEY 1100 1110 011 ..... 110010 ..... ..... @rrr_q1e0
SHA512SU0 1100 1110 110 00000 100000 ..... ..... @rr_q1e0
SM4E 1100 1110 110 00000 100001 ..... ..... @r2r_q1e0
+
+### Cryptographic four-register
+
+EOR3 1100 1110 000 ..... 0 ..... ..... ..... @rrrr_q1e3
+BCAX 1100 1110 001 ..... 0 ..... ..... ..... @rrrr_q1e3
+SM3SS1 1100 1110 010 ..... 0 ..... ..... ..... @rrrr_q1e3
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index eed0abe9121..2951e7eb59e 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -1352,6 +1352,17 @@ static bool do_gvec_fn3(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
return true;
}
+static bool do_gvec_fn4(DisasContext *s, arg_qrrrr_e *a, GVecGen4Fn *fn)
+{
+ if (!a->q && a->esz == MO_64) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_fn4(s, a->q, a->rd, a->rn, a->rm, a->ra, fn, a->esz);
+ }
+ return true;
+}
+
/*
* This utility function is for doing register extension with an
* optional shift. You will likely want to pass a temporary for the
@@ -4632,6 +4643,38 @@ TRANS_FEAT(SM4EKEY, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4ekey)
TRANS_FEAT(SHA512SU0, aa64_sha512, do_gvec_op2_ool, a, 0, gen_helper_crypto_sha512su0)
TRANS_FEAT(SM4E, aa64_sm4, do_gvec_op3_ool, a, 0, gen_helper_crypto_sm4e)
+TRANS_FEAT(EOR3, aa64_sha3, do_gvec_fn4, a, gen_gvec_eor3)
+TRANS_FEAT(BCAX, aa64_sha3, do_gvec_fn4, a, gen_gvec_bcax)
+
+static bool trans_SM3SS1(DisasContext *s, arg_SM3SS1 *a)
+{
+ if (!dc_isar_feature(aa64_sm3, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ TCGv_i32 tcg_op1 = tcg_temp_new_i32();
+ TCGv_i32 tcg_op2 = tcg_temp_new_i32();
+ TCGv_i32 tcg_op3 = tcg_temp_new_i32();
+ TCGv_i32 tcg_res = tcg_temp_new_i32();
+ unsigned vsz, dofs;
+
+ read_vec_element_i32(s, tcg_op1, a->rn, 3, MO_32);
+ read_vec_element_i32(s, tcg_op2, a->rm, 3, MO_32);
+ read_vec_element_i32(s, tcg_op3, a->ra, 3, MO_32);
+
+ tcg_gen_rotri_i32(tcg_res, tcg_op1, 20);
+ tcg_gen_add_i32(tcg_res, tcg_res, tcg_op2);
+ tcg_gen_add_i32(tcg_res, tcg_res, tcg_op3);
+ tcg_gen_rotri_i32(tcg_res, tcg_res, 25);
+
+ /* Clear the whole register first, then store bits [127:96]. */
+ vsz = vec_full_reg_size(s);
+ dofs = vec_full_reg_offset(s, a->rd);
+ tcg_gen_gvec_dup_imm(MO_64, dofs, vsz, vsz, 0);
+ write_vec_element_i32(s, tcg_res, a->rd, 3, MO_32);
+ }
+ return true;
+}
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
@@ -13533,94 +13576,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto four-register
- * 31 23 22 21 20 16 15 14 10 9 5 4 0
- * +-------------------+-----+------+---+------+------+------+
- * | 1 1 0 0 1 1 1 0 0 | Op0 | Rm | 0 | Ra | Rn | Rd |
- * +-------------------+-----+------+---+------+------+------+
- */
-static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
-{
- int op0 = extract32(insn, 21, 2);
- int rm = extract32(insn, 16, 5);
- int ra = extract32(insn, 10, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- bool feature;
-
- switch (op0) {
- case 0: /* EOR3 */
- case 1: /* BCAX */
- feature = dc_isar_feature(aa64_sha3, s);
- break;
- case 2: /* SM3SS1 */
- feature = dc_isar_feature(aa64_sm3, s);
- break;
- default:
- unallocated_encoding(s);
- return;
- }
-
- if (!feature) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- if (op0 < 2) {
- TCGv_i64 tcg_op1, tcg_op2, tcg_op3, tcg_res[2];
- int pass;
-
- tcg_op1 = tcg_temp_new_i64();
- tcg_op2 = tcg_temp_new_i64();
- tcg_op3 = tcg_temp_new_i64();
- tcg_res[0] = tcg_temp_new_i64();
- tcg_res[1] = tcg_temp_new_i64();
-
- for (pass = 0; pass < 2; pass++) {
- read_vec_element(s, tcg_op1, rn, pass, MO_64);
- read_vec_element(s, tcg_op2, rm, pass, MO_64);
- read_vec_element(s, tcg_op3, ra, pass, MO_64);
-
- if (op0 == 0) {
- /* EOR3 */
- tcg_gen_xor_i64(tcg_res[pass], tcg_op2, tcg_op3);
- } else {
- /* BCAX */
- tcg_gen_andc_i64(tcg_res[pass], tcg_op2, tcg_op3);
- }
- tcg_gen_xor_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
- }
- write_vec_element(s, tcg_res[0], rd, 0, MO_64);
- write_vec_element(s, tcg_res[1], rd, 1, MO_64);
- } else {
- TCGv_i32 tcg_op1, tcg_op2, tcg_op3, tcg_res, tcg_zero;
-
- tcg_op1 = tcg_temp_new_i32();
- tcg_op2 = tcg_temp_new_i32();
- tcg_op3 = tcg_temp_new_i32();
- tcg_res = tcg_temp_new_i32();
- tcg_zero = tcg_constant_i32(0);
-
- read_vec_element_i32(s, tcg_op1, rn, 3, MO_32);
- read_vec_element_i32(s, tcg_op2, rm, 3, MO_32);
- read_vec_element_i32(s, tcg_op3, ra, 3, MO_32);
-
- tcg_gen_rotri_i32(tcg_res, tcg_op1, 20);
- tcg_gen_add_i32(tcg_res, tcg_res, tcg_op2);
- tcg_gen_add_i32(tcg_res, tcg_res, tcg_op3);
- tcg_gen_rotri_i32(tcg_res, tcg_res, 25);
-
- write_vec_element_i32(s, tcg_zero, rd, 0, MO_32);
- write_vec_element_i32(s, tcg_zero, rd, 1, MO_32);
- write_vec_element_i32(s, tcg_zero, rd, 2, MO_32);
- write_vec_element_i32(s, tcg_res, rd, 3, MO_32);
- }
-}
-
/* Crypto XAR
* 31 21 20 16 15 10 9 5 4 0
* +-----------------------+------+--------+------+------+
@@ -13707,7 +13662,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0xce000000, 0xff808000, disas_crypto_four_reg },
{ 0xce800000, 0xffe00000, disas_crypto_xar },
{ 0xce408000, 0xffe0c000, disas_crypto_three_reg_imm2 },
{ 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
--
2.34.1
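For reference while reviewing the conversion, the semantics of the three four-register crypto instructions handled in this patch can be sketched in plain C. The helper names below are illustrative only, not QEMU API; they mirror the TCG sequences in the patch (EOR3 and BCAX operate per 64-bit lane, SM3SS1 uses only the top 32-bit element of each source and zeroes the rest of Vd):

```c
#include <stdint.h>

static inline uint32_t ror32(uint32_t x, unsigned n)
{
    n &= 31;
    return n ? (x >> n) | (x << (32 - n)) : x;
}

/* EOR3: three-way XOR, applied to each 64-bit lane of the vector. */
static uint64_t eor3(uint64_t n, uint64_t m, uint64_t a)
{
    return n ^ m ^ a;
}

/* BCAX: bit clear and XOR, applied to each 64-bit lane. */
static uint64_t bcax(uint64_t n, uint64_t m, uint64_t a)
{
    return n ^ (m & ~a);
}

/*
 * SM3SS1: uses only bits [127:96] of each source register; the result
 * is written to bits [127:96] of Vd and the rest of Vd is zeroed.
 */
static uint32_t sm3ss1(uint32_t n, uint32_t m, uint32_t a)
{
    return ror32(ror32(n, 20) + m + a, 25);
}
```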
* [PULL 21/42] target/arm: Convert Cryptographic 3-register, imm2 to decodetree
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 10 ++++++++
target/arm/tcg/translate-a64.c | 43 ++++++++++------------------------
2 files changed, 22 insertions(+), 31 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index ef6902e86a5..1292312a7f9 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -644,3 +644,13 @@ SM4E 1100 1110 110 00000 100001 ..... ..... @r2r_q1e0
EOR3 1100 1110 000 ..... 0 ..... ..... ..... @rrrr_q1e3
BCAX 1100 1110 001 ..... 0 ..... ..... ..... @rrrr_q1e3
SM3SS1 1100 1110 010 ..... 0 ..... ..... ..... @rrrr_q1e3
+
+### Cryptographic three-register, imm2
+
+&crypto3i rd rn rm imm
+@crypto3i ........ ... rm:5 .. imm:2 .. rn:5 rd:5 &crypto3i
+
+SM3TT1A 11001110 010 ..... 10 .. 00 ..... ..... @crypto3i
+SM3TT1B 11001110 010 ..... 10 .. 01 ..... ..... @crypto3i
+SM3TT2A 11001110 010 ..... 10 .. 10 ..... ..... @crypto3i
+SM3TT2B 11001110 010 ..... 10 .. 11 ..... ..... @crypto3i
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 2951e7eb59e..cf3a7dfa99f 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4676,6 +4676,18 @@ static bool trans_SM3SS1(DisasContext *s, arg_SM3SS1 *a)
return true;
}
+static bool do_crypto3i(DisasContext *s, arg_crypto3i *a, gen_helper_gvec_3 *fn)
+{
+ if (fp_access_check(s)) {
+ gen_gvec_op3_ool(s, true, a->rd, a->rn, a->rm, a->imm, fn);
+ }
+ return true;
+}
+TRANS_FEAT(SM3TT1A, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt1a)
+TRANS_FEAT(SM3TT1B, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt1b)
+TRANS_FEAT(SM3TT2A, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt2a)
+TRANS_FEAT(SM3TT2B, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt2b)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -13604,36 +13616,6 @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
vec_full_reg_size(s));
}
-/* Crypto three-reg imm2
- * 31 21 20 16 15 14 13 12 11 10 9 5 4 0
- * +-----------------------+------+-----+------+--------+------+------+
- * | 1 1 0 0 1 1 1 0 0 1 0 | Rm | 1 0 | imm2 | opcode | Rn | Rd |
- * +-----------------------+------+-----+------+--------+------+------+
- */
-static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
-{
- static gen_helper_gvec_3 * const fns[4] = {
- gen_helper_crypto_sm3tt1a, gen_helper_crypto_sm3tt1b,
- gen_helper_crypto_sm3tt2a, gen_helper_crypto_sm3tt2b,
- };
- int opcode = extract32(insn, 10, 2);
- int imm2 = extract32(insn, 12, 2);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
-
- if (!dc_isar_feature(aa64_sm3, s)) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- gen_gvec_op3_ool(s, true, rd, rn, rm, imm2, fns[opcode]);
-}
-
/* C3.6 Data processing - SIMD, inc Crypto
*
* As the decode gets a little complex we are using a table based
@@ -13663,7 +13645,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
{ 0xce800000, 0xffe00000, disas_crypto_xar },
- { 0xce408000, 0xffe0c000, disas_crypto_three_reg_imm2 },
{ 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
{ 0x5e400400, 0xdf60c400, disas_simd_scalar_three_reg_same_fp16 },
--
2.34.1
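The @crypto3i format in this patch packs the same fields the removed hand-written decoder pulled out with extract32(). A minimal standalone sketch of that extraction (the struct and function names are illustrative, not QEMU's):

```c
#include <stdint.h>

/* Same semantics as QEMU's extract32(): 'length' bits starting at 'start'. */
static inline uint32_t extract32(uint32_t value, int start, int length)
{
    return (value >> start) & (~0u >> (32 - length));
}

/* Fields of the "crypto three-reg imm2" encoding, as in @crypto3i. */
struct crypto3i {
    int rd, rn, rm, imm, opcode;
};

static struct crypto3i decode_crypto3i(uint32_t insn)
{
    struct crypto3i a;
    a.rd = extract32(insn, 0, 5);
    a.rn = extract32(insn, 5, 5);
    a.opcode = extract32(insn, 10, 2);  /* selects SM3TT1A/1B/2A/2B */
    a.imm = extract32(insn, 12, 2);
    a.rm = extract32(insn, 16, 5);
    return a;
}
```

For example, 0xCE41A064 is an SM3TT1A encoding with Rm=1, imm2=2, Rn=3, Rd=4.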
* [PULL 22/42] target/arm: Convert XAR to decodetree
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 4 ++++
target/arm/tcg/translate-a64.c | 43 +++++++++++-----------------------
2 files changed, 18 insertions(+), 29 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 1292312a7f9..7f354af25d3 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -654,3 +654,7 @@ SM3TT1A 11001110 010 ..... 10 .. 00 ..... ..... @crypto3i
SM3TT1B 11001110 010 ..... 10 .. 01 ..... ..... @crypto3i
SM3TT2A 11001110 010 ..... 10 .. 10 ..... ..... @crypto3i
SM3TT2B 11001110 010 ..... 10 .. 11 ..... ..... @crypto3i
+
+### Cryptographic XAR
+
+XAR 1100 1110 100 rm:5 imm:6 rn:5 rd:5
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index cf3a7dfa99f..75f1e6a7b90 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4688,6 +4688,20 @@ TRANS_FEAT(SM3TT1B, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt1b)
TRANS_FEAT(SM3TT2A, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt2a)
TRANS_FEAT(SM3TT2B, aa64_sm3, do_crypto3i, a, gen_helper_crypto_sm3tt2b)
+static bool trans_XAR(DisasContext *s, arg_XAR *a)
+{
+ if (!dc_isar_feature(aa64_sha3, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_xar(MO_64, vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm), a->imm, 16,
+ vec_full_reg_size(s));
+ }
+ return true;
+}
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -13588,34 +13602,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
}
-/* Crypto XAR
- * 31 21 20 16 15 10 9 5 4 0
- * +-----------------------+------+--------+------+------+
- * | 1 1 0 0 1 1 1 0 1 0 0 | Rm | imm6 | Rn | Rd |
- * +-----------------------+------+--------+------+------+
- */
-static void disas_crypto_xar(DisasContext *s, uint32_t insn)
-{
- int rm = extract32(insn, 16, 5);
- int imm6 = extract32(insn, 10, 6);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
-
- if (!dc_isar_feature(aa64_sha3, s)) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- gen_gvec_xar(MO_64, vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm), imm6, 16,
- vec_full_reg_size(s));
-}
-
/* C3.6 Data processing - SIMD, inc Crypto
*
* As the decode gets a little complex we are using a table based
@@ -13644,7 +13630,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0xce800000, 0xffe00000, disas_crypto_xar },
{ 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
{ 0x5e400400, 0xdf60c400, disas_simd_scalar_three_reg_same_fp16 },
--
2.34.1
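The new trans_XAR() emits the same operation as the removed decoder via gen_gvec_xar(): per 64-bit lane, XAR is an XOR followed by a rotate right by imm6. A plain-C sketch of one lane (illustrative, not the QEMU helper):

```c
#include <stdint.h>

/* XAR Vd.2D, Vn.2D, Vm.2D, #imm6 -- one 64-bit lane. */
static uint64_t xar64(uint64_t n, uint64_t m, unsigned imm6)
{
    uint64_t t = n ^ m;
    imm6 &= 63;
    return imm6 ? (t >> imm6) | (t << (64 - imm6)) : t;
}
```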
* [PULL 23/42] target/arm: Convert Advanced SIMD copy to decodetree
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 13 +
target/arm/tcg/translate-a64.c | 426 +++++++++++----------------------
2 files changed, 152 insertions(+), 287 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 7f354af25d3..d5bfeae7a82 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -658,3 +658,16 @@ SM3TT2B 11001110 010 ..... 10 .. 11 ..... ..... @crypto3i
### Cryptographic XAR
XAR 1100 1110 100 rm:5 imm:6 rn:5 rd:5
+
+### Advanced SIMD scalar copy
+
+DUP_element_s 0101 1110 000 imm:5 0 0000 1 rn:5 rd:5
+
+### Advanced SIMD copy
+
+DUP_element_v 0 q:1 00 1110 000 imm:5 0 0000 1 rn:5 rd:5
+DUP_general 0 q:1 00 1110 000 imm:5 0 0001 1 rn:5 rd:5
+INS_general 0 1 00 1110 000 imm:5 0 0011 1 rn:5 rd:5
+SMOV 0 q:1 00 1110 000 imm:5 0 0101 1 rn:5 rd:5
+UMOV 0 q:1 00 1110 000 imm:5 0 0111 1 rn:5 rd:5
+INS_element 0 1 10 1110 000 di:5 0 si:4 1 rn:5 rd:5
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 75f1e6a7b90..1a12bf22fd8 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4702,6 +4702,145 @@ static bool trans_XAR(DisasContext *s, arg_XAR *a)
return true;
}
+/*
+ * Advanced SIMD copy
+ */
+
+static bool decode_esz_idx(int imm, MemOp *pesz, unsigned *pidx)
+{
+ unsigned esz = ctz32(imm);
+ if (esz <= MO_64) {
+ *pesz = esz;
+ *pidx = imm >> (esz + 1);
+ return true;
+ }
+ return false;
+}
+
+static bool trans_DUP_element_s(DisasContext *s, arg_DUP_element_s *a)
+{
+ MemOp esz;
+ unsigned idx;
+
+ if (!decode_esz_idx(a->imm, &esz, &idx)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ /*
+ * This instruction just extracts the specified element and
+ * zero-extends it into the bottom of the destination register.
+ */
+ TCGv_i64 tmp = tcg_temp_new_i64();
+ read_vec_element(s, tmp, a->rn, idx, esz);
+ write_fp_dreg(s, a->rd, tmp);
+ }
+ return true;
+}
+
+static bool trans_DUP_element_v(DisasContext *s, arg_DUP_element_v *a)
+{
+ MemOp esz;
+ unsigned idx;
+
+ if (!decode_esz_idx(a->imm, &esz, &idx)) {
+ return false;
+ }
+ if (esz == MO_64 && !a->q) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ tcg_gen_gvec_dup_mem(esz, vec_full_reg_offset(s, a->rd),
+ vec_reg_offset(s, a->rn, idx, esz),
+ a->q ? 16 : 8, vec_full_reg_size(s));
+ }
+ return true;
+}
+
+static bool trans_DUP_general(DisasContext *s, arg_DUP_general *a)
+{
+ MemOp esz;
+ unsigned idx;
+
+ if (!decode_esz_idx(a->imm, &esz, &idx)) {
+ return false;
+ }
+ if (esz == MO_64 && !a->q) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ tcg_gen_gvec_dup_i64(esz, vec_full_reg_offset(s, a->rd),
+ a->q ? 16 : 8, vec_full_reg_size(s),
+ cpu_reg(s, a->rn));
+ }
+ return true;
+}
+
+static bool do_smov_umov(DisasContext *s, arg_SMOV *a, MemOp is_signed)
+{
+ MemOp esz;
+ unsigned idx;
+
+ if (!decode_esz_idx(a->imm, &esz, &idx)) {
+ return false;
+ }
+ if (is_signed) {
+ if (esz == MO_64 || (esz == MO_32 && !a->q)) {
+ return false;
+ }
+ } else {
+ if (esz == MO_64 ? !a->q : a->q) {
+ return false;
+ }
+ }
+ if (fp_access_check(s)) {
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
+ read_vec_element(s, tcg_rd, a->rn, idx, esz | is_signed);
+ if (is_signed && !a->q) {
+ tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
+ }
+ }
+ return true;
+}
+
+TRANS(SMOV, do_smov_umov, a, MO_SIGN)
+TRANS(UMOV, do_smov_umov, a, 0)
+
+static bool trans_INS_general(DisasContext *s, arg_INS_general *a)
+{
+ MemOp esz;
+ unsigned idx;
+
+ if (!decode_esz_idx(a->imm, &esz, &idx)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ write_vec_element(s, cpu_reg(s, a->rn), a->rd, idx, esz);
+ clear_vec_high(s, true, a->rd);
+ }
+ return true;
+}
+
+static bool trans_INS_element(DisasContext *s, arg_INS_element *a)
+{
+ MemOp esz;
+ unsigned didx, sidx;
+
+ if (!decode_esz_idx(a->di, &esz, &didx)) {
+ return false;
+ }
+ sidx = a->si >> esz;
+ if (fp_access_check(s)) {
+ TCGv_i64 tmp = tcg_temp_new_i64();
+
+ read_vec_element(s, tmp, a->rn, sidx, esz);
+ write_vec_element(s, tmp, a->rd, didx, esz);
+
+ /* INS is considered a 128-bit write for SVE. */
+ clear_vec_high(s, true, a->rd);
+ }
+ return true;
+}
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -7760,268 +7899,6 @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
write_fp_dreg(s, rd, tcg_res);
}
-/* DUP (Element, Vector)
- *
- * 31 30 29 21 20 16 15 10 9 5 4 0
- * +---+---+-------------------+--------+-------------+------+------+
- * | 0 | Q | 0 0 1 1 1 0 0 0 0 | imm5 | 0 0 0 0 0 1 | Rn | Rd |
- * +---+---+-------------------+--------+-------------+------+------+
- *
- * size: encoded in imm5 (see ARM ARM LowestSetBit())
- */
-static void handle_simd_dupe(DisasContext *s, int is_q, int rd, int rn,
- int imm5)
-{
- int size = ctz32(imm5);
- int index;
-
- if (size > 3 || (size == 3 && !is_q)) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- index = imm5 >> (size + 1);
- tcg_gen_gvec_dup_mem(size, vec_full_reg_offset(s, rd),
- vec_reg_offset(s, rn, index, size),
- is_q ? 16 : 8, vec_full_reg_size(s));
-}
-
-/* DUP (element, scalar)
- * 31 21 20 16 15 10 9 5 4 0
- * +-----------------------+--------+-------------+------+------+
- * | 0 1 0 1 1 1 1 0 0 0 0 | imm5 | 0 0 0 0 0 1 | Rn | Rd |
- * +-----------------------+--------+-------------+------+------+
- */
-static void handle_simd_dupes(DisasContext *s, int rd, int rn,
- int imm5)
-{
- int size = ctz32(imm5);
- int index;
- TCGv_i64 tmp;
-
- if (size > 3) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- index = imm5 >> (size + 1);
-
- /* This instruction just extracts the specified element and
- * zero-extends it into the bottom of the destination register.
- */
- tmp = tcg_temp_new_i64();
- read_vec_element(s, tmp, rn, index, size);
- write_fp_dreg(s, rd, tmp);
-}
-
-/* DUP (General)
- *
- * 31 30 29 21 20 16 15 10 9 5 4 0
- * +---+---+-------------------+--------+-------------+------+------+
- * | 0 | Q | 0 0 1 1 1 0 0 0 0 | imm5 | 0 0 0 0 1 1 | Rn | Rd |
- * +---+---+-------------------+--------+-------------+------+------+
- *
- * size: encoded in imm5 (see ARM ARM LowestSetBit())
- */
-static void handle_simd_dupg(DisasContext *s, int is_q, int rd, int rn,
- int imm5)
-{
- int size = ctz32(imm5);
- uint32_t dofs, oprsz, maxsz;
-
- if (size > 3 || ((size == 3) && !is_q)) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- dofs = vec_full_reg_offset(s, rd);
- oprsz = is_q ? 16 : 8;
- maxsz = vec_full_reg_size(s);
-
- tcg_gen_gvec_dup_i64(size, dofs, oprsz, maxsz, cpu_reg(s, rn));
-}
-
-/* INS (Element)
- *
- * 31 21 20 16 15 14 11 10 9 5 4 0
- * +-----------------------+--------+------------+---+------+------+
- * | 0 1 1 0 1 1 1 0 0 0 0 | imm5 | 0 | imm4 | 1 | Rn | Rd |
- * +-----------------------+--------+------------+---+------+------+
- *
- * size: encoded in imm5 (see ARM ARM LowestSetBit())
- * index: encoded in imm5<4:size+1>
- */
-static void handle_simd_inse(DisasContext *s, int rd, int rn,
- int imm4, int imm5)
-{
- int size = ctz32(imm5);
- int src_index, dst_index;
- TCGv_i64 tmp;
-
- if (size > 3) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- dst_index = extract32(imm5, 1+size, 5);
- src_index = extract32(imm4, size, 4);
-
- tmp = tcg_temp_new_i64();
-
- read_vec_element(s, tmp, rn, src_index, size);
- write_vec_element(s, tmp, rd, dst_index, size);
-
- /* INS is considered a 128-bit write for SVE. */
- clear_vec_high(s, true, rd);
-}
-
-
-/* INS (General)
- *
- * 31 21 20 16 15 10 9 5 4 0
- * +-----------------------+--------+-------------+------+------+
- * | 0 1 0 0 1 1 1 0 0 0 0 | imm5 | 0 0 0 1 1 1 | Rn | Rd |
- * +-----------------------+--------+-------------+------+------+
- *
- * size: encoded in imm5 (see ARM ARM LowestSetBit())
- * index: encoded in imm5<4:size+1>
- */
-static void handle_simd_insg(DisasContext *s, int rd, int rn, int imm5)
-{
- int size = ctz32(imm5);
- int idx;
-
- if (size > 3) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- idx = extract32(imm5, 1 + size, 4 - size);
- write_vec_element(s, cpu_reg(s, rn), rd, idx, size);
-
- /* INS is considered a 128-bit write for SVE. */
- clear_vec_high(s, true, rd);
-}
-
-/*
- * UMOV (General)
- * SMOV (General)
- *
- * 31 30 29 21 20 16 15 12 10 9 5 4 0
- * +---+---+-------------------+--------+-------------+------+------+
- * | 0 | Q | 0 0 1 1 1 0 0 0 0 | imm5 | 0 0 1 U 1 1 | Rn | Rd |
- * +---+---+-------------------+--------+-------------+------+------+
- *
- * U: unsigned when set
- * size: encoded in imm5 (see ARM ARM LowestSetBit())
- */
-static void handle_simd_umov_smov(DisasContext *s, int is_q, int is_signed,
- int rn, int rd, int imm5)
-{
- int size = ctz32(imm5);
- int element;
- TCGv_i64 tcg_rd;
-
- /* Check for UnallocatedEncodings */
- if (is_signed) {
- if (size > 2 || (size == 2 && !is_q)) {
- unallocated_encoding(s);
- return;
- }
- } else {
- if (size > 3
- || (size < 3 && is_q)
- || (size == 3 && !is_q)) {
- unallocated_encoding(s);
- return;
- }
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- element = extract32(imm5, 1+size, 4);
-
- tcg_rd = cpu_reg(s, rd);
- read_vec_element(s, tcg_rd, rn, element, size | (is_signed ? MO_SIGN : 0));
- if (is_signed && !is_q) {
- tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
- }
-}
-
-/* AdvSIMD copy
- * 31 30 29 28 21 20 16 15 14 11 10 9 5 4 0
- * +---+---+----+-----------------+------+---+------+---+------+------+
- * | 0 | Q | op | 0 1 1 1 0 0 0 0 | imm5 | 0 | imm4 | 1 | Rn | Rd |
- * +---+---+----+-----------------+------+---+------+---+------+------+
- */
-static void disas_simd_copy(DisasContext *s, uint32_t insn)
-{
- int rd = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int imm4 = extract32(insn, 11, 4);
- int op = extract32(insn, 29, 1);
- int is_q = extract32(insn, 30, 1);
- int imm5 = extract32(insn, 16, 5);
-
- if (op) {
- if (is_q) {
- /* INS (element) */
- handle_simd_inse(s, rd, rn, imm4, imm5);
- } else {
- unallocated_encoding(s);
- }
- } else {
- switch (imm4) {
- case 0:
- /* DUP (element - vector) */
- handle_simd_dupe(s, is_q, rd, rn, imm5);
- break;
- case 1:
- /* DUP (general) */
- handle_simd_dupg(s, is_q, rd, rn, imm5);
- break;
- case 3:
- if (is_q) {
- /* INS (general) */
- handle_simd_insg(s, rd, rn, imm5);
- } else {
- unallocated_encoding(s);
- }
- break;
- case 5:
- case 7:
- /* UMOV/SMOV (is_q indicates 32/64; imm4 indicates signedness) */
- handle_simd_umov_smov(s, is_q, (imm4 == 5), rn, rd, imm5);
- break;
- default:
- unallocated_encoding(s);
- break;
- }
- }
-}
-
/* AdvSIMD modified immediate
* 31 30 29 28 19 18 16 15 12 11 10 9 5 4 0
* +---+---+----+---------------------+-----+-------+----+---+-------+------+
@@ -8085,29 +7962,6 @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
}
}
-/* AdvSIMD scalar copy
- * 31 30 29 28 21 20 16 15 14 11 10 9 5 4 0
- * +-----+----+-----------------+------+---+------+---+------+------+
- * | 0 1 | op | 1 1 1 1 0 0 0 0 | imm5 | 0 | imm4 | 1 | Rn | Rd |
- * +-----+----+-----------------+------+---+------+---+------+------+
- */
-static void disas_simd_scalar_copy(DisasContext *s, uint32_t insn)
-{
- int rd = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int imm4 = extract32(insn, 11, 4);
- int imm5 = extract32(insn, 16, 5);
- int op = extract32(insn, 29, 1);
-
- if (op != 0 || imm4 != 0) {
- unallocated_encoding(s);
- return;
- }
-
- /* DUP (element, scalar) */
- handle_simd_dupes(s, rd, rn, imm5);
-}
-
/* AdvSIMD scalar pairwise
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
* +-----+---+-----------+------+-----------+--------+-----+------+------+
@@ -13614,7 +13468,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x0e200000, 0x9f200c00, disas_simd_three_reg_diff },
{ 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
{ 0x0e300800, 0x9f3e0c00, disas_simd_across_lanes },
- { 0x0e000400, 0x9fe08400, disas_simd_copy },
{ 0x0f000000, 0x9f000400, disas_simd_indexed }, /* vector indexed */
/* simd_mod_imm decode is a subset of simd_shift_imm, so must precede it */
{ 0x0f000400, 0x9ff80400, disas_simd_mod_imm },
@@ -13627,7 +13480,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e200000, 0xdf200c00, disas_simd_scalar_three_reg_diff },
{ 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
{ 0x5e300800, 0xdf3e0c00, disas_simd_scalar_pairwise },
- { 0x5e000400, 0xdfe08400, disas_simd_scalar_copy },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
{ 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
--
2.34.1
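The imm5 field decoded by decode_esz_idx() in this patch follows the Arm ARM LowestSetBit() convention: the position of the lowest set bit gives the element size, and the bits above it give the element index. A standalone sketch of the same logic (function name matches the patch; the rest is illustrative):

```c
#include <stdbool.h>

/* esz: 0=byte, 1=halfword, 2=word, 3=doubleword (MO_8..MO_64). */
static bool decode_esz_idx(unsigned imm5, unsigned *esz, unsigned *idx)
{
    if (imm5 == 0) {
        return false;           /* no set bit: unallocated encoding */
    }
    unsigned e = __builtin_ctz(imm5);
    if (e > 3) {
        return false;           /* element larger than 64 bits */
    }
    *esz = e;
    *idx = imm5 >> (e + 1);     /* remaining high bits are the index */
    return true;
}
```

For example, imm5 = 0b10100 selects a 32-bit element (lowest set bit at position 2) at index 2.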
* [PULL 24/42] target/arm: Convert FMULX to decodetree
From: Richard Henderson <richard.henderson@linaro.org>
Convert all forms (scalar, vector, scalar indexed, vector indexed),
which allows us to remove switch table entries elsewhere.
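For context, FMULX differs from an ordinary FMUL only in its handling of infinity times zero, which it maps to ±2.0 instead of the default NaN. A scalar double-precision sketch of that behaviour (QEMU's real helpers additionally honour the FPCR and cover half and single precision):

```c
#include <math.h>

/* FMULX: as FMUL, except (±inf * ±0) returns ±2.0 instead of NaN. */
static double fmulx(double a, double b)
{
    if ((isinf(a) && b == 0.0) || (a == 0.0 && isinf(b))) {
        return (signbit(a) != signbit(b)) ? -2.0 : 2.0;
    }
    return a * b;
}
```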
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/helper-a64.h | 8 ++
target/arm/tcg/a64.decode | 45 +++++++
target/arm/tcg/translate-a64.c | 221 +++++++++++++++++++++++++++------
target/arm/tcg/vec_helper.c | 39 +++---
4 files changed, 259 insertions(+), 54 deletions(-)
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index 05181653999..b79751a7170 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -132,3 +132,11 @@ DEF_HELPER_4(cpye, void, env, i32, i32, i32)
DEF_HELPER_4(cpyfp, void, env, i32, i32, i32)
DEF_HELPER_4(cpyfm, void, env, i32, i32, i32)
DEF_HELPER_4(cpyfe, void, env, i32, i32, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fmulx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fmulx_idx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_idx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmulx_idx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index d5bfeae7a82..2e0e01be017 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -20,21 +20,44 @@
#
%rd 0:5
+%esz_sd 22:1 !function=plus_2
+%hl 11:1 21:1
+%hlm 11:1 20:2
&r rn
&ri rd imm
&rri_sf rd rn imm sf
&i imm
+&rrr_e rd rn rm esz
+&rrx_e rd rn rm idx esz
&qrr_e q rd rn esz
&qrrr_e q rd rn rm esz
+&qrrx_e q rd rn rm idx esz
&qrrrr_e q rd rn rm ra esz
+@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
+@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
+
+@rrx_h ........ .. .. rm:4 .... . . rn:5 rd:5 &rrx_e esz=1 idx=%hlm
+@rrx_s ........ .. . rm:5 .... . . rn:5 rd:5 &rrx_e esz=2 idx=%hl
+@rrx_d ........ .. . rm:5 .... idx:1 . rn:5 rd:5 &rrx_e esz=3
+
@rr_q1e0 ........ ........ ...... rn:5 rd:5 &qrr_e q=1 esz=0
@r2r_q1e0 ........ ........ ...... rm:5 rd:5 &qrrr_e rn=%rd q=1 esz=0
@rrr_q1e0 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=0
@rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3
@rrrr_q1e3 ........ ... rm:5 . ra:5 rn:5 rd:5 &qrrrr_e q=1 esz=3
+@qrrr_h . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=1
+@qrrr_sd . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=%esz_sd
+
+@qrrx_h . q:1 .. .... .. .. rm:4 .... . . rn:5 rd:5 \
+ &qrrx_e esz=1 idx=%hlm
+@qrrx_s . q:1 .. .... .. . rm:5 .... . . rn:5 rd:5 \
+ &qrrx_e esz=2 idx=%hl
+@qrrx_d . q:1 .. .... .. . rm:5 .... idx:1 . rn:5 rd:5 \
+ &qrrx_e esz=3
+
### Data Processing - Immediate
# PC-rel addressing
@@ -671,3 +694,25 @@ INS_general 0 1 00 1110 000 imm:5 0 0011 1 rn:5 rd:5
SMOV 0 q:1 00 1110 000 imm:5 0 0101 1 rn:5 rd:5
UMOV 0 q:1 00 1110 000 imm:5 0 0111 1 rn:5 rd:5
INS_element 0 1 10 1110 000 di:5 0 si:4 1 rn:5 rd:5
+
+### Advanced SIMD scalar three same
+
+FMULX_s 0101 1110 010 ..... 00011 1 ..... ..... @rrr_h
+FMULX_s 0101 1110 0.1 ..... 11011 1 ..... ..... @rrr_sd
+
+### Advanced SIMD three same
+
+FMULX_v 0.00 1110 010 ..... 00011 1 ..... ..... @qrrr_h
+FMULX_v 0.00 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
+
+### Advanced SIMD scalar x indexed element
+
+FMULX_si 0111 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
+FMULX_si 0111 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
+FMULX_si 0111 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
+
+### Advanced SIMD vector x indexed element
+
+FMULX_vi 0.10 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
+FMULX_vi 0.10 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
+FMULX_vi 0.10 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 1a12bf22fd8..8cbe6cd70f2 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4841,6 +4841,178 @@ static bool trans_INS_element(DisasContext *s, arg_INS_element *a)
return true;
}
+/*
+ * Advanced SIMD three same
+ */
+
+typedef struct FPScalar {
+ void (*gen_h)(TCGv_i32, TCGv_i32, TCGv_i32, TCGv_ptr);
+ void (*gen_s)(TCGv_i32, TCGv_i32, TCGv_i32, TCGv_ptr);
+ void (*gen_d)(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_ptr);
+} FPScalar;
+
+static bool do_fp3_scalar(DisasContext *s, arg_rrr_e *a, const FPScalar *f)
+{
+ switch (a->esz) {
+ case MO_64:
+ if (fp_access_check(s)) {
+ TCGv_i64 t0 = read_fp_dreg(s, a->rn);
+ TCGv_i64 t1 = read_fp_dreg(s, a->rm);
+ f->gen_d(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
+ write_fp_dreg(s, a->rd, t0);
+ }
+ break;
+ case MO_32:
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = read_fp_sreg(s, a->rn);
+ TCGv_i32 t1 = read_fp_sreg(s, a->rm);
+ f->gen_s(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = read_fp_hreg(s, a->rn);
+ TCGv_i32 t1 = read_fp_hreg(s, a->rm);
+ f->gen_h(t0, t0, t1, fpstatus_ptr(FPST_FPCR_F16));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ default:
+ return false;
+ }
+ return true;
+}
+
+static const FPScalar f_scalar_fmulx = {
+ gen_helper_advsimd_mulxh,
+ gen_helper_vfp_mulxs,
+ gen_helper_vfp_mulxd,
+};
+TRANS(FMULX_s, do_fp3_scalar, a, &f_scalar_fmulx)
+
+static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
+ gen_helper_gvec_3_ptr * const fns[3])
+{
+ MemOp esz = a->esz;
+
+ switch (esz) {
+ case MO_64:
+ if (!a->q) {
+ return false;
+ }
+ break;
+ case MO_32:
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ break;
+ default:
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
+ esz == MO_16, 0, fns[esz - 1]);
+ }
+ return true;
+}
+
+static gen_helper_gvec_3_ptr * const f_vector_fmulx[3] = {
+ gen_helper_gvec_fmulx_h,
+ gen_helper_gvec_fmulx_s,
+ gen_helper_gvec_fmulx_d,
+};
+TRANS(FMULX_v, do_fp3_vector, a, f_vector_fmulx)
+
+/*
+ * Advanced SIMD scalar/vector x indexed element
+ */
+
+static bool do_fp3_scalar_idx(DisasContext *s, arg_rrx_e *a, const FPScalar *f)
+{
+ switch (a->esz) {
+ case MO_64:
+ if (fp_access_check(s)) {
+ TCGv_i64 t0 = read_fp_dreg(s, a->rn);
+ TCGv_i64 t1 = tcg_temp_new_i64();
+
+ read_vec_element(s, t1, a->rm, a->idx, MO_64);
+ f->gen_d(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
+ write_fp_dreg(s, a->rd, t0);
+ }
+ break;
+ case MO_32:
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = read_fp_sreg(s, a->rn);
+ TCGv_i32 t1 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t1, a->rm, a->idx, MO_32);
+ f->gen_s(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = read_fp_hreg(s, a->rn);
+ TCGv_i32 t1 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t1, a->rm, a->idx, MO_16);
+ f->gen_h(t0, t0, t1, fpstatus_ptr(FPST_FPCR_F16));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ return true;
+}
+
+TRANS(FMULX_si, do_fp3_scalar_idx, a, &f_scalar_fmulx)
+
+static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
+ gen_helper_gvec_3_ptr * const fns[3])
+{
+ MemOp esz = a->esz;
+
+ switch (esz) {
+ case MO_64:
+ if (!a->q) {
+ return false;
+ }
+ break;
+ case MO_32:
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_op3_fpst(s, a->q, a->rd, a->rn, a->rm,
+ esz == MO_16, a->idx, fns[esz - 1]);
+ }
+ return true;
+}
+
+static gen_helper_gvec_3_ptr * const f_vector_idx_fmulx[3] = {
+ gen_helper_gvec_fmulx_idx_h,
+ gen_helper_gvec_fmulx_idx_s,
+ gen_helper_gvec_fmulx_idx_d,
+};
+TRANS(FMULX_vi, do_fp3_vector_idx, a, f_vector_idx_fmulx)
+
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -9011,9 +9183,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x1a: /* FADD */
gen_helper_vfp_addd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1b: /* FMULX */
- gen_helper_vfp_mulxd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9058,6 +9227,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_neon_acgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x1b: /* FMULX */
g_assert_not_reached();
}
@@ -9084,9 +9254,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x1a: /* FADD */
gen_helper_vfp_adds(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1b: /* FMULX */
- gen_helper_vfp_mulxs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9134,6 +9301,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_neon_acgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x1b: /* FMULX */
g_assert_not_reached();
}
@@ -9172,7 +9340,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
/* Floating point: U, size[1] and opcode indicate operation */
int fpopcode = opcode | (extract32(size, 1, 1) << 5) | (u << 6);
switch (fpopcode) {
- case 0x1b: /* FMULX */
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
case 0x5d: /* FACGE */
@@ -9183,6 +9350,7 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
case 0x7a: /* FABD */
break;
default:
+ case 0x1b: /* FMULX */
unallocated_encoding(s);
return;
}
@@ -9335,7 +9503,6 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
TCGv_i32 tcg_res;
switch (fpopcode) {
- case 0x03: /* FMULX */
case 0x04: /* FCMEQ (reg) */
case 0x07: /* FRECPS */
case 0x0f: /* FRSQRTS */
@@ -9346,6 +9513,7 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
case 0x1d: /* FACGT */
break;
default:
+ case 0x03: /* FMULX */
unallocated_encoding(s);
return;
}
@@ -9365,9 +9533,6 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
tcg_res = tcg_temp_new_i32();
switch (fpopcode) {
- case 0x03: /* FMULX */
- gen_helper_advsimd_mulxh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x04: /* FCMEQ (reg) */
gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9394,6 +9559,7 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x03: /* FMULX */
g_assert_not_reached();
}
@@ -11051,7 +11217,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_32,
rn, rm, rd);
return;
- case 0x1b: /* FMULX */
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
case 0x5d: /* FACGE */
@@ -11097,6 +11262,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
return;
default:
+ case 0x1b: /* FMULX */
unallocated_encoding(s);
return;
}
@@ -11441,7 +11607,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x0: /* FMAXNM */
case 0x1: /* FMLA */
case 0x2: /* FADD */
- case 0x3: /* FMULX */
case 0x4: /* FCMEQ */
case 0x6: /* FMAX */
case 0x7: /* FRECPS */
@@ -11467,6 +11632,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
pairwise = true;
break;
default:
+ case 0x3: /* FMULX */
unallocated_encoding(s);
return;
}
@@ -11543,9 +11709,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x2: /* FADD */
gen_helper_advsimd_addh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x3: /* FMULX */
- gen_helper_advsimd_mulxh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x4: /* FCMEQ */
gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -11597,6 +11760,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x3: /* FMULX */
g_assert_not_reached();
}
@@ -12816,7 +12980,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
case 0x01: /* FMLA */
case 0x05: /* FMLS */
case 0x09: /* FMUL */
- case 0x19: /* FMULX */
is_fp = 1;
break;
case 0x1d: /* SQRDMLAH */
@@ -12885,6 +13048,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
/* is_fp, but we pass tcg_env not fp_status. */
break;
default:
+ case 0x19: /* FMULX */
unallocated_encoding(s);
return;
}
@@ -13108,10 +13272,8 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
case 0x09: /* FMUL */
gen_helper_vfp_muld(tcg_res, tcg_op, tcg_idx, fpst);
break;
- case 0x19: /* FMULX */
- gen_helper_vfp_mulxd(tcg_res, tcg_op, tcg_idx, fpst);
- break;
default:
+ case 0x19: /* FMULX */
g_assert_not_reached();
}
@@ -13224,24 +13386,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
g_assert_not_reached();
}
break;
- case 0x19: /* FMULX */
- switch (size) {
- case 1:
- if (is_scalar) {
- gen_helper_advsimd_mulxh(tcg_res, tcg_op,
- tcg_idx, fpst);
- } else {
- gen_helper_advsimd_mulx2h(tcg_res, tcg_op,
- tcg_idx, fpst);
- }
- break;
- case 2:
- gen_helper_vfp_mulxs(tcg_res, tcg_op, tcg_idx, fpst);
- break;
- default:
- g_assert_not_reached();
- }
- break;
case 0x0c: /* SQDMULH */
if (size == 1) {
gen_helper_neon_qdmulh_s16(tcg_res, tcg_env,
@@ -13283,6 +13427,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
break;
default:
+ case 0x19: /* FMULX */
g_assert_not_reached();
}
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 1f93510b85c..86845819236 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -1248,6 +1248,9 @@ DO_3OP(gvec_rsqrts_nf_h, float16_rsqrts_nf, float16)
DO_3OP(gvec_rsqrts_nf_s, float32_rsqrts_nf, float32)
#ifdef TARGET_AARCH64
+DO_3OP(gvec_fmulx_h, helper_advsimd_mulxh, float16)
+DO_3OP(gvec_fmulx_s, helper_vfp_mulxs, float32)
+DO_3OP(gvec_fmulx_d, helper_vfp_mulxd, float64)
DO_3OP(gvec_recps_h, helper_recpsf_f16, float16)
DO_3OP(gvec_recps_s, helper_recpsf_f32, float32)
@@ -1385,7 +1388,7 @@ DO_MLA_IDX(gvec_mls_idx_d, uint64_t, -, H8)
#undef DO_MLA_IDX
-#define DO_FMUL_IDX(NAME, ADD, TYPE, H) \
+#define DO_FMUL_IDX(NAME, ADD, MUL, TYPE, H) \
void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
{ \
intptr_t i, j, oprsz = simd_oprsz(desc); \
@@ -1395,33 +1398,37 @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
for (i = 0; i < oprsz / sizeof(TYPE); i += segment) { \
TYPE mm = m[H(i + idx)]; \
for (j = 0; j < segment; j++) { \
- d[i + j] = TYPE##_##ADD(d[i + j], \
- TYPE##_mul(n[i + j], mm, stat), stat); \
+ d[i + j] = ADD(d[i + j], MUL(n[i + j], mm, stat), stat); \
} \
} \
clear_tail(d, oprsz, simd_maxsz(desc)); \
}
-#define float16_nop(N, M, S) (M)
-#define float32_nop(N, M, S) (M)
-#define float64_nop(N, M, S) (M)
+#define nop(N, M, S) (M)
-DO_FMUL_IDX(gvec_fmul_idx_h, nop, float16, H2)
-DO_FMUL_IDX(gvec_fmul_idx_s, nop, float32, H4)
-DO_FMUL_IDX(gvec_fmul_idx_d, nop, float64, H8)
+DO_FMUL_IDX(gvec_fmul_idx_h, nop, float16_mul, float16, H2)
+DO_FMUL_IDX(gvec_fmul_idx_s, nop, float32_mul, float32, H4)
+DO_FMUL_IDX(gvec_fmul_idx_d, nop, float64_mul, float64, H8)
+
+#ifdef TARGET_AARCH64
+
+DO_FMUL_IDX(gvec_fmulx_idx_h, nop, helper_advsimd_mulxh, float16, H2)
+DO_FMUL_IDX(gvec_fmulx_idx_s, nop, helper_vfp_mulxs, float32, H4)
+DO_FMUL_IDX(gvec_fmulx_idx_d, nop, helper_vfp_mulxd, float64, H8)
+
+#endif
+
+#undef nop
/*
* Non-fused multiply-accumulate operations, for Neon. NB that unlike
* the fused ops below they assume accumulate both from and into Vd.
*/
-DO_FMUL_IDX(gvec_fmla_nf_idx_h, add, float16, H2)
-DO_FMUL_IDX(gvec_fmla_nf_idx_s, add, float32, H4)
-DO_FMUL_IDX(gvec_fmls_nf_idx_h, sub, float16, H2)
-DO_FMUL_IDX(gvec_fmls_nf_idx_s, sub, float32, H4)
+DO_FMUL_IDX(gvec_fmla_nf_idx_h, float16_add, float16_mul, float16, H2)
+DO_FMUL_IDX(gvec_fmla_nf_idx_s, float32_add, float32_mul, float32, H4)
+DO_FMUL_IDX(gvec_fmls_nf_idx_h, float16_sub, float16_mul, float16, H2)
+DO_FMUL_IDX(gvec_fmls_nf_idx_s, float32_sub, float32_mul, float32, H4)
-#undef float16_nop
-#undef float32_nop
-#undef float64_nop
#undef DO_FMUL_IDX
#define DO_FMLA_IDX(NAME, TYPE, H) \
--
2.34.1
* [PULL 25/42] target/arm: Convert FADD, FSUB, FDIV, FMUL to decodetree
From: Peter Maydell @ 2024-05-28 14:07 UTC
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/helper-a64.h | 4 +
target/arm/tcg/translate.h | 5 +
target/arm/tcg/a64.decode | 27 +++++
target/arm/tcg/translate-a64.c | 205 +++++++++++++++++----------------
target/arm/tcg/vec_helper.c | 4 +
5 files changed, 143 insertions(+), 102 deletions(-)
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index b79751a7170..371388f61b5 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -133,6 +133,10 @@ DEF_HELPER_4(cpyfp, void, env, i32, i32, i32)
DEF_HELPER_4(cpyfm, void, env, i32, i32, i32)
DEF_HELPER_4(cpyfe, void, env, i32, i32, i32)
+DEF_HELPER_FLAGS_5(gvec_fdiv_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fdiv_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fdiv_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_5(gvec_fmulx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmulx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmulx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index 80e85096a83..ecfa242eef3 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -252,6 +252,11 @@ static inline int shl_12(DisasContext *s, int x)
return x << 12;
}
+static inline int xor_2(DisasContext *s, int x)
+{
+ return x ^ 2;
+}
+
static inline int neon_3same_fp_size(DisasContext *s, int x)
{
/* Convert 0==fp32, 1==fp16 into a MO_* value */
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 2e0e01be017..82daafbef52 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -21,6 +21,7 @@
%rd 0:5
%esz_sd 22:1 !function=plus_2
+%esz_hsd 22:2 !function=xor_2
%hl 11:1 21:1
%hlm 11:1 20:2
@@ -37,6 +38,7 @@
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
+@rrr_hsd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_hsd
@rrx_h ........ .. .. rm:4 .... . . rn:5 rd:5 &rrx_e esz=1 idx=%hlm
@rrx_s ........ .. . rm:5 .... . . rn:5 rd:5 &rrx_e esz=2 idx=%hl
@@ -697,22 +699,47 @@ INS_element 0 1 10 1110 000 di:5 0 si:4 1 rn:5 rd:5
### Advanced SIMD scalar three same
+FADD_s 0001 1110 ..1 ..... 0010 10 ..... ..... @rrr_hsd
+FSUB_s 0001 1110 ..1 ..... 0011 10 ..... ..... @rrr_hsd
+FDIV_s 0001 1110 ..1 ..... 0001 10 ..... ..... @rrr_hsd
+FMUL_s 0001 1110 ..1 ..... 0000 10 ..... ..... @rrr_hsd
+
FMULX_s 0101 1110 010 ..... 00011 1 ..... ..... @rrr_h
FMULX_s 0101 1110 0.1 ..... 11011 1 ..... ..... @rrr_sd
### Advanced SIMD three same
+FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
+FADD_v 0.00 1110 0.1 ..... 11010 1 ..... ..... @qrrr_sd
+
+FSUB_v 0.00 1110 110 ..... 00010 1 ..... ..... @qrrr_h
+FSUB_v 0.00 1110 1.1 ..... 11010 1 ..... ..... @qrrr_sd
+
+FDIV_v 0.10 1110 010 ..... 00111 1 ..... ..... @qrrr_h
+FDIV_v 0.10 1110 0.1 ..... 11111 1 ..... ..... @qrrr_sd
+
+FMUL_v 0.10 1110 010 ..... 00011 1 ..... ..... @qrrr_h
+FMUL_v 0.10 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
+
FMULX_v 0.00 1110 010 ..... 00011 1 ..... ..... @qrrr_h
FMULX_v 0.00 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
### Advanced SIMD scalar x indexed element
+FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
+FMUL_si 0101 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
+FMUL_si 0101 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
+
FMULX_si 0111 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
FMULX_si 0111 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
FMULX_si 0111 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
### Advanced SIMD vector x indexed element
+FMUL_vi 0.00 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
+FMUL_vi 0.00 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
+FMUL_vi 0.00 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
+
FMULX_vi 0.10 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
FMULX_vi 0.10 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
FMULX_vi 0.10 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 8cbe6cd70f2..97c3d758d62 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4887,6 +4887,34 @@ static bool do_fp3_scalar(DisasContext *s, arg_rrr_e *a, const FPScalar *f)
return true;
}
+static const FPScalar f_scalar_fadd = {
+ gen_helper_vfp_addh,
+ gen_helper_vfp_adds,
+ gen_helper_vfp_addd,
+};
+TRANS(FADD_s, do_fp3_scalar, a, &f_scalar_fadd)
+
+static const FPScalar f_scalar_fsub = {
+ gen_helper_vfp_subh,
+ gen_helper_vfp_subs,
+ gen_helper_vfp_subd,
+};
+TRANS(FSUB_s, do_fp3_scalar, a, &f_scalar_fsub)
+
+static const FPScalar f_scalar_fdiv = {
+ gen_helper_vfp_divh,
+ gen_helper_vfp_divs,
+ gen_helper_vfp_divd,
+};
+TRANS(FDIV_s, do_fp3_scalar, a, &f_scalar_fdiv)
+
+static const FPScalar f_scalar_fmul = {
+ gen_helper_vfp_mulh,
+ gen_helper_vfp_muls,
+ gen_helper_vfp_muld,
+};
+TRANS(FMUL_s, do_fp3_scalar, a, &f_scalar_fmul)
+
static const FPScalar f_scalar_fmulx = {
gen_helper_advsimd_mulxh,
gen_helper_vfp_mulxs,
@@ -4922,6 +4950,34 @@ static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
return true;
}
+static gen_helper_gvec_3_ptr * const f_vector_fadd[3] = {
+ gen_helper_gvec_fadd_h,
+ gen_helper_gvec_fadd_s,
+ gen_helper_gvec_fadd_d,
+};
+TRANS(FADD_v, do_fp3_vector, a, f_vector_fadd)
+
+static gen_helper_gvec_3_ptr * const f_vector_fsub[3] = {
+ gen_helper_gvec_fsub_h,
+ gen_helper_gvec_fsub_s,
+ gen_helper_gvec_fsub_d,
+};
+TRANS(FSUB_v, do_fp3_vector, a, f_vector_fsub)
+
+static gen_helper_gvec_3_ptr * const f_vector_fdiv[3] = {
+ gen_helper_gvec_fdiv_h,
+ gen_helper_gvec_fdiv_s,
+ gen_helper_gvec_fdiv_d,
+};
+TRANS(FDIV_v, do_fp3_vector, a, f_vector_fdiv)
+
+static gen_helper_gvec_3_ptr * const f_vector_fmul[3] = {
+ gen_helper_gvec_fmul_h,
+ gen_helper_gvec_fmul_s,
+ gen_helper_gvec_fmul_d,
+};
+TRANS(FMUL_v, do_fp3_vector, a, f_vector_fmul)
+
static gen_helper_gvec_3_ptr * const f_vector_fmulx[3] = {
gen_helper_gvec_fmulx_h,
gen_helper_gvec_fmulx_s,
@@ -4975,6 +5031,7 @@ static bool do_fp3_scalar_idx(DisasContext *s, arg_rrx_e *a, const FPScalar *f)
return true;
}
+TRANS(FMUL_si, do_fp3_scalar_idx, a, &f_scalar_fmul)
TRANS(FMULX_si, do_fp3_scalar_idx, a, &f_scalar_fmulx)
static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
@@ -5005,6 +5062,13 @@ static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
return true;
}
+static gen_helper_gvec_3_ptr * const f_vector_idx_fmul[3] = {
+ gen_helper_gvec_fmul_idx_h,
+ gen_helper_gvec_fmul_idx_s,
+ gen_helper_gvec_fmul_idx_d,
+};
+TRANS(FMUL_vi, do_fp3_vector_idx, a, f_vector_idx_fmul)
+
static gen_helper_gvec_3_ptr * const f_vector_idx_fmulx[3] = {
gen_helper_gvec_fmulx_idx_h,
gen_helper_gvec_fmulx_idx_s,
@@ -6827,18 +6891,6 @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
tcg_op2 = read_fp_sreg(s, rm);
switch (opcode) {
- case 0x0: /* FMUL */
- gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x1: /* FDIV */
- gen_helper_vfp_divs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2: /* FADD */
- gen_helper_vfp_adds(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3: /* FSUB */
- gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x4: /* FMAX */
gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -6855,6 +6907,12 @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
gen_helper_vfp_negs(tcg_res, tcg_res);
break;
+ default:
+ case 0x0: /* FMUL */
+ case 0x1: /* FDIV */
+ case 0x2: /* FADD */
+ case 0x3: /* FSUB */
+ g_assert_not_reached();
}
write_fp_sreg(s, rd, tcg_res);
@@ -6875,18 +6933,6 @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
tcg_op2 = read_fp_dreg(s, rm);
switch (opcode) {
- case 0x0: /* FMUL */
- gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x1: /* FDIV */
- gen_helper_vfp_divd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2: /* FADD */
- gen_helper_vfp_addd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3: /* FSUB */
- gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x4: /* FMAX */
gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -6903,6 +6949,12 @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
gen_helper_vfp_negd(tcg_res, tcg_res);
break;
+ default:
+ case 0x0: /* FMUL */
+ case 0x1: /* FDIV */
+ case 0x2: /* FADD */
+ case 0x3: /* FSUB */
+ g_assert_not_reached();
}
write_fp_dreg(s, rd, tcg_res);
@@ -6923,18 +6975,6 @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
tcg_op2 = read_fp_hreg(s, rm);
switch (opcode) {
- case 0x0: /* FMUL */
- gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x1: /* FDIV */
- gen_helper_advsimd_divh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2: /* FADD */
- gen_helper_advsimd_addh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3: /* FSUB */
- gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x4: /* FMAX */
gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -6952,6 +6992,10 @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
tcg_gen_xori_i32(tcg_res, tcg_res, 0x8000);
break;
default:
+ case 0x0: /* FMUL */
+ case 0x1: /* FDIV */
+ case 0x2: /* FADD */
+ case 0x3: /* FSUB */
g_assert_not_reached();
}
@@ -9180,9 +9224,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x18: /* FMAXNM */
gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1a: /* FADD */
- gen_helper_vfp_addd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9195,27 +9236,18 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x38: /* FMINNM */
gen_helper_vfp_minnumd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x3a: /* FSUB */
- gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x3e: /* FMIN */
gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5b: /* FMUL */
- gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x5c: /* FCMGE */
gen_helper_neon_cge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x5d: /* FACGE */
gen_helper_neon_acge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5f: /* FDIV */
- gen_helper_vfp_divd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7a: /* FABD */
gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
gen_helper_vfp_absd(tcg_res, tcg_res);
@@ -9227,7 +9259,11 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_neon_acgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x3a: /* FSUB */
+ case 0x5b: /* FMUL */
+ case 0x5f: /* FDIV */
g_assert_not_reached();
}
@@ -9251,9 +9287,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_vfp_muladds(tcg_res, tcg_op1, tcg_op2,
tcg_res, fpst);
break;
- case 0x1a: /* FADD */
- gen_helper_vfp_adds(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9269,27 +9302,18 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x38: /* FMINNM */
gen_helper_vfp_minnums(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x3a: /* FSUB */
- gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x3e: /* FMIN */
gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5b: /* FMUL */
- gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x5c: /* FCMGE */
gen_helper_neon_cge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x5d: /* FACGE */
gen_helper_neon_acge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5f: /* FDIV */
- gen_helper_vfp_divs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7a: /* FABD */
gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
gen_helper_vfp_abss(tcg_res, tcg_res);
@@ -9301,7 +9325,11 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_neon_acgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x3a: /* FSUB */
+ case 0x5b: /* FMUL */
+ case 0x5f: /* FDIV */
g_assert_not_reached();
}
@@ -11224,15 +11252,11 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x19: /* FMLA */
case 0x39: /* FMLS */
case 0x18: /* FMAXNM */
- case 0x1a: /* FADD */
case 0x1c: /* FCMEQ */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
- case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
- case 0x5b: /* FMUL */
case 0x5c: /* FCMGE */
- case 0x5f: /* FDIV */
case 0x7a: /* FABD */
case 0x7c: /* FCMGT */
if (!fp_access_check(s)) {
@@ -11262,7 +11286,11 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
return;
default:
+ case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x3a: /* FSUB */
+ case 0x5b: /* FMUL */
+ case 0x5f: /* FDIV */
unallocated_encoding(s);
return;
}
@@ -11606,19 +11634,15 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
switch (fpopcode) {
case 0x0: /* FMAXNM */
case 0x1: /* FMLA */
- case 0x2: /* FADD */
case 0x4: /* FCMEQ */
case 0x6: /* FMAX */
case 0x7: /* FRECPS */
case 0x8: /* FMINNM */
case 0x9: /* FMLS */
- case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0xf: /* FRSQRTS */
- case 0x13: /* FMUL */
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
- case 0x17: /* FDIV */
case 0x1a: /* FABD */
case 0x1c: /* FCMGT */
case 0x1d: /* FACGT */
@@ -11632,7 +11656,11 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
pairwise = true;
break;
default:
+ case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0xa: /* FSUB */
+ case 0x13: /* FMUL */
+ case 0x17: /* FDIV */
unallocated_encoding(s);
return;
}
@@ -11706,9 +11734,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
fpst);
break;
- case 0x2: /* FADD */
- gen_helper_advsimd_addh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x4: /* FCMEQ */
gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -11728,27 +11753,18 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
fpst);
break;
- case 0xa: /* FSUB */
- gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0xe: /* FMIN */
gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0xf: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x13: /* FMUL */
- gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x14: /* FCMGE */
gen_helper_advsimd_cge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x15: /* FACGE */
gen_helper_advsimd_acge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x17: /* FDIV */
- gen_helper_advsimd_divh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1a: /* FABD */
gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
@@ -11760,7 +11776,11 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0xa: /* FSUB */
+ case 0x13: /* FMUL */
+ case 0x17: /* FDIV */
g_assert_not_reached();
}
@@ -12979,7 +12999,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
break;
case 0x01: /* FMLA */
case 0x05: /* FMLS */
- case 0x09: /* FMUL */
is_fp = 1;
break;
case 0x1d: /* SQRDMLAH */
@@ -13048,6 +13067,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
/* is_fp, but we pass tcg_env not fp_status. */
break;
default:
+ case 0x09: /* FMUL */
case 0x19: /* FMULX */
unallocated_encoding(s);
return;
@@ -13269,10 +13289,8 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
read_vec_element(s, tcg_res, rd, pass, MO_64);
gen_helper_vfp_muladdd(tcg_res, tcg_op, tcg_idx, tcg_res, fpst);
break;
- case 0x09: /* FMUL */
- gen_helper_vfp_muld(tcg_res, tcg_op, tcg_idx, fpst);
- break;
default:
+ case 0x09: /* FMUL */
case 0x19: /* FMULX */
g_assert_not_reached();
}
@@ -13368,24 +13386,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
g_assert_not_reached();
}
break;
- case 0x09: /* FMUL */
- switch (size) {
- case 1:
- if (is_scalar) {
- gen_helper_advsimd_mulh(tcg_res, tcg_op,
- tcg_idx, fpst);
- } else {
- gen_helper_advsimd_mul2h(tcg_res, tcg_op,
- tcg_idx, fpst);
- }
- break;
- case 2:
- gen_helper_vfp_muls(tcg_res, tcg_op, tcg_idx, fpst);
- break;
- default:
- g_assert_not_reached();
- }
- break;
case 0x0c: /* SQDMULH */
if (size == 1) {
gen_helper_neon_qdmulh_s16(tcg_res, tcg_env,
@@ -13427,6 +13427,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
break;
default:
+ case 0x09: /* FMUL */
case 0x19: /* FMULX */
g_assert_not_reached();
}
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 86845819236..41065363710 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -1248,6 +1248,10 @@ DO_3OP(gvec_rsqrts_nf_h, float16_rsqrts_nf, float16)
DO_3OP(gvec_rsqrts_nf_s, float32_rsqrts_nf, float32)
#ifdef TARGET_AARCH64
+DO_3OP(gvec_fdiv_h, float16_div, float16)
+DO_3OP(gvec_fdiv_s, float32_div, float32)
+DO_3OP(gvec_fdiv_d, float64_div, float64)
+
DO_3OP(gvec_fmulx_h, helper_advsimd_mulxh, float16)
DO_3OP(gvec_fmulx_s, helper_vfp_mulxs, float32)
DO_3OP(gvec_fmulx_d, helper_vfp_mulxd, float64)
--
2.34.1
* [PULL 26/42] target/arm: Convert FMAX, FMIN, FMAXNM, FMINNM to decodetree
From: Peter Maydell @ 2024-05-28 14:07 UTC
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 4 +
target/arm/tcg/a64.decode | 17 ++++
target/arm/tcg/translate-a64.c | 168 +++++++++++++++++----------------
target/arm/tcg/vec_helper.c | 4 +
4 files changed, 113 insertions(+), 80 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 2b027333053..7ee15b96512 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -748,15 +748,19 @@ DEF_HELPER_FLAGS_5(gvec_facgt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmax_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmax_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmax_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmin_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmin_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmin_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmaxnum_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmaxnum_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxnum_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fminnum_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fminnum_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminnum_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_recps_nf_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_recps_nf_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 82daafbef52..e2678d919e5 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -704,6 +704,11 @@ FSUB_s 0001 1110 ..1 ..... 0011 10 ..... ..... @rrr_hsd
FDIV_s 0001 1110 ..1 ..... 0001 10 ..... ..... @rrr_hsd
FMUL_s 0001 1110 ..1 ..... 0000 10 ..... ..... @rrr_hsd
+FMAX_s 0001 1110 ..1 ..... 0100 10 ..... ..... @rrr_hsd
+FMIN_s 0001 1110 ..1 ..... 0101 10 ..... ..... @rrr_hsd
+FMAXNM_s 0001 1110 ..1 ..... 0110 10 ..... ..... @rrr_hsd
+FMINNM_s 0001 1110 ..1 ..... 0111 10 ..... ..... @rrr_hsd
+
FMULX_s 0101 1110 010 ..... 00011 1 ..... ..... @rrr_h
FMULX_s 0101 1110 0.1 ..... 11011 1 ..... ..... @rrr_sd
@@ -721,6 +726,18 @@ FDIV_v 0.10 1110 0.1 ..... 11111 1 ..... ..... @qrrr_sd
FMUL_v 0.10 1110 010 ..... 00011 1 ..... ..... @qrrr_h
FMUL_v 0.10 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
+FMAX_v 0.00 1110 010 ..... 00110 1 ..... ..... @qrrr_h
+FMAX_v 0.00 1110 0.1 ..... 11110 1 ..... ..... @qrrr_sd
+
+FMIN_v 0.00 1110 110 ..... 00110 1 ..... ..... @qrrr_h
+FMIN_v 0.00 1110 1.1 ..... 11110 1 ..... ..... @qrrr_sd
+
+FMAXNM_v 0.00 1110 010 ..... 00000 1 ..... ..... @qrrr_h
+FMAXNM_v 0.00 1110 0.1 ..... 11000 1 ..... ..... @qrrr_sd
+
+FMINNM_v 0.00 1110 110 ..... 00000 1 ..... ..... @qrrr_h
+FMINNM_v 0.00 1110 1.1 ..... 11000 1 ..... ..... @qrrr_sd
+
FMULX_v 0.00 1110 010 ..... 00011 1 ..... ..... @qrrr_h
FMULX_v 0.00 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 97c3d758d62..6f8207d842b 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4915,6 +4915,34 @@ static const FPScalar f_scalar_fmul = {
};
TRANS(FMUL_s, do_fp3_scalar, a, &f_scalar_fmul)
+static const FPScalar f_scalar_fmax = {
+ gen_helper_advsimd_maxh,
+ gen_helper_vfp_maxs,
+ gen_helper_vfp_maxd,
+};
+TRANS(FMAX_s, do_fp3_scalar, a, &f_scalar_fmax)
+
+static const FPScalar f_scalar_fmin = {
+ gen_helper_advsimd_minh,
+ gen_helper_vfp_mins,
+ gen_helper_vfp_mind,
+};
+TRANS(FMIN_s, do_fp3_scalar, a, &f_scalar_fmin)
+
+static const FPScalar f_scalar_fmaxnm = {
+ gen_helper_advsimd_maxnumh,
+ gen_helper_vfp_maxnums,
+ gen_helper_vfp_maxnumd,
+};
+TRANS(FMAXNM_s, do_fp3_scalar, a, &f_scalar_fmaxnm)
+
+static const FPScalar f_scalar_fminnm = {
+ gen_helper_advsimd_minnumh,
+ gen_helper_vfp_minnums,
+ gen_helper_vfp_minnumd,
+};
+TRANS(FMINNM_s, do_fp3_scalar, a, &f_scalar_fminnm)
+
static const FPScalar f_scalar_fmulx = {
gen_helper_advsimd_mulxh,
gen_helper_vfp_mulxs,
@@ -4978,6 +5006,34 @@ static gen_helper_gvec_3_ptr * const f_vector_fmul[3] = {
};
TRANS(FMUL_v, do_fp3_vector, a, f_vector_fmul)
+static gen_helper_gvec_3_ptr * const f_vector_fmax[3] = {
+ gen_helper_gvec_fmax_h,
+ gen_helper_gvec_fmax_s,
+ gen_helper_gvec_fmax_d,
+};
+TRANS(FMAX_v, do_fp3_vector, a, f_vector_fmax)
+
+static gen_helper_gvec_3_ptr * const f_vector_fmin[3] = {
+ gen_helper_gvec_fmin_h,
+ gen_helper_gvec_fmin_s,
+ gen_helper_gvec_fmin_d,
+};
+TRANS(FMIN_v, do_fp3_vector, a, f_vector_fmin)
+
+static gen_helper_gvec_3_ptr * const f_vector_fmaxnm[3] = {
+ gen_helper_gvec_fmaxnum_h,
+ gen_helper_gvec_fmaxnum_s,
+ gen_helper_gvec_fmaxnum_d,
+};
+TRANS(FMAXNM_v, do_fp3_vector, a, f_vector_fmaxnm)
+
+static gen_helper_gvec_3_ptr * const f_vector_fminnm[3] = {
+ gen_helper_gvec_fminnum_h,
+ gen_helper_gvec_fminnum_s,
+ gen_helper_gvec_fminnum_d,
+};
+TRANS(FMINNM_v, do_fp3_vector, a, f_vector_fminnm)
+
static gen_helper_gvec_3_ptr * const f_vector_fmulx[3] = {
gen_helper_gvec_fmulx_h,
gen_helper_gvec_fmulx_s,
@@ -6891,18 +6947,6 @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
tcg_op2 = read_fp_sreg(s, rm);
switch (opcode) {
- case 0x4: /* FMAX */
- gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x5: /* FMIN */
- gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x6: /* FMAXNM */
- gen_helper_vfp_maxnums(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x7: /* FMINNM */
- gen_helper_vfp_minnums(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x8: /* FNMUL */
gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
gen_helper_vfp_negs(tcg_res, tcg_res);
@@ -6912,6 +6956,10 @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
case 0x1: /* FDIV */
case 0x2: /* FADD */
case 0x3: /* FSUB */
+ case 0x4: /* FMAX */
+ case 0x5: /* FMIN */
+ case 0x6: /* FMAXNM */
+ case 0x7: /* FMINNM */
g_assert_not_reached();
}
@@ -6933,18 +6981,6 @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
tcg_op2 = read_fp_dreg(s, rm);
switch (opcode) {
- case 0x4: /* FMAX */
- gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x5: /* FMIN */
- gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x6: /* FMAXNM */
- gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x7: /* FMINNM */
- gen_helper_vfp_minnumd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x8: /* FNMUL */
gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
gen_helper_vfp_negd(tcg_res, tcg_res);
@@ -6954,6 +6990,10 @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
case 0x1: /* FDIV */
case 0x2: /* FADD */
case 0x3: /* FSUB */
+ case 0x4: /* FMAX */
+ case 0x5: /* FMIN */
+ case 0x6: /* FMAXNM */
+ case 0x7: /* FMINNM */
g_assert_not_reached();
}
@@ -6975,18 +7015,6 @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
tcg_op2 = read_fp_hreg(s, rm);
switch (opcode) {
- case 0x4: /* FMAX */
- gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x5: /* FMIN */
- gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x6: /* FMAXNM */
- gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x7: /* FMINNM */
- gen_helper_advsimd_minnumh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x8: /* FNMUL */
gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
tcg_gen_xori_i32(tcg_res, tcg_res, 0x8000);
@@ -6996,6 +7024,10 @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
case 0x1: /* FDIV */
case 0x2: /* FADD */
case 0x3: /* FSUB */
+ case 0x4: /* FMAX */
+ case 0x5: /* FMIN */
+ case 0x6: /* FMAXNM */
+ case 0x7: /* FMINNM */
g_assert_not_reached();
}
@@ -9221,24 +9253,12 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_vfp_muladdd(tcg_res, tcg_op1, tcg_op2,
tcg_res, fpst);
break;
- case 0x18: /* FMAXNM */
- gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1e: /* FMAX */
- gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1f: /* FRECPS */
gen_helper_recpsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x38: /* FMINNM */
- gen_helper_vfp_minnumd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3e: /* FMIN */
- gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9259,9 +9279,13 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_neon_acgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x18: /* FMAXNM */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1e: /* FMAX */
+ case 0x38: /* FMINNM */
case 0x3a: /* FSUB */
+ case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
case 0x5f: /* FDIV */
g_assert_not_reached();
@@ -9290,21 +9314,9 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1e: /* FMAX */
- gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1f: /* FRECPS */
gen_helper_recpsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x18: /* FMAXNM */
- gen_helper_vfp_maxnums(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x38: /* FMINNM */
- gen_helper_vfp_minnums(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3e: /* FMIN */
- gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9325,9 +9337,13 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
gen_helper_neon_acgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x18: /* FMAXNM */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1e: /* FMAX */
+ case 0x38: /* FMINNM */
case 0x3a: /* FSUB */
+ case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
case 0x5f: /* FDIV */
g_assert_not_reached();
@@ -11251,11 +11267,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x7d: /* FACGT */
case 0x19: /* FMLA */
case 0x39: /* FMLS */
- case 0x18: /* FMAXNM */
case 0x1c: /* FCMEQ */
- case 0x1e: /* FMAX */
- case 0x38: /* FMINNM */
- case 0x3e: /* FMIN */
case 0x5c: /* FCMGE */
case 0x7a: /* FABD */
case 0x7c: /* FCMGT */
@@ -11286,9 +11298,13 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
return;
default:
+ case 0x18: /* FMAXNM */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1e: /* FMAX */
+ case 0x38: /* FMINNM */
case 0x3a: /* FSUB */
+ case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
case 0x5f: /* FDIV */
unallocated_encoding(s);
@@ -11632,14 +11648,10 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
int pass;
switch (fpopcode) {
- case 0x0: /* FMAXNM */
case 0x1: /* FMLA */
case 0x4: /* FCMEQ */
- case 0x6: /* FMAX */
case 0x7: /* FRECPS */
- case 0x8: /* FMINNM */
case 0x9: /* FMLS */
- case 0xe: /* FMIN */
case 0xf: /* FRSQRTS */
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
@@ -11656,9 +11668,13 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
pairwise = true;
break;
default:
+ case 0x0: /* FMAXNM */
case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0x6: /* FMAX */
+ case 0x8: /* FMINNM */
case 0xa: /* FSUB */
+ case 0xe: /* FMIN */
case 0x13: /* FMUL */
case 0x17: /* FDIV */
unallocated_encoding(s);
@@ -11726,9 +11742,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);
switch (fpopcode) {
- case 0x0: /* FMAXNM */
- gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1: /* FMLA */
read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
@@ -11737,15 +11750,9 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x4: /* FCMEQ */
gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x6: /* FMAX */
- gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7: /* FRECPS */
gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x8: /* FMINNM */
- gen_helper_advsimd_minnumh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x9: /* FMLS */
/* As usual for ARM, separate negation for fused multiply-add */
tcg_gen_xori_i32(tcg_op1, tcg_op1, 0x8000);
@@ -11753,9 +11760,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
fpst);
break;
- case 0xe: /* FMIN */
- gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0xf: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -11776,9 +11780,13 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x0: /* FMAXNM */
case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0x6: /* FMAX */
+ case 0x8: /* FMINNM */
case 0xa: /* FSUB */
+ case 0xe: /* FMIN */
case 0x13: /* FMUL */
case 0x17: /* FDIV */
g_assert_not_reached();
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 41065363710..99ef6760719 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -1231,15 +1231,19 @@ DO_3OP(gvec_facgt_s, float32_acgt, float32)
DO_3OP(gvec_fmax_h, float16_max, float16)
DO_3OP(gvec_fmax_s, float32_max, float32)
+DO_3OP(gvec_fmax_d, float64_max, float64)
DO_3OP(gvec_fmin_h, float16_min, float16)
DO_3OP(gvec_fmin_s, float32_min, float32)
+DO_3OP(gvec_fmin_d, float64_min, float64)
DO_3OP(gvec_fmaxnum_h, float16_maxnum, float16)
DO_3OP(gvec_fmaxnum_s, float32_maxnum, float32)
+DO_3OP(gvec_fmaxnum_d, float64_maxnum, float64)
DO_3OP(gvec_fminnum_h, float16_minnum, float16)
DO_3OP(gvec_fminnum_s, float32_minnum, float32)
+DO_3OP(gvec_fminnum_d, float64_minnum, float64)
DO_3OP(gvec_recps_nf_h, float16_recps_nf, float16)
DO_3OP(gvec_recps_nf_s, float32_recps_nf, float32)
--
2.34.1
* [PULL 27/42] target/arm: Introduce vfp_load_reg16
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (25 preceding siblings ...)
2024-05-28 14:07 ` [PULL 26/42] target/arm: Convert FMAX, FMIN, FMAXNM, FMINNM " Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 28/42] target/arm: Expand vfp neg and abs inline Peter Maydell
` (15 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Load and zero-extend float16 into a TCGv_i32 before
all scalar operations.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240524232121.284515-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-vfp.c | 39 +++++++++++++++++++---------------
1 file changed, 22 insertions(+), 17 deletions(-)
diff --git a/target/arm/tcg/translate-vfp.c b/target/arm/tcg/translate-vfp.c
index b9af03b7c35..8e755fcde8a 100644
--- a/target/arm/tcg/translate-vfp.c
+++ b/target/arm/tcg/translate-vfp.c
@@ -48,6 +48,12 @@ static inline void vfp_store_reg32(TCGv_i32 var, int reg)
tcg_gen_st_i32(var, tcg_env, vfp_reg_offset(false, reg));
}
+static inline void vfp_load_reg16(TCGv_i32 var, int reg)
+{
+ tcg_gen_ld16u_i32(var, tcg_env,
+ vfp_reg_offset(false, reg) + HOST_BIG_ENDIAN * 2);
+}
+
/*
* The imm8 encodes the sign bit, enough bits to represent an exponent in
* the range 01....1xx to 10....0xx, and the most significant 4 bits of
@@ -902,8 +908,7 @@ static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
if (a->l) {
/* VFP to general purpose register */
tmp = tcg_temp_new_i32();
- vfp_load_reg32(tmp, a->vn);
- tcg_gen_andi_i32(tmp, tmp, 0xffff);
+ vfp_load_reg16(tmp, a->vn);
store_reg(s, a->rt, tmp);
} else {
/* general purpose register to VFP */
@@ -1453,11 +1458,11 @@ static bool do_vfp_3op_hp(DisasContext *s, VFPGen3OpSPFn *fn,
fd = tcg_temp_new_i32();
fpst = fpstatus_ptr(FPST_FPCR_F16);
- vfp_load_reg32(f0, vn);
- vfp_load_reg32(f1, vm);
+ vfp_load_reg16(f0, vn);
+ vfp_load_reg16(f1, vm);
if (reads_vd) {
- vfp_load_reg32(fd, vd);
+ vfp_load_reg16(fd, vd);
}
fn(fd, f0, f1, fpst);
vfp_store_reg32(fd, vd);
@@ -1633,7 +1638,7 @@ static bool do_vfp_2op_hp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
}
f0 = tcg_temp_new_i32();
- vfp_load_reg32(f0, vm);
+ vfp_load_reg16(f0, vm);
fn(f0, f0);
vfp_store_reg32(f0, vd);
@@ -2106,13 +2111,13 @@ static bool do_vfm_hp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
vm = tcg_temp_new_i32();
vd = tcg_temp_new_i32();
- vfp_load_reg32(vn, a->vn);
- vfp_load_reg32(vm, a->vm);
+ vfp_load_reg16(vn, a->vn);
+ vfp_load_reg16(vm, a->vm);
if (neg_n) {
/* VFNMS, VFMS */
gen_helper_vfp_negh(vn, vn);
}
- vfp_load_reg32(vd, a->vd);
+ vfp_load_reg16(vd, a->vd);
if (neg_d) {
/* VFNMA, VFNMS */
gen_helper_vfp_negh(vd, vd);
@@ -2456,11 +2461,11 @@ static bool trans_VCMP_hp(DisasContext *s, arg_VCMP_sp *a)
vd = tcg_temp_new_i32();
vm = tcg_temp_new_i32();
- vfp_load_reg32(vd, a->vd);
+ vfp_load_reg16(vd, a->vd);
if (a->z) {
tcg_gen_movi_i32(vm, 0);
} else {
- vfp_load_reg32(vm, a->vm);
+ vfp_load_reg16(vm, a->vm);
}
if (a->e) {
@@ -2700,7 +2705,7 @@ static bool trans_VRINTR_hp(DisasContext *s, arg_VRINTR_sp *a)
}
tmp = tcg_temp_new_i32();
- vfp_load_reg32(tmp, a->vm);
+ vfp_load_reg16(tmp, a->vm);
fpst = fpstatus_ptr(FPST_FPCR_F16);
gen_helper_rinth(tmp, tmp, fpst);
vfp_store_reg32(tmp, a->vd);
@@ -2773,7 +2778,7 @@ static bool trans_VRINTZ_hp(DisasContext *s, arg_VRINTZ_sp *a)
}
tmp = tcg_temp_new_i32();
- vfp_load_reg32(tmp, a->vm);
+ vfp_load_reg16(tmp, a->vm);
fpst = fpstatus_ptr(FPST_FPCR_F16);
tcg_rmode = gen_set_rmode(FPROUNDING_ZERO, fpst);
gen_helper_rinth(tmp, tmp, fpst);
@@ -2853,7 +2858,7 @@ static bool trans_VRINTX_hp(DisasContext *s, arg_VRINTX_sp *a)
}
tmp = tcg_temp_new_i32();
- vfp_load_reg32(tmp, a->vm);
+ vfp_load_reg16(tmp, a->vm);
fpst = fpstatus_ptr(FPST_FPCR_F16);
gen_helper_rinth_exact(tmp, tmp, fpst);
vfp_store_reg32(tmp, a->vd);
@@ -3270,7 +3275,7 @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a)
fpst = fpstatus_ptr(FPST_FPCR_F16);
vm = tcg_temp_new_i32();
- vfp_load_reg32(vm, a->vm);
+ vfp_load_reg16(vm, a->vm);
if (a->s) {
if (a->rz) {
@@ -3383,8 +3388,8 @@ static bool trans_VINS(DisasContext *s, arg_VINS *a)
/* Insert low half of Vm into high half of Vd */
rm = tcg_temp_new_i32();
rd = tcg_temp_new_i32();
- vfp_load_reg32(rm, a->vm);
- vfp_load_reg32(rd, a->vd);
+ vfp_load_reg16(rm, a->vm);
+ vfp_load_reg16(rd, a->vd);
tcg_gen_deposit_i32(rd, rd, rm, 16, 16);
vfp_store_reg32(rd, a->vd);
return true;
--
2.34.1
* [PULL 28/42] target/arm: Expand vfp neg and abs inline
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (26 preceding siblings ...)
2024-05-28 14:07 ` [PULL 27/42] target/arm: Introduce vfp_load_reg16 Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 29/42] target/arm: Convert FNMUL to decodetree Peter Maydell
` (14 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 6 ----
target/arm/tcg/translate.h | 30 +++++++++++++++++++
target/arm/tcg/translate-a64.c | 44 +++++++++++++--------------
target/arm/tcg/translate-vfp.c | 54 +++++++++++++++++-----------------
target/arm/vfp_helper.c | 30 -------------------
5 files changed, 79 insertions(+), 85 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 7ee15b96512..0fd01c9c52d 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -132,12 +132,6 @@ DEF_HELPER_3(vfp_maxnumd, f64, f64, f64, ptr)
DEF_HELPER_3(vfp_minnumh, f16, f16, f16, ptr)
DEF_HELPER_3(vfp_minnums, f32, f32, f32, ptr)
DEF_HELPER_3(vfp_minnumd, f64, f64, f64, ptr)
-DEF_HELPER_1(vfp_negh, f16, f16)
-DEF_HELPER_1(vfp_negs, f32, f32)
-DEF_HELPER_1(vfp_negd, f64, f64)
-DEF_HELPER_1(vfp_absh, f16, f16)
-DEF_HELPER_1(vfp_abss, f32, f32)
-DEF_HELPER_1(vfp_absd, f64, f64)
DEF_HELPER_2(vfp_sqrth, f16, f16, env)
DEF_HELPER_2(vfp_sqrts, f32, f32, env)
DEF_HELPER_2(vfp_sqrtd, f64, f64, env)
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index ecfa242eef3..b05a9eb6685 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -406,6 +406,36 @@ static inline void gen_swstep_exception(DisasContext *s, int isv, int ex)
*/
uint64_t vfp_expand_imm(int size, uint8_t imm8);
+static inline void gen_vfp_absh(TCGv_i32 d, TCGv_i32 s)
+{
+ tcg_gen_andi_i32(d, s, INT16_MAX);
+}
+
+static inline void gen_vfp_abss(TCGv_i32 d, TCGv_i32 s)
+{
+ tcg_gen_andi_i32(d, s, INT32_MAX);
+}
+
+static inline void gen_vfp_absd(TCGv_i64 d, TCGv_i64 s)
+{
+ tcg_gen_andi_i64(d, s, INT64_MAX);
+}
+
+static inline void gen_vfp_negh(TCGv_i32 d, TCGv_i32 s)
+{
+ tcg_gen_xori_i32(d, s, 1u << 15);
+}
+
+static inline void gen_vfp_negs(TCGv_i32 d, TCGv_i32 s)
+{
+ tcg_gen_xori_i32(d, s, 1u << 31);
+}
+
+static inline void gen_vfp_negd(TCGv_i64 d, TCGv_i64 s)
+{
+ tcg_gen_xori_i64(d, s, 1ull << 63);
+}
+
/* Vector operations shared between ARM and AArch64. */
void gen_gvec_ceq0(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
uint32_t opr_sz, uint32_t max_sz);
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 6f8207d842b..878f83298f5 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -6591,10 +6591,10 @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
tcg_gen_mov_i32(tcg_res, tcg_op);
break;
case 0x1: /* FABS */
- tcg_gen_andi_i32(tcg_res, tcg_op, 0x7fff);
+ gen_vfp_absh(tcg_res, tcg_op);
break;
case 0x2: /* FNEG */
- tcg_gen_xori_i32(tcg_res, tcg_op, 0x8000);
+ gen_vfp_negh(tcg_res, tcg_op);
break;
case 0x3: /* FSQRT */
fpst = fpstatus_ptr(FPST_FPCR_F16);
@@ -6645,10 +6645,10 @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
tcg_gen_mov_i32(tcg_res, tcg_op);
goto done;
case 0x1: /* FABS */
- gen_helper_vfp_abss(tcg_res, tcg_op);
+ gen_vfp_abss(tcg_res, tcg_op);
goto done;
case 0x2: /* FNEG */
- gen_helper_vfp_negs(tcg_res, tcg_op);
+ gen_vfp_negs(tcg_res, tcg_op);
goto done;
case 0x3: /* FSQRT */
gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_env);
@@ -6720,10 +6720,10 @@ static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
switch (opcode) {
case 0x1: /* FABS */
- gen_helper_vfp_absd(tcg_res, tcg_op);
+ gen_vfp_absd(tcg_res, tcg_op);
goto done;
case 0x2: /* FNEG */
- gen_helper_vfp_negd(tcg_res, tcg_op);
+ gen_vfp_negd(tcg_res, tcg_op);
goto done;
case 0x3: /* FSQRT */
gen_helper_vfp_sqrtd(tcg_res, tcg_op, tcg_env);
@@ -6949,7 +6949,7 @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
switch (opcode) {
case 0x8: /* FNMUL */
gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_helper_vfp_negs(tcg_res, tcg_res);
+ gen_vfp_negs(tcg_res, tcg_res);
break;
default:
case 0x0: /* FMUL */
@@ -6983,7 +6983,7 @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
switch (opcode) {
case 0x8: /* FNMUL */
gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_helper_vfp_negd(tcg_res, tcg_res);
+ gen_vfp_negd(tcg_res, tcg_res);
break;
default:
case 0x0: /* FMUL */
@@ -7017,7 +7017,7 @@ static void handle_fp_2src_half(DisasContext *s, int opcode,
switch (opcode) {
case 0x8: /* FNMUL */
gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
- tcg_gen_xori_i32(tcg_res, tcg_res, 0x8000);
+ gen_vfp_negh(tcg_res, tcg_res);
break;
default:
case 0x0: /* FMUL */
@@ -7102,11 +7102,11 @@ static void handle_fp_3src_single(DisasContext *s, bool o0, bool o1,
* flipped if it is a negated-input.
*/
if (o1 == true) {
- gen_helper_vfp_negs(tcg_op3, tcg_op3);
+ gen_vfp_negs(tcg_op3, tcg_op3);
}
if (o0 != o1) {
- gen_helper_vfp_negs(tcg_op1, tcg_op1);
+ gen_vfp_negs(tcg_op1, tcg_op1);
}
gen_helper_vfp_muladds(tcg_res, tcg_op1, tcg_op2, tcg_op3, fpst);
@@ -7134,11 +7134,11 @@ static void handle_fp_3src_double(DisasContext *s, bool o0, bool o1,
* flipped if it is a negated-input.
*/
if (o1 == true) {
- gen_helper_vfp_negd(tcg_op3, tcg_op3);
+ gen_vfp_negd(tcg_op3, tcg_op3);
}
if (o0 != o1) {
- gen_helper_vfp_negd(tcg_op1, tcg_op1);
+ gen_vfp_negd(tcg_op1, tcg_op1);
}
gen_helper_vfp_muladdd(tcg_res, tcg_op1, tcg_op2, tcg_op3, fpst);
@@ -9246,7 +9246,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
switch (fpopcode) {
case 0x39: /* FMLS */
/* As usual for ARM, separate negation for fused multiply-add */
- gen_helper_vfp_negd(tcg_op1, tcg_op1);
+ gen_vfp_negd(tcg_op1, tcg_op1);
/* fall through */
case 0x19: /* FMLA */
read_vec_element(s, tcg_res, rd, pass, MO_64);
@@ -9270,7 +9270,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
break;
case 0x7a: /* FABD */
gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_helper_vfp_absd(tcg_res, tcg_res);
+ gen_vfp_absd(tcg_res, tcg_res);
break;
case 0x7c: /* FCMGT */
gen_helper_neon_cgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
@@ -9304,7 +9304,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
switch (fpopcode) {
case 0x39: /* FMLS */
/* As usual for ARM, separate negation for fused multiply-add */
- gen_helper_vfp_negs(tcg_op1, tcg_op1);
+ gen_vfp_negs(tcg_op1, tcg_op1);
/* fall through */
case 0x19: /* FMLA */
read_vec_element_i32(s, tcg_res, rd, pass, MO_32);
@@ -9328,7 +9328,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
break;
case 0x7a: /* FABD */
gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_helper_vfp_abss(tcg_res, tcg_res);
+ gen_vfp_abss(tcg_res, tcg_res);
break;
case 0x7c: /* FCMGT */
gen_helper_neon_cgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
@@ -9741,10 +9741,10 @@ static void handle_2misc_64(DisasContext *s, int opcode, bool u,
}
break;
case 0x2f: /* FABS */
- gen_helper_vfp_absd(tcg_rd, tcg_rn);
+ gen_vfp_absd(tcg_rd, tcg_rn);
break;
case 0x6f: /* FNEG */
- gen_helper_vfp_negd(tcg_rd, tcg_rn);
+ gen_vfp_negd(tcg_rd, tcg_rn);
break;
case 0x7f: /* FSQRT */
gen_helper_vfp_sqrtd(tcg_rd, tcg_rn, tcg_env);
@@ -12567,10 +12567,10 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
}
break;
case 0x2f: /* FABS */
- gen_helper_vfp_abss(tcg_res, tcg_op);
+ gen_vfp_abss(tcg_res, tcg_op);
break;
case 0x6f: /* FNEG */
- gen_helper_vfp_negs(tcg_res, tcg_op);
+ gen_vfp_negs(tcg_res, tcg_op);
break;
case 0x7f: /* FSQRT */
gen_helper_vfp_sqrts(tcg_res, tcg_op, tcg_env);
@@ -13291,7 +13291,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
switch (16 * u + opcode) {
case 0x05: /* FMLS */
/* As usual for ARM, separate negation for fused multiply-add */
- gen_helper_vfp_negd(tcg_op, tcg_op);
+ gen_vfp_negd(tcg_op, tcg_op);
/* fall through */
case 0x01: /* FMLA */
read_vec_element(s, tcg_res, rd, pass, MO_64);
diff --git a/target/arm/tcg/translate-vfp.c b/target/arm/tcg/translate-vfp.c
index 8e755fcde8a..39ec971ff70 100644
--- a/target/arm/tcg/translate-vfp.c
+++ b/target/arm/tcg/translate-vfp.c
@@ -1768,7 +1768,7 @@ static void gen_VMLS_hp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
TCGv_i32 tmp = tcg_temp_new_i32();
gen_helper_vfp_mulh(tmp, vn, vm, fpst);
- gen_helper_vfp_negh(tmp, tmp);
+ gen_vfp_negh(tmp, tmp);
gen_helper_vfp_addh(vd, vd, tmp, fpst);
}
@@ -1786,7 +1786,7 @@ static void gen_VMLS_sp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
TCGv_i32 tmp = tcg_temp_new_i32();
gen_helper_vfp_muls(tmp, vn, vm, fpst);
- gen_helper_vfp_negs(tmp, tmp);
+ gen_vfp_negs(tmp, tmp);
gen_helper_vfp_adds(vd, vd, tmp, fpst);
}
@@ -1804,7 +1804,7 @@ static void gen_VMLS_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
TCGv_i64 tmp = tcg_temp_new_i64();
gen_helper_vfp_muld(tmp, vn, vm, fpst);
- gen_helper_vfp_negd(tmp, tmp);
+ gen_vfp_negd(tmp, tmp);
gen_helper_vfp_addd(vd, vd, tmp, fpst);
}
@@ -1824,7 +1824,7 @@ static void gen_VNMLS_hp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
TCGv_i32 tmp = tcg_temp_new_i32();
gen_helper_vfp_mulh(tmp, vn, vm, fpst);
- gen_helper_vfp_negh(vd, vd);
+ gen_vfp_negh(vd, vd);
gen_helper_vfp_addh(vd, vd, tmp, fpst);
}
@@ -1844,7 +1844,7 @@ static void gen_VNMLS_sp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
TCGv_i32 tmp = tcg_temp_new_i32();
gen_helper_vfp_muls(tmp, vn, vm, fpst);
- gen_helper_vfp_negs(vd, vd);
+ gen_vfp_negs(vd, vd);
gen_helper_vfp_adds(vd, vd, tmp, fpst);
}
@@ -1864,7 +1864,7 @@ static void gen_VNMLS_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
TCGv_i64 tmp = tcg_temp_new_i64();
gen_helper_vfp_muld(tmp, vn, vm, fpst);
- gen_helper_vfp_negd(vd, vd);
+ gen_vfp_negd(vd, vd);
gen_helper_vfp_addd(vd, vd, tmp, fpst);
}
@@ -1879,8 +1879,8 @@ static void gen_VNMLA_hp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
TCGv_i32 tmp = tcg_temp_new_i32();
gen_helper_vfp_mulh(tmp, vn, vm, fpst);
- gen_helper_vfp_negh(tmp, tmp);
- gen_helper_vfp_negh(vd, vd);
+ gen_vfp_negh(tmp, tmp);
+ gen_vfp_negh(vd, vd);
gen_helper_vfp_addh(vd, vd, tmp, fpst);
}
@@ -1895,8 +1895,8 @@ static void gen_VNMLA_sp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
TCGv_i32 tmp = tcg_temp_new_i32();
gen_helper_vfp_muls(tmp, vn, vm, fpst);
- gen_helper_vfp_negs(tmp, tmp);
- gen_helper_vfp_negs(vd, vd);
+ gen_vfp_negs(tmp, tmp);
+ gen_vfp_negs(vd, vd);
gen_helper_vfp_adds(vd, vd, tmp, fpst);
}
@@ -1911,8 +1911,8 @@ static void gen_VNMLA_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
TCGv_i64 tmp = tcg_temp_new_i64();
gen_helper_vfp_muld(tmp, vn, vm, fpst);
- gen_helper_vfp_negd(tmp, tmp);
- gen_helper_vfp_negd(vd, vd);
+ gen_vfp_negd(tmp, tmp);
+ gen_vfp_negd(vd, vd);
gen_helper_vfp_addd(vd, vd, tmp, fpst);
}
@@ -1940,7 +1940,7 @@ static void gen_VNMUL_hp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
{
/* VNMUL: -(fn * fm) */
gen_helper_vfp_mulh(vd, vn, vm, fpst);
- gen_helper_vfp_negh(vd, vd);
+ gen_vfp_negh(vd, vd);
}
static bool trans_VNMUL_hp(DisasContext *s, arg_VNMUL_sp *a)
@@ -1952,7 +1952,7 @@ static void gen_VNMUL_sp(TCGv_i32 vd, TCGv_i32 vn, TCGv_i32 vm, TCGv_ptr fpst)
{
/* VNMUL: -(fn * fm) */
gen_helper_vfp_muls(vd, vn, vm, fpst);
- gen_helper_vfp_negs(vd, vd);
+ gen_vfp_negs(vd, vd);
}
static bool trans_VNMUL_sp(DisasContext *s, arg_VNMUL_sp *a)
@@ -1964,7 +1964,7 @@ static void gen_VNMUL_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
{
/* VNMUL: -(fn * fm) */
gen_helper_vfp_muld(vd, vn, vm, fpst);
- gen_helper_vfp_negd(vd, vd);
+ gen_vfp_negd(vd, vd);
}
static bool trans_VNMUL_dp(DisasContext *s, arg_VNMUL_dp *a)
@@ -2115,12 +2115,12 @@ static bool do_vfm_hp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
vfp_load_reg16(vm, a->vm);
if (neg_n) {
/* VFNMS, VFMS */
- gen_helper_vfp_negh(vn, vn);
+ gen_vfp_negh(vn, vn);
}
vfp_load_reg16(vd, a->vd);
if (neg_d) {
/* VFNMA, VFNMS */
- gen_helper_vfp_negh(vd, vd);
+ gen_vfp_negh(vd, vd);
}
fpst = fpstatus_ptr(FPST_FPCR_F16);
gen_helper_vfp_muladdh(vd, vn, vm, vd, fpst);
@@ -2174,12 +2174,12 @@ static bool do_vfm_sp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
vfp_load_reg32(vm, a->vm);
if (neg_n) {
/* VFNMS, VFMS */
- gen_helper_vfp_negs(vn, vn);
+ gen_vfp_negs(vn, vn);
}
vfp_load_reg32(vd, a->vd);
if (neg_d) {
/* VFNMA, VFNMS */
- gen_helper_vfp_negs(vd, vd);
+ gen_vfp_negs(vd, vd);
}
fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_vfp_muladds(vd, vn, vm, vd, fpst);
@@ -2239,12 +2239,12 @@ static bool do_vfm_dp(DisasContext *s, arg_VFMA_dp *a, bool neg_n, bool neg_d)
vfp_load_reg64(vm, a->vm);
if (neg_n) {
/* VFNMS, VFMS */
- gen_helper_vfp_negd(vn, vn);
+ gen_vfp_negd(vn, vn);
}
vfp_load_reg64(vd, a->vd);
if (neg_d) {
/* VFNMA, VFNMS */
- gen_helper_vfp_negd(vd, vd);
+ gen_vfp_negd(vd, vd);
}
fpst = fpstatus_ptr(FPST_FPCR);
gen_helper_vfp_muladdd(vd, vn, vm, vd, fpst);
@@ -2414,13 +2414,13 @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
DO_VFP_VMOV(VMOV_reg, sp, tcg_gen_mov_i32)
DO_VFP_VMOV(VMOV_reg, dp, tcg_gen_mov_i64)
-DO_VFP_2OP(VABS, hp, gen_helper_vfp_absh, aa32_fp16_arith)
-DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss, aa32_fpsp_v2)
-DO_VFP_2OP(VABS, dp, gen_helper_vfp_absd, aa32_fpdp_v2)
+DO_VFP_2OP(VABS, hp, gen_vfp_absh, aa32_fp16_arith)
+DO_VFP_2OP(VABS, sp, gen_vfp_abss, aa32_fpsp_v2)
+DO_VFP_2OP(VABS, dp, gen_vfp_absd, aa32_fpdp_v2)
-DO_VFP_2OP(VNEG, hp, gen_helper_vfp_negh, aa32_fp16_arith)
-DO_VFP_2OP(VNEG, sp, gen_helper_vfp_negs, aa32_fpsp_v2)
-DO_VFP_2OP(VNEG, dp, gen_helper_vfp_negd, aa32_fpdp_v2)
+DO_VFP_2OP(VNEG, hp, gen_vfp_negh, aa32_fp16_arith)
+DO_VFP_2OP(VNEG, sp, gen_vfp_negs, aa32_fpsp_v2)
+DO_VFP_2OP(VNEG, dp, gen_vfp_negd, aa32_fpdp_v2)
static void gen_VSQRT_hp(TCGv_i32 vd, TCGv_i32 vm)
{
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
index 3e5e37abbe8..ce26b8a71a1 100644
--- a/target/arm/vfp_helper.c
+++ b/target/arm/vfp_helper.c
@@ -281,36 +281,6 @@ VFP_BINOP(minnum)
VFP_BINOP(maxnum)
#undef VFP_BINOP
-dh_ctype_f16 VFP_HELPER(neg, h)(dh_ctype_f16 a)
-{
- return float16_chs(a);
-}
-
-float32 VFP_HELPER(neg, s)(float32 a)
-{
- return float32_chs(a);
-}
-
-float64 VFP_HELPER(neg, d)(float64 a)
-{
- return float64_chs(a);
-}
-
-dh_ctype_f16 VFP_HELPER(abs, h)(dh_ctype_f16 a)
-{
- return float16_abs(a);
-}
-
-float32 VFP_HELPER(abs, s)(float32 a)
-{
- return float32_abs(a);
-}
-
-float64 VFP_HELPER(abs, d)(float64 a)
-{
- return float64_abs(a);
-}
-
dh_ctype_f16 VFP_HELPER(sqrt, h)(dh_ctype_f16 a, CPUARMState *env)
{
return float16_sqrt(a, &env->vfp.fp_status_f16);
--
2.34.1
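The patch above swaps out-of-line helper calls (`gen_helper_vfp_negs` etc.) for inline TCG ops, which is profitable because IEEE-754 negation and absolute value are pure sign-bit manipulation. A minimal sketch of why these fold to a single XOR/AND (hypothetical function names; the real softfloat `float32_chs`/`float32_abs` live in QEMU's fpu code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* IEEE-754 single precision: bit 31 is the sign bit, so negation and
 * absolute value are plain integer bit operations -- no FPU needed. */
static uint32_t f32_neg_bits(uint32_t a) { return a ^ 0x80000000u; }
static uint32_t f32_abs_bits(uint32_t a) { return a & 0x7fffffffu; }

static float f32_neg(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof(bits));   /* type-pun safely via memcpy */
    bits = f32_neg_bits(bits);
    memcpy(&f, &bits, sizeof(f));
    return f;
}

static float f32_abs(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof(bits));
    bits = f32_abs_bits(bits);
    memcpy(&f, &bits, sizeof(f));
    return f;
}
```

Since the operation is one xori/andi, generating it inline avoids a helper call per element.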
* [PULL 29/42] target/arm: Convert FNMUL to decodetree
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
This is the last instruction within disas_fp_2src,
so remove that function and its subroutines.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
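FNMUL negates the *result* of the multiply, which is why the new `gen_fnmul_*` helpers above emit a multiply followed by a negation. A minimal scalar sketch of that semantics (hypothetical function name, ignoring the FPSR flag handling the real helpers do):

```c
#include <assert.h>

/* FNMUL: -(n * m). The product is computed (and rounded) first,
 * then its sign bit is flipped -- matching the mul-then-neg
 * sequence in gen_fnmul_s(). */
static float fnmul_s(float n, float m)
{
    return -(n * m);
}
```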
---
target/arm/tcg/a64.decode | 1 +
target/arm/tcg/translate-a64.c | 177 +++++----------------------------
2 files changed, 27 insertions(+), 151 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index e2678d919e5..cde4b86303d 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -703,6 +703,7 @@ FADD_s 0001 1110 ..1 ..... 0010 10 ..... ..... @rrr_hsd
FSUB_s 0001 1110 ..1 ..... 0011 10 ..... ..... @rrr_hsd
FDIV_s 0001 1110 ..1 ..... 0001 10 ..... ..... @rrr_hsd
FMUL_s 0001 1110 ..1 ..... 0000 10 ..... ..... @rrr_hsd
+FNMUL_s 0001 1110 ..1 ..... 1000 10 ..... ..... @rrr_hsd
FMAX_s 0001 1110 ..1 ..... 0100 10 ..... ..... @rrr_hsd
FMIN_s 0001 1110 ..1 ..... 0101 10 ..... ..... @rrr_hsd
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 878f83298f5..5ba30ba7c86 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4950,6 +4950,31 @@ static const FPScalar f_scalar_fmulx = {
};
TRANS(FMULX_s, do_fp3_scalar, a, &f_scalar_fmulx)
+static void gen_fnmul_h(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_ptr s)
+{
+ gen_helper_vfp_mulh(d, n, m, s);
+ gen_vfp_negh(d, d);
+}
+
+static void gen_fnmul_s(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_ptr s)
+{
+ gen_helper_vfp_muls(d, n, m, s);
+ gen_vfp_negs(d, d);
+}
+
+static void gen_fnmul_d(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_ptr s)
+{
+ gen_helper_vfp_muld(d, n, m, s);
+ gen_vfp_negd(d, d);
+}
+
+static const FPScalar f_scalar_fnmul = {
+ gen_fnmul_h,
+ gen_fnmul_s,
+ gen_fnmul_d,
+};
+TRANS(FNMUL_s, do_fp3_scalar, a, &f_scalar_fnmul)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -6932,156 +6957,6 @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
}
}
-/* Floating-point data-processing (2 source) - single precision */
-static void handle_fp_2src_single(DisasContext *s, int opcode,
- int rd, int rn, int rm)
-{
- TCGv_i32 tcg_op1;
- TCGv_i32 tcg_op2;
- TCGv_i32 tcg_res;
- TCGv_ptr fpst;
-
- tcg_res = tcg_temp_new_i32();
- fpst = fpstatus_ptr(FPST_FPCR);
- tcg_op1 = read_fp_sreg(s, rn);
- tcg_op2 = read_fp_sreg(s, rm);
-
- switch (opcode) {
- case 0x8: /* FNMUL */
- gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_vfp_negs(tcg_res, tcg_res);
- break;
- default:
- case 0x0: /* FMUL */
- case 0x1: /* FDIV */
- case 0x2: /* FADD */
- case 0x3: /* FSUB */
- case 0x4: /* FMAX */
- case 0x5: /* FMIN */
- case 0x6: /* FMAXNM */
- case 0x7: /* FMINNM */
- g_assert_not_reached();
- }
-
- write_fp_sreg(s, rd, tcg_res);
-}
-
-/* Floating-point data-processing (2 source) - double precision */
-static void handle_fp_2src_double(DisasContext *s, int opcode,
- int rd, int rn, int rm)
-{
- TCGv_i64 tcg_op1;
- TCGv_i64 tcg_op2;
- TCGv_i64 tcg_res;
- TCGv_ptr fpst;
-
- tcg_res = tcg_temp_new_i64();
- fpst = fpstatus_ptr(FPST_FPCR);
- tcg_op1 = read_fp_dreg(s, rn);
- tcg_op2 = read_fp_dreg(s, rm);
-
- switch (opcode) {
- case 0x8: /* FNMUL */
- gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_vfp_negd(tcg_res, tcg_res);
- break;
- default:
- case 0x0: /* FMUL */
- case 0x1: /* FDIV */
- case 0x2: /* FADD */
- case 0x3: /* FSUB */
- case 0x4: /* FMAX */
- case 0x5: /* FMIN */
- case 0x6: /* FMAXNM */
- case 0x7: /* FMINNM */
- g_assert_not_reached();
- }
-
- write_fp_dreg(s, rd, tcg_res);
-}
-
-/* Floating-point data-processing (2 source) - half precision */
-static void handle_fp_2src_half(DisasContext *s, int opcode,
- int rd, int rn, int rm)
-{
- TCGv_i32 tcg_op1;
- TCGv_i32 tcg_op2;
- TCGv_i32 tcg_res;
- TCGv_ptr fpst;
-
- tcg_res = tcg_temp_new_i32();
- fpst = fpstatus_ptr(FPST_FPCR_F16);
- tcg_op1 = read_fp_hreg(s, rn);
- tcg_op2 = read_fp_hreg(s, rm);
-
- switch (opcode) {
- case 0x8: /* FNMUL */
- gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_vfp_negh(tcg_res, tcg_res);
- break;
- default:
- case 0x0: /* FMUL */
- case 0x1: /* FDIV */
- case 0x2: /* FADD */
- case 0x3: /* FSUB */
- case 0x4: /* FMAX */
- case 0x5: /* FMIN */
- case 0x6: /* FMAXNM */
- case 0x7: /* FMINNM */
- g_assert_not_reached();
- }
-
- write_fp_sreg(s, rd, tcg_res);
-}
-
-/* Floating point data-processing (2 source)
- * 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 0
- * +---+---+---+-----------+------+---+------+--------+-----+------+------+
- * | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | opcode | 1 0 | Rn | Rd |
- * +---+---+---+-----------+------+---+------+--------+-----+------+------+
- */
-static void disas_fp_2src(DisasContext *s, uint32_t insn)
-{
- int mos = extract32(insn, 29, 3);
- int type = extract32(insn, 22, 2);
- int rd = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int rm = extract32(insn, 16, 5);
- int opcode = extract32(insn, 12, 4);
-
- if (opcode > 8 || mos) {
- unallocated_encoding(s);
- return;
- }
-
- switch (type) {
- case 0:
- if (!fp_access_check(s)) {
- return;
- }
- handle_fp_2src_single(s, opcode, rd, rn, rm);
- break;
- case 1:
- if (!fp_access_check(s)) {
- return;
- }
- handle_fp_2src_double(s, opcode, rd, rn, rm);
- break;
- case 3:
- if (!dc_isar_feature(aa64_fp16, s)) {
- unallocated_encoding(s);
- return;
- }
- if (!fp_access_check(s)) {
- return;
- }
- handle_fp_2src_half(s, opcode, rd, rn, rm);
- break;
- default:
- unallocated_encoding(s);
- }
-}
-
/* Floating-point data-processing (3 source) - single precision */
static void handle_fp_3src_single(DisasContext *s, bool o0, bool o1,
int rd, int rn, int rm, int ra)
@@ -7685,7 +7560,7 @@ static void disas_data_proc_fp(DisasContext *s, uint32_t insn)
break;
case 2:
/* Floating point data-processing (2 source) */
- disas_fp_2src(s, insn);
+ unallocated_encoding(s); /* in decodetree */
break;
case 3:
/* Floating point conditional select */
--
2.34.1
* [PULL 30/42] target/arm: Convert FMLA, FMLS to decodetree
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 2 +
target/arm/tcg/a64.decode | 22 +++
target/arm/tcg/translate-a64.c | 241 +++++++++++++++++----------------
target/arm/tcg/vec_helper.c | 14 ++
4 files changed, 163 insertions(+), 116 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 0fd01c9c52d..e021c185178 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -770,9 +770,11 @@ DEF_HELPER_FLAGS_5(gvec_fmls_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_vfma_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_vfma_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_vfma_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_vfms_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_vfms_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_vfms_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_ftsmul_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index cde4b86303d..11527bb5e5e 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -742,12 +742,26 @@ FMINNM_v 0.00 1110 1.1 ..... 11000 1 ..... ..... @qrrr_sd
FMULX_v 0.00 1110 010 ..... 00011 1 ..... ..... @qrrr_h
FMULX_v 0.00 1110 0.1 ..... 11011 1 ..... ..... @qrrr_sd
+FMLA_v 0.00 1110 010 ..... 00001 1 ..... ..... @qrrr_h
+FMLA_v 0.00 1110 0.1 ..... 11001 1 ..... ..... @qrrr_sd
+
+FMLS_v 0.00 1110 110 ..... 00001 1 ..... ..... @qrrr_h
+FMLS_v 0.00 1110 1.1 ..... 11001 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
FMUL_si 0101 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
FMUL_si 0101 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
+FMLA_si 0101 1111 00 .. .... 0001 . 0 ..... ..... @rrx_h
+FMLA_si 0101 1111 10 .. .... 0001 . 0 ..... ..... @rrx_s
+FMLA_si 0101 1111 11 0. .... 0001 . 0 ..... ..... @rrx_d
+
+FMLS_si 0101 1111 00 .. .... 0101 . 0 ..... ..... @rrx_h
+FMLS_si 0101 1111 10 .. .... 0101 . 0 ..... ..... @rrx_s
+FMLS_si 0101 1111 11 0. .... 0101 . 0 ..... ..... @rrx_d
+
FMULX_si 0111 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
FMULX_si 0111 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
FMULX_si 0111 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
@@ -758,6 +772,14 @@ FMUL_vi 0.00 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
FMUL_vi 0.00 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
FMUL_vi 0.00 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
+FMLA_vi 0.00 1111 00 .. .... 0001 . 0 ..... ..... @qrrx_h
+FMLA_vi 0.00 1111 10 . ..... 0001 . 0 ..... ..... @qrrx_s
+FMLA_vi 0.00 1111 11 0 ..... 0001 . 0 ..... ..... @qrrx_d
+
+FMLS_vi 0.00 1111 00 .. .... 0101 . 0 ..... ..... @qrrx_h
+FMLS_vi 0.00 1111 10 . ..... 0101 . 0 ..... ..... @qrrx_s
+FMLS_vi 0.00 1111 11 0 ..... 0101 . 0 ..... ..... @qrrx_d
+
FMULX_vi 0.10 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
FMULX_vi 0.10 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
FMULX_vi 0.10 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 5ba30ba7c86..f84c12378dc 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5066,6 +5066,20 @@ static gen_helper_gvec_3_ptr * const f_vector_fmulx[3] = {
};
TRANS(FMULX_v, do_fp3_vector, a, f_vector_fmulx)
+static gen_helper_gvec_3_ptr * const f_vector_fmla[3] = {
+ gen_helper_gvec_vfma_h,
+ gen_helper_gvec_vfma_s,
+ gen_helper_gvec_vfma_d,
+};
+TRANS(FMLA_v, do_fp3_vector, a, f_vector_fmla)
+
+static gen_helper_gvec_3_ptr * const f_vector_fmls[3] = {
+ gen_helper_gvec_vfms_h,
+ gen_helper_gvec_vfms_s,
+ gen_helper_gvec_vfms_d,
+};
+TRANS(FMLS_v, do_fp3_vector, a, f_vector_fmls)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -5115,6 +5129,64 @@ static bool do_fp3_scalar_idx(DisasContext *s, arg_rrx_e *a, const FPScalar *f)
TRANS(FMUL_si, do_fp3_scalar_idx, a, &f_scalar_fmul)
TRANS(FMULX_si, do_fp3_scalar_idx, a, &f_scalar_fmulx)
+static bool do_fmla_scalar_idx(DisasContext *s, arg_rrx_e *a, bool neg)
+{
+ switch (a->esz) {
+ case MO_64:
+ if (fp_access_check(s)) {
+ TCGv_i64 t0 = read_fp_dreg(s, a->rd);
+ TCGv_i64 t1 = read_fp_dreg(s, a->rn);
+ TCGv_i64 t2 = tcg_temp_new_i64();
+
+ read_vec_element(s, t2, a->rm, a->idx, MO_64);
+ if (neg) {
+ gen_vfp_negd(t1, t1);
+ }
+ gen_helper_vfp_muladdd(t0, t1, t2, t0, fpstatus_ptr(FPST_FPCR));
+ write_fp_dreg(s, a->rd, t0);
+ }
+ break;
+ case MO_32:
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = read_fp_sreg(s, a->rd);
+ TCGv_i32 t1 = read_fp_sreg(s, a->rn);
+ TCGv_i32 t2 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t2, a->rm, a->idx, MO_32);
+ if (neg) {
+ gen_vfp_negs(t1, t1);
+ }
+ gen_helper_vfp_muladds(t0, t1, t2, t0, fpstatus_ptr(FPST_FPCR));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = read_fp_hreg(s, a->rd);
+ TCGv_i32 t1 = read_fp_hreg(s, a->rn);
+ TCGv_i32 t2 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t2, a->rm, a->idx, MO_16);
+ if (neg) {
+ gen_vfp_negh(t1, t1);
+ }
+ gen_helper_advsimd_muladdh(t0, t1, t2, t0,
+ fpstatus_ptr(FPST_FPCR_F16));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ return true;
+}
+
+TRANS(FMLA_si, do_fmla_scalar_idx, a, false)
+TRANS(FMLS_si, do_fmla_scalar_idx, a, true)
+
static bool do_fp3_vector_idx(DisasContext *s, arg_qrrx_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5157,6 +5229,42 @@ static gen_helper_gvec_3_ptr * const f_vector_idx_fmulx[3] = {
};
TRANS(FMULX_vi, do_fp3_vector_idx, a, f_vector_idx_fmulx)
+static bool do_fmla_vector_idx(DisasContext *s, arg_qrrx_e *a, bool neg)
+{
+ static gen_helper_gvec_4_ptr * const fns[3] = {
+ gen_helper_gvec_fmla_idx_h,
+ gen_helper_gvec_fmla_idx_s,
+ gen_helper_gvec_fmla_idx_d,
+ };
+ MemOp esz = a->esz;
+
+ switch (esz) {
+ case MO_64:
+ if (!a->q) {
+ return false;
+ }
+ break;
+ case MO_32:
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_op4_fpst(s, a->q, a->rd, a->rn, a->rm, a->rd,
+ esz == MO_16, (a->idx << 1) | neg,
+ fns[esz - 1]);
+ }
+ return true;
+}
+
+TRANS(FMLA_vi, do_fmla_vector_idx, a, false)
+TRANS(FMLS_vi, do_fmla_vector_idx, a, true)
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
@@ -9119,15 +9227,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
read_vec_element(s, tcg_op2, rm, pass, MO_64);
switch (fpopcode) {
- case 0x39: /* FMLS */
- /* As usual for ARM, separate negation for fused multiply-add */
- gen_vfp_negd(tcg_op1, tcg_op1);
- /* fall through */
- case 0x19: /* FMLA */
- read_vec_element(s, tcg_res, rd, pass, MO_64);
- gen_helper_vfp_muladdd(tcg_res, tcg_op1, tcg_op2,
- tcg_res, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9155,10 +9254,12 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
break;
default:
case 0x18: /* FMAXNM */
+ case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
+ case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
@@ -9177,15 +9278,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
switch (fpopcode) {
- case 0x39: /* FMLS */
- /* As usual for ARM, separate negation for fused multiply-add */
- gen_vfp_negs(tcg_op1, tcg_op1);
- /* fall through */
- case 0x19: /* FMLA */
- read_vec_element_i32(s, tcg_res, rd, pass, MO_32);
- gen_helper_vfp_muladds(tcg_res, tcg_op1, tcg_op2,
- tcg_res, fpst);
- break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -9213,10 +9305,12 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
break;
default:
case 0x18: /* FMAXNM */
+ case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
+ case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
@@ -11140,8 +11234,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x3f: /* FRSQRTS */
case 0x5d: /* FACGE */
case 0x7d: /* FACGT */
- case 0x19: /* FMLA */
- case 0x39: /* FMLS */
case 0x1c: /* FCMEQ */
case 0x5c: /* FCMGE */
case 0x7a: /* FABD */
@@ -11174,10 +11266,12 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
default:
case 0x18: /* FMAXNM */
+ case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
+ case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
@@ -11523,10 +11617,8 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
int pass;
switch (fpopcode) {
- case 0x1: /* FMLA */
case 0x4: /* FCMEQ */
case 0x7: /* FRECPS */
- case 0x9: /* FMLS */
case 0xf: /* FRSQRTS */
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
@@ -11544,10 +11636,12 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
break;
default:
case 0x0: /* FMAXNM */
+ case 0x1: /* FMLA */
case 0x2: /* FADD */
case 0x3: /* FMULX */
case 0x6: /* FMAX */
case 0x8: /* FMINNM */
+ case 0x9: /* FMLS */
case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0x13: /* FMUL */
@@ -11617,24 +11711,12 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);
switch (fpopcode) {
- case 0x1: /* FMLA */
- read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
- gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
- fpst);
- break;
case 0x4: /* FCMEQ */
gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x7: /* FRECPS */
gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x9: /* FMLS */
- /* As usual for ARM, separate negation for fused multiply-add */
- tcg_gen_xori_i32(tcg_op1, tcg_op1, 0x8000);
- read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
- gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
- fpst);
- break;
case 0xf: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -11656,10 +11738,12 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
break;
default:
case 0x0: /* FMAXNM */
+ case 0x1: /* FMLA */
case 0x2: /* FADD */
case 0x3: /* FMULX */
case 0x6: /* FMAX */
case 0x8: /* FMINNM */
+ case 0x9: /* FMLS */
case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0x13: /* FMUL */
@@ -12880,10 +12964,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
case 0x0c: /* SQDMULH */
case 0x0d: /* SQRDMULH */
break;
- case 0x01: /* FMLA */
- case 0x05: /* FMLS */
- is_fp = 1;
- break;
case 0x1d: /* SQRDMLAH */
case 0x1f: /* SQRDMLSH */
if (!dc_isar_feature(aa64_rdm, s)) {
@@ -12950,6 +13030,8 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
/* is_fp, but we pass tcg_env not fp_status. */
break;
default:
+ case 0x01: /* FMLA */
+ case 0x05: /* FMLS */
case 0x09: /* FMUL */
case 0x19: /* FMULX */
unallocated_encoding(s);
@@ -12958,20 +13040,8 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
switch (is_fp) {
case 1: /* normal fp */
- /* convert insn encoded size to MemOp size */
- switch (size) {
- case 0: /* half-precision */
- size = MO_16;
- is_fp16 = true;
- break;
- case MO_32: /* single precision */
- case MO_64: /* double precision */
- break;
- default:
- unallocated_encoding(s);
- return;
- }
- break;
+ unallocated_encoding(s); /* in decodetree */
+ return;
case 2: /* complex fp */
/* Each indexable element is a complex pair. */
@@ -13150,38 +13220,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
if (size == 3) {
- TCGv_i64 tcg_idx = tcg_temp_new_i64();
- int pass;
-
- assert(is_fp && is_q && !is_long);
-
- read_vec_element(s, tcg_idx, rm, index, MO_64);
-
- for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
- TCGv_i64 tcg_op = tcg_temp_new_i64();
- TCGv_i64 tcg_res = tcg_temp_new_i64();
-
- read_vec_element(s, tcg_op, rn, pass, MO_64);
-
- switch (16 * u + opcode) {
- case 0x05: /* FMLS */
- /* As usual for ARM, separate negation for fused multiply-add */
- gen_vfp_negd(tcg_op, tcg_op);
- /* fall through */
- case 0x01: /* FMLA */
- read_vec_element(s, tcg_res, rd, pass, MO_64);
- gen_helper_vfp_muladdd(tcg_res, tcg_op, tcg_idx, tcg_res, fpst);
- break;
- default:
- case 0x09: /* FMUL */
- case 0x19: /* FMULX */
- g_assert_not_reached();
- }
-
- write_vec_element(s, tcg_res, rd, pass, MO_64);
- }
-
- clear_vec_high(s, !is_scalar, rd);
+ g_assert_not_reached();
} else if (!is_long) {
/* 32 bit floating point, or 16 or 32 bit integer.
* For the 16 bit scalar case we use the usual Neon helpers and
@@ -13237,38 +13276,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
genfn(tcg_res, tcg_op, tcg_res);
break;
}
- case 0x05: /* FMLS */
- case 0x01: /* FMLA */
- read_vec_element_i32(s, tcg_res, rd, pass,
- is_scalar ? size : MO_32);
- switch (size) {
- case 1:
- if (opcode == 0x5) {
- /* As usual for ARM, separate negation for fused
- * multiply-add */
- tcg_gen_xori_i32(tcg_op, tcg_op, 0x80008000);
- }
- if (is_scalar) {
- gen_helper_advsimd_muladdh(tcg_res, tcg_op, tcg_idx,
- tcg_res, fpst);
- } else {
- gen_helper_advsimd_muladd2h(tcg_res, tcg_op, tcg_idx,
- tcg_res, fpst);
- }
- break;
- case 2:
- if (opcode == 0x5) {
- /* As usual for ARM, separate negation for
- * fused multiply-add */
- tcg_gen_xori_i32(tcg_op, tcg_op, 0x80000000);
- }
- gen_helper_vfp_muladds(tcg_res, tcg_op, tcg_idx,
- tcg_res, fpst);
- break;
- default:
- g_assert_not_reached();
- }
- break;
case 0x0c: /* SQDMULH */
if (size == 1) {
gen_helper_neon_qdmulh_s16(tcg_res, tcg_env,
@@ -13310,6 +13317,8 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
break;
default:
+ case 0x01: /* FMLA */
+ case 0x05: /* FMLS */
case 0x09: /* FMUL */
case 0x19: /* FMULX */
g_assert_not_reached();
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 99ef6760719..b925b9f21be 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -1309,6 +1309,12 @@ static float32 float32_muladd_f(float32 dest, float32 op1, float32 op2,
return float32_muladd(op1, op2, dest, 0, stat);
}
+static float64 float64_muladd_f(float64 dest, float64 op1, float64 op2,
+ float_status *stat)
+{
+ return float64_muladd(op1, op2, dest, 0, stat);
+}
+
static float16 float16_mulsub_f(float16 dest, float16 op1, float16 op2,
float_status *stat)
{
@@ -1321,6 +1327,12 @@ static float32 float32_mulsub_f(float32 dest, float32 op1, float32 op2,
return float32_muladd(float32_chs(op1), op2, dest, 0, stat);
}
+static float64 float64_mulsub_f(float64 dest, float64 op1, float64 op2,
+ float_status *stat)
+{
+ return float64_muladd(float64_chs(op1), op2, dest, 0, stat);
+}
+
#define DO_MULADD(NAME, FUNC, TYPE) \
void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
{ \
@@ -1340,9 +1352,11 @@ DO_MULADD(gvec_fmls_s, float32_mulsub_nf, float32)
DO_MULADD(gvec_vfma_h, float16_muladd_f, float16)
DO_MULADD(gvec_vfma_s, float32_muladd_f, float32)
+DO_MULADD(gvec_vfma_d, float64_muladd_f, float64)
DO_MULADD(gvec_vfms_h, float16_mulsub_f, float16)
DO_MULADD(gvec_vfms_s, float32_mulsub_f, float32)
+DO_MULADD(gvec_vfms_d, float64_mulsub_f, float64)
/* For the indexed ops, SVE applies the index per 128-bit vector segment.
* For AdvSIMD, there is of course only one such vector segment.
--
2.34.1
* [PULL 31/42] target/arm: Convert FCMEQ, FCMGE, FCMGT, FACGE, FACGT to decodetree
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
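The compare helpers wired up below set each result element to all-ones when the predicate holds and all-zeros otherwise; the "absolute" variants FACGE/FACGT compare magnitudes. A minimal scalar single-precision sketch (hypothetical names, omitting the NaN and FPSR handling the real helpers perform):

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>

/* AdvSIMD FP compares: ~0u (all ones) on true, 0 on false. */
static uint32_t fcmeq_s(float n, float m) { return n == m ? ~0u : 0; }
static uint32_t fcmge_s(float n, float m) { return n >= m ? ~0u : 0; }
static uint32_t fcmgt_s(float n, float m) { return n >  m ? ~0u : 0; }

/* FACGE/FACGT compare absolute values. */
static uint32_t facge_s(float n, float m)
{
    return fabsf(n) >= fabsf(m) ? ~0u : 0;
}

static uint32_t facgt_s(float n, float m)
{
    return fabsf(n) > fabsf(m) ? ~0u : 0;
}
```

The all-ones/all-zeros encoding lets the results feed directly into vector bitwise-select operations.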
---
target/arm/helper.h | 5 +
target/arm/tcg/a64.decode | 30 ++++++
target/arm/tcg/translate-a64.c | 188 +++++++++++++++++++--------------
target/arm/tcg/vec_helper.c | 30 ++++++
4 files changed, 174 insertions(+), 79 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index e021c185178..8d076011c18 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -727,18 +727,23 @@ DEF_HELPER_FLAGS_5(gvec_fabd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fceq_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fceq_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fceq_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fcge_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fcge_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fcge_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fcgt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fcgt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fcgt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_facge_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_facge_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_facge_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_facgt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_facgt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_facgt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmax_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmax_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 11527bb5e5e..7fc3277be67 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -713,6 +713,21 @@ FMINNM_s 0001 1110 ..1 ..... 0111 10 ..... ..... @rrr_hsd
FMULX_s 0101 1110 010 ..... 00011 1 ..... ..... @rrr_h
FMULX_s 0101 1110 0.1 ..... 11011 1 ..... ..... @rrr_sd
+FCMEQ_s 0101 1110 010 ..... 00100 1 ..... ..... @rrr_h
+FCMEQ_s 0101 1110 0.1 ..... 11100 1 ..... ..... @rrr_sd
+
+FCMGE_s 0111 1110 010 ..... 00100 1 ..... ..... @rrr_h
+FCMGE_s 0111 1110 0.1 ..... 11100 1 ..... ..... @rrr_sd
+
+FCMGT_s 0111 1110 110 ..... 00100 1 ..... ..... @rrr_h
+FCMGT_s 0111 1110 1.1 ..... 11100 1 ..... ..... @rrr_sd
+
+FACGE_s 0111 1110 010 ..... 00101 1 ..... ..... @rrr_h
+FACGE_s 0111 1110 0.1 ..... 11101 1 ..... ..... @rrr_sd
+
+FACGT_s 0111 1110 110 ..... 00101 1 ..... ..... @rrr_h
+FACGT_s 0111 1110 1.1 ..... 11101 1 ..... ..... @rrr_sd
+
### Advanced SIMD three same
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -748,6 +763,21 @@ FMLA_v 0.00 1110 0.1 ..... 11001 1 ..... ..... @qrrr_sd
FMLS_v 0.00 1110 110 ..... 00001 1 ..... ..... @qrrr_h
FMLS_v 0.00 1110 1.1 ..... 11001 1 ..... ..... @qrrr_sd
+FCMEQ_v 0.00 1110 010 ..... 00100 1 ..... ..... @qrrr_h
+FCMEQ_v 0.00 1110 0.1 ..... 11100 1 ..... ..... @qrrr_sd
+
+FCMGE_v 0.10 1110 010 ..... 00100 1 ..... ..... @qrrr_h
+FCMGE_v 0.10 1110 0.1 ..... 11100 1 ..... ..... @qrrr_sd
+
+FCMGT_v 0.10 1110 110 ..... 00100 1 ..... ..... @qrrr_h
+FCMGT_v 0.10 1110 1.1 ..... 11100 1 ..... ..... @qrrr_sd
+
+FACGE_v 0.10 1110 010 ..... 00101 1 ..... ..... @qrrr_h
+FACGE_v 0.10 1110 0.1 ..... 11101 1 ..... ..... @qrrr_sd
+
+FACGT_v 0.10 1110 110 ..... 00101 1 ..... ..... @qrrr_h
+FACGT_v 0.10 1110 1.1 ..... 11101 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index f84c12378dc..75b0c1a005e 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -4975,6 +4975,41 @@ static const FPScalar f_scalar_fnmul = {
};
TRANS(FNMUL_s, do_fp3_scalar, a, &f_scalar_fnmul)
+static const FPScalar f_scalar_fcmeq = {
+ gen_helper_advsimd_ceq_f16,
+ gen_helper_neon_ceq_f32,
+ gen_helper_neon_ceq_f64,
+};
+TRANS(FCMEQ_s, do_fp3_scalar, a, &f_scalar_fcmeq)
+
+static const FPScalar f_scalar_fcmge = {
+ gen_helper_advsimd_cge_f16,
+ gen_helper_neon_cge_f32,
+ gen_helper_neon_cge_f64,
+};
+TRANS(FCMGE_s, do_fp3_scalar, a, &f_scalar_fcmge)
+
+static const FPScalar f_scalar_fcmgt = {
+ gen_helper_advsimd_cgt_f16,
+ gen_helper_neon_cgt_f32,
+ gen_helper_neon_cgt_f64,
+};
+TRANS(FCMGT_s, do_fp3_scalar, a, &f_scalar_fcmgt)
+
+static const FPScalar f_scalar_facge = {
+ gen_helper_advsimd_acge_f16,
+ gen_helper_neon_acge_f32,
+ gen_helper_neon_acge_f64,
+};
+TRANS(FACGE_s, do_fp3_scalar, a, &f_scalar_facge)
+
+static const FPScalar f_scalar_facgt = {
+ gen_helper_advsimd_acgt_f16,
+ gen_helper_neon_acgt_f32,
+ gen_helper_neon_acgt_f64,
+};
+TRANS(FACGT_s, do_fp3_scalar, a, &f_scalar_facgt)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5080,6 +5115,41 @@ static gen_helper_gvec_3_ptr * const f_vector_fmls[3] = {
};
TRANS(FMLS_v, do_fp3_vector, a, f_vector_fmls)
+static gen_helper_gvec_3_ptr * const f_vector_fcmeq[3] = {
+ gen_helper_gvec_fceq_h,
+ gen_helper_gvec_fceq_s,
+ gen_helper_gvec_fceq_d,
+};
+TRANS(FCMEQ_v, do_fp3_vector, a, f_vector_fcmeq)
+
+static gen_helper_gvec_3_ptr * const f_vector_fcmge[3] = {
+ gen_helper_gvec_fcge_h,
+ gen_helper_gvec_fcge_s,
+ gen_helper_gvec_fcge_d,
+};
+TRANS(FCMGE_v, do_fp3_vector, a, f_vector_fcmge)
+
+static gen_helper_gvec_3_ptr * const f_vector_fcmgt[3] = {
+ gen_helper_gvec_fcgt_h,
+ gen_helper_gvec_fcgt_s,
+ gen_helper_gvec_fcgt_d,
+};
+TRANS(FCMGT_v, do_fp3_vector, a, f_vector_fcmgt)
+
+static gen_helper_gvec_3_ptr * const f_vector_facge[3] = {
+ gen_helper_gvec_facge_h,
+ gen_helper_gvec_facge_s,
+ gen_helper_gvec_facge_d,
+};
+TRANS(FACGE_v, do_fp3_vector, a, f_vector_facge)
+
+static gen_helper_gvec_3_ptr * const f_vector_facgt[3] = {
+ gen_helper_gvec_facgt_h,
+ gen_helper_gvec_facgt_s,
+ gen_helper_gvec_facgt_d,
+};
+TRANS(FACGT_v, do_fp3_vector, a, f_vector_facgt)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -9227,43 +9297,33 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
read_vec_element(s, tcg_op2, rm, pass, MO_64);
switch (fpopcode) {
- case 0x1c: /* FCMEQ */
- gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1f: /* FRECPS */
gen_helper_recpsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5c: /* FCMGE */
- gen_helper_neon_cge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x5d: /* FACGE */
- gen_helper_neon_acge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7a: /* FABD */
gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
gen_vfp_absd(tcg_res, tcg_res);
break;
- case 0x7c: /* FCMGT */
- gen_helper_neon_cgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x7d: /* FACGT */
- gen_helper_neon_acgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
case 0x18: /* FMAXNM */
case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1c: /* FCMEQ */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
+ case 0x5c: /* FCMGE */
+ case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7c: /* FCMGT */
+ case 0x7d: /* FACGT */
g_assert_not_reached();
}
@@ -9278,43 +9338,33 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
switch (fpopcode) {
- case 0x1c: /* FCMEQ */
- gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1f: /* FRECPS */
gen_helper_recpsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x5c: /* FCMGE */
- gen_helper_neon_cge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x5d: /* FACGE */
- gen_helper_neon_acge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7a: /* FABD */
gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
gen_vfp_abss(tcg_res, tcg_res);
break;
- case 0x7c: /* FCMGT */
- gen_helper_neon_cgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x7d: /* FACGT */
- gen_helper_neon_acgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
case 0x18: /* FMAXNM */
case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1c: /* FCMEQ */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
+ case 0x5c: /* FCMGE */
+ case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7c: /* FCMGT */
+ case 0x7d: /* FACGT */
g_assert_not_reached();
}
@@ -9355,15 +9405,15 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
switch (fpopcode) {
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
+ case 0x7a: /* FABD */
+ break;
+ default:
+ case 0x1b: /* FMULX */
case 0x5d: /* FACGE */
case 0x7d: /* FACGT */
case 0x1c: /* FCMEQ */
case 0x5c: /* FCMGE */
case 0x7c: /* FCMGT */
- case 0x7a: /* FABD */
- break;
- default:
- case 0x1b: /* FMULX */
unallocated_encoding(s);
return;
}
@@ -9516,17 +9566,17 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
TCGv_i32 tcg_res;
switch (fpopcode) {
- case 0x04: /* FCMEQ (reg) */
case 0x07: /* FRECPS */
case 0x0f: /* FRSQRTS */
- case 0x14: /* FCMGE (reg) */
- case 0x15: /* FACGE */
case 0x1a: /* FABD */
- case 0x1c: /* FCMGT (reg) */
- case 0x1d: /* FACGT */
break;
default:
case 0x03: /* FMULX */
+ case 0x04: /* FCMEQ (reg) */
+ case 0x14: /* FCMGE (reg) */
+ case 0x15: /* FACGE */
+ case 0x1c: /* FCMGT (reg) */
+ case 0x1d: /* FACGT */
unallocated_encoding(s);
return;
}
@@ -9546,33 +9596,23 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
tcg_res = tcg_temp_new_i32();
switch (fpopcode) {
- case 0x04: /* FCMEQ (reg) */
- gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x07: /* FRECPS */
gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x0f: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x14: /* FCMGE (reg) */
- gen_helper_advsimd_cge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x15: /* FACGE */
- gen_helper_advsimd_acge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1a: /* FABD */
gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
break;
- case 0x1c: /* FCMGT (reg) */
- gen_helper_advsimd_cgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x1d: /* FACGT */
- gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
case 0x03: /* FMULX */
+ case 0x04: /* FCMEQ (reg) */
+ case 0x14: /* FCMGE (reg) */
+ case 0x15: /* FACGE */
+ case 0x1c: /* FCMGT (reg) */
+ case 0x1d: /* FACGT */
g_assert_not_reached();
}
@@ -11232,12 +11272,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
return;
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
- case 0x5d: /* FACGE */
- case 0x7d: /* FACGT */
- case 0x1c: /* FCMEQ */
- case 0x5c: /* FCMGE */
case 0x7a: /* FABD */
- case 0x7c: /* FCMGT */
if (!fp_access_check(s)) {
return;
}
@@ -11269,13 +11304,18 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x19: /* FMLA */
case 0x1a: /* FADD */
case 0x1b: /* FMULX */
+ case 0x1c: /* FCMEQ */
case 0x1e: /* FMAX */
case 0x38: /* FMINNM */
case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x5b: /* FMUL */
+ case 0x5c: /* FCMGE */
+ case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7d: /* FACGT */
+ case 0x7c: /* FCMGT */
unallocated_encoding(s);
return;
}
@@ -11617,14 +11657,9 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
int pass;
switch (fpopcode) {
- case 0x4: /* FCMEQ */
case 0x7: /* FRECPS */
case 0xf: /* FRSQRTS */
- case 0x14: /* FCMGE */
- case 0x15: /* FACGE */
case 0x1a: /* FABD */
- case 0x1c: /* FCMGT */
- case 0x1d: /* FACGT */
pairwise = false;
break;
case 0x10: /* FMAXNMP */
@@ -11639,13 +11674,18 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x1: /* FMLA */
case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0x4: /* FCMEQ */
case 0x6: /* FMAX */
case 0x8: /* FMINNM */
case 0x9: /* FMLS */
case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0x13: /* FMUL */
+ case 0x14: /* FCMGE */
+ case 0x15: /* FACGE */
case 0x17: /* FDIV */
+ case 0x1c: /* FCMGT */
+ case 0x1d: /* FACGT */
unallocated_encoding(s);
return;
}
@@ -11711,43 +11751,33 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);
switch (fpopcode) {
- case 0x4: /* FCMEQ */
- gen_helper_advsimd_ceq_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x7: /* FRECPS */
gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0xf: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x14: /* FCMGE */
- gen_helper_advsimd_cge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x15: /* FACGE */
- gen_helper_advsimd_acge_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0x1a: /* FABD */
gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
break;
- case 0x1c: /* FCMGT */
- gen_helper_advsimd_cgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x1d: /* FACGT */
- gen_helper_advsimd_acgt_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
case 0x0: /* FMAXNM */
case 0x1: /* FMLA */
case 0x2: /* FADD */
case 0x3: /* FMULX */
+ case 0x4: /* FCMEQ */
case 0x6: /* FMAX */
case 0x8: /* FMINNM */
case 0x9: /* FMLS */
case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0x13: /* FMUL */
+ case 0x14: /* FCMGE */
+ case 0x15: /* FACGE */
case 0x17: /* FDIV */
+ case 0x1c: /* FCMGT */
+ case 0x1d: /* FACGT */
g_assert_not_reached();
}
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index b925b9f21be..dabefa3526d 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -971,6 +971,11 @@ static uint32_t float32_ceq(float32 op1, float32 op2, float_status *stat)
return -float32_eq_quiet(op1, op2, stat);
}
+static uint64_t float64_ceq(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_eq_quiet(op1, op2, stat);
+}
+
static uint16_t float16_cge(float16 op1, float16 op2, float_status *stat)
{
return -float16_le(op2, op1, stat);
@@ -981,6 +986,11 @@ static uint32_t float32_cge(float32 op1, float32 op2, float_status *stat)
return -float32_le(op2, op1, stat);
}
+static uint64_t float64_cge(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_le(op2, op1, stat);
+}
+
static uint16_t float16_cgt(float16 op1, float16 op2, float_status *stat)
{
return -float16_lt(op2, op1, stat);
@@ -991,6 +1001,11 @@ static uint32_t float32_cgt(float32 op1, float32 op2, float_status *stat)
return -float32_lt(op2, op1, stat);
}
+static uint64_t float64_cgt(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_lt(op2, op1, stat);
+}
+
static uint16_t float16_acge(float16 op1, float16 op2, float_status *stat)
{
return -float16_le(float16_abs(op2), float16_abs(op1), stat);
@@ -1001,6 +1016,11 @@ static uint32_t float32_acge(float32 op1, float32 op2, float_status *stat)
return -float32_le(float32_abs(op2), float32_abs(op1), stat);
}
+static uint64_t float64_acge(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_le(float64_abs(op2), float64_abs(op1), stat);
+}
+
static uint16_t float16_acgt(float16 op1, float16 op2, float_status *stat)
{
return -float16_lt(float16_abs(op2), float16_abs(op1), stat);
@@ -1011,6 +1031,11 @@ static uint32_t float32_acgt(float32 op1, float32 op2, float_status *stat)
return -float32_lt(float32_abs(op2), float32_abs(op1), stat);
}
+static uint64_t float64_acgt(float64 op1, float64 op2, float_status *stat)
+{
+ return -float64_lt(float64_abs(op2), float64_abs(op1), stat);
+}
+
static int16_t vfp_tosszh(float16 x, void *fpstp)
{
float_status *fpst = fpstp;
@@ -1216,18 +1241,23 @@ DO_3OP(gvec_fabd_s, float32_abd, float32)
DO_3OP(gvec_fceq_h, float16_ceq, float16)
DO_3OP(gvec_fceq_s, float32_ceq, float32)
+DO_3OP(gvec_fceq_d, float64_ceq, float64)
DO_3OP(gvec_fcge_h, float16_cge, float16)
DO_3OP(gvec_fcge_s, float32_cge, float32)
+DO_3OP(gvec_fcge_d, float64_cge, float64)
DO_3OP(gvec_fcgt_h, float16_cgt, float16)
DO_3OP(gvec_fcgt_s, float32_cgt, float32)
+DO_3OP(gvec_fcgt_d, float64_cgt, float64)
DO_3OP(gvec_facge_h, float16_acge, float16)
DO_3OP(gvec_facge_s, float32_acge, float32)
+DO_3OP(gvec_facge_d, float64_acge, float64)
DO_3OP(gvec_facgt_h, float16_acgt, float16)
DO_3OP(gvec_facgt_s, float32_acgt, float32)
+DO_3OP(gvec_facgt_d, float64_acgt, float64)
DO_3OP(gvec_fmax_h, float16_max, float16)
DO_3OP(gvec_fmax_s, float32_max, float32)
--
2.34.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
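A note on the helpers this patch adds to vec_helper.c: following the existing float16/float32 versions, the new float64_ceq/cge/cgt/acge/acgt helpers return an element-wide mask (all-ones for true, zero for false) rather than a 0/1 boolean, which is what the unary minus on the softfloat comparison result achieves. A minimal sketch of that convention (hypothetical Python, not part of the patch; host floats and a 64-bit integer mask stand in for QEMU's softfloat types and float_status):

```python
MASK64 = 0xFFFFFFFFFFFFFFFF

def float64_ceq(op1: float, op2: float) -> int:
    # Quiet equality: NaN operands simply compare unequal, and
    # -(True) == -1 sign-extends into an all-ones 64-bit mask.
    return -(op1 == op2) & MASK64

def float64_cge(op1: float, op2: float) -> int:
    # "op1 >= op2" is expressed as "op2 <= op1", mirroring the
    # float64_le(op2, op1, stat) call in the patch.
    return -(op2 <= op1) & MASK64

print(hex(float64_ceq(1.5, 1.5)))          # all-ones mask
print(hex(float64_cge(float('nan'), 1.0))) # 0x0: NaN fails ordered compares
```

The mask form lets the gvec helpers produced by DO_3OP write the comparison result straight into the destination vector element with no further widening.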
* [PULL 32/42] target/arm: Convert FABD to decodetree
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (30 preceding siblings ...)
2024-05-28 14:07 ` [PULL 31/42] target/arm: Convert FCMEQ, FCMGE, FCMGT, FACGE, FACGT " Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 33/42] target/arm: Convert FRECPS, FRSQRTS " Peter Maydell
` (10 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 1 +
target/arm/tcg/a64.decode | 6 ++++
target/arm/tcg/translate-a64.c | 60 ++++++++++++++++++++++------------
target/arm/tcg/vec_helper.c | 6 ++++
4 files changed, 53 insertions(+), 20 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 8d076011c18..ff6e3094f41 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -724,6 +724,7 @@ DEF_HELPER_FLAGS_5(gvec_fmul_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fabd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fabd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fabd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fceq_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fceq_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 7fc3277be67..a852b5f06f0 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -728,6 +728,9 @@ FACGE_s 0111 1110 0.1 ..... 11101 1 ..... ..... @rrr_sd
FACGT_s 0111 1110 110 ..... 00101 1 ..... ..... @rrr_h
FACGT_s 0111 1110 1.1 ..... 11101 1 ..... ..... @rrr_sd
+FABD_s 0111 1110 110 ..... 00010 1 ..... ..... @rrr_h
+FABD_s 0111 1110 1.1 ..... 11010 1 ..... ..... @rrr_sd
+
### Advanced SIMD three same
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -778,6 +781,9 @@ FACGE_v 0.10 1110 0.1 ..... 11101 1 ..... ..... @qrrr_sd
FACGT_v 0.10 1110 110 ..... 00101 1 ..... ..... @qrrr_h
FACGT_v 0.10 1110 1.1 ..... 11101 1 ..... ..... @qrrr_sd
+FABD_v 0.10 1110 110 ..... 00010 1 ..... ..... @qrrr_h
+FABD_v 0.10 1110 1.1 ..... 11010 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 75b0c1a005e..633384d2a56 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5010,6 +5010,31 @@ static const FPScalar f_scalar_facgt = {
};
TRANS(FACGT_s, do_fp3_scalar, a, &f_scalar_facgt)
+static void gen_fabd_h(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_ptr s)
+{
+ gen_helper_vfp_subh(d, n, m, s);
+ gen_vfp_absh(d, d);
+}
+
+static void gen_fabd_s(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_ptr s)
+{
+ gen_helper_vfp_subs(d, n, m, s);
+ gen_vfp_abss(d, d);
+}
+
+static void gen_fabd_d(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_ptr s)
+{
+ gen_helper_vfp_subd(d, n, m, s);
+ gen_vfp_absd(d, d);
+}
+
+static const FPScalar f_scalar_fabd = {
+ gen_fabd_h,
+ gen_fabd_s,
+ gen_fabd_d,
+};
+TRANS(FABD_s, do_fp3_scalar, a, &f_scalar_fabd)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5150,6 +5175,13 @@ static gen_helper_gvec_3_ptr * const f_vector_facgt[3] = {
};
TRANS(FACGT_v, do_fp3_vector, a, f_vector_facgt)
+static gen_helper_gvec_3_ptr * const f_vector_fabd[3] = {
+ gen_helper_gvec_fabd_h,
+ gen_helper_gvec_fabd_s,
+ gen_helper_gvec_fabd_d,
+};
+TRANS(FABD_v, do_fp3_vector, a, f_vector_fabd)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -9303,10 +9335,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x7a: /* FABD */
- gen_helper_vfp_subd(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_vfp_absd(tcg_res, tcg_res);
- break;
default:
case 0x18: /* FMAXNM */
case 0x19: /* FMLA */
@@ -9322,6 +9350,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x5c: /* FCMGE */
case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7a: /* FABD */
case 0x7c: /* FCMGT */
case 0x7d: /* FACGT */
g_assert_not_reached();
@@ -9344,10 +9373,6 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x3f: /* FRSQRTS */
gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x7a: /* FABD */
- gen_helper_vfp_subs(tcg_res, tcg_op1, tcg_op2, fpst);
- gen_vfp_abss(tcg_res, tcg_res);
- break;
default:
case 0x18: /* FMAXNM */
case 0x19: /* FMLA */
@@ -9363,6 +9388,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x5c: /* FCMGE */
case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7a: /* FABD */
case 0x7c: /* FCMGT */
case 0x7d: /* FACGT */
g_assert_not_reached();
@@ -9405,7 +9431,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
switch (fpopcode) {
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
- case 0x7a: /* FABD */
break;
default:
case 0x1b: /* FMULX */
@@ -9413,6 +9438,7 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
case 0x7d: /* FACGT */
case 0x1c: /* FCMEQ */
case 0x5c: /* FCMGE */
+ case 0x7a: /* FABD */
case 0x7c: /* FCMGT */
unallocated_encoding(s);
return;
@@ -9568,13 +9594,13 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
switch (fpopcode) {
case 0x07: /* FRECPS */
case 0x0f: /* FRSQRTS */
- case 0x1a: /* FABD */
break;
default:
case 0x03: /* FMULX */
case 0x04: /* FCMEQ (reg) */
case 0x14: /* FCMGE (reg) */
case 0x15: /* FACGE */
+ case 0x1a: /* FABD */
case 0x1c: /* FCMGT (reg) */
case 0x1d: /* FACGT */
unallocated_encoding(s);
@@ -9602,15 +9628,12 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
case 0x0f: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1a: /* FABD */
- gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
- tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
- break;
default:
case 0x03: /* FMULX */
case 0x04: /* FCMEQ (reg) */
case 0x14: /* FCMGE (reg) */
case 0x15: /* FACGE */
+ case 0x1a: /* FABD */
case 0x1c: /* FCMGT (reg) */
case 0x1d: /* FACGT */
g_assert_not_reached();
@@ -11272,7 +11295,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
return;
case 0x1f: /* FRECPS */
case 0x3f: /* FRSQRTS */
- case 0x7a: /* FABD */
if (!fp_access_check(s)) {
return;
}
@@ -11314,6 +11336,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x5c: /* FCMGE */
case 0x5d: /* FACGE */
case 0x5f: /* FDIV */
+ case 0x7a: /* FABD */
case 0x7d: /* FACGT */
case 0x7c: /* FCMGT */
unallocated_encoding(s);
@@ -11659,7 +11682,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
switch (fpopcode) {
case 0x7: /* FRECPS */
case 0xf: /* FRSQRTS */
- case 0x1a: /* FABD */
pairwise = false;
break;
case 0x10: /* FMAXNMP */
@@ -11684,6 +11706,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
case 0x17: /* FDIV */
+ case 0x1a: /* FABD */
case 0x1c: /* FCMGT */
case 0x1d: /* FACGT */
unallocated_encoding(s);
@@ -11757,10 +11780,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0xf: /* FRSQRTS */
gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0x1a: /* FABD */
- gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
- tcg_gen_andi_i32(tcg_res, tcg_res, 0x7fff);
- break;
default:
case 0x0: /* FMAXNM */
case 0x1: /* FMLA */
@@ -11776,6 +11795,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
case 0x17: /* FDIV */
+ case 0x1a: /* FABD */
case 0x1c: /* FCMGT */
case 0x1d: /* FACGT */
g_assert_not_reached();
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index dabefa3526d..e9d7922f303 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -1154,6 +1154,11 @@ static float32 float32_abd(float32 op1, float32 op2, float_status *stat)
return float32_abs(float32_sub(op1, op2, stat));
}
+static float64 float64_abd(float64 op1, float64 op2, float_status *stat)
+{
+ return float64_abs(float64_sub(op1, op2, stat));
+}
+
/*
* Reciprocal step. These are the AArch32 version which uses a
* non-fused multiply-and-subtract.
@@ -1238,6 +1243,7 @@ DO_3OP(gvec_ftsmul_d, float64_ftsmul, float64)
DO_3OP(gvec_fabd_h, float16_abd, float16)
DO_3OP(gvec_fabd_s, float32_abd, float32)
+DO_3OP(gvec_fabd_d, float64_abd, float64)
DO_3OP(gvec_fceq_h, float16_ceq, float16)
DO_3OP(gvec_fceq_s, float32_ceq, float32)
--
2.34.1
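Worth noting about the FABD conversion above: the new gen_fabd_* expanders and the float64_abd helper compose two existing primitives, subtract then absolute value, rather than introducing a new one, and the FACGE/FACGT family from the previous patch applies the same magnitude trick before comparing. A sketch of both semantics (hypothetical Python, not part of the patch; host floats stand in for softfloat):

```python
MASK64 = 0xFFFFFFFFFFFFFFFF

def float64_abd(op1: float, op2: float) -> float:
    # FABD: absolute difference, abs(op1 - op2), matching
    # float64_abs(float64_sub(op1, op2, stat)) in vec_helper.c.
    return abs(op1 - op2)

def float64_acge(op1: float, op2: float) -> int:
    # FACGE: compare magnitudes, then widen the boolean to a mask,
    # matching -float64_le(float64_abs(op2), float64_abs(op1), stat).
    return -(abs(op2) <= abs(op1)) & MASK64

print(float64_abd(1.0, 3.5))   # absolute difference of the operands
```

In the half-precision case the legacy decoder achieved the same absolute value by masking off the sign bit (tcg_gen_andi_i32 with 0x7fff); gen_vfp_absh in the new expander does the equivalent.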
* [PULL 33/42] target/arm: Convert FRECPS, FRSQRTS to decodetree
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (31 preceding siblings ...)
2024-05-28 14:07 ` [PULL 32/42] target/arm: Convert FABD " Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 34/42] target/arm: Convert FADDP " Peter Maydell
` (9 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
These are the last instructions within handle_3same_float
and disas_simd_scalar_three_reg_same_fp16, so those functions can be removed.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 12 ++
target/arm/tcg/translate-a64.c | 293 ++++-----------------------------
2 files changed, 46 insertions(+), 259 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index a852b5f06f0..84cb38f1dd0 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -731,6 +731,12 @@ FACGT_s 0111 1110 1.1 ..... 11101 1 ..... ..... @rrr_sd
FABD_s 0111 1110 110 ..... 00010 1 ..... ..... @rrr_h
FABD_s 0111 1110 1.1 ..... 11010 1 ..... ..... @rrr_sd
+FRECPS_s 0101 1110 010 ..... 00111 1 ..... ..... @rrr_h
+FRECPS_s 0101 1110 0.1 ..... 11111 1 ..... ..... @rrr_sd
+
+FRSQRTS_s 0101 1110 110 ..... 00111 1 ..... ..... @rrr_h
+FRSQRTS_s 0101 1110 1.1 ..... 11111 1 ..... ..... @rrr_sd
+
### Advanced SIMD three same
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -784,6 +790,12 @@ FACGT_v 0.10 1110 1.1 ..... 11101 1 ..... ..... @qrrr_sd
FABD_v 0.10 1110 110 ..... 00010 1 ..... ..... @qrrr_h
FABD_v 0.10 1110 1.1 ..... 11010 1 ..... ..... @qrrr_sd
+FRECPS_v 0.00 1110 010 ..... 00111 1 ..... ..... @qrrr_h
+FRECPS_v 0.00 1110 0.1 ..... 11111 1 ..... ..... @qrrr_sd
+
+FRSQRTS_v 0.00 1110 110 ..... 00111 1 ..... ..... @qrrr_h
+FRSQRTS_v 0.00 1110 1.1 ..... 11111 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 633384d2a56..a7537a5104f 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5035,6 +5035,20 @@ static const FPScalar f_scalar_fabd = {
};
TRANS(FABD_s, do_fp3_scalar, a, &f_scalar_fabd)
+static const FPScalar f_scalar_frecps = {
+ gen_helper_recpsf_f16,
+ gen_helper_recpsf_f32,
+ gen_helper_recpsf_f64,
+};
+TRANS(FRECPS_s, do_fp3_scalar, a, &f_scalar_frecps)
+
+static const FPScalar f_scalar_frsqrts = {
+ gen_helper_rsqrtsf_f16,
+ gen_helper_rsqrtsf_f32,
+ gen_helper_rsqrtsf_f64,
+};
+TRANS(FRSQRTS_s, do_fp3_scalar, a, &f_scalar_frsqrts)
+
static bool do_fp3_vector(DisasContext *s, arg_qrrr_e *a,
gen_helper_gvec_3_ptr * const fns[3])
{
@@ -5182,6 +5196,20 @@ static gen_helper_gvec_3_ptr * const f_vector_fabd[3] = {
};
TRANS(FABD_v, do_fp3_vector, a, f_vector_fabd)
+static gen_helper_gvec_3_ptr * const f_vector_frecps[3] = {
+ gen_helper_gvec_recps_h,
+ gen_helper_gvec_recps_s,
+ gen_helper_gvec_recps_d,
+};
+TRANS(FRECPS_v, do_fp3_vector, a, f_vector_frecps)
+
+static gen_helper_gvec_3_ptr * const f_vector_frsqrts[3] = {
+ gen_helper_gvec_rsqrts_h,
+ gen_helper_gvec_rsqrts_s,
+ gen_helper_gvec_rsqrts_d,
+};
+TRANS(FRSQRTS_v, do_fp3_vector, a, f_vector_frsqrts)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -9308,107 +9336,6 @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
}
}
-/* Handle the 3-same-operands float operations; shared by the scalar
- * and vector encodings. The caller must filter out any encodings
- * not allocated for the encoding it is dealing with.
- */
-static void handle_3same_float(DisasContext *s, int size, int elements,
- int fpopcode, int rd, int rn, int rm)
-{
- int pass;
- TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
-
- for (pass = 0; pass < elements; pass++) {
- if (size) {
- /* Double */
- TCGv_i64 tcg_op1 = tcg_temp_new_i64();
- TCGv_i64 tcg_op2 = tcg_temp_new_i64();
- TCGv_i64 tcg_res = tcg_temp_new_i64();
-
- read_vec_element(s, tcg_op1, rn, pass, MO_64);
- read_vec_element(s, tcg_op2, rm, pass, MO_64);
-
- switch (fpopcode) {
- case 0x1f: /* FRECPS */
- gen_helper_recpsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3f: /* FRSQRTS */
- gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0x18: /* FMAXNM */
- case 0x19: /* FMLA */
- case 0x1a: /* FADD */
- case 0x1b: /* FMULX */
- case 0x1c: /* FCMEQ */
- case 0x1e: /* FMAX */
- case 0x38: /* FMINNM */
- case 0x39: /* FMLS */
- case 0x3a: /* FSUB */
- case 0x3e: /* FMIN */
- case 0x5b: /* FMUL */
- case 0x5c: /* FCMGE */
- case 0x5d: /* FACGE */
- case 0x5f: /* FDIV */
- case 0x7a: /* FABD */
- case 0x7c: /* FCMGT */
- case 0x7d: /* FACGT */
- g_assert_not_reached();
- }
-
- write_vec_element(s, tcg_res, rd, pass, MO_64);
- } else {
- /* Single */
- TCGv_i32 tcg_op1 = tcg_temp_new_i32();
- TCGv_i32 tcg_op2 = tcg_temp_new_i32();
- TCGv_i32 tcg_res = tcg_temp_new_i32();
-
- read_vec_element_i32(s, tcg_op1, rn, pass, MO_32);
- read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
-
- switch (fpopcode) {
- case 0x1f: /* FRECPS */
- gen_helper_recpsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x3f: /* FRSQRTS */
- gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0x18: /* FMAXNM */
- case 0x19: /* FMLA */
- case 0x1a: /* FADD */
- case 0x1b: /* FMULX */
- case 0x1c: /* FCMEQ */
- case 0x1e: /* FMAX */
- case 0x38: /* FMINNM */
- case 0x39: /* FMLS */
- case 0x3a: /* FSUB */
- case 0x3e: /* FMIN */
- case 0x5b: /* FMUL */
- case 0x5c: /* FCMGE */
- case 0x5d: /* FACGE */
- case 0x5f: /* FDIV */
- case 0x7a: /* FABD */
- case 0x7c: /* FCMGT */
- case 0x7d: /* FACGT */
- g_assert_not_reached();
- }
-
- if (elements == 1) {
- /* scalar single so clear high part */
- TCGv_i64 tcg_tmp = tcg_temp_new_i64();
-
- tcg_gen_extu_i32_i64(tcg_tmp, tcg_res);
- write_vec_element(s, tcg_tmp, rd, pass, MO_64);
- } else {
- write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
- }
- }
- }
-
- clear_vec_high(s, elements * (size ? 8 : 4) > 8, rd);
-}
-
/* AdvSIMD scalar three same
* 31 30 29 28 24 23 22 21 20 16 15 11 10 9 5 4 0
* +-----+---+-----------+------+---+------+--------+---+------+------+
@@ -9425,33 +9352,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
bool u = extract32(insn, 29, 1);
TCGv_i64 tcg_rd;
- if (opcode >= 0x18) {
- /* Floating point: U, size[1] and opcode indicate operation */
- int fpopcode = opcode | (extract32(size, 1, 1) << 5) | (u << 6);
- switch (fpopcode) {
- case 0x1f: /* FRECPS */
- case 0x3f: /* FRSQRTS */
- break;
- default:
- case 0x1b: /* FMULX */
- case 0x5d: /* FACGE */
- case 0x7d: /* FACGT */
- case 0x1c: /* FCMEQ */
- case 0x5c: /* FCMGE */
- case 0x7a: /* FABD */
- case 0x7c: /* FCMGT */
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- handle_3same_float(s, extract32(size, 0, 1), 1, fpopcode, rd, rn, rm);
- return;
- }
-
switch (opcode) {
case 0x1: /* SQADD, UQADD */
case 0x5: /* SQSUB, UQSUB */
@@ -9568,80 +9468,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
write_fp_dreg(s, rd, tcg_rd);
}
-/* AdvSIMD scalar three same FP16
- * 31 30 29 28 24 23 22 21 20 16 15 14 13 11 10 9 5 4 0
- * +-----+---+-----------+---+-----+------+-----+--------+---+----+----+
- * | 0 1 | U | 1 1 1 1 0 | a | 1 0 | Rm | 0 0 | opcode | 1 | Rn | Rd |
- * +-----+---+-----------+---+-----+------+-----+--------+---+----+----+
- * v: 0101 1110 0100 0000 0000 0100 0000 0000 => 5e400400
- * m: 1101 1111 0110 0000 1100 0100 0000 0000 => df60c400
- */
-static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
- uint32_t insn)
-{
- int rd = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int opcode = extract32(insn, 11, 3);
- int rm = extract32(insn, 16, 5);
- bool u = extract32(insn, 29, 1);
- bool a = extract32(insn, 23, 1);
- int fpopcode = opcode | (a << 3) | (u << 4);
- TCGv_ptr fpst;
- TCGv_i32 tcg_op1;
- TCGv_i32 tcg_op2;
- TCGv_i32 tcg_res;
-
- switch (fpopcode) {
- case 0x07: /* FRECPS */
- case 0x0f: /* FRSQRTS */
- break;
- default:
- case 0x03: /* FMULX */
- case 0x04: /* FCMEQ (reg) */
- case 0x14: /* FCMGE (reg) */
- case 0x15: /* FACGE */
- case 0x1a: /* FABD */
- case 0x1c: /* FCMGT (reg) */
- case 0x1d: /* FACGT */
- unallocated_encoding(s);
- return;
- }
-
- if (!dc_isar_feature(aa64_fp16, s)) {
- unallocated_encoding(s);
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- fpst = fpstatus_ptr(FPST_FPCR_F16);
-
- tcg_op1 = read_fp_hreg(s, rn);
- tcg_op2 = read_fp_hreg(s, rm);
- tcg_res = tcg_temp_new_i32();
-
- switch (fpopcode) {
- case 0x07: /* FRECPS */
- gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x0f: /* FRSQRTS */
- gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0x03: /* FMULX */
- case 0x04: /* FCMEQ (reg) */
- case 0x14: /* FCMGE (reg) */
- case 0x15: /* FACGE */
- case 0x1a: /* FABD */
- case 0x1c: /* FCMGT (reg) */
- case 0x1d: /* FACGT */
- g_assert_not_reached();
- }
-
- write_fp_sreg(s, rd, tcg_res);
-}
-
/* AdvSIMD scalar three same extra
* 31 30 29 28 24 23 22 21 20 16 15 14 11 10 9 5 4 0
* +-----+---+-----------+------+---+------+---+--------+---+----+----+
@@ -11114,7 +10940,7 @@ static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
/* Pairwise op subgroup of C3.6.16.
*
- * This is called directly or via the handle_3same_float for float pairwise
+ * This is called directly for float pairwise
* operations where the opcode and size are calculated differently.
*/
static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
@@ -11271,10 +11097,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
int rn = extract32(insn, 5, 5);
int rd = extract32(insn, 0, 5);
- int datasize = is_q ? 128 : 64;
- int esize = 32 << size;
- int elements = datasize / esize;
-
if (size == 1 && !is_q) {
unallocated_encoding(s);
return;
@@ -11293,13 +11115,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_32,
rn, rm, rd);
return;
- case 0x1f: /* FRECPS */
- case 0x3f: /* FRSQRTS */
- if (!fp_access_check(s)) {
- return;
- }
- handle_3same_float(s, size, elements, fpopcode, rd, rn, rm);
- return;
case 0x1d: /* FMLAL */
case 0x3d: /* FMLSL */
@@ -11328,10 +11143,12 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x1b: /* FMULX */
case 0x1c: /* FCMEQ */
case 0x1e: /* FMAX */
+ case 0x1f: /* FRECPS */
case 0x38: /* FMINNM */
case 0x39: /* FMLS */
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
+ case 0x3f: /* FRSQRTS */
case 0x5b: /* FMUL */
case 0x5c: /* FCMGE */
case 0x5d: /* FACGE */
@@ -11673,17 +11490,11 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
* together indicate the operation.
*/
int fpopcode = opcode | (a << 3) | (u << 4);
- int datasize = is_q ? 128 : 64;
- int elements = datasize / 16;
bool pairwise;
TCGv_ptr fpst;
int pass;
switch (fpopcode) {
- case 0x7: /* FRECPS */
- case 0xf: /* FRSQRTS */
- pairwise = false;
- break;
case 0x10: /* FMAXNMP */
case 0x12: /* FADDP */
case 0x16: /* FMAXP */
@@ -11698,10 +11509,12 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0x3: /* FMULX */
case 0x4: /* FCMEQ */
case 0x6: /* FMAX */
+ case 0x7: /* FRECPS */
case 0x8: /* FMINNM */
case 0x9: /* FMLS */
case 0xa: /* FSUB */
case 0xe: /* FMIN */
+ case 0xf: /* FRSQRTS */
case 0x13: /* FMUL */
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
@@ -11765,44 +11578,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_16);
}
} else {
- for (pass = 0; pass < elements; pass++) {
- TCGv_i32 tcg_op1 = tcg_temp_new_i32();
- TCGv_i32 tcg_op2 = tcg_temp_new_i32();
- TCGv_i32 tcg_res = tcg_temp_new_i32();
-
- read_vec_element_i32(s, tcg_op1, rn, pass, MO_16);
- read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);
-
- switch (fpopcode) {
- case 0x7: /* FRECPS */
- gen_helper_recpsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0xf: /* FRSQRTS */
- gen_helper_rsqrtsf_f16(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0x0: /* FMAXNM */
- case 0x1: /* FMLA */
- case 0x2: /* FADD */
- case 0x3: /* FMULX */
- case 0x4: /* FCMEQ */
- case 0x6: /* FMAX */
- case 0x8: /* FMINNM */
- case 0x9: /* FMLS */
- case 0xa: /* FSUB */
- case 0xe: /* FMIN */
- case 0x13: /* FMUL */
- case 0x14: /* FCMGE */
- case 0x15: /* FACGE */
- case 0x17: /* FDIV */
- case 0x1a: /* FABD */
- case 0x1c: /* FCMGT */
- case 0x1d: /* FACGT */
- g_assert_not_reached();
- }
-
- write_vec_element_i32(s, tcg_res, rd, pass, MO_16);
- }
+ g_assert_not_reached();
}
clear_vec_high(s, is_q, rd);
@@ -13572,7 +13348,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
{ 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
- { 0x5e400400, 0xdf60c400, disas_simd_scalar_three_reg_same_fp16 },
{ 0x00000000, 0x00000000, NULL }
};
--
2.34.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* [PULL 34/42] target/arm: Convert FADDP to decodetree
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (32 preceding siblings ...)
2024-05-28 14:07 ` [PULL 33/42] target/arm: Convert FRECPS, FRSQRTS " Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 35/42] target/arm: Convert FMAXP, FMINP, FMAXNMP, FMINNMP " Peter Maydell
` (8 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 4 ++
target/arm/tcg/a64.decode | 12 +++++
target/arm/tcg/translate-a64.c | 87 ++++++++++++++++++++++++++--------
target/arm/tcg/vec_helper.c | 23 +++++++++
4 files changed, 105 insertions(+), 21 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index ff6e3094f41..8441b49d1f0 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1048,6 +1048,10 @@ DEF_HELPER_FLAGS_5(gvec_uclamp_s, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_5(gvec_uclamp_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_faddp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_faddp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_faddp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
#ifdef TARGET_AARCH64
#include "tcg/helper-a64.h"
#include "tcg/helper-sve.h"
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 84cb38f1dd0..d2a02365e15 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -29,6 +29,7 @@
&ri rd imm
&rri_sf rd rn imm sf
&i imm
+&rr_e rd rn esz
&rrr_e rd rn rm esz
&rrx_e rd rn rm idx esz
&qrr_e q rd rn esz
@@ -36,6 +37,9 @@
&qrrx_e q rd rn rm idx esz
&qrrrr_e q rd rn rm ra esz
+@rr_h ........ ... ..... ...... rn:5 rd:5 &rr_e esz=1
+@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
+
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
@rrr_hsd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_hsd
@@ -737,6 +741,11 @@ FRECPS_s 0101 1110 0.1 ..... 11111 1 ..... ..... @rrr_sd
FRSQRTS_s 0101 1110 110 ..... 00111 1 ..... ..... @rrr_h
FRSQRTS_s 0101 1110 1.1 ..... 11111 1 ..... ..... @rrr_sd
+### Advanced SIMD scalar pairwise
+
+FADDP_s 0101 1110 0011 0000 1101 10 ..... ..... @rr_h
+FADDP_s 0111 1110 0.11 0000 1101 10 ..... ..... @rr_sd
+
### Advanced SIMD three same
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -796,6 +805,9 @@ FRECPS_v 0.00 1110 0.1 ..... 11111 1 ..... ..... @qrrr_sd
FRSQRTS_v 0.00 1110 110 ..... 00111 1 ..... ..... @qrrr_h
FRSQRTS_v 0.00 1110 1.1 ..... 11111 1 ..... ..... @qrrr_sd
+FADDP_v 0.10 1110 010 ..... 00010 1 ..... ..... @qrrr_h
+FADDP_v 0.10 1110 0.1 ..... 11010 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index a7537a5104f..78949ab34f0 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5210,6 +5210,13 @@ static gen_helper_gvec_3_ptr * const f_vector_frsqrts[3] = {
};
TRANS(FRSQRTS_v, do_fp3_vector, a, f_vector_frsqrts)
+static gen_helper_gvec_3_ptr * const f_vector_faddp[3] = {
+ gen_helper_gvec_faddp_h,
+ gen_helper_gvec_faddp_s,
+ gen_helper_gvec_faddp_d,
+};
+TRANS(FADDP_v, do_fp3_vector, a, f_vector_faddp)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -5395,6 +5402,56 @@ static bool do_fmla_vector_idx(DisasContext *s, arg_qrrx_e *a, bool neg)
TRANS(FMLA_vi, do_fmla_vector_idx, a, false)
TRANS(FMLS_vi, do_fmla_vector_idx, a, true)
+/*
+ * Advanced SIMD scalar pairwise
+ */
+
+static bool do_fp3_scalar_pair(DisasContext *s, arg_rr_e *a, const FPScalar *f)
+{
+ switch (a->esz) {
+ case MO_64:
+ if (fp_access_check(s)) {
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ TCGv_i64 t1 = tcg_temp_new_i64();
+
+ read_vec_element(s, t0, a->rn, 0, MO_64);
+ read_vec_element(s, t1, a->rn, 1, MO_64);
+ f->gen_d(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
+ write_fp_dreg(s, a->rd, t0);
+ }
+ break;
+ case MO_32:
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = tcg_temp_new_i32();
+ TCGv_i32 t1 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t0, a->rn, 0, MO_32);
+ read_vec_element_i32(s, t1, a->rn, 1, MO_32);
+ f->gen_s(t0, t0, t1, fpstatus_ptr(FPST_FPCR));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ case MO_16:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ TCGv_i32 t0 = tcg_temp_new_i32();
+ TCGv_i32 t1 = tcg_temp_new_i32();
+
+ read_vec_element_i32(s, t0, a->rn, 0, MO_16);
+ read_vec_element_i32(s, t1, a->rn, 1, MO_16);
+ f->gen_h(t0, t0, t1, fpstatus_ptr(FPST_FPCR_F16));
+ write_fp_sreg(s, a->rd, t0);
+ }
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ return true;
+}
+
+TRANS(FADDP_s, do_fp3_scalar_pair, a, &f_scalar_fadd)
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
@@ -8357,7 +8414,6 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
fpst = NULL;
break;
case 0xc: /* FMAXNMP */
- case 0xd: /* FADDP */
case 0xf: /* FMAXP */
case 0x2c: /* FMINNMP */
case 0x2f: /* FMINP */
@@ -8380,6 +8436,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
break;
default:
+ case 0xd: /* FADDP */
unallocated_encoding(s);
return;
}
@@ -8399,9 +8456,6 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
case 0xc: /* FMAXNMP */
gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0xd: /* FADDP */
- gen_helper_vfp_addd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0xf: /* FMAXP */
gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -8412,6 +8466,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0xd: /* FADDP */
g_assert_not_reached();
}
@@ -8429,9 +8484,6 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
case 0xc: /* FMAXNMP */
gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0xd: /* FADDP */
- gen_helper_advsimd_addh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0xf: /* FMAXP */
gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -8442,6 +8494,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0xd: /* FADDP */
g_assert_not_reached();
}
} else {
@@ -8449,9 +8502,6 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
case 0xc: /* FMAXNMP */
gen_helper_vfp_maxnums(tcg_res, tcg_op1, tcg_op2, fpst);
break;
- case 0xd: /* FADDP */
- gen_helper_vfp_adds(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
case 0xf: /* FMAXP */
gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -8462,6 +8512,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0xd: /* FADDP */
g_assert_not_reached();
}
}
@@ -10982,9 +11033,6 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
case 0x58: /* FMAXNMP */
gen_helper_vfp_maxnumd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
- case 0x5a: /* FADDP */
- gen_helper_vfp_addd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
case 0x5e: /* FMAXP */
gen_helper_vfp_maxd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
@@ -10995,6 +11043,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
gen_helper_vfp_mind(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x5a: /* FADDP */
g_assert_not_reached();
}
}
@@ -11052,9 +11101,6 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
case 0x58: /* FMAXNMP */
gen_helper_vfp_maxnums(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
- case 0x5a: /* FADDP */
- gen_helper_vfp_adds(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
case 0x5e: /* FMAXP */
gen_helper_vfp_maxs(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
@@ -11065,6 +11111,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
gen_helper_vfp_mins(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x5a: /* FADDP */
g_assert_not_reached();
}
@@ -11104,7 +11151,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
switch (fpopcode) {
case 0x58: /* FMAXNMP */
- case 0x5a: /* FADDP */
case 0x5e: /* FMAXP */
case 0x78: /* FMINNMP */
case 0x7e: /* FMINP */
@@ -11149,6 +11195,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x3f: /* FRSQRTS */
+ case 0x5a: /* FADDP */
case 0x5b: /* FMUL */
case 0x5c: /* FCMGE */
case 0x5d: /* FACGE */
@@ -11496,7 +11543,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
switch (fpopcode) {
case 0x10: /* FMAXNMP */
- case 0x12: /* FADDP */
case 0x16: /* FMAXP */
case 0x18: /* FMINNMP */
case 0x1e: /* FMINP */
@@ -11515,6 +11561,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
case 0xa: /* FSUB */
case 0xe: /* FMIN */
case 0xf: /* FRSQRTS */
+ case 0x12: /* FADDP */
case 0x13: /* FMUL */
case 0x14: /* FCMGE */
case 0x15: /* FACGE */
@@ -11556,9 +11603,6 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_maxnumh(tcg_res[pass], tcg_op1, tcg_op2,
fpst);
break;
- case 0x12: /* FADDP */
- gen_helper_advsimd_addh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
case 0x16: /* FMAXP */
gen_helper_advsimd_maxh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
@@ -11570,6 +11614,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
gen_helper_advsimd_minh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
break;
default:
+ case 0x12: /* FADDP */
g_assert_not_reached();
}
}
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index e9d7922f303..28989c7d7a7 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -2237,6 +2237,29 @@ DO_NEON_PAIRWISE(neon_pmin, min)
#undef DO_NEON_PAIRWISE
+#define DO_3OP_PAIR(NAME, FUNC, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
+{ \
+ ARMVectorReg scratch; \
+ intptr_t oprsz = simd_oprsz(desc); \
+ intptr_t half = oprsz / sizeof(TYPE) / 2; \
+ TYPE *d = vd, *n = vn, *m = vm; \
+ if (unlikely(d == m)) { \
+ m = memcpy(&scratch, m, oprsz); \
+ } \
+ for (intptr_t i = 0; i < half; ++i) { \
+ d[H(i)] = FUNC(n[H(i * 2)], n[H(i * 2 + 1)], stat); \
+ } \
+ for (intptr_t i = 0; i < half; ++i) { \
+ d[H(i + half)] = FUNC(m[H(i * 2)], m[H(i * 2 + 1)], stat); \
+ } \
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
+}
+
+DO_3OP_PAIR(gvec_faddp_h, float16_add, float16, H2)
+DO_3OP_PAIR(gvec_faddp_s, float32_add, float32, H4)
+DO_3OP_PAIR(gvec_faddp_d, float64_add, float64, )
+
#define DO_VCVT_FIXED(NAME, FUNC, TYPE) \
void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc) \
{ \
--
2.34.1
* [PULL 35/42] target/arm: Convert FMAXP, FMINP, FMAXNMP, FMINNMP to decodetree
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (33 preceding siblings ...)
2024-05-28 14:07 ` [PULL 34/42] target/arm: Convert FADDP " Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 36/42] target/arm: Use gvec for neon faddp, fmaxp, fminp Peter Maydell
` (7 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
These are the last instructions within disas_simd_three_reg_same_fp16,
so remove it.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 16 ++
target/arm/tcg/a64.decode | 24 +++
target/arm/tcg/translate-a64.c | 296 ++++++---------------------------
target/arm/tcg/vec_helper.c | 16 ++
4 files changed, 107 insertions(+), 245 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 8441b49d1f0..32684773299 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1052,6 +1052,22 @@ DEF_HELPER_FLAGS_5(gvec_faddp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_faddp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_faddp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fminp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fmaxnump_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxnump_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fmaxnump_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_fminnump_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminnump_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_fminnump_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
#ifdef TARGET_AARCH64
#include "tcg/helper-a64.h"
#include "tcg/helper-sve.h"
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index d2a02365e15..43557fdccc6 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -746,6 +746,18 @@ FRSQRTS_s 0101 1110 1.1 ..... 11111 1 ..... ..... @rrr_sd
FADDP_s 0101 1110 0011 0000 1101 10 ..... ..... @rr_h
FADDP_s 0111 1110 0.11 0000 1101 10 ..... ..... @rr_sd
+FMAXP_s 0101 1110 0011 0000 1111 10 ..... ..... @rr_h
+FMAXP_s 0111 1110 0.11 0000 1111 10 ..... ..... @rr_sd
+
+FMINP_s 0101 1110 1011 0000 1111 10 ..... ..... @rr_h
+FMINP_s 0111 1110 1.11 0000 1111 10 ..... ..... @rr_sd
+
+FMAXNMP_s 0101 1110 0011 0000 1100 10 ..... ..... @rr_h
+FMAXNMP_s 0111 1110 0.11 0000 1100 10 ..... ..... @rr_sd
+
+FMINNMP_s 0101 1110 1011 0000 1100 10 ..... ..... @rr_h
+FMINNMP_s 0111 1110 1.11 0000 1100 10 ..... ..... @rr_sd
+
### Advanced SIMD three same
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -808,6 +820,18 @@ FRSQRTS_v 0.00 1110 1.1 ..... 11111 1 ..... ..... @qrrr_sd
FADDP_v 0.10 1110 010 ..... 00010 1 ..... ..... @qrrr_h
FADDP_v 0.10 1110 0.1 ..... 11010 1 ..... ..... @qrrr_sd
+FMAXP_v 0.10 1110 010 ..... 00110 1 ..... ..... @qrrr_h
+FMAXP_v 0.10 1110 0.1 ..... 11110 1 ..... ..... @qrrr_sd
+
+FMINP_v 0.10 1110 110 ..... 00110 1 ..... ..... @qrrr_h
+FMINP_v 0.10 1110 1.1 ..... 11110 1 ..... ..... @qrrr_sd
+
+FMAXNMP_v 0.10 1110 010 ..... 00000 1 ..... ..... @qrrr_h
+FMAXNMP_v 0.10 1110 0.1 ..... 11000 1 ..... ..... @qrrr_sd
+
+FMINNMP_v 0.10 1110 110 ..... 00000 1 ..... ..... @qrrr_h
+FMINNMP_v 0.10 1110 1.1 ..... 11000 1 ..... ..... @qrrr_sd
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 78949ab34f0..07415bd2855 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5217,6 +5217,34 @@ static gen_helper_gvec_3_ptr * const f_vector_faddp[3] = {
};
TRANS(FADDP_v, do_fp3_vector, a, f_vector_faddp)
+static gen_helper_gvec_3_ptr * const f_vector_fmaxp[3] = {
+ gen_helper_gvec_fmaxp_h,
+ gen_helper_gvec_fmaxp_s,
+ gen_helper_gvec_fmaxp_d,
+};
+TRANS(FMAXP_v, do_fp3_vector, a, f_vector_fmaxp)
+
+static gen_helper_gvec_3_ptr * const f_vector_fminp[3] = {
+ gen_helper_gvec_fminp_h,
+ gen_helper_gvec_fminp_s,
+ gen_helper_gvec_fminp_d,
+};
+TRANS(FMINP_v, do_fp3_vector, a, f_vector_fminp)
+
+static gen_helper_gvec_3_ptr * const f_vector_fmaxnmp[3] = {
+ gen_helper_gvec_fmaxnump_h,
+ gen_helper_gvec_fmaxnump_s,
+ gen_helper_gvec_fmaxnump_d,
+};
+TRANS(FMAXNMP_v, do_fp3_vector, a, f_vector_fmaxnmp)
+
+static gen_helper_gvec_3_ptr * const f_vector_fminnmp[3] = {
+ gen_helper_gvec_fminnump_h,
+ gen_helper_gvec_fminnump_s,
+ gen_helper_gvec_fminnump_d,
+};
+TRANS(FMINNMP_v, do_fp3_vector, a, f_vector_fminnmp)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -5452,6 +5480,10 @@ static bool do_fp3_scalar_pair(DisasContext *s, arg_rr_e *a, const FPScalar *f)
}
TRANS(FADDP_s, do_fp3_scalar_pair, a, &f_scalar_fadd)
+TRANS(FMAXP_s, do_fp3_scalar_pair, a, &f_scalar_fmax)
+TRANS(FMINP_s, do_fp3_scalar_pair, a, &f_scalar_fmin)
+TRANS(FMAXNMP_s, do_fp3_scalar_pair, a, &f_scalar_fmaxnm)
+TRANS(FMINNMP_s, do_fp3_scalar_pair, a, &f_scalar_fminnm)
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
@@ -8393,7 +8425,6 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
int opcode = extract32(insn, 12, 5);
int rn = extract32(insn, 5, 5);
int rd = extract32(insn, 0, 5);
- TCGv_ptr fpst;
/* For some ops (the FP ones), size[1] is part of the encoding.
* For ADDP strictly it is not but size[1] is always 1 for valid
@@ -8410,33 +8441,13 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
if (!fp_access_check(s)) {
return;
}
-
- fpst = NULL;
break;
+ default:
case 0xc: /* FMAXNMP */
+ case 0xd: /* FADDP */
case 0xf: /* FMAXP */
case 0x2c: /* FMINNMP */
case 0x2f: /* FMINP */
- /* FP op, size[0] is 32 or 64 bit*/
- if (!u) {
- if ((size & 1) || !dc_isar_feature(aa64_fp16, s)) {
- unallocated_encoding(s);
- return;
- } else {
- size = MO_16;
- }
- } else {
- size = extract32(size, 0, 1) ? MO_64 : MO_32;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- fpst = fpstatus_ptr(size == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
- break;
- default:
- case 0xd: /* FADDP */
unallocated_encoding(s);
return;
}
@@ -8453,71 +8464,18 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
case 0x3b: /* ADDP */
tcg_gen_add_i64(tcg_res, tcg_op1, tcg_op2);
break;
- case 0xc: /* FMAXNMP */
- gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0xf: /* FMAXP */
- gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2c: /* FMINNMP */
- gen_helper_vfp_minnumd(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2f: /* FMINP */
- gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
default:
+ case 0xc: /* FMAXNMP */
case 0xd: /* FADDP */
+ case 0xf: /* FMAXP */
+ case 0x2c: /* FMINNMP */
+ case 0x2f: /* FMINP */
g_assert_not_reached();
}
write_fp_dreg(s, rd, tcg_res);
} else {
- TCGv_i32 tcg_op1 = tcg_temp_new_i32();
- TCGv_i32 tcg_op2 = tcg_temp_new_i32();
- TCGv_i32 tcg_res = tcg_temp_new_i32();
-
- read_vec_element_i32(s, tcg_op1, rn, 0, size);
- read_vec_element_i32(s, tcg_op2, rn, 1, size);
-
- if (size == MO_16) {
- switch (opcode) {
- case 0xc: /* FMAXNMP */
- gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0xf: /* FMAXP */
- gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2c: /* FMINNMP */
- gen_helper_advsimd_minnumh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2f: /* FMINP */
- gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0xd: /* FADDP */
- g_assert_not_reached();
- }
- } else {
- switch (opcode) {
- case 0xc: /* FMAXNMP */
- gen_helper_vfp_maxnums(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0xf: /* FMAXP */
- gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2c: /* FMINNMP */
- gen_helper_vfp_minnums(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- case 0x2f: /* FMINP */
- gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0xd: /* FADDP */
- g_assert_not_reached();
- }
- }
-
- write_fp_sreg(s, rd, tcg_res);
+ g_assert_not_reached();
}
}
@@ -10997,16 +10955,8 @@ static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
int size, int rn, int rm, int rd)
{
- TCGv_ptr fpst;
int pass;
- /* Floating point operations need fpst */
- if (opcode >= 0x58) {
- fpst = fpstatus_ptr(FPST_FPCR);
- } else {
- fpst = NULL;
- }
-
if (!fp_access_check(s)) {
return;
}
@@ -11030,20 +10980,12 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
case 0x17: /* ADDP */
tcg_gen_add_i64(tcg_res[pass], tcg_op1, tcg_op2);
break;
- case 0x58: /* FMAXNMP */
- gen_helper_vfp_maxnumd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x5e: /* FMAXP */
- gen_helper_vfp_maxd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x78: /* FMINNMP */
- gen_helper_vfp_minnumd(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x7e: /* FMINP */
- gen_helper_vfp_mind(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
default:
+ case 0x58: /* FMAXNMP */
case 0x5a: /* FADDP */
+ case 0x5e: /* FMAXP */
+ case 0x78: /* FMINNMP */
+ case 0x7e: /* FMINP */
g_assert_not_reached();
}
}
@@ -11097,21 +11039,12 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
genfn = fns[size][u];
break;
}
- /* The FP operations are all on single floats (32 bit) */
- case 0x58: /* FMAXNMP */
- gen_helper_vfp_maxnums(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x5e: /* FMAXP */
- gen_helper_vfp_maxs(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x78: /* FMINNMP */
- gen_helper_vfp_minnums(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x7e: /* FMINP */
- gen_helper_vfp_mins(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
default:
+ case 0x58: /* FMAXNMP */
case 0x5a: /* FADDP */
+ case 0x5e: /* FMAXP */
+ case 0x78: /* FMINNMP */
+ case 0x7e: /* FMINP */
g_assert_not_reached();
}
@@ -11150,18 +11083,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
}
switch (fpopcode) {
- case 0x58: /* FMAXNMP */
- case 0x5e: /* FMAXP */
- case 0x78: /* FMINNMP */
- case 0x7e: /* FMINP */
- if (size && !is_q) {
- unallocated_encoding(s);
- return;
- }
- handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_32,
- rn, rm, rd);
- return;
-
case 0x1d: /* FMLAL */
case 0x3d: /* FMLSL */
case 0x59: /* FMLAL2 */
@@ -11195,14 +11116,18 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x3a: /* FSUB */
case 0x3e: /* FMIN */
case 0x3f: /* FRSQRTS */
+ case 0x58: /* FMAXNMP */
case 0x5a: /* FADDP */
case 0x5b: /* FMUL */
case 0x5c: /* FCMGE */
case 0x5d: /* FACGE */
+ case 0x5e: /* FMAXP */
case 0x5f: /* FDIV */
+ case 0x78: /* FMINNMP */
case 0x7a: /* FABD */
case 0x7d: /* FACGT */
case 0x7c: /* FCMGT */
+ case 0x7e: /* FMINP */
unallocated_encoding(s);
return;
}
@@ -11511,124 +11436,6 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
}
}
-/*
- * Advanced SIMD three same (ARMv8.2 FP16 variants)
- *
- * 31 30 29 28 24 23 22 21 20 16 15 14 13 11 10 9 5 4 0
- * +---+---+---+-----------+---------+------+-----+--------+---+------+------+
- * | 0 | Q | U | 0 1 1 1 0 | a | 1 0 | Rm | 0 0 | opcode | 1 | Rn | Rd |
- * +---+---+---+-----------+---------+------+-----+--------+---+------+------+
- *
- * This includes FMULX, FCMEQ (register), FRECPS, FRSQRTS, FCMGE
- * (register), FACGE, FABD, FCMGT (register) and FACGT.
- *
- */
-static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
-{
- int opcode = extract32(insn, 11, 3);
- int u = extract32(insn, 29, 1);
- int a = extract32(insn, 23, 1);
- int is_q = extract32(insn, 30, 1);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- /*
- * For these floating point ops, the U, a and opcode bits
- * together indicate the operation.
- */
- int fpopcode = opcode | (a << 3) | (u << 4);
- bool pairwise;
- TCGv_ptr fpst;
- int pass;
-
- switch (fpopcode) {
- case 0x10: /* FMAXNMP */
- case 0x16: /* FMAXP */
- case 0x18: /* FMINNMP */
- case 0x1e: /* FMINP */
- pairwise = true;
- break;
- default:
- case 0x0: /* FMAXNM */
- case 0x1: /* FMLA */
- case 0x2: /* FADD */
- case 0x3: /* FMULX */
- case 0x4: /* FCMEQ */
- case 0x6: /* FMAX */
- case 0x7: /* FRECPS */
- case 0x8: /* FMINNM */
- case 0x9: /* FMLS */
- case 0xa: /* FSUB */
- case 0xe: /* FMIN */
- case 0xf: /* FRSQRTS */
- case 0x12: /* FADDP */
- case 0x13: /* FMUL */
- case 0x14: /* FCMGE */
- case 0x15: /* FACGE */
- case 0x17: /* FDIV */
- case 0x1a: /* FABD */
- case 0x1c: /* FCMGT */
- case 0x1d: /* FACGT */
- unallocated_encoding(s);
- return;
- }
-
- if (!dc_isar_feature(aa64_fp16, s)) {
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
-
- fpst = fpstatus_ptr(FPST_FPCR_F16);
-
- if (pairwise) {
- int maxpass = is_q ? 8 : 4;
- TCGv_i32 tcg_op1 = tcg_temp_new_i32();
- TCGv_i32 tcg_op2 = tcg_temp_new_i32();
- TCGv_i32 tcg_res[8];
-
- for (pass = 0; pass < maxpass; pass++) {
- int passreg = pass < (maxpass / 2) ? rn : rm;
- int passelt = (pass << 1) & (maxpass - 1);
-
- read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_16);
- read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_16);
- tcg_res[pass] = tcg_temp_new_i32();
-
- switch (fpopcode) {
- case 0x10: /* FMAXNMP */
- gen_helper_advsimd_maxnumh(tcg_res[pass], tcg_op1, tcg_op2,
- fpst);
- break;
- case 0x16: /* FMAXP */
- gen_helper_advsimd_maxh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- case 0x18: /* FMINNMP */
- gen_helper_advsimd_minnumh(tcg_res[pass], tcg_op1, tcg_op2,
- fpst);
- break;
- case 0x1e: /* FMINP */
- gen_helper_advsimd_minh(tcg_res[pass], tcg_op1, tcg_op2, fpst);
- break;
- default:
- case 0x12: /* FADDP */
- g_assert_not_reached();
- }
- }
-
- for (pass = 0; pass < maxpass; pass++) {
- write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_16);
- }
- } else {
- g_assert_not_reached();
- }
-
- clear_vec_high(s, is_q, rd);
-}
-
/* AdvSIMD three same extra
* 31 30 29 28 24 23 22 21 20 16 15 14 11 10 9 5 4 0
* +---+---+---+-----------+------+---+------+---+--------+---+----+----+
@@ -13391,7 +13198,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e300800, 0xdf3e0c00, disas_simd_scalar_pairwise },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
- { 0x0e400400, 0x9f60c400, disas_simd_three_reg_same_fp16 },
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
{ 0x00000000, 0x00000000, NULL }
};
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 28989c7d7a7..79e1fdcaa9f 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -2260,6 +2260,22 @@ DO_3OP_PAIR(gvec_faddp_h, float16_add, float16, H2)
DO_3OP_PAIR(gvec_faddp_s, float32_add, float32, H4)
DO_3OP_PAIR(gvec_faddp_d, float64_add, float64, )
+DO_3OP_PAIR(gvec_fmaxp_h, float16_max, float16, H2)
+DO_3OP_PAIR(gvec_fmaxp_s, float32_max, float32, H4)
+DO_3OP_PAIR(gvec_fmaxp_d, float64_max, float64, )
+
+DO_3OP_PAIR(gvec_fminp_h, float16_min, float16, H2)
+DO_3OP_PAIR(gvec_fminp_s, float32_min, float32, H4)
+DO_3OP_PAIR(gvec_fminp_d, float64_min, float64, )
+
+DO_3OP_PAIR(gvec_fmaxnump_h, float16_maxnum, float16, H2)
+DO_3OP_PAIR(gvec_fmaxnump_s, float32_maxnum, float32, H4)
+DO_3OP_PAIR(gvec_fmaxnump_d, float64_maxnum, float64, )
+
+DO_3OP_PAIR(gvec_fminnump_h, float16_minnum, float16, H2)
+DO_3OP_PAIR(gvec_fminnump_s, float32_minnum, float32, H4)
+DO_3OP_PAIR(gvec_fminnump_d, float64_minnum, float64, )
+
#define DO_VCVT_FIXED(NAME, FUNC, TYPE) \
void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc) \
{ \
--
2.34.1
* [PULL 36/42] target/arm: Use gvec for neon faddp, fmaxp, fminp
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
` (34 preceding siblings ...)
2024-05-28 14:07 ` [PULL 35/42] target/arm: Convert FMAXP, FMINP, FMAXNMP, FMINNMP " Peter Maydell
@ 2024-05-28 14:07 ` Peter Maydell
2024-05-28 14:07 ` [PULL 37/42] target/arm: Convert ADDP to decodetree Peter Maydell
` (6 subsequent siblings)
42 siblings, 0 replies; 44+ messages in thread
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 7 -----
target/arm/tcg/translate-neon.c | 55 ++-------------------------------
target/arm/tcg/vec_helper.c | 45 ---------------------------
3 files changed, 3 insertions(+), 104 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 32684773299..065460ea80e 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -650,13 +650,6 @@ DEF_HELPER_FLAGS_6(gvec_fcmlas_idx, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_6(gvec_fcmlad, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_paddh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_pmaxh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_pminh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_padds, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_pmaxs, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(neon_pmins, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
-
DEF_HELPER_FLAGS_4(gvec_sstoh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_sitos, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_ustoh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index 144f18ba22e..2326a05a0aa 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -1144,6 +1144,9 @@ DO_3S_FP_GVEC(VFMA, gen_helper_gvec_vfma_s, gen_helper_gvec_vfma_h)
DO_3S_FP_GVEC(VFMS, gen_helper_gvec_vfms_s, gen_helper_gvec_vfms_h)
DO_3S_FP_GVEC(VRECPS, gen_helper_gvec_recps_nf_s, gen_helper_gvec_recps_nf_h)
DO_3S_FP_GVEC(VRSQRTS, gen_helper_gvec_rsqrts_nf_s, gen_helper_gvec_rsqrts_nf_h)
+DO_3S_FP_GVEC(VPADD, gen_helper_gvec_faddp_s, gen_helper_gvec_faddp_h)
+DO_3S_FP_GVEC(VPMAX, gen_helper_gvec_fmaxp_s, gen_helper_gvec_fmaxp_h)
+DO_3S_FP_GVEC(VPMIN, gen_helper_gvec_fminp_s, gen_helper_gvec_fminp_h)
WRAP_FP_GVEC(gen_VMAXNM_fp32_3s, FPST_STD, gen_helper_gvec_fmaxnum_s)
WRAP_FP_GVEC(gen_VMAXNM_fp16_3s, FPST_STD_F16, gen_helper_gvec_fmaxnum_h)
@@ -1180,58 +1183,6 @@ static bool trans_VMINNM_fp_3s(DisasContext *s, arg_3same *a)
return do_3same(s, a, gen_VMINNM_fp32_3s);
}
-static bool do_3same_fp_pair(DisasContext *s, arg_3same *a,
- gen_helper_gvec_3_ptr *fn)
-{
- /* FP pairwise operations */
- TCGv_ptr fpstatus;
-
- if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
- return false;
- }
-
- /* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_simd_r32, s) &&
- ((a->vd | a->vn | a->vm) & 0x10)) {
- return false;
- }
-
- if (!vfp_access_check(s)) {
- return true;
- }
-
- assert(a->q == 0); /* enforced by decode patterns */
-
-
- fpstatus = fpstatus_ptr(a->size == MO_16 ? FPST_STD_F16 : FPST_STD);
- tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
- vfp_reg_offset(1, a->vn),
- vfp_reg_offset(1, a->vm),
- fpstatus, 8, 8, 0, fn);
-
- return true;
-}
-
-/*
- * For all the functions using this macro, size == 1 means fp16,
- * which is an architecture extension we don't implement yet.
- */
-#define DO_3S_FP_PAIR(INSN,FUNC) \
- static bool trans_##INSN##_fp_3s(DisasContext *s, arg_3same *a) \
- { \
- if (a->size == MO_16) { \
- if (!dc_isar_feature(aa32_fp16_arith, s)) { \
- return false; \
- } \
- return do_3same_fp_pair(s, a, FUNC##h); \
- } \
- return do_3same_fp_pair(s, a, FUNC##s); \
- }
-
-DO_3S_FP_PAIR(VPADD, gen_helper_neon_padd)
-DO_3S_FP_PAIR(VPMAX, gen_helper_neon_pmax)
-DO_3S_FP_PAIR(VPMIN, gen_helper_neon_pmin)
-
static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
{
/* Handle a 2-reg-shift insn which can be vectorized. */
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 79e1fdcaa9f..26a9ca9c14a 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -2192,51 +2192,6 @@ DO_ABA(gvec_uaba_d, uint64_t)
#undef DO_ABA
-#define DO_NEON_PAIRWISE(NAME, OP) \
- void HELPER(NAME##s)(void *vd, void *vn, void *vm, \
- void *stat, uint32_t oprsz) \
- { \
- float_status *fpst = stat; \
- float32 *d = vd; \
- float32 *n = vn; \
- float32 *m = vm; \
- float32 r0, r1; \
- \
- /* Read all inputs before writing outputs in case vm == vd */ \
- r0 = float32_##OP(n[H4(0)], n[H4(1)], fpst); \
- r1 = float32_##OP(m[H4(0)], m[H4(1)], fpst); \
- \
- d[H4(0)] = r0; \
- d[H4(1)] = r1; \
- } \
- \
- void HELPER(NAME##h)(void *vd, void *vn, void *vm, \
- void *stat, uint32_t oprsz) \
- { \
- float_status *fpst = stat; \
- float16 *d = vd; \
- float16 *n = vn; \
- float16 *m = vm; \
- float16 r0, r1, r2, r3; \
- \
- /* Read all inputs before writing outputs in case vm == vd */ \
- r0 = float16_##OP(n[H2(0)], n[H2(1)], fpst); \
- r1 = float16_##OP(n[H2(2)], n[H2(3)], fpst); \
- r2 = float16_##OP(m[H2(0)], m[H2(1)], fpst); \
- r3 = float16_##OP(m[H2(2)], m[H2(3)], fpst); \
- \
- d[H2(0)] = r0; \
- d[H2(1)] = r1; \
- d[H2(2)] = r2; \
- d[H2(3)] = r3; \
- }
-
-DO_NEON_PAIRWISE(neon_padd, add)
-DO_NEON_PAIRWISE(neon_pmax, max)
-DO_NEON_PAIRWISE(neon_pmin, min)
-
-#undef DO_NEON_PAIRWISE
-
#define DO_3OP_PAIR(NAME, FUNC, TYPE, H) \
void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
{ \
--
2.34.1
* [PULL 37/42] target/arm: Convert ADDP to decodetree
From: Peter Maydell @ 2024-05-28 14:07 UTC
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 5 ++
target/arm/tcg/translate.h | 3 +
target/arm/tcg/a64.decode | 6 ++
target/arm/tcg/gengvec.c | 12 ++++
target/arm/tcg/translate-a64.c | 128 ++++++---------------------------
target/arm/tcg/vec_helper.c | 30 ++++++++
6 files changed, 77 insertions(+), 107 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 065460ea80e..d3579a101f4 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1061,6 +1061,11 @@ DEF_HELPER_FLAGS_5(gvec_fminnump_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i
DEF_HELPER_FLAGS_5(gvec_fminnump_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fminnump_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_addp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_addp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_addp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_addp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
#ifdef TARGET_AARCH64
#include "tcg/helper-a64.h"
#include "tcg/helper-sve.h"
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index b05a9eb6685..04771f483b6 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -514,6 +514,9 @@ void gen_gvec_saba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_addp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+
/*
* Forward to the isar_feature_* tests given a DisasContext pointer.
*/
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 43557fdccc6..84f5bcc0e08 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -38,6 +38,7 @@
&qrrrr_e q rd rn rm ra esz
@rr_h ........ ... ..... ...... rn:5 rd:5 &rr_e esz=1
+@rr_d ........ ... ..... ...... rn:5 rd:5 &rr_e esz=3
@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
@@ -56,6 +57,7 @@
@qrrr_h . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=1
@qrrr_sd . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=%esz_sd
+@qrrr_e . q:1 ...... esz:2 . rm:5 ...... rn:5 rd:5 &qrrr_e
@qrrx_h . q:1 .. .... .. .. rm:4 .... . . rn:5 rd:5 \
&qrrx_e esz=1 idx=%hlm
@@ -758,6 +760,8 @@ FMAXNMP_s 0111 1110 0.11 0000 1100 10 ..... ..... @rr_sd
FMINNMP_s 0101 1110 1011 0000 1100 10 ..... ..... @rr_h
FMINNMP_s 0111 1110 1.11 0000 1100 10 ..... ..... @rr_sd
+ADDP_s 0101 1110 1111 0001 1011 10 ..... ..... @rr_d
+
### Advanced SIMD three same
FADD_v 0.00 1110 010 ..... 00010 1 ..... ..... @qrrr_h
@@ -832,6 +836,8 @@ FMAXNMP_v 0.10 1110 0.1 ..... 11000 1 ..... ..... @qrrr_sd
FMINNMP_v 0.10 1110 110 ..... 00000 1 ..... ..... @qrrr_h
FMINNMP_v 0.10 1110 1.1 ..... 11000 1 ..... ..... @qrrr_sd
+ADDP_v 0.00 1110 ..1 ..... 10111 1 ..... ..... @qrrr_e
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index 7a1856253ff..f010dd5a0e8 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -1610,3 +1610,15 @@ void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
};
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
+
+void gen_gvec_addp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ gen_helper_gvec_addp_b,
+ gen_helper_gvec_addp_h,
+ gen_helper_gvec_addp_s,
+ gen_helper_gvec_addp_d,
+ };
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 07415bd2855..b8add91112d 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5245,6 +5245,8 @@ static gen_helper_gvec_3_ptr * const f_vector_fminnmp[3] = {
};
TRANS(FMINNMP_v, do_fp3_vector, a, f_vector_fminnmp)
+TRANS(ADDP_v, do_gvec_fn3, a, gen_gvec_addp)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -5485,6 +5487,20 @@ TRANS(FMINP_s, do_fp3_scalar_pair, a, &f_scalar_fmin)
TRANS(FMAXNMP_s, do_fp3_scalar_pair, a, &f_scalar_fmaxnm)
TRANS(FMINNMP_s, do_fp3_scalar_pair, a, &f_scalar_fminnm)
+static bool trans_ADDP_s(DisasContext *s, arg_rr_e *a)
+{
+ if (fp_access_check(s)) {
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ TCGv_i64 t1 = tcg_temp_new_i64();
+
+ read_vec_element(s, t0, a->rn, 0, MO_64);
+ read_vec_element(s, t1, a->rn, 1, MO_64);
+ tcg_gen_add_i64(t0, t0, t1);
+ write_fp_dreg(s, a->rd, t0);
+ }
+ return true;
+}
+
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
* Note that it is the caller's responsibility to ensure that the
* shift amount is in range (ie 0..31 or 0..63) and provide the ARM
@@ -8412,73 +8428,6 @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
}
}
-/* AdvSIMD scalar pairwise
- * 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
- * +-----+---+-----------+------+-----------+--------+-----+------+------+
- * | 0 1 | U | 1 1 1 1 0 | size | 1 1 0 0 0 | opcode | 1 0 | Rn | Rd |
- * +-----+---+-----------+------+-----------+--------+-----+------+------+
- */
-static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
-{
- int u = extract32(insn, 29, 1);
- int size = extract32(insn, 22, 2);
- int opcode = extract32(insn, 12, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
-
- /* For some ops (the FP ones), size[1] is part of the encoding.
- * For ADDP strictly it is not but size[1] is always 1 for valid
- * encodings.
- */
- opcode |= (extract32(size, 1, 1) << 5);
-
- switch (opcode) {
- case 0x3b: /* ADDP */
- if (u || size != 3) {
- unallocated_encoding(s);
- return;
- }
- if (!fp_access_check(s)) {
- return;
- }
- break;
- default:
- case 0xc: /* FMAXNMP */
- case 0xd: /* FADDP */
- case 0xf: /* FMAXP */
- case 0x2c: /* FMINNMP */
- case 0x2f: /* FMINP */
- unallocated_encoding(s);
- return;
- }
-
- if (size == MO_64) {
- TCGv_i64 tcg_op1 = tcg_temp_new_i64();
- TCGv_i64 tcg_op2 = tcg_temp_new_i64();
- TCGv_i64 tcg_res = tcg_temp_new_i64();
-
- read_vec_element(s, tcg_op1, rn, 0, MO_64);
- read_vec_element(s, tcg_op2, rn, 1, MO_64);
-
- switch (opcode) {
- case 0x3b: /* ADDP */
- tcg_gen_add_i64(tcg_res, tcg_op1, tcg_op2);
- break;
- default:
- case 0xc: /* FMAXNMP */
- case 0xd: /* FADDP */
- case 0xf: /* FMAXP */
- case 0x2c: /* FMINNMP */
- case 0x2f: /* FMINP */
- g_assert_not_reached();
- }
-
- write_fp_dreg(s, rd, tcg_res);
- } else {
- g_assert_not_reached();
- }
-}
-
/*
* Common SSHR[RA]/USHR[RA] - Shift right (optional rounding/accumulate)
*
@@ -10965,34 +10914,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
* adjacent elements being operated on to produce an element in the result.
*/
if (size == 3) {
- TCGv_i64 tcg_res[2];
-
- for (pass = 0; pass < 2; pass++) {
- TCGv_i64 tcg_op1 = tcg_temp_new_i64();
- TCGv_i64 tcg_op2 = tcg_temp_new_i64();
- int passreg = (pass == 0) ? rn : rm;
-
- read_vec_element(s, tcg_op1, passreg, 0, MO_64);
- read_vec_element(s, tcg_op2, passreg, 1, MO_64);
- tcg_res[pass] = tcg_temp_new_i64();
-
- switch (opcode) {
- case 0x17: /* ADDP */
- tcg_gen_add_i64(tcg_res[pass], tcg_op1, tcg_op2);
- break;
- default:
- case 0x58: /* FMAXNMP */
- case 0x5a: /* FADDP */
- case 0x5e: /* FMAXP */
- case 0x78: /* FMINNMP */
- case 0x7e: /* FMINP */
- g_assert_not_reached();
- }
- }
-
- for (pass = 0; pass < 2; pass++) {
- write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
- }
+ g_assert_not_reached();
} else {
int maxpass = is_q ? 4 : 2;
TCGv_i32 tcg_res[4];
@@ -11009,16 +10931,6 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
tcg_res[pass] = tcg_temp_new_i32();
switch (opcode) {
- case 0x17: /* ADDP */
- {
- static NeonGenTwoOpFn * const fns[3] = {
- gen_helper_neon_padd_u8,
- gen_helper_neon_padd_u16,
- tcg_gen_add_i32,
- };
- genfn = fns[size];
- break;
- }
case 0x14: /* SMAXP, UMAXP */
{
static NeonGenTwoOpFn * const fns[3][2] = {
@@ -11040,6 +10952,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
break;
}
default:
+ case 0x17: /* ADDP */
case 0x58: /* FMAXNMP */
case 0x5a: /* FADDP */
case 0x5e: /* FMAXP */
@@ -11401,7 +11314,6 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
case 0x3: /* logic ops */
disas_simd_3same_logic(s, insn);
break;
- case 0x17: /* ADDP */
case 0x14: /* SMAXP, UMAXP */
case 0x15: /* SMINP, UMINP */
{
@@ -11433,6 +11345,9 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
default:
disas_simd_3same_int(s, insn);
break;
+ case 0x17: /* ADDP */
+ unallocated_encoding(s);
+ break;
}
}
@@ -13195,7 +13110,6 @@ static const AArch64DecodeTable data_proc_simd[] = {
{ 0x5e008400, 0xdf208400, disas_simd_scalar_three_reg_same_extra },
{ 0x5e200000, 0xdf200c00, disas_simd_scalar_three_reg_diff },
{ 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
- { 0x5e300800, 0xdf3e0c00, disas_simd_scalar_pairwise },
{ 0x5f000000, 0xdf000400, disas_simd_indexed }, /* scalar indexed */
{ 0x5f000400, 0xdf800400, disas_simd_scalar_shift_imm },
{ 0x0e780800, 0x8f7e0c00, disas_simd_two_reg_misc_fp16 },
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 26a9ca9c14a..5069899415c 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -2231,6 +2231,36 @@ DO_3OP_PAIR(gvec_fminnump_h, float16_minnum, float16, H2)
DO_3OP_PAIR(gvec_fminnump_s, float32_minnum, float32, H4)
DO_3OP_PAIR(gvec_fminnump_d, float64_minnum, float64, )
+#undef DO_3OP_PAIR
+
+#define DO_3OP_PAIR(NAME, FUNC, TYPE, H) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
+{ \
+ ARMVectorReg scratch; \
+ intptr_t oprsz = simd_oprsz(desc); \
+ intptr_t half = oprsz / sizeof(TYPE) / 2; \
+ TYPE *d = vd, *n = vn, *m = vm; \
+ if (unlikely(d == m)) { \
+ m = memcpy(&scratch, m, oprsz); \
+ } \
+ for (intptr_t i = 0; i < half; ++i) { \
+ d[H(i)] = FUNC(n[H(i * 2)], n[H(i * 2 + 1)]); \
+ } \
+ for (intptr_t i = 0; i < half; ++i) { \
+ d[H(i + half)] = FUNC(m[H(i * 2)], m[H(i * 2 + 1)]); \
+ } \
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
+}
+
+#define ADD(A, B) (A + B)
+DO_3OP_PAIR(gvec_addp_b, ADD, uint8_t, H1)
+DO_3OP_PAIR(gvec_addp_h, ADD, uint16_t, H2)
+DO_3OP_PAIR(gvec_addp_s, ADD, uint32_t, H4)
+DO_3OP_PAIR(gvec_addp_d, ADD, uint64_t, )
+#undef ADD
+
+#undef DO_3OP_PAIR
+
#define DO_VCVT_FIXED(NAME, FUNC, TYPE) \
void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc) \
{ \
--
2.34.1
* [PULL 38/42] target/arm: Use gvec for neon padd
From: Peter Maydell @ 2024-05-28 14:07 UTC
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-33-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 2 --
target/arm/tcg/neon_helper.c | 5 -----
target/arm/tcg/translate-neon.c | 3 +--
3 files changed, 1 insertion(+), 9 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index d3579a101f4..51ed49aa50c 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -354,8 +354,6 @@ DEF_HELPER_3(neon_qrshl_s64, i64, env, i64, i64)
DEF_HELPER_2(neon_add_u8, i32, i32, i32)
DEF_HELPER_2(neon_add_u16, i32, i32, i32)
-DEF_HELPER_2(neon_padd_u8, i32, i32, i32)
-DEF_HELPER_2(neon_padd_u16, i32, i32, i32)
DEF_HELPER_2(neon_sub_u8, i32, i32, i32)
DEF_HELPER_2(neon_sub_u16, i32, i32, i32)
DEF_HELPER_2(neon_mul_u8, i32, i32, i32)
diff --git a/target/arm/tcg/neon_helper.c b/target/arm/tcg/neon_helper.c
index bc6c4a54e9d..a0b51c88096 100644
--- a/target/arm/tcg/neon_helper.c
+++ b/target/arm/tcg/neon_helper.c
@@ -745,11 +745,6 @@ uint32_t HELPER(neon_add_u16)(uint32_t a, uint32_t b)
return (a + b) ^ mask;
}
-#define NEON_FN(dest, src1, src2) dest = src1 + src2
-NEON_POP(padd_u8, neon_u8, 4)
-NEON_POP(padd_u16, neon_u16, 2)
-#undef NEON_FN
-
#define NEON_FN(dest, src1, src2) dest = src1 - src2
NEON_VOP(sub_u8, neon_u8, 4)
NEON_VOP(sub_u16, neon_u16, 2)
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index 2326a05a0aa..6c5a7a98e1b 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -830,6 +830,7 @@ DO_3SAME_NO_SZ_3(VABD_S, gen_gvec_sabd)
DO_3SAME_NO_SZ_3(VABA_S, gen_gvec_saba)
DO_3SAME_NO_SZ_3(VABD_U, gen_gvec_uabd)
DO_3SAME_NO_SZ_3(VABA_U, gen_gvec_uaba)
+DO_3SAME_NO_SZ_3(VPADD, gen_gvec_addp)
#define DO_3SAME_CMP(INSN, COND) \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
@@ -1070,13 +1071,11 @@ static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn)
#define gen_helper_neon_pmax_u32 tcg_gen_umax_i32
#define gen_helper_neon_pmin_s32 tcg_gen_smin_i32
#define gen_helper_neon_pmin_u32 tcg_gen_umin_i32
-#define gen_helper_neon_padd_u32 tcg_gen_add_i32
DO_3SAME_PAIR(VPMAX_S, pmax_s)
DO_3SAME_PAIR(VPMIN_S, pmin_s)
DO_3SAME_PAIR(VPMAX_U, pmax_u)
DO_3SAME_PAIR(VPMIN_U, pmin_u)
-DO_3SAME_PAIR(VPADD, padd_u)
#define DO_3SAME_VQDMULH(INSN, FUNC) \
WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##_s16); \
--
2.34.1
* [PULL 39/42] target/arm: Convert SMAXP, SMINP, UMAXP, UMINP to decodetree
From: Peter Maydell @ 2024-05-28 14:07 UTC
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
These are the last instructions handled by handle_simd_3same_pair,
so remove that function as well.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-34-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 16 +++++
target/arm/tcg/translate.h | 8 +++
target/arm/tcg/a64.decode | 4 ++
target/arm/tcg/gengvec.c | 48 +++++++++++++
target/arm/tcg/translate-a64.c | 119 +++++----------------------------
target/arm/tcg/vec_helper.c | 16 +++++
6 files changed, 109 insertions(+), 102 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 51ed49aa50c..f830531dd3d 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1064,6 +1064,22 @@ DEF_HELPER_FLAGS_4(gvec_addp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_addp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_addp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_smaxp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_smaxp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_smaxp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(gvec_sminp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_sminp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_sminp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(gvec_umaxp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_umaxp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_umaxp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(gvec_uminp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_uminp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_uminp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
#ifdef TARGET_AARCH64
#include "tcg/helper-a64.h"
#include "tcg/helper-sve.h"
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index 04771f483b6..3abdbedfe5c 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -516,6 +516,14 @@ void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
void gen_gvec_addp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_smaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_sminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_umaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_uminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
/*
* Forward to the isar_feature_* tests given a DisasContext pointer.
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 84f5bcc0e08..22dfe8568d6 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -837,6 +837,10 @@ FMINNMP_v 0.10 1110 110 ..... 00000 1 ..... ..... @qrrr_h
FMINNMP_v 0.10 1110 1.1 ..... 11000 1 ..... ..... @qrrr_sd
ADDP_v 0.00 1110 ..1 ..... 10111 1 ..... ..... @qrrr_e
+SMAXP_v 0.00 1110 ..1 ..... 10100 1 ..... ..... @qrrr_e
+SMINP_v 0.00 1110 ..1 ..... 10101 1 ..... ..... @qrrr_e
+UMAXP_v 0.10 1110 ..1 ..... 10100 1 ..... ..... @qrrr_e
+UMINP_v 0.10 1110 ..1 ..... 10101 1 ..... ..... @qrrr_e
### Advanced SIMD scalar x indexed element
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index f010dd5a0e8..22c9d17dce4 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -1622,3 +1622,51 @@ void gen_gvec_addp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
};
tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
}
+
+void gen_gvec_smaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ gen_helper_gvec_smaxp_b,
+ gen_helper_gvec_smaxp_h,
+ gen_helper_gvec_smaxp_s,
+ };
+ tcg_debug_assert(vece <= MO_32);
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
+}
+
+void gen_gvec_sminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ gen_helper_gvec_sminp_b,
+ gen_helper_gvec_sminp_h,
+ gen_helper_gvec_sminp_s,
+ };
+ tcg_debug_assert(vece <= MO_32);
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
+}
+
+void gen_gvec_umaxp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ gen_helper_gvec_umaxp_b,
+ gen_helper_gvec_umaxp_h,
+ gen_helper_gvec_umaxp_s,
+ };
+ tcg_debug_assert(vece <= MO_32);
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
+}
+
+void gen_gvec_uminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static gen_helper_gvec_3 * const fns[4] = {
+ gen_helper_gvec_uminp_b,
+ gen_helper_gvec_uminp_h,
+ gen_helper_gvec_uminp_s,
+ };
+ tcg_debug_assert(vece <= MO_32);
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index b8add91112d..9fe70a939bc 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -1352,6 +1352,17 @@ static bool do_gvec_fn3(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
return true;
}
+static bool do_gvec_fn3_no64(DisasContext *s, arg_qrrr_e *a, GVecGen3Fn *fn)
+{
+ if (a->esz == MO_64) {
+ return false;
+ }
+ if (fp_access_check(s)) {
+ gen_gvec_fn3(s, a->q, a->rd, a->rn, a->rm, fn, a->esz);
+ }
+ return true;
+}
+
static bool do_gvec_fn4(DisasContext *s, arg_qrrrr_e *a, GVecGen4Fn *fn)
{
if (!a->q && a->esz == MO_64) {
@@ -5246,6 +5257,10 @@ static gen_helper_gvec_3_ptr * const f_vector_fminnmp[3] = {
TRANS(FMINNMP_v, do_fp3_vector, a, f_vector_fminnmp)
TRANS(ADDP_v, do_gvec_fn3, a, gen_gvec_addp)
+TRANS(SMAXP_v, do_gvec_fn3_no64, a, gen_gvec_smaxp)
+TRANS(SMINP_v, do_gvec_fn3_no64, a, gen_gvec_sminp)
+TRANS(UMAXP_v, do_gvec_fn3_no64, a, gen_gvec_umaxp)
+TRANS(UMINP_v, do_gvec_fn3_no64, a, gen_gvec_uminp)
/*
* Advanced SIMD scalar/vector x indexed element
@@ -10896,84 +10911,6 @@ static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
}
}
-/* Pairwise op subgroup of C3.6.16.
- *
- * This is called directly for float pairwise
- * operations where the opcode and size are calculated differently.
- */
-static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
- int size, int rn, int rm, int rd)
-{
- int pass;
-
- if (!fp_access_check(s)) {
- return;
- }
-
- /* These operations work on the concatenated rm:rn, with each pair of
- * adjacent elements being operated on to produce an element in the result.
- */
- if (size == 3) {
- g_assert_not_reached();
- } else {
- int maxpass = is_q ? 4 : 2;
- TCGv_i32 tcg_res[4];
-
- for (pass = 0; pass < maxpass; pass++) {
- TCGv_i32 tcg_op1 = tcg_temp_new_i32();
- TCGv_i32 tcg_op2 = tcg_temp_new_i32();
- NeonGenTwoOpFn *genfn = NULL;
- int passreg = pass < (maxpass / 2) ? rn : rm;
- int passelt = (is_q && (pass & 1)) ? 2 : 0;
-
- read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_32);
- read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_32);
- tcg_res[pass] = tcg_temp_new_i32();
-
- switch (opcode) {
- case 0x14: /* SMAXP, UMAXP */
- {
- static NeonGenTwoOpFn * const fns[3][2] = {
- { gen_helper_neon_pmax_s8, gen_helper_neon_pmax_u8 },
- { gen_helper_neon_pmax_s16, gen_helper_neon_pmax_u16 },
- { tcg_gen_smax_i32, tcg_gen_umax_i32 },
- };
- genfn = fns[size][u];
- break;
- }
- case 0x15: /* SMINP, UMINP */
- {
- static NeonGenTwoOpFn * const fns[3][2] = {
- { gen_helper_neon_pmin_s8, gen_helper_neon_pmin_u8 },
- { gen_helper_neon_pmin_s16, gen_helper_neon_pmin_u16 },
- { tcg_gen_smin_i32, tcg_gen_umin_i32 },
- };
- genfn = fns[size][u];
- break;
- }
- default:
- case 0x17: /* ADDP */
- case 0x58: /* FMAXNMP */
- case 0x5a: /* FADDP */
- case 0x5e: /* FMAXP */
- case 0x78: /* FMINNMP */
- case 0x7e: /* FMINP */
- g_assert_not_reached();
- }
-
- /* FP ops called directly, otherwise call now */
- if (genfn) {
- genfn(tcg_res[pass], tcg_op1, tcg_op2);
- }
- }
-
- for (pass = 0; pass < maxpass; pass++) {
- write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_32);
- }
- clear_vec_high(s, is_q, rd);
- }
-}
-
/* Floating point op subgroup of C3.6.16. */
static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
{
@@ -11314,30 +11251,6 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
case 0x3: /* logic ops */
disas_simd_3same_logic(s, insn);
break;
- case 0x14: /* SMAXP, UMAXP */
- case 0x15: /* SMINP, UMINP */
- {
- /* Pairwise operations */
- int is_q = extract32(insn, 30, 1);
- int u = extract32(insn, 29, 1);
- int size = extract32(insn, 22, 2);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
- if (opcode == 0x17) {
- if (u || (size == 3 && !is_q)) {
- unallocated_encoding(s);
- return;
- }
- } else {
- if (size == 3) {
- unallocated_encoding(s);
- return;
- }
- }
- handle_simd_3same_pair(s, is_q, u, opcode, size, rn, rm, rd);
- break;
- }
case 0x18 ... 0x31:
/* floating point ops, sz[1] and U are part of opcode */
disas_simd_3same_float(s, insn);
@@ -11345,6 +11258,8 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
default:
disas_simd_3same_int(s, insn);
break;
+ case 0x14: /* SMAXP, UMAXP */
+ case 0x15: /* SMINP, UMINP */
case 0x17: /* ADDP */
unallocated_encoding(s);
break;
diff --git a/target/arm/tcg/vec_helper.c b/target/arm/tcg/vec_helper.c
index 5069899415c..56fea14edb9 100644
--- a/target/arm/tcg/vec_helper.c
+++ b/target/arm/tcg/vec_helper.c
@@ -2259,6 +2259,22 @@ DO_3OP_PAIR(gvec_addp_s, ADD, uint32_t, H4)
DO_3OP_PAIR(gvec_addp_d, ADD, uint64_t, )
#undef ADD
+DO_3OP_PAIR(gvec_smaxp_b, MAX, int8_t, H1)
+DO_3OP_PAIR(gvec_smaxp_h, MAX, int16_t, H2)
+DO_3OP_PAIR(gvec_smaxp_s, MAX, int32_t, H4)
+
+DO_3OP_PAIR(gvec_umaxp_b, MAX, uint8_t, H1)
+DO_3OP_PAIR(gvec_umaxp_h, MAX, uint16_t, H2)
+DO_3OP_PAIR(gvec_umaxp_s, MAX, uint32_t, H4)
+
+DO_3OP_PAIR(gvec_sminp_b, MIN, int8_t, H1)
+DO_3OP_PAIR(gvec_sminp_h, MIN, int16_t, H2)
+DO_3OP_PAIR(gvec_sminp_s, MIN, int32_t, H4)
+
+DO_3OP_PAIR(gvec_uminp_b, MIN, uint8_t, H1)
+DO_3OP_PAIR(gvec_uminp_h, MIN, uint16_t, H2)
+DO_3OP_PAIR(gvec_uminp_s, MIN, uint32_t, H4)
+
#undef DO_3OP_PAIR
#define DO_VCVT_FIXED(NAME, FUNC, TYPE) \
--
2.34.1
* [PULL 40/42] target/arm: Use gvec for neon pmax, pmin
From: Peter Maydell @ 2024-05-28 14:07 UTC
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-35-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-neon.c | 78 ++-------------------------------
1 file changed, 4 insertions(+), 74 deletions(-)
diff --git a/target/arm/tcg/translate-neon.c b/target/arm/tcg/translate-neon.c
index 6c5a7a98e1b..18b048611b3 100644
--- a/target/arm/tcg/translate-neon.c
+++ b/target/arm/tcg/translate-neon.c
@@ -831,6 +831,10 @@ DO_3SAME_NO_SZ_3(VABA_S, gen_gvec_saba)
DO_3SAME_NO_SZ_3(VABD_U, gen_gvec_uabd)
DO_3SAME_NO_SZ_3(VABA_U, gen_gvec_uaba)
DO_3SAME_NO_SZ_3(VPADD, gen_gvec_addp)
+DO_3SAME_NO_SZ_3(VPMAX_S, gen_gvec_smaxp)
+DO_3SAME_NO_SZ_3(VPMIN_S, gen_gvec_sminp)
+DO_3SAME_NO_SZ_3(VPMAX_U, gen_gvec_umaxp)
+DO_3SAME_NO_SZ_3(VPMIN_U, gen_gvec_uminp)
#define DO_3SAME_CMP(INSN, COND) \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
@@ -1003,80 +1007,6 @@ DO_3SAME_32_ENV(VQSHL_U, qshl_u)
DO_3SAME_32_ENV(VQRSHL_S, qrshl_s)
DO_3SAME_32_ENV(VQRSHL_U, qrshl_u)
-static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn)
-{
- /* Operations handled pairwise 32 bits at a time */
- TCGv_i32 tmp, tmp2, tmp3;
-
- if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
- return false;
- }
-
- /* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_simd_r32, s) &&
- ((a->vd | a->vn | a->vm) & 0x10)) {
- return false;
- }
-
- if (a->size == 3) {
- return false;
- }
-
- if (!vfp_access_check(s)) {
- return true;
- }
-
- assert(a->q == 0); /* enforced by decode patterns */
-
- /*
- * Note that we have to be careful not to clobber the source operands
- * in the "vm == vd" case by storing the result of the first pass too
- * early. Since Q is 0 there are always just two passes, so instead
- * of a complicated loop over each pass we just unroll.
- */
- tmp = tcg_temp_new_i32();
- tmp2 = tcg_temp_new_i32();
- tmp3 = tcg_temp_new_i32();
-
- read_neon_element32(tmp, a->vn, 0, MO_32);
- read_neon_element32(tmp2, a->vn, 1, MO_32);
- fn(tmp, tmp, tmp2);
-
- read_neon_element32(tmp3, a->vm, 0, MO_32);
- read_neon_element32(tmp2, a->vm, 1, MO_32);
- fn(tmp3, tmp3, tmp2);
-
- write_neon_element32(tmp, a->vd, 0, MO_32);
- write_neon_element32(tmp3, a->vd, 1, MO_32);
-
- return true;
-}
-
-#define DO_3SAME_PAIR(INSN, func) \
- static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
- { \
- static NeonGenTwoOpFn * const fns[] = { \
- gen_helper_neon_##func##8, \
- gen_helper_neon_##func##16, \
- gen_helper_neon_##func##32, \
- }; \
- if (a->size > 2) { \
- return false; \
- } \
- return do_3same_pair(s, a, fns[a->size]); \
- }
-
-/* 32-bit pairwise ops end up the same as the elementwise versions. */
-#define gen_helper_neon_pmax_s32 tcg_gen_smax_i32
-#define gen_helper_neon_pmax_u32 tcg_gen_umax_i32
-#define gen_helper_neon_pmin_s32 tcg_gen_smin_i32
-#define gen_helper_neon_pmin_u32 tcg_gen_umin_i32
-
-DO_3SAME_PAIR(VPMAX_S, pmax_s)
-DO_3SAME_PAIR(VPMIN_S, pmin_s)
-DO_3SAME_PAIR(VPMAX_U, pmax_u)
-DO_3SAME_PAIR(VPMIN_U, pmin_u)
-
#define DO_3SAME_VQDMULH(INSN, FUNC) \
WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##_s16); \
WRAP_ENV_FN(gen_##INSN##_tramp32, gen_helper_neon_##FUNC##_s32); \
--
2.34.1
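The removed do_3same_pair() carried a comment worth preserving in spirit: with Vd == Vm (or Vd == Vn), writing the result of the first pairwise pass before reading the second source would corrupt that source, which is why all reads were done into temporaries before any write. A small C sketch of the same pattern (illustrative only, not QEMU code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the overlap hazard do_3same_pair() avoided: read every
 * source element first, then write, so the in-place (d == n or d == m)
 * case is safe.
 */
static void pair_add_2x32(uint32_t *d, const uint32_t *n, const uint32_t *m)
{
    uint32_t t0 = n[0] + n[1];   /* first pass: reads only Vn */
    uint32_t t1 = m[0] + m[1];   /* second pass: reads only Vm */
    /* All reads complete; now writing cannot clobber a source. */
    d[0] = t0;
    d[1] = t1;
}
```
The gvec-based replacement sidesteps the hazard the same way, by expanding through helpers that do not write the destination until the sources have been consumed.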
* [PULL 41/42] target/arm: Convert FMLAL, FMLSL to decodetree
@ 2024-05-28 14:07 ` Peter Maydell
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-36-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 10 +++
target/arm/tcg/translate-a64.c | 144 ++++++++++-----------------------
2 files changed, 51 insertions(+), 103 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 22dfe8568d6..7e993ed345f 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -797,6 +797,11 @@ FMLA_v 0.00 1110 0.1 ..... 11001 1 ..... ..... @qrrr_sd
FMLS_v 0.00 1110 110 ..... 00001 1 ..... ..... @qrrr_h
FMLS_v 0.00 1110 1.1 ..... 11001 1 ..... ..... @qrrr_sd
+FMLAL_v 0.00 1110 001 ..... 11101 1 ..... ..... @qrrr_h
+FMLSL_v 0.00 1110 101 ..... 11101 1 ..... ..... @qrrr_h
+FMLAL2_v 0.10 1110 001 ..... 11001 1 ..... ..... @qrrr_h
+FMLSL2_v 0.10 1110 101 ..... 11001 1 ..... ..... @qrrr_h
+
FCMEQ_v 0.00 1110 010 ..... 00100 1 ..... ..... @qrrr_h
FCMEQ_v 0.00 1110 0.1 ..... 11100 1 ..... ..... @qrrr_sd
@@ -877,3 +882,8 @@ FMLS_vi 0.00 1111 11 0 ..... 0101 . 0 ..... ..... @qrrx_d
FMULX_vi 0.10 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
FMULX_vi 0.10 1111 10 . ..... 1001 . 0 ..... ..... @qrrx_s
FMULX_vi 0.10 1111 11 0 ..... 1001 . 0 ..... ..... @qrrx_d
+
+FMLAL_vi 0.00 1111 10 .. .... 0000 . 0 ..... ..... @qrrx_h
+FMLSL_vi 0.00 1111 10 .. .... 0100 . 0 ..... ..... @qrrx_h
+FMLAL2_vi 0.10 1111 10 .. .... 1000 . 0 ..... ..... @qrrx_h
+FMLSL2_vi 0.10 1111 10 .. .... 1100 . 0 ..... ..... @qrrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 9fe70a939bc..a4ff1fd2027 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5256,6 +5256,24 @@ static gen_helper_gvec_3_ptr * const f_vector_fminnmp[3] = {
};
TRANS(FMINNMP_v, do_fp3_vector, a, f_vector_fminnmp)
+static bool do_fmlal(DisasContext *s, arg_qrrr_e *a, bool is_s, bool is_2)
+{
+ if (fp_access_check(s)) {
+ int data = (is_2 << 1) | is_s;
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm), tcg_env,
+ a->q ? 16 : 8, vec_full_reg_size(s),
+ data, gen_helper_gvec_fmlal_a64);
+ }
+ return true;
+}
+
+TRANS_FEAT(FMLAL_v, aa64_fhm, do_fmlal, a, false, false)
+TRANS_FEAT(FMLSL_v, aa64_fhm, do_fmlal, a, true, false)
+TRANS_FEAT(FMLAL2_v, aa64_fhm, do_fmlal, a, false, true)
+TRANS_FEAT(FMLSL2_v, aa64_fhm, do_fmlal, a, true, true)
+
TRANS(ADDP_v, do_gvec_fn3, a, gen_gvec_addp)
TRANS(SMAXP_v, do_gvec_fn3_no64, a, gen_gvec_smaxp)
TRANS(SMINP_v, do_gvec_fn3_no64, a, gen_gvec_sminp)
@@ -5447,6 +5465,24 @@ static bool do_fmla_vector_idx(DisasContext *s, arg_qrrx_e *a, bool neg)
TRANS(FMLA_vi, do_fmla_vector_idx, a, false)
TRANS(FMLS_vi, do_fmla_vector_idx, a, true)
+static bool do_fmlal_idx(DisasContext *s, arg_qrrx_e *a, bool is_s, bool is_2)
+{
+ if (fp_access_check(s)) {
+ int data = (a->idx << 2) | (is_2 << 1) | is_s;
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm), tcg_env,
+ a->q ? 16 : 8, vec_full_reg_size(s),
+ data, gen_helper_gvec_fmlal_idx_a64);
+ }
+ return true;
+}
+
+TRANS_FEAT(FMLAL_vi, aa64_fhm, do_fmlal_idx, a, false, false)
+TRANS_FEAT(FMLSL_vi, aa64_fhm, do_fmlal_idx, a, true, false)
+TRANS_FEAT(FMLAL2_vi, aa64_fhm, do_fmlal_idx, a, false, true)
+TRANS_FEAT(FMLSL2_vi, aa64_fhm, do_fmlal_idx, a, true, true)
+
/*
* Advanced SIMD scalar pairwise
*/
@@ -10911,78 +10947,6 @@ static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
}
}
-/* Floating point op subgroup of C3.6.16. */
-static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
-{
- /* For floating point ops, the U, size[1] and opcode bits
- * together indicate the operation. size[0] indicates single
- * or double.
- */
- int fpopcode = extract32(insn, 11, 5)
- | (extract32(insn, 23, 1) << 5)
- | (extract32(insn, 29, 1) << 6);
- int is_q = extract32(insn, 30, 1);
- int size = extract32(insn, 22, 1);
- int rm = extract32(insn, 16, 5);
- int rn = extract32(insn, 5, 5);
- int rd = extract32(insn, 0, 5);
-
- if (size == 1 && !is_q) {
- unallocated_encoding(s);
- return;
- }
-
- switch (fpopcode) {
- case 0x1d: /* FMLAL */
- case 0x3d: /* FMLSL */
- case 0x59: /* FMLAL2 */
- case 0x79: /* FMLSL2 */
- if (size & 1 || !dc_isar_feature(aa64_fhm, s)) {
- unallocated_encoding(s);
- return;
- }
- if (fp_access_check(s)) {
- int is_s = extract32(insn, 23, 1);
- int is_2 = extract32(insn, 29, 1);
- int data = (is_2 << 1) | is_s;
- tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm), tcg_env,
- is_q ? 16 : 8, vec_full_reg_size(s),
- data, gen_helper_gvec_fmlal_a64);
- }
- return;
-
- default:
- case 0x18: /* FMAXNM */
- case 0x19: /* FMLA */
- case 0x1a: /* FADD */
- case 0x1b: /* FMULX */
- case 0x1c: /* FCMEQ */
- case 0x1e: /* FMAX */
- case 0x1f: /* FRECPS */
- case 0x38: /* FMINNM */
- case 0x39: /* FMLS */
- case 0x3a: /* FSUB */
- case 0x3e: /* FMIN */
- case 0x3f: /* FRSQRTS */
- case 0x58: /* FMAXNMP */
- case 0x5a: /* FADDP */
- case 0x5b: /* FMUL */
- case 0x5c: /* FCMGE */
- case 0x5d: /* FACGE */
- case 0x5e: /* FMAXP */
- case 0x5f: /* FDIV */
- case 0x78: /* FMINNMP */
- case 0x7a: /* FABD */
- case 0x7d: /* FACGT */
- case 0x7c: /* FCMGT */
- case 0x7e: /* FMINP */
- unallocated_encoding(s);
- return;
- }
-}
-
/* Integer op subgroup of C3.6.16. */
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
{
@@ -11251,16 +11215,13 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
case 0x3: /* logic ops */
disas_simd_3same_logic(s, insn);
break;
- case 0x18 ... 0x31:
- /* floating point ops, sz[1] and U are part of opcode */
- disas_simd_3same_float(s, insn);
- break;
default:
disas_simd_3same_int(s, insn);
break;
case 0x14: /* SMAXP, UMAXP */
case 0x15: /* SMINP, UMINP */
case 0x17: /* ADDP */
+ case 0x18 ... 0x31: /* floating point ops */
unallocated_encoding(s);
break;
}
@@ -12526,22 +12487,15 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
is_fp = 2;
break;
- case 0x00: /* FMLAL */
- case 0x04: /* FMLSL */
- case 0x18: /* FMLAL2 */
- case 0x1c: /* FMLSL2 */
- if (is_scalar || size != MO_32 || !dc_isar_feature(aa64_fhm, s)) {
- unallocated_encoding(s);
- return;
- }
- size = MO_16;
- /* is_fp, but we pass tcg_env not fp_status. */
- break;
default:
+ case 0x00: /* FMLAL */
case 0x01: /* FMLA */
+ case 0x04: /* FMLSL */
case 0x05: /* FMLS */
case 0x09: /* FMUL */
+ case 0x18: /* FMLAL2 */
case 0x19: /* FMULX */
+ case 0x1c: /* FMLSL2 */
unallocated_encoding(s);
return;
}
@@ -12660,22 +12614,6 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
}
return;
- case 0x00: /* FMLAL */
- case 0x04: /* FMLSL */
- case 0x18: /* FMLAL2 */
- case 0x1c: /* FMLSL2 */
- {
- int is_s = extract32(opcode, 2, 1);
- int is_2 = u;
- int data = (index << 2) | (is_2 << 1) | is_s;
- tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
- vec_full_reg_offset(s, rn),
- vec_full_reg_offset(s, rm), tcg_env,
- is_q ? 16 : 8, vec_full_reg_size(s),
- data, gen_helper_gvec_fmlal_idx_a64);
- }
- return;
-
case 0x08: /* MUL */
if (!is_long && !is_scalar) {
static gen_helper_gvec_3 * const fns[3] = {
--
2.34.1
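A note on the `data` immediate built in do_fmlal() and do_fmlal_idx() above: the FMLAL variant is packed into a single integer for the helper, with bit 0 selecting the subtracting form (FMLSL), bit 1 selecting the "2" form that consumes the high half of the inputs, and (for the indexed form) the element index from bit 2 upward. A sketch of the packing (illustrative only; the function name is invented, the bit layout is as in the patch):

```c
#include <assert.h>
#include <stdbool.h>

/* Pack the FMLAL/FMLSL[2] variant bits as the translator does. */
static int fmlal_idx_data(int index, bool is_2, bool is_s)
{
    return (index << 2) | ((int)is_2 << 1) | (int)is_s;
}
```
The vector (non-indexed) form is the index == 0 case, matching `data = (is_2 << 1) | is_s` in do_fmlal().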
* [PULL 42/42] target/arm: Convert disas_simd_3same_logic to decodetree
@ 2024-05-28 14:07 ` Peter Maydell
From: Peter Maydell @ 2024-05-28 14:07 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
This includes AND, ORR, EOR, BIC, ORN, BSL, BIT, BIF.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240524232121.284515-37-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/a64.decode | 10 +++++
target/arm/tcg/translate-a64.c | 68 ++++++++++------------------------
2 files changed, 29 insertions(+), 49 deletions(-)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 7e993ed345f..f48adef5bba 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -55,6 +55,7 @@
@rrr_q1e3 ........ ... rm:5 ...... rn:5 rd:5 &qrrr_e q=1 esz=3
@rrrr_q1e3 ........ ... rm:5 . ra:5 rn:5 rd:5 &qrrrr_e q=1 esz=3
+@qrrr_b . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=0
@qrrr_h . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=1
@qrrr_sd . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=%esz_sd
@qrrr_e . q:1 ...... esz:2 . rm:5 ...... rn:5 rd:5 &qrrr_e
@@ -847,6 +848,15 @@ SMINP_v 0.00 1110 ..1 ..... 10101 1 ..... ..... @qrrr_e
UMAXP_v 0.10 1110 ..1 ..... 10100 1 ..... ..... @qrrr_e
UMINP_v 0.10 1110 ..1 ..... 10101 1 ..... ..... @qrrr_e
+AND_v 0.00 1110 001 ..... 00011 1 ..... ..... @qrrr_b
+BIC_v 0.00 1110 011 ..... 00011 1 ..... ..... @qrrr_b
+ORR_v 0.00 1110 101 ..... 00011 1 ..... ..... @qrrr_b
+ORN_v 0.00 1110 111 ..... 00011 1 ..... ..... @qrrr_b
+EOR_v 0.10 1110 001 ..... 00011 1 ..... ..... @qrrr_b
+BSL_v 0.10 1110 011 ..... 00011 1 ..... ..... @qrrr_b
+BIT_v 0.10 1110 101 ..... 00011 1 ..... ..... @qrrr_b
+BIF_v 0.10 1110 111 ..... 00011 1 ..... ..... @qrrr_b
+
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index a4ff1fd2027..9167e4d0bd6 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -5280,6 +5280,24 @@ TRANS(SMINP_v, do_gvec_fn3_no64, a, gen_gvec_sminp)
TRANS(UMAXP_v, do_gvec_fn3_no64, a, gen_gvec_umaxp)
TRANS(UMINP_v, do_gvec_fn3_no64, a, gen_gvec_uminp)
+TRANS(AND_v, do_gvec_fn3, a, tcg_gen_gvec_and)
+TRANS(BIC_v, do_gvec_fn3, a, tcg_gen_gvec_andc)
+TRANS(ORR_v, do_gvec_fn3, a, tcg_gen_gvec_or)
+TRANS(ORN_v, do_gvec_fn3, a, tcg_gen_gvec_orc)
+TRANS(EOR_v, do_gvec_fn3, a, tcg_gen_gvec_xor)
+
+static bool do_bitsel(DisasContext *s, bool is_q, int d, int a, int b, int c)
+{
+ if (fp_access_check(s)) {
+ gen_gvec_fn4(s, is_q, d, a, b, c, tcg_gen_gvec_bitsel, 0);
+ }
+ return true;
+}
+
+TRANS(BSL_v, do_bitsel, a->q, a->rd, a->rd, a->rn, a->rm)
+TRANS(BIT_v, do_bitsel, a->q, a->rd, a->rm, a->rn, a->rd)
+TRANS(BIF_v, do_bitsel, a->q, a->rd, a->rm, a->rd, a->rn)
+
/*
* Advanced SIMD scalar/vector x indexed element
*/
@@ -10901,52 +10919,6 @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
}
}
-/* Logic op (opcode == 3) subgroup of C3.6.16. */
-static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
-{
- int rd = extract32(insn, 0, 5);
- int rn = extract32(insn, 5, 5);
- int rm = extract32(insn, 16, 5);
- int size = extract32(insn, 22, 2);
- bool is_u = extract32(insn, 29, 1);
- bool is_q = extract32(insn, 30, 1);
-
- if (!fp_access_check(s)) {
- return;
- }
-
- switch (size + 4 * is_u) {
- case 0: /* AND */
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_and, 0);
- return;
- case 1: /* BIC */
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_andc, 0);
- return;
- case 2: /* ORR */
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_or, 0);
- return;
- case 3: /* ORN */
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_orc, 0);
- return;
- case 4: /* EOR */
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_xor, 0);
- return;
-
- case 5: /* BSL bitwise select */
- gen_gvec_fn4(s, is_q, rd, rd, rn, rm, tcg_gen_gvec_bitsel, 0);
- return;
- case 6: /* BIT, bitwise insert if true */
- gen_gvec_fn4(s, is_q, rd, rm, rn, rd, tcg_gen_gvec_bitsel, 0);
- return;
- case 7: /* BIF, bitwise insert if false */
- gen_gvec_fn4(s, is_q, rd, rm, rd, rn, tcg_gen_gvec_bitsel, 0);
- return;
-
- default:
- g_assert_not_reached();
- }
-}
-
/* Integer op subgroup of C3.6.16. */
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
{
@@ -11212,12 +11184,10 @@ static void disas_simd_three_reg_same(DisasContext *s, uint32_t insn)
int opcode = extract32(insn, 11, 5);
switch (opcode) {
- case 0x3: /* logic ops */
- disas_simd_3same_logic(s, insn);
- break;
default:
disas_simd_3same_int(s, insn);
break;
+ case 0x3: /* logic ops */
case 0x14: /* SMAXP, UMAXP */
case 0x15: /* SMINP, UMINP */
case 0x17: /* ADDP */
--
2.34.1
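The TRANS(BSL_v/BIT_v/BIF_v, ...) lines above show the nice property this conversion exploits: all three NEON bitwise-select instructions are one primitive, bitsel(c, a, b) = (c & a) | (~c & b), with the operands permuted. A scalar C model of the three mappings (illustrative only, not QEMU code):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of tcg_gen_gvec_bitsel: select a where c is set, else b. */
static uint64_t bitsel(uint64_t c, uint64_t a, uint64_t b)
{
    return (c & a) | (~c & b);
}

/*
 * Operand permutations matching the TRANS lines in the patch:
 *   BSL: Vd = bitsel(Vd, Vn, Vm)  -- Vd is the selector
 *   BIT: Vd = bitsel(Vm, Vn, Vd)  -- insert Vn bits where Vm is set
 *   BIF: Vd = bitsel(Vm, Vd, Vn)  -- insert Vn bits where Vm is clear
 */
static uint64_t neon_bsl(uint64_t d, uint64_t n, uint64_t m) { return bitsel(d, n, m); }
static uint64_t neon_bit(uint64_t d, uint64_t n, uint64_t m) { return bitsel(m, n, d); }
static uint64_t neon_bif(uint64_t d, uint64_t n, uint64_t m) { return bitsel(m, d, n); }
```
This is the same trio of gen_gvec_fn4(..., tcg_gen_gvec_bitsel, 0) calls the deleted switch cases 5-7 made, just expressed through the decodetree TRANS macros.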
* Re: [PULL v2 00/42] target-arm queue
@ 2024-05-28 18:28 ` Richard Henderson
From: Richard Henderson @ 2024-05-28 18:28 UTC (permalink / raw)
To: Peter Maydell, qemu-devel
On 5/28/24 07:07, Peter Maydell wrote:
> Hi; most of this is the first half of the A64 simd decodetree
> conversion; the rest is a mix of fixes from the last couple of weeks.
>
> v2 uses patches from the v2 decodetree series to avoid a few
> regressions in some A32 insns.
>
> (Richard: I'm still planning to review the second half of the
> v2 decodetree series; I just wanted to get the respin of this
> pullreq out today...)
>
> thanks
> -- PMM
>
> The following changes since commit ad10b4badc1dd5b28305f9b9f1168cf0aa3ae946:
>
> Merge tag 'pull-error-2024-05-27' of https://repo.or.cz/qemu/armbru into staging (2024-05-27 06:40:42 -0700)
>
> are available in the Git repository at:
>
> https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20240528
>
> for you to fetch changes up to f240df3c31b40e4cf1af1f156a88efc1a1df406c:
>
> target/arm: Convert disas_simd_3same_logic to decodetree (2024-05-28 14:29:01 +0100)
>
> ----------------------------------------------------------------
> target-arm queue:
> * xlnx_dpdma: fix descriptor endianness bug
> * hvf: arm: Fix encodings for ID_AA64PFR1_EL1 and debug System registers
> * hw/arm/npcm7xx: remove setting of mp-affinity
> * hw/char: Correct STM32L4x5 usart register CR2 field ADD_0 size
> * hw/intc/arm_gic: Fix handling of NS view of GICC_APR<n>
> * hw/input/tsc2005: Fix -Wchar-subscripts warning in tsc2005_txrx()
> * hw: arm: Remove use of tabs in some source files
> * docs/system: Remove ADC from raspi documentation
> * target/arm: Start of the conversion of A64 SIMD to decodetree
Applied, thanks. Please update https://wiki.qemu.org/ChangeLog/9.1 as appropriate.
r~
end of thread, other threads:[~2024-05-28 18:29 UTC | newest]
2024-05-28 14:07 [PULL v2 00/42] target-arm queue Peter Maydell
2024-05-28 14:07 ` [PULL 01/42] xlnx_dpdma: fix descriptor endianness bug Peter Maydell
2024-05-28 14:07 ` [PULL 02/42] hvf: arm: Fix encodings for ID_AA64PFR1_EL1 and debug System registers Peter Maydell
2024-05-28 14:07 ` [PULL 03/42] hw/arm/npcm7xx: remove setting of mp-affinity Peter Maydell
2024-05-28 14:07 ` [PULL 04/42] hw/char: Correct STM32L4x5 usart register CR2 field ADD_0 size Peter Maydell
2024-05-28 14:07 ` [PULL 05/42] hw/intc/arm_gic: Fix handling of NS view of GICC_APR<n> Peter Maydell
2024-05-28 14:07 ` [PULL 06/42] hw/input/tsc2005: Fix -Wchar-subscripts warning in tsc2005_txrx() Peter Maydell
2024-05-28 14:07 ` [PULL 07/42] hw: arm: Remove use of tabs in some source files Peter Maydell
2024-05-28 14:07 ` [PULL 08/42] docs/system: Remove ADC from raspi documentation Peter Maydell
2024-05-28 14:07 ` [PULL 09/42] target/arm: Use PLD, PLDW, PLI not NOP for t32 Peter Maydell
2024-05-28 14:07 ` [PULL 10/42] target/arm: Zero-extend writeback for fp16 FCVTZS (scalar, integer) Peter Maydell
2024-05-28 14:07 ` [PULL 11/42] target/arm: Fix decode of FMOV (hp) vs MOVI Peter Maydell
2024-05-28 14:07 ` [PULL 12/42] target/arm: Verify sz=0 for Advanced SIMD scalar pairwise (fp16) Peter Maydell
2024-05-28 14:07 ` [PULL 13/42] target/arm: Split out gengvec.c Peter Maydell
2024-05-28 14:07 ` [PULL 14/42] target/arm: Split out gengvec64.c Peter Maydell
2024-05-28 14:07 ` [PULL 15/42] target/arm: Convert Cryptographic AES to decodetree Peter Maydell
2024-05-28 14:07 ` [PULL 16/42] target/arm: Convert Cryptographic 3-register SHA " Peter Maydell
2024-05-28 14:07 ` [PULL 17/42] target/arm: Convert Cryptographic 2-register " Peter Maydell
2024-05-28 14:07 ` [PULL 18/42] target/arm: Convert Cryptographic 3-register SHA512 " Peter Maydell
2024-05-28 14:07 ` [PULL 19/42] target/arm: Convert Cryptographic 2-register " Peter Maydell
2024-05-28 14:07 ` [PULL 20/42] target/arm: Convert Cryptographic 4-register " Peter Maydell
2024-05-28 14:07 ` [PULL 21/42] target/arm: Convert Cryptographic 3-register, imm2 " Peter Maydell
2024-05-28 14:07 ` [PULL 22/42] target/arm: Convert XAR " Peter Maydell
2024-05-28 14:07 ` [PULL 23/42] target/arm: Convert Advanced SIMD copy " Peter Maydell
2024-05-28 14:07 ` [PULL 24/42] target/arm: Convert FMULX " Peter Maydell
2024-05-28 14:07 ` [PULL 25/42] target/arm: Convert FADD, FSUB, FDIV, FMUL " Peter Maydell
2024-05-28 14:07 ` [PULL 26/42] target/arm: Convert FMAX, FMIN, FMAXNM, FMINNM " Peter Maydell
2024-05-28 14:07 ` [PULL 27/42] target/arm: Introduce vfp_load_reg16 Peter Maydell
2024-05-28 14:07 ` [PULL 28/42] target/arm: Expand vfp neg and abs inline Peter Maydell
2024-05-28 14:07 ` [PULL 29/42] target/arm: Convert FNMUL to decodetree Peter Maydell
2024-05-28 14:07 ` [PULL 30/42] target/arm: Convert FMLA, FMLS " Peter Maydell
2024-05-28 14:07 ` [PULL 31/42] target/arm: Convert FCMEQ, FCMGE, FCMGT, FACGE, FACGT " Peter Maydell
2024-05-28 14:07 ` [PULL 32/42] target/arm: Convert FABD " Peter Maydell
2024-05-28 14:07 ` [PULL 33/42] target/arm: Convert FRECPS, FRSQRTS " Peter Maydell
2024-05-28 14:07 ` [PULL 34/42] target/arm: Convert FADDP " Peter Maydell
2024-05-28 14:07 ` [PULL 35/42] target/arm: Convert FMAXP, FMINP, FMAXNMP, FMINNMP " Peter Maydell
2024-05-28 14:07 ` [PULL 36/42] target/arm: Use gvec for neon faddp, fmaxp, fminp Peter Maydell
2024-05-28 14:07 ` [PULL 37/42] target/arm: Convert ADDP to decodetree Peter Maydell
2024-05-28 14:07 ` [PULL 38/42] target/arm: Use gvec for neon padd Peter Maydell
2024-05-28 14:07 ` [PULL 39/42] target/arm: Convert SMAXP, SMINP, UMAXP, UMINP to decodetree Peter Maydell
2024-05-28 14:07 ` [PULL 40/42] target/arm: Use gvec for neon pmax, pmin Peter Maydell
2024-05-28 14:07 ` [PULL 41/42] target/arm: Convert FMLAL, FMLSL to decodetree Peter Maydell
2024-05-28 14:07 ` [PULL 42/42] target/arm: Convert disas_simd_3same_logic " Peter Maydell
2024-05-28 18:28 ` [PULL v2 00/42] target-arm queue Richard Henderson