* [PULL v2 00/39] tcg patch queue
@ 2023-09-16 17:12 Richard Henderson
2023-09-16 17:12 ` [PULL v2 21/39] tcg/loongarch64: Implement 128-bit load & store Richard Henderson
2023-09-19 19:12 ` [PULL v2 00/39] tcg patch queue Stefan Hajnoczi
0 siblings, 2 replies; 3+ messages in thread
From: Richard Henderson @ 2023-09-16 17:12 UTC (permalink / raw)
To: qemu-devel
v2: tcg/loongarch64 patch set without last-minute tweaks.
r~
The following changes since commit 005ad32358f12fe9313a4a01918a55e60d4f39e5:
Merge tag 'pull-tpm-2023-09-12-3' of https://github.com/stefanberger/qemu-tpm into staging (2023-09-13 13:41:57 -0400)
are available in the Git repository at:
https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20230915-2
for you to fetch changes up to a97a83753c90d79ed15a716610af23fabd84aaed:
tcg: Map code_gen_buffer with PROT_BTI (2023-09-16 14:57:16 +0000)
----------------------------------------------------------------
*: Delete checks for old host definitions
tcg/loongarch64: Generate LSX instructions
fpu: Add conversions between bfloat16 and [u]int8
fpu: Handle m68k extended precision denormals properly
accel/tcg: Improve cputlb i/o organization
accel/tcg: Simplify tlb_plugin_lookup
accel/tcg: Remove false-negative halted assertion
tcg: Add gvec compare with immediate and scalar operand
tcg/aarch64: Emit BTI insns at jump landing pads
----------------------------------------------------------------
Akihiko Odaki (3):
util: Delete checks for old host definitions
softmmu: Delete checks for old host definitions
thunk: Delete checks for old host definitions
Jiajie Chen (16):
tcg/loongarch64: Import LSX instructions
tcg/loongarch64: Lower basic tcg vec ops to LSX
tcg: pass vece to tcg_target_const_match()
tcg/loongarch64: Lower cmp_vec to vseq/vsle/vslt
tcg/loongarch64: Lower add/sub_vec to vadd/vsub
tcg/loongarch64: Lower vector bitwise operations
tcg/loongarch64: Lower neg_vec to vneg
tcg/loongarch64: Lower mul_vec to vmul
tcg/loongarch64: Lower vector min max ops
tcg/loongarch64: Lower vector saturated ops
tcg/loongarch64: Lower vector shift vector ops
tcg/loongarch64: Lower bitsel_vec to vbitsel
tcg/loongarch64: Lower vector shift integer ops
tcg/loongarch64: Lower rotv_vec ops to LSX
tcg/loongarch64: Lower rotli_vec to vrotri
tcg/loongarch64: Implement 128-bit load & store
LIU Zhiwei (2):
accel/tcg: Fix the comment for CPUTLBEntryFull
fpu: Add conversions between bfloat16 and [u]int8
Nicholas Piggin (1):
accel/tcg: mttcg remove false-negative halted assertion
Richard Henderson (17):
tcg: Add gvec compare with immediate and scalar operand
target/arm: Use tcg_gen_gvec_cmpi for compare vs 0
accel/tcg: Simplify tlb_plugin_lookup
accel/tcg: Split out io_prepare and io_failed
accel/tcg: Use CPUTLBEntryFull.phys_addr in io_failed
plugin: Simplify struct qemu_plugin_hwaddr
accel/tcg: Merge cpu_transaction_failed into io_failed
accel/tcg: Replace direct use of io_readx/io_writex in do_{ld,st}_1
accel/tcg: Merge io_readx into do_ld_mmio_beN
accel/tcg: Merge io_writex into do_st_mmio_leN
accel/tcg: Introduce do_ld16_mmio_beN
accel/tcg: Introduce do_st16_mmio_leN
fpu: Handle m68k extended precision denormals properly
tcg: Add tcg_out_tb_start backend hook
util/cpuinfo-aarch64: Add CPUINFO_BTI
tcg/aarch64: Emit BTI insns at jump landing pads
tcg: Map code_gen_buffer with PROT_BTI
accel/tcg/tcg-runtime.h | 25 +
host/include/aarch64/host/cpuinfo.h | 1 +
include/exec/cpu-defs.h | 12 +-
include/exec/user/thunk.h | 3 +-
include/fpu/softfloat.h | 12 +
include/hw/core/cpu.h | 13 -
include/qemu/plugin-memory.h | 11 +-
include/qemu/typedefs.h | 1 -
include/tcg/tcg-op-gvec-common.h | 6 +
tcg/loongarch64/tcg-target-con-set.h | 9 +
tcg/loongarch64/tcg-target-con-str.h | 3 +
tcg/loongarch64/tcg-target.h | 40 +-
tcg/loongarch64/tcg-target.opc.h | 12 +
accel/tcg/cputlb.c | 437 ++-
accel/tcg/tcg-accel-ops-mttcg.c | 9 +-
accel/tcg/tcg-runtime-gvec.c | 26 +
fpu/softfloat.c | 67 +-
plugins/api.c | 27 +-
softmmu/async-teardown.c | 3 -
target/arm/tcg/translate.c | 56 +-
tcg/region.c | 41 +-
tcg/tcg-op-gvec.c | 149 +
tcg/tcg.c | 7 +-
tests/tcg/m68k/denormal.c | 53 +
util/cpuinfo-aarch64.c | 7 +
util/oslib-posix.c | 15 +-
fpu/softfloat-parts.c.inc | 7 +-
tcg/aarch64/tcg-target.c.inc | 59 +-
tcg/arm/tcg-target.c.inc | 7 +-
tcg/i386/tcg-target.c.inc | 7 +-
tcg/loongarch64/tcg-insn-defs.c.inc | 6019 +++++++++++++++++++++++++++++++++-
tcg/loongarch64/tcg-target.c.inc | 624 +++-
tcg/mips/tcg-target.c.inc | 7 +-
tcg/ppc/tcg-target.c.inc | 7 +-
tcg/riscv/tcg-target.c.inc | 7 +-
tcg/s390x/tcg-target.c.inc | 7 +-
tcg/sparc64/tcg-target.c.inc | 7 +-
tcg/tci/tcg-target.c.inc | 7 +-
tests/tcg/m68k/Makefile.target | 2 +-
39 files changed, 7419 insertions(+), 393 deletions(-)
create mode 100644 tcg/loongarch64/tcg-target.opc.h
create mode 100644 tests/tcg/m68k/denormal.c
* [PULL v2 21/39] tcg/loongarch64: Implement 128-bit load & store
From: Richard Henderson @ 2023-09-16 17:12 UTC (permalink / raw)
To: qemu-devel; +Cc: Jiajie Chen
From: Jiajie Chen <c@jia.je>
If LSX is available, use LSX instructions to implement 128-bit load &
store when MO_128 is required; otherwise use two 64-bit loads & stores.
Signed-off-by: Jiajie Chen <c@jia.je>
Message-Id: <20230908022302.180442-17-c@jia.je>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/loongarch64/tcg-target-con-set.h | 2 +
tcg/loongarch64/tcg-target.h | 2 +-
tcg/loongarch64/tcg-target.c.inc | 59 ++++++++++++++++++++++++++++
3 files changed, 62 insertions(+), 1 deletion(-)
diff --git a/tcg/loongarch64/tcg-target-con-set.h b/tcg/loongarch64/tcg-target-con-set.h
index 914572d21b..77d62e38e7 100644
--- a/tcg/loongarch64/tcg-target-con-set.h
+++ b/tcg/loongarch64/tcg-target-con-set.h
@@ -18,6 +18,7 @@ C_O0_I1(r)
C_O0_I2(rZ, r)
C_O0_I2(rZ, rZ)
C_O0_I2(w, r)
+C_O0_I3(r, r, r)
C_O1_I1(r, r)
C_O1_I1(w, r)
C_O1_I1(w, w)
@@ -37,3 +38,4 @@ C_O1_I2(w, w, wM)
C_O1_I2(w, w, wA)
C_O1_I3(w, w, w, w)
C_O1_I4(r, rZ, rJ, rZ, rZ)
+C_O2_I1(r, r, r)
diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h
index 67b0a95532..03017672f6 100644
--- a/tcg/loongarch64/tcg-target.h
+++ b/tcg/loongarch64/tcg-target.h
@@ -171,7 +171,7 @@ extern bool use_lsx_instructions;
#define TCG_TARGET_HAS_muluh_i64 1
#define TCG_TARGET_HAS_mulsh_i64 1
-#define TCG_TARGET_HAS_qemu_ldst_i128 0
+#define TCG_TARGET_HAS_qemu_ldst_i128 use_lsx_instructions
#define TCG_TARGET_HAS_v64 0
#define TCG_TARGET_HAS_v128 use_lsx_instructions
diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
index 82901d678a..fde744e766 100644
--- a/tcg/loongarch64/tcg-target.c.inc
+++ b/tcg/loongarch64/tcg-target.c.inc
@@ -1081,6 +1081,48 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
}
}
+static void tcg_out_qemu_ldst_i128(TCGContext *s, TCGReg data_lo, TCGReg data_hi,
+ TCGReg addr_reg, MemOpIdx oi, bool is_ld)
+{
+ TCGLabelQemuLdst *ldst;
+ HostAddress h;
+
+ ldst = prepare_host_addr(s, &h, addr_reg, oi, is_ld);
+
+ if (h.aa.atom == MO_128) {
+ /*
+ * Use VLDX/VSTX when 128-bit atomicity is required.
+ * If address is aligned to 16-bytes, the 128-bit load/store is atomic.
+ */
+ if (is_ld) {
+ tcg_out_opc_vldx(s, TCG_VEC_TMP0, h.base, h.index);
+ tcg_out_opc_vpickve2gr_d(s, data_lo, TCG_VEC_TMP0, 0);
+ tcg_out_opc_vpickve2gr_d(s, data_hi, TCG_VEC_TMP0, 1);
+ } else {
+ tcg_out_opc_vinsgr2vr_d(s, TCG_VEC_TMP0, data_lo, 0);
+ tcg_out_opc_vinsgr2vr_d(s, TCG_VEC_TMP0, data_hi, 1);
+ tcg_out_opc_vstx(s, TCG_VEC_TMP0, h.base, h.index);
+ }
+ } else {
+ /* Otherwise use a pair of LD/ST. */
+ tcg_out_opc_add_d(s, TCG_REG_TMP0, h.base, h.index);
+ if (is_ld) {
+ tcg_out_opc_ld_d(s, data_lo, TCG_REG_TMP0, 0);
+ tcg_out_opc_ld_d(s, data_hi, TCG_REG_TMP0, 8);
+ } else {
+ tcg_out_opc_st_d(s, data_lo, TCG_REG_TMP0, 0);
+ tcg_out_opc_st_d(s, data_hi, TCG_REG_TMP0, 8);
+ }
+ }
+
+ if (ldst) {
+ ldst->type = TCG_TYPE_I128;
+ ldst->datalo_reg = data_lo;
+ ldst->datahi_reg = data_hi;
+ ldst->raddr = tcg_splitwx_to_rx(s->code_ptr);
+ }
+}
+
/*
* Entry-points
*/
@@ -1145,6 +1187,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
TCGArg a0 = args[0];
TCGArg a1 = args[1];
TCGArg a2 = args[2];
+ TCGArg a3 = args[3];
int c2 = const_args[2];
switch (opc) {
@@ -1507,6 +1550,10 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
case INDEX_op_qemu_ld_a64_i64:
tcg_out_qemu_ld(s, a0, a1, a2, TCG_TYPE_I64);
break;
+ case INDEX_op_qemu_ld_a32_i128:
+ case INDEX_op_qemu_ld_a64_i128:
+ tcg_out_qemu_ldst_i128(s, a0, a1, a2, a3, true);
+ break;
case INDEX_op_qemu_st_a32_i32:
case INDEX_op_qemu_st_a64_i32:
tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I32);
@@ -1515,6 +1562,10 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
case INDEX_op_qemu_st_a64_i64:
tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I64);
break;
+ case INDEX_op_qemu_st_a32_i128:
+ case INDEX_op_qemu_st_a64_i128:
+ tcg_out_qemu_ldst_i128(s, a0, a1, a2, a3, false);
+ break;
case INDEX_op_mov_i32: /* Always emitted via tcg_out_mov. */
case INDEX_op_mov_i64:
@@ -1996,6 +2047,14 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
case INDEX_op_qemu_st_a64_i64:
return C_O0_I2(rZ, r);
+ case INDEX_op_qemu_ld_a32_i128:
+ case INDEX_op_qemu_ld_a64_i128:
+ return C_O2_I1(r, r, r);
+
+ case INDEX_op_qemu_st_a32_i128:
+ case INDEX_op_qemu_st_a64_i128:
+ return C_O0_I3(r, r, r);
+
case INDEX_op_brcond_i32:
case INDEX_op_brcond_i64:
return C_O0_I2(rZ, rZ);
--
2.34.1
* Re: [PULL v2 00/39] tcg patch queue
From: Stefan Hajnoczi @ 2023-09-19 19:12 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel
Applied, thanks.
Please update the changelog at https://wiki.qemu.org/ChangeLog/8.2 for any user-visible changes.