* [PULL 00/54] Remove 32-bit host support
@ 2026-01-18 22:03 Richard Henderson
2026-01-18 22:03 ` [PULL 01/54] gitlab-ci: Drop build-wasm32-32bit Richard Henderson
` (54 more replies)
0 siblings, 55 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel
The following changes since commit 42a5675aa9dd718f395ca3279098051dfdbbc6e1:
Merge tag 'accel-20260116' of https://github.com/philmd/qemu into staging (2026-01-16 22:26:36 +1100)
are available in the Git repository at:
https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20260119
for you to fetch changes up to 239b9d0488b270f5781fd7cd7139262c165d0351:
include/qemu/atomic: Drop aligned_{u}int64_t (2026-01-17 10:46:51 +1100)
----------------------------------------------------------------
Remove support for 32-bit hosts.
----------------------------------------------------------------
Richard Henderson (54):
gitlab-ci: Drop build-wasm32-32bit
tests/docker/dockerfiles: Drop wasm32 from emsdk-wasm-cross.docker
gitlab: Remove 32-bit host testing
meson: Reject 32-bit hosts
meson: Drop cpu == wasm32 tests
*: Remove arm host support
bsd-user: Fix __i386__ test for TARGET_HAS_STAT_TIME_T_EXT
*: Remove __i386__ tests
*: Remove i386 host support
host/include/x86_64/bufferiszero: Remove no SSE2 fallback
meson: Remove cpu == x86 tests
*: Remove ppc host support
tcg/i386: Remove TCG_TARGET_REG_BITS tests
tcg/x86_64: Rename from i386
tcg/ppc64: Rename from ppc
meson: Drop host_arch rename for mips64
meson: Drop host_arch rename for riscv64
meson: Remove cpu == riscv32 tests
tcg: Make TCG_TARGET_REG_BITS common
tcg: Replace TCG_TARGET_REG_BITS / 8
*: Drop TCG_TARGET_REG_BITS test for prefer_i64
tcg: Remove INDEX_op_brcond2_i32
tcg: Remove INDEX_op_setcond2_i32
tcg: Remove INDEX_op_dup2_vec
tcg/tci: Drop TCG_TARGET_REG_BITS tests
tcg/tci: Remove glue TCG_TARGET_REG_BITS renames
tcg: Drop TCG_TARGET_REG_BITS test in region.c
tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op.c
tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op-gvec.c
tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op-ldst.c
tcg: Drop TCG_TARGET_REG_BITS tests in tcg.c
tcg: Drop TCG_TARGET_REG_BITS tests in tcg-internal.h
tcg: Drop TCG_TARGET_REG_BITS test in tcg-has.h
include/tcg: Drop TCG_TARGET_REG_BITS tests
target/i386/tcg: Drop TCG_TARGET_REG_BITS test
target/riscv: Drop TCG_TARGET_REG_BITS test
accel/tcg/runtime: Remove 64-bit shift helpers
accel/tcg/runtime: Remove helper_nonatomic_cmpxchgo
tcg: Unconditionally define atomic64 helpers
accel/tcg: Drop CONFIG_ATOMIC64 checks from ldst_atomicity.c.inc
accel/tcg: Drop CONFIG_ATOMIC64 test from translator.c
linux-user/arm: Drop CONFIG_ATOMIC64 test
linux-user/hppa: Drop CONFIG_ATOMIC64 test
target/arm: Drop CONFIG_ATOMIC64 tests
target/hppa: Drop CONFIG_ATOMIC64 test
target/m68k: Drop CONFIG_ATOMIC64 tests
target/s390x: Drop CONFIG_ATOMIC64 tests
target/s390x: Simplify atomicity check in do_csst
migration: Drop use of Stat64
block: Drop use of Stat64
util: Remove stats64
include/qemu/atomic: Drop qatomic_{read,set}_[iu]64
meson: Remove CONFIG_ATOMIC64
include/qemu/atomic: Drop aligned_{u}int64_t
accel/tcg/atomic_template.h | 4 +-
accel/tcg/tcg-runtime.h | 23 -
bsd-user/syscall_defs.h | 2 +-
host/include/i386/host/cpuinfo.h | 41 -
host/include/i386/host/crypto/aes-round.h | 152 -
host/include/i386/host/crypto/clmul.h | 29 -
host/include/ppc/host/cpuinfo.h | 30 -
host/include/ppc/host/crypto/aes-round.h | 182 -
host/include/ppc64/host/cpuinfo.h | 31 +-
host/include/ppc64/host/crypto/aes-round.h | 183 +-
host/include/{riscv => riscv64}/host/cpuinfo.h | 0
host/include/x86_64/host/cpuinfo.h | 42 +-
host/include/x86_64/host/crypto/aes-round.h | 153 +-
host/include/x86_64/host/crypto/clmul.h | 30 +-
include/accel/tcg/cpu-ldst-common.h | 9 -
include/block/block_int-common.h | 3 +-
include/qemu/atomic.h | 39 +-
include/qemu/cacheflush.h | 2 +-
include/qemu/osdep.h | 6 +-
include/qemu/processor.h | 2 +-
include/qemu/stats64.h | 199 --
include/qemu/timer.h | 9 -
include/system/cpu-timers-internal.h | 2 +-
include/tcg/helper-info.h | 2 +-
.../tcg/target-reg-bits.h | 8 +-
include/tcg/tcg-op.h | 9 +-
include/tcg/tcg-opc.h | 9 +-
include/tcg/tcg.h | 29 +-
linux-user/include/host/arm/host-signal.h | 43 -
linux-user/include/host/i386/host-signal.h | 38 -
.../include/host/{mips => mips64}/host-signal.h | 0
linux-user/include/host/ppc/host-signal.h | 39 -
.../include/host/{riscv => riscv64}/host-signal.h | 0
migration/migration-stats.h | 36 +-
tcg/aarch64/tcg-target-reg-bits.h | 12 -
tcg/arm/tcg-target-con-set.h | 47 -
tcg/arm/tcg-target-con-str.h | 26 -
tcg/arm/tcg-target-has.h | 73 -
tcg/arm/tcg-target-mo.h | 13 -
tcg/arm/tcg-target-reg-bits.h | 12 -
tcg/arm/tcg-target.h | 73 -
tcg/i386/tcg-target-reg-bits.h | 16 -
tcg/loongarch64/tcg-target-reg-bits.h | 21 -
tcg/mips/tcg-target-reg-bits.h | 16 -
tcg/{mips => mips64}/tcg-target-con-set.h | 0
tcg/{mips => mips64}/tcg-target-con-str.h | 0
tcg/{mips => mips64}/tcg-target-has.h | 0
tcg/{mips => mips64}/tcg-target-mo.h | 0
tcg/{mips => mips64}/tcg-target.h | 0
tcg/{ppc => ppc64}/tcg-target-con-set.h | 0
tcg/{ppc => ppc64}/tcg-target-con-str.h | 0
tcg/{ppc => ppc64}/tcg-target-has.h | 0
tcg/{ppc => ppc64}/tcg-target-mo.h | 0
tcg/{ppc => ppc64}/tcg-target.h | 0
tcg/riscv/tcg-target-reg-bits.h | 19 -
tcg/{riscv => riscv64}/tcg-target-con-set.h | 0
tcg/{riscv => riscv64}/tcg-target-con-str.h | 0
tcg/{riscv => riscv64}/tcg-target-has.h | 0
tcg/{riscv => riscv64}/tcg-target-mo.h | 0
tcg/{riscv => riscv64}/tcg-target.h | 0
tcg/s390x/tcg-target-reg-bits.h | 17 -
tcg/sparc64/tcg-target-reg-bits.h | 12 -
tcg/tcg-has.h | 5 -
tcg/tcg-internal.h | 21 +-
tcg/tci/tcg-target-has.h | 2 -
tcg/tci/tcg-target-mo.h | 2 +-
tcg/tci/tcg-target-reg-bits.h | 18 -
tcg/{i386 => x86_64}/tcg-target-con-set.h | 0
tcg/{i386 => x86_64}/tcg-target-con-str.h | 0
tcg/{i386 => x86_64}/tcg-target-has.h | 8 +-
tcg/{i386 => x86_64}/tcg-target-mo.h | 0
tcg/{i386 => x86_64}/tcg-target.h | 13 +-
accel/kvm/kvm-all.c | 2 +-
accel/qtest/qtest.c | 4 +-
accel/tcg/cputlb.c | 37 +-
accel/tcg/icount-common.c | 25 +-
accel/tcg/tcg-runtime.c | 15 -
accel/tcg/translator.c | 4 +-
accel/tcg/user-exec.c | 2 -
block/io.c | 10 +-
block/qapi.c | 2 +-
disas/disas-host.c | 9 -
hw/display/xenfb.c | 10 +-
hw/virtio/virtio-mem.c | 2 +-
linux-user/arm/cpu_loop.c | 19 +-
linux-user/hppa/cpu_loop.c | 14 +-
linux-user/mmap.c | 2 +-
linux-user/syscall.c | 9 -
migration/cpu-throttle.c | 4 +-
migration/migration-stats.c | 16 +-
migration/migration.c | 24 +-
migration/multifd-nocomp.c | 2 +-
migration/multifd-zero-page.c | 4 +-
migration/multifd.c | 12 +-
migration/qemu-file.c | 6 +-
migration/ram.c | 30 +-
migration/rdma.c | 8 +-
system/dirtylimit.c | 2 +-
target/arm/ptw.c | 18 +-
target/arm/tcg/gengvec.c | 32 +-
target/arm/tcg/gengvec64.c | 4 +-
target/arm/tcg/translate-sve.c | 26 +-
target/hppa/op_helper.c | 6 +-
target/i386/cpu.c | 10 -
target/m68k/op_helper.c | 7 +-
target/s390x/tcg/mem_helper.c | 18 +-
tcg/optimize.c | 322 --
tcg/region.c | 12 -
tcg/tcg-op-gvec.c | 113 +-
tcg/tcg-op-ldst.c | 130 +-
tcg/tcg-op-vec.c | 14 +-
tcg/tcg-op.c | 765 +----
tcg/tcg.c | 376 +--
tcg/tci.c | 73 +-
tests/unit/test-rcu-list.c | 17 +-
util/atomic64.c | 85 -
util/cacheflush.c | 4 +-
util/qsp.c | 12 +-
util/stats64.c | 148 -
.gitlab-ci.d/buildtest.yml | 9 -
.gitlab-ci.d/container-cross.yml | 20 -
.gitlab-ci.d/containers.yml | 3 -
.gitlab-ci.d/crossbuilds.yml | 45 -
MAINTAINERS | 16 +-
accel/tcg/atomic_common.c.inc | 32 -
accel/tcg/ldst_atomicity.c.inc | 149 +-
common-user/host/arm/safe-syscall.inc.S | 108 -
common-user/host/i386/safe-syscall.inc.S | 127 -
.../host/{mips => mips64}/safe-syscall.inc.S | 0
common-user/host/ppc/safe-syscall.inc.S | 107 -
.../host/{riscv => riscv64}/safe-syscall.inc.S | 0
configure | 52 +-
docs/about/deprecated.rst | 29 -
docs/about/removed-features.rst | 6 +
docs/devel/tcg-ops.rst | 32 +-
host/include/i386/host/bufferiszero.c.inc | 125 -
host/include/x86_64/host/bufferiszero.c.inc | 121 +-
meson.build | 105 +-
target/i386/tcg/emit.c.inc | 39 +-
target/riscv/insn_trans/trans_rvv.c.inc | 56 +-
tcg/arm/tcg-target-opc.h.inc | 16 -
tcg/arm/tcg-target.c.inc | 3489 --------------------
tcg/loongarch64/tcg-target.c.inc | 4 +-
tcg/{mips => mips64}/tcg-target-opc.h.inc | 0
tcg/{mips => mips64}/tcg-target.c.inc | 0
tcg/{ppc => ppc64}/tcg-target-opc.h.inc | 0
tcg/{ppc => ppc64}/tcg-target.c.inc | 2 +-
tcg/{riscv => riscv64}/tcg-target-opc.h.inc | 0
tcg/{riscv => riscv64}/tcg-target.c.inc | 4 +-
tcg/tci/tcg-target.c.inc | 84 +-
tcg/{i386 => x86_64}/tcg-target-opc.h.inc | 0
tcg/{i386 => x86_64}/tcg-target.c.inc | 552 +---
tests/docker/dockerfiles/emsdk-wasm-cross.docker | 15 +-
util/meson.build | 4 -
154 files changed, 1169 insertions(+), 8460 deletions(-)
delete mode 100644 host/include/i386/host/cpuinfo.h
delete mode 100644 host/include/i386/host/crypto/aes-round.h
delete mode 100644 host/include/i386/host/crypto/clmul.h
delete mode 100644 host/include/ppc/host/cpuinfo.h
delete mode 100644 host/include/ppc/host/crypto/aes-round.h
rename host/include/{riscv => riscv64}/host/cpuinfo.h (100%)
delete mode 100644 include/qemu/stats64.h
rename tcg/ppc/tcg-target-reg-bits.h => include/tcg/target-reg-bits.h (71%)
delete mode 100644 linux-user/include/host/arm/host-signal.h
delete mode 100644 linux-user/include/host/i386/host-signal.h
rename linux-user/include/host/{mips => mips64}/host-signal.h (100%)
delete mode 100644 linux-user/include/host/ppc/host-signal.h
rename linux-user/include/host/{riscv => riscv64}/host-signal.h (100%)
delete mode 100644 tcg/aarch64/tcg-target-reg-bits.h
delete mode 100644 tcg/arm/tcg-target-con-set.h
delete mode 100644 tcg/arm/tcg-target-con-str.h
delete mode 100644 tcg/arm/tcg-target-has.h
delete mode 100644 tcg/arm/tcg-target-mo.h
delete mode 100644 tcg/arm/tcg-target-reg-bits.h
delete mode 100644 tcg/arm/tcg-target.h
delete mode 100644 tcg/i386/tcg-target-reg-bits.h
delete mode 100644 tcg/loongarch64/tcg-target-reg-bits.h
delete mode 100644 tcg/mips/tcg-target-reg-bits.h
rename tcg/{mips => mips64}/tcg-target-con-set.h (100%)
rename tcg/{mips => mips64}/tcg-target-con-str.h (100%)
rename tcg/{mips => mips64}/tcg-target-has.h (100%)
rename tcg/{mips => mips64}/tcg-target-mo.h (100%)
rename tcg/{mips => mips64}/tcg-target.h (100%)
rename tcg/{ppc => ppc64}/tcg-target-con-set.h (100%)
rename tcg/{ppc => ppc64}/tcg-target-con-str.h (100%)
rename tcg/{ppc => ppc64}/tcg-target-has.h (100%)
rename tcg/{ppc => ppc64}/tcg-target-mo.h (100%)
rename tcg/{ppc => ppc64}/tcg-target.h (100%)
delete mode 100644 tcg/riscv/tcg-target-reg-bits.h
rename tcg/{riscv => riscv64}/tcg-target-con-set.h (100%)
rename tcg/{riscv => riscv64}/tcg-target-con-str.h (100%)
rename tcg/{riscv => riscv64}/tcg-target-has.h (100%)
rename tcg/{riscv => riscv64}/tcg-target-mo.h (100%)
rename tcg/{riscv => riscv64}/tcg-target.h (100%)
delete mode 100644 tcg/s390x/tcg-target-reg-bits.h
delete mode 100644 tcg/sparc64/tcg-target-reg-bits.h
delete mode 100644 tcg/tci/tcg-target-reg-bits.h
rename tcg/{i386 => x86_64}/tcg-target-con-set.h (100%)
rename tcg/{i386 => x86_64}/tcg-target-con-str.h (100%)
rename tcg/{i386 => x86_64}/tcg-target-has.h (92%)
rename tcg/{i386 => x86_64}/tcg-target-mo.h (100%)
rename tcg/{i386 => x86_64}/tcg-target.h (86%)
delete mode 100644 util/atomic64.c
delete mode 100644 util/stats64.c
delete mode 100644 common-user/host/arm/safe-syscall.inc.S
delete mode 100644 common-user/host/i386/safe-syscall.inc.S
rename common-user/host/{mips => mips64}/safe-syscall.inc.S (100%)
delete mode 100644 common-user/host/ppc/safe-syscall.inc.S
rename common-user/host/{riscv => riscv64}/safe-syscall.inc.S (100%)
delete mode 100644 host/include/i386/host/bufferiszero.c.inc
delete mode 100644 tcg/arm/tcg-target-opc.h.inc
delete mode 100644 tcg/arm/tcg-target.c.inc
rename tcg/{mips => mips64}/tcg-target-opc.h.inc (100%)
rename tcg/{mips => mips64}/tcg-target.c.inc (100%)
rename tcg/{ppc => ppc64}/tcg-target-opc.h.inc (100%)
rename tcg/{ppc => ppc64}/tcg-target.c.inc (99%)
rename tcg/{riscv => riscv64}/tcg-target-opc.h.inc (100%)
rename tcg/{riscv => riscv64}/tcg-target.c.inc (99%)
rename tcg/{i386 => x86_64}/tcg-target-opc.h.inc (100%)
rename tcg/{i386 => x86_64}/tcg-target.c.inc (89%)
^ permalink raw reply [flat|nested] 57+ messages in thread
* [PULL 01/54] gitlab-ci: Drop build-wasm32-32bit
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 02/54] tests/docker/dockerfiles: Drop wasm32 from emsdk-wasm-cross.docker Richard Henderson
` (53 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Kohei Tokunaga, Philippe Mathieu-Daudé
Drop the wasm32 build and container jobs.
Reviewed-by: Kohei Tokunaga <ktokunaga.mail@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
.gitlab-ci.d/buildtest.yml | 9 ---------
.gitlab-ci.d/container-cross.yml | 7 -------
.gitlab-ci.d/containers.yml | 1 -
3 files changed, 17 deletions(-)
diff --git a/.gitlab-ci.d/buildtest.yml b/.gitlab-ci.d/buildtest.yml
index ea0f5bb057..e9b5b05e6e 100644
--- a/.gitlab-ci.d/buildtest.yml
+++ b/.gitlab-ci.d/buildtest.yml
@@ -785,15 +785,6 @@ coverity:
# Always manual on forks even if $QEMU_CI == "2"
- when: manual
-build-wasm32-32bit:
- extends: .wasm_build_job_template
- timeout: 2h
- needs:
- - job: wasm32-emsdk-cross-container
- variables:
- IMAGE: emsdk-wasm32-cross
- CONFIGURE_ARGS: --static --cpu=wasm32 --disable-tools --enable-debug --enable-tcg-interpreter
-
build-wasm64-64bit:
extends: .wasm_build_job_template
timeout: 2h
diff --git a/.gitlab-ci.d/container-cross.yml b/.gitlab-ci.d/container-cross.yml
index 7022015e95..6bdd482b80 100644
--- a/.gitlab-ci.d/container-cross.yml
+++ b/.gitlab-ci.d/container-cross.yml
@@ -86,13 +86,6 @@ win64-fedora-cross-container:
variables:
NAME: fedora-win64-cross
-wasm32-emsdk-cross-container:
- extends: .container_job_template
- variables:
- NAME: emsdk-wasm32-cross
- BUILD_ARGS: --build-arg TARGET_CPU=wasm32
- DOCKERFILE: emsdk-wasm-cross
-
wasm64-emsdk-cross-container:
extends: .container_job_template
variables:
diff --git a/.gitlab-ci.d/containers.yml b/.gitlab-ci.d/containers.yml
index dde9a3f840..9b6d33ac13 100644
--- a/.gitlab-ci.d/containers.yml
+++ b/.gitlab-ci.d/containers.yml
@@ -58,7 +58,6 @@ weekly-container-builds:
- tricore-debian-cross-container
- xtensa-debian-cross-container
- win64-fedora-cross-container
- - wasm32-emsdk-cross-container
- wasm64-emsdk-cross-container
# containers
- amd64-alpine-container
--
2.43.0
* [PULL 02/54] tests/docker/dockerfiles: Drop wasm32 from emsdk-wasm-cross.docker
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
2026-01-18 22:03 ` [PULL 01/54] gitlab-ci: Drop build-wasm32-32bit Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 03/54] gitlab: Remove 32-bit host testing Richard Henderson
` (52 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Kohei Tokunaga
We will no longer build wasm32, so drop the docker config.
Streamline the dockerfile to hardcode TARGET_CPU as wasm64.
Reviewed-by: Kohei Tokunaga <ktokunaga.mail@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
.gitlab-ci.d/container-cross.yml | 1 -
tests/docker/dockerfiles/emsdk-wasm-cross.docker | 15 ++++-----------
2 files changed, 4 insertions(+), 12 deletions(-)
diff --git a/.gitlab-ci.d/container-cross.yml b/.gitlab-ci.d/container-cross.yml
index 6bdd482b80..b376c837dc 100644
--- a/.gitlab-ci.d/container-cross.yml
+++ b/.gitlab-ci.d/container-cross.yml
@@ -90,5 +90,4 @@ wasm64-emsdk-cross-container:
extends: .container_job_template
variables:
NAME: emsdk-wasm64-cross
- BUILD_ARGS: --build-arg TARGET_CPU=wasm64
DOCKERFILE: emsdk-wasm-cross
diff --git a/tests/docker/dockerfiles/emsdk-wasm-cross.docker b/tests/docker/dockerfiles/emsdk-wasm-cross.docker
index ecd5a02903..8a924816f9 100644
--- a/tests/docker/dockerfiles/emsdk-wasm-cross.docker
+++ b/tests/docker/dockerfiles/emsdk-wasm-cross.docker
@@ -7,7 +7,6 @@ ARG GLIB_VERSION=${GLIB_MINOR_VERSION}.0
ARG PIXMAN_VERSION=0.44.2
ARG FFI_VERSION=v3.5.2
ARG MESON_VERSION=1.5.0
-ARG TARGET_CPU=wasm32
FROM docker.io/emscripten/emsdk:$EMSDK_VERSION_QEMU AS build-base-common
ARG MESON_VERSION
@@ -31,21 +30,16 @@ RUN mkdir /build
WORKDIR /build
RUN mkdir -p $TARGET
-FROM build-base-common AS build-base-wasm32
-
-FROM build-base-common AS build-base-wasm64
+FROM build-base-common AS build-base
ENV CFLAGS="$CFLAGS -sMEMORY64=1"
ENV CXXFLAGS="$CXXFLAGS -sMEMORY64=1"
ENV LDFLAGS="$LDFLAGS -sMEMORY64=1"
-
-FROM build-base-${TARGET_CPU} AS build-base
-ARG TARGET_CPU
RUN <<EOF
cat <<EOT > /cross.meson
[host_machine]
system = 'emscripten'
-cpu_family = '${TARGET_CPU}'
-cpu = '${TARGET_CPU}'
+cpu_family = 'wasm64'
+cpu = 'wasm64'
endian = 'little'
[binaries]
@@ -67,14 +61,13 @@ RUN emconfigure ./configure --prefix=$TARGET --static
RUN emmake make install -j$(nproc)
FROM build-base AS libffi-dev
-ARG TARGET_CPU
ARG FFI_VERSION
RUN mkdir -p /libffi
RUN git clone https://github.com/libffi/libffi /libffi
WORKDIR /libffi
RUN git checkout $FFI_VERSION
RUN autoreconf -fiv
-RUN emconfigure ./configure --host=${TARGET_CPU}-unknown-linux \
+RUN emconfigure ./configure --host=wasm64-unknown-linux \
--prefix=$TARGET --enable-static \
--disable-shared --disable-dependency-tracking \
--disable-builddir --disable-multi-os-directory \
--
2.43.0
* [PULL 03/54] gitlab: Remove 32-bit host testing
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
2026-01-18 22:03 ` [PULL 01/54] gitlab-ci: Drop build-wasm32-32bit Richard Henderson
2026-01-18 22:03 ` [PULL 02/54] tests/docker/dockerfiles: Drop wasm32 from emsdk-wasm-cross.docker Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 04/54] meson: Reject 32-bit hosts Richard Henderson
` (51 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé
These deprecated builds are about to be disabled at configure time.
Remove the CI testing of armhf and i686 in advance.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
.gitlab-ci.d/container-cross.yml | 12 ---------
.gitlab-ci.d/containers.yml | 2 --
.gitlab-ci.d/crossbuilds.yml | 45 --------------------------------
3 files changed, 59 deletions(-)
diff --git a/.gitlab-ci.d/container-cross.yml b/.gitlab-ci.d/container-cross.yml
index b376c837dc..d7ae57fb1f 100644
--- a/.gitlab-ci.d/container-cross.yml
+++ b/.gitlab-ci.d/container-cross.yml
@@ -22,12 +22,6 @@ arm64-debian-cross-container:
variables:
NAME: debian-arm64-cross
-armhf-debian-cross-container:
- extends: .container_job_template
- stage: containers
- variables:
- NAME: debian-armhf-cross
-
hexagon-cross-container:
extends: .container_job_template
stage: containers
@@ -40,12 +34,6 @@ loongarch-debian-cross-container:
variables:
NAME: debian-loongarch-cross
-i686-debian-cross-container:
- extends: .container_job_template
- stage: containers
- variables:
- NAME: debian-i686-cross
-
mips64el-debian-cross-container:
extends: .container_job_template
stage: containers
diff --git a/.gitlab-ci.d/containers.yml b/.gitlab-ci.d/containers.yml
index 9b6d33ac13..6aeccf8be0 100644
--- a/.gitlab-ci.d/containers.yml
+++ b/.gitlab-ci.d/containers.yml
@@ -47,10 +47,8 @@ weekly-container-builds:
- amd64-debian-user-cross-container
- amd64-debian-legacy-cross-container
- arm64-debian-cross-container
- - armhf-debian-cross-container
- hexagon-cross-container
- loongarch-debian-cross-container
- - i686-debian-cross-container
- mips64el-debian-cross-container
- ppc64el-debian-cross-container
- riscv64-debian-cross-container
diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index 99dfa7eea6..59ff8b1d87 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -1,13 +1,6 @@
include:
- local: '/.gitlab-ci.d/crossbuild-template.yml'
-cross-armhf-user:
- extends: .cross_user_build_job
- needs:
- - job: armhf-debian-cross-container
- variables:
- IMAGE: debian-armhf-cross
-
cross-arm64-system:
extends: .cross_system_build_job
needs:
@@ -30,44 +23,6 @@ cross-arm64-kvm-only:
IMAGE: debian-arm64-cross
EXTRA_CONFIGURE_OPTS: --disable-tcg --without-default-features
-cross-i686-system:
- extends:
- - .cross_system_build_job
- - .cross_test_artifacts
- needs:
- - job: i686-debian-cross-container
- variables:
- IMAGE: debian-i686-cross
- EXTRA_CONFIGURE_OPTS: --disable-kvm
- MAKE_CHECK_ARGS: check-qtest
-
-cross-i686-user:
- extends:
- - .cross_user_build_job
- - .cross_test_artifacts
- needs:
- - job: i686-debian-cross-container
- variables:
- IMAGE: debian-i686-cross
- MAKE_CHECK_ARGS: check
-
-cross-i686-tci:
- extends:
- - .cross_accel_build_job
- - .cross_test_artifacts
- timeout: 60m
- needs:
- - job: i686-debian-cross-container
- variables:
- IMAGE: debian-i686-cross
- ACCEL: tcg-interpreter
- EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,arm-softmmu,arm-linux-user,ppc-softmmu,ppc-linux-user --disable-plugins --disable-kvm
- # Force tests to run with reduced parallelism, to see whether this
- # reduces the flakiness of this CI job. The CI
- # environment by default shows us 8 CPUs and so we
- # would otherwise be using a parallelism of 9.
- MAKE_CHECK_ARGS: check check-tcg -j2
-
cross-mips64el-system:
extends: .cross_system_build_job
needs:
--
2.43.0
* [PULL 04/54] meson: Reject 32-bit hosts
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (2 preceding siblings ...)
2026-01-18 22:03 ` [PULL 03/54] gitlab: Remove 32-bit host testing Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 05/54] meson: Drop cpu == wasm32 tests Richard Henderson
` (50 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Thomas Huth, Philippe Mathieu-Daudé, Pierrick Bouvier
32-bit hosts have been deprecated since 10.0.
As the first step, reject any such host at configuration time.
Further patches will remove the dead code.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
docs/about/deprecated.rst | 29 -----------------------------
docs/about/removed-features.rst | 6 ++++++
meson.build | 17 ++++-------------
3 files changed, 10 insertions(+), 42 deletions(-)
diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
index 7abb3dab59..88efa3aa80 100644
--- a/docs/about/deprecated.rst
+++ b/docs/about/deprecated.rst
@@ -186,28 +186,6 @@ maintain our cross-compilation CI tests of the architecture. As we no longer
have CI coverage support may bitrot away before the deprecation process
completes.
-System emulation on 32-bit x86 hosts (since 8.0)
-''''''''''''''''''''''''''''''''''''''''''''''''
-
-Support for 32-bit x86 host deployments is increasingly uncommon in mainstream
-OS distributions given the widespread availability of 64-bit x86 hardware.
-The QEMU project no longer considers 32-bit x86 support for system emulation to
-be an effective use of its limited resources, and thus intends to discontinue
-it. Since all recent x86 hardware from the past >10 years is capable of the
-64-bit x86 extensions, a corresponding 64-bit OS should be used instead.
-
-TCG Plugin support not enabled by default on 32-bit hosts (since 9.2)
-'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
-
-While it is still possible to enable TCG plugin support for 32-bit
-hosts there are a number of potential pitfalls when instrumenting
-64-bit guests. The plugin APIs typically pass most addresses as
-uint64_t but practices like encoding that address in a host pointer
-for passing as user-data will lose data. As most software analysis
-benefits from having plenty of host memory it seems reasonable to
-encourage users to use 64 bit builds of QEMU for analysis work
-whatever targets they are instrumenting.
-
TCG Plugin support not enabled by default with TCI (since 9.2)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
@@ -216,13 +194,6 @@ is going to be so much slower it wouldn't make sense for any serious
instrumentation. Due to implementation differences there will also be
anomalies in things like memory instrumentation.
-32-bit host operating systems (since 10.0)
-''''''''''''''''''''''''''''''''''''''''''
-
-Keeping 32-bit host support alive is a substantial burden for the
-QEMU project. Thus QEMU will in future drop the support for all
-32-bit host systems.
-
System emulator CPUs
--------------------
diff --git a/docs/about/removed-features.rst b/docs/about/removed-features.rst
index e81d79da47..b0d7fa8813 100644
--- a/docs/about/removed-features.rst
+++ b/docs/about/removed-features.rst
@@ -572,6 +572,12 @@ like the ``akita`` or ``terrier``; it has been deprecated in the
kernel since 2001. None of the board types QEMU supports need
``param_struct`` support, so this option has been removed.
+32-bit host operating systems (removed in 11.0)
+'''''''''''''''''''''''''''''''''''''''''''''''
+
+Keeping 32-bit host support alive was a substantial burden for the
+QEMU project. Thus QEMU has dropped support for all 32-bit host systems.
+
User-mode emulator command line arguments
-----------------------------------------
diff --git a/meson.build b/meson.build
index 600c50007d..28f61be675 100644
--- a/meson.build
+++ b/meson.build
@@ -332,6 +332,10 @@ endif
# Compiler flags #
##################
+if cc.sizeof('void *') < 8
+ error('QEMU requires a 64-bit CPU host architecture')
+endif
+
foreach lang : all_languages
compiler = meson.get_compiler(lang)
if compiler.get_id() == 'gcc' and compiler.version().version_compare('>=7.4')
@@ -3247,9 +3251,6 @@ if host_os == 'windows'
endif
endif
-# Detect host pointer size for the target configuration loop.
-host_long_bits = cc.sizeof('void *') * 8
-
# Detect if ConvertStringToBSTR has been defined in _com_util namespace
if host_os == 'windows'
has_convert_string_to_bstr = cxx.links('''
@@ -3360,10 +3361,6 @@ foreach target : target_dirs
target_kconfig = []
foreach sym: accelerators
- # Disallow 64-bit on 32-bit emulation and virtualization
- if host_long_bits < config_target['TARGET_LONG_BITS'].to_int()
- continue
- endif
if sym == 'CONFIG_TCG' or target in accelerator_targets.get(sym, [])
config_target += { sym: 'y' }
config_all_accel += { sym: 'y' }
@@ -5036,12 +5033,6 @@ if host_arch == 'unknown'
message('configure has succeeded and you can continue to build, but')
message('QEMU will use a slow interpreter to emulate the target CPU.')
endif
-elif host_long_bits < 64
- message()
- warning('DEPRECATED HOST CPU')
- message()
- message('Support for 32-bit CPU host architecture ' + cpu + ' is going')
- message('to be dropped in a future QEMU release.')
elif host_arch == 'mips'
message()
warning('DEPRECATED HOST CPU')
--
2.43.0
* [PULL 05/54] meson: Drop cpu == wasm32 tests
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (3 preceding siblings ...)
2026-01-18 22:03 ` [PULL 04/54] meson: Reject 32-bit hosts Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 06/54] *: Remove arm host support Richard Henderson
` (49 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Kohei Tokunaga
The 32-bit wasm32 host is no longer supported.
Reviewed-by: Kohei Tokunaga <ktokunaga.mail@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
configure | 5 +----
meson.build | 6 +++---
2 files changed, 4 insertions(+), 7 deletions(-)
diff --git a/configure b/configure
index 326d27dab1..ba1b207b56 100755
--- a/configure
+++ b/configure
@@ -425,7 +425,7 @@ elif check_define __aarch64__ ; then
elif check_define __loongarch64 ; then
cpu="loongarch64"
elif check_define EMSCRIPTEN ; then
- error_exit "wasm32 or wasm64 must be specified to the cpu flag"
+ error_exit "wasm64 must be specified to the cpu flag"
else
# Using uname is really broken, but it is just a fallback for architectures
# that are going to use TCI anyway
@@ -523,9 +523,6 @@ case "$cpu" in
linux_arch=x86
CPU_CFLAGS="-m64"
;;
- wasm32)
- CPU_CFLAGS="-m32"
- ;;
wasm64)
CPU_CFLAGS="-m64 -sMEMORY64=$wasm64_memory64"
;;
diff --git a/meson.build b/meson.build
index 28f61be675..082c7a86ca 100644
--- a/meson.build
+++ b/meson.build
@@ -51,7 +51,7 @@ qapi_trace_events = []
bsd_oses = ['gnu/kfreebsd', 'freebsd', 'netbsd', 'openbsd', 'dragonfly', 'darwin']
supported_oses = ['windows', 'freebsd', 'netbsd', 'openbsd', 'darwin', 'sunos', 'linux', 'emscripten']
supported_cpus = ['ppc', 'ppc64', 's390x', 'riscv32', 'riscv64', 'x86', 'x86_64',
- 'arm', 'aarch64', 'loongarch64', 'mips64', 'sparc64', 'wasm32', 'wasm64']
+ 'arm', 'aarch64', 'loongarch64', 'mips64', 'sparc64', 'wasm64']
cpu = host_machine.cpu_family()
@@ -927,7 +927,7 @@ if have_tcg
if not get_option('tcg_interpreter')
error('Unsupported CPU @0@, try --enable-tcg-interpreter'.format(cpu))
endif
- elif host_arch == 'wasm32' or host_arch == 'wasm64'
+ elif host_arch == 'wasm64'
if not get_option('tcg_interpreter')
error('WebAssembly host requires --enable-tcg-interpreter')
endif
@@ -4248,7 +4248,7 @@ if have_rust
bindgen_args_common += ['--merge-extern-blocks']
endif
bindgen_c_args = []
- if host_arch == 'wasm32'
+ if host_arch == 'wasm64'
bindgen_c_args += ['-fvisibility=default']
endif
subdir('rust')
--
2.43.0
* [PULL 06/54] *: Remove arm host support
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (4 preceding siblings ...)
2026-01-18 22:03 ` [PULL 05/54] meson: Drop cpu == wasm32 tests Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 07/54] bsd-user: Fix __i386__ test for TARGET_HAS_STAT_TIME_T_EXT Richard Henderson
` (48 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Thomas Huth, Pierrick Bouvier
Remove tcg/arm.
Remove instances of __arm__, except from tests and imported headers.
Remove arm from supported_cpus.
Remove linux-user/include/host/arm.
Remove common-user/host/arm.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/qemu/osdep.h | 2 +-
linux-user/include/host/arm/host-signal.h | 43 -
tcg/arm/tcg-target-con-set.h | 47 -
tcg/arm/tcg-target-con-str.h | 26 -
tcg/arm/tcg-target-has.h | 73 -
tcg/arm/tcg-target-mo.h | 13 -
tcg/arm/tcg-target-reg-bits.h | 12 -
tcg/arm/tcg-target.h | 73 -
disas/disas-host.c | 3 -
hw/virtio/virtio-mem.c | 2 +-
linux-user/mmap.c | 2 +-
MAINTAINERS | 6 -
common-user/host/arm/safe-syscall.inc.S | 108 -
configure | 7 -
meson.build | 7 +-
tcg/arm/tcg-target-opc.h.inc | 16 -
tcg/arm/tcg-target.c.inc | 3489 ---------------------
17 files changed, 5 insertions(+), 3924 deletions(-)
delete mode 100644 linux-user/include/host/arm/host-signal.h
delete mode 100644 tcg/arm/tcg-target-con-set.h
delete mode 100644 tcg/arm/tcg-target-con-str.h
delete mode 100644 tcg/arm/tcg-target-has.h
delete mode 100644 tcg/arm/tcg-target-mo.h
delete mode 100644 tcg/arm/tcg-target-reg-bits.h
delete mode 100644 tcg/arm/tcg-target.h
delete mode 100644 common-user/host/arm/safe-syscall.inc.S
delete mode 100644 tcg/arm/tcg-target-opc.h.inc
delete mode 100644 tcg/arm/tcg-target.c.inc
diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
index 3cb45a1467..4cdeda0b9c 100644
--- a/include/qemu/osdep.h
+++ b/include/qemu/osdep.h
@@ -566,7 +566,7 @@ int madvise(char *, size_t, int);
#endif
#if defined(__linux__) && \
- (defined(__x86_64__) || defined(__arm__) || defined(__aarch64__) \
+ (defined(__x86_64__) || defined(__aarch64__) \
|| defined(__powerpc64__) || defined(__riscv))
/* Use 2 MiB alignment so transparent hugepages can be used by KVM.
Valgrind does not support alignments larger than 1 MiB,
diff --git a/linux-user/include/host/arm/host-signal.h b/linux-user/include/host/arm/host-signal.h
deleted file mode 100644
index faba496d24..0000000000
--- a/linux-user/include/host/arm/host-signal.h
+++ /dev/null
@@ -1,43 +0,0 @@
-/*
- * host-signal.h: signal info dependent on the host architecture
- *
- * Copyright (c) 2003-2005 Fabrice Bellard
- * Copyright (c) 2021 Linaro Limited
- *
- * This work is licensed under the terms of the GNU LGPL, version 2.1 or later.
- * See the COPYING file in the top-level directory.
- */
-
-#ifndef ARM_HOST_SIGNAL_H
-#define ARM_HOST_SIGNAL_H
-
-/* The third argument to a SA_SIGINFO handler is ucontext_t. */
-typedef ucontext_t host_sigcontext;
-
-static inline uintptr_t host_signal_pc(host_sigcontext *uc)
-{
- return uc->uc_mcontext.arm_pc;
-}
-
-static inline void host_signal_set_pc(host_sigcontext *uc, uintptr_t pc)
-{
- uc->uc_mcontext.arm_pc = pc;
-}
-
-static inline void *host_signal_mask(host_sigcontext *uc)
-{
- return &uc->uc_sigmask;
-}
-
-static inline bool host_signal_write(siginfo_t *info, host_sigcontext *uc)
-{
- /*
- * In the FSR, bit 11 is WnR, assuming a v6 or
- * later processor. On v5 we will always report
- * this as a read, which will fail later.
- */
- uint32_t fsr = uc->uc_mcontext.error_code;
- return extract32(fsr, 11, 1);
-}
-
-#endif
diff --git a/tcg/arm/tcg-target-con-set.h b/tcg/arm/tcg-target-con-set.h
deleted file mode 100644
index 16b1193228..0000000000
--- a/tcg/arm/tcg-target-con-set.h
+++ /dev/null
@@ -1,47 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Define Arm target-specific constraint sets.
- * Copyright (c) 2021 Linaro
- */
-
-/*
- * C_On_Im(...) defines a constraint set with <n> outputs and <m> inputs.
- * Each operand should be a sequence of constraint letters as defined by
- * tcg-target-con-str.h; the constraint combination is inclusive or.
- */
-C_O0_I1(r)
-C_O0_I2(r, r)
-C_O0_I2(r, rIN)
-C_O0_I2(q, q)
-C_O0_I2(w, r)
-C_O0_I3(q, q, q)
-C_O0_I3(Q, p, q)
-C_O0_I4(r, r, rI, rI)
-C_O0_I4(Q, p, q, q)
-C_O1_I1(r, q)
-C_O1_I1(r, r)
-C_O1_I1(w, r)
-C_O1_I1(w, w)
-C_O1_I1(w, wr)
-C_O1_I2(r, 0, rZ)
-C_O1_I2(r, q, q)
-C_O1_I2(r, r, r)
-C_O1_I2(r, r, rI)
-C_O1_I2(r, r, rIK)
-C_O1_I2(r, r, rIN)
-C_O1_I2(r, r, ri)
-C_O1_I2(r, rI, r)
-C_O1_I2(r, rI, rIK)
-C_O1_I2(r, rI, rIN)
-C_O1_I2(r, rZ, rZ)
-C_O1_I2(w, 0, w)
-C_O1_I2(w, w, w)
-C_O1_I2(w, w, wO)
-C_O1_I2(w, w, wV)
-C_O1_I2(w, w, wZ)
-C_O1_I3(w, w, w, w)
-C_O1_I4(r, r, r, rI, rI)
-C_O1_I4(r, r, rIN, rIK, 0)
-C_O2_I1(e, p, q)
-C_O2_I2(e, p, q, q)
-C_O2_I2(r, r, r, r)
diff --git a/tcg/arm/tcg-target-con-str.h b/tcg/arm/tcg-target-con-str.h
deleted file mode 100644
index f83f1d3919..0000000000
--- a/tcg/arm/tcg-target-con-str.h
+++ /dev/null
@@ -1,26 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Define Arm target-specific operand constraints.
- * Copyright (c) 2021 Linaro
- */
-
-/*
- * Define constraint letters for register sets:
- * REGS(letter, register_mask)
- */
-REGS('e', ALL_GENERAL_REGS & 0x5555) /* even regs */
-REGS('r', ALL_GENERAL_REGS)
-REGS('q', ALL_QLDST_REGS)
-REGS('Q', ALL_QLDST_REGS & 0x5555) /* even qldst */
-REGS('w', ALL_VECTOR_REGS)
-
-/*
- * Define constraint letters for constants:
- * CONST(letter, TCG_CT_CONST_* bit set)
- */
-CONST('I', TCG_CT_CONST_ARM)
-CONST('K', TCG_CT_CONST_INV)
-CONST('N', TCG_CT_CONST_NEG)
-CONST('O', TCG_CT_CONST_ORRI)
-CONST('V', TCG_CT_CONST_ANDI)
-CONST('Z', TCG_CT_CONST_ZERO)
diff --git a/tcg/arm/tcg-target-has.h b/tcg/arm/tcg-target-has.h
deleted file mode 100644
index 3bbbde5d59..0000000000
--- a/tcg/arm/tcg-target-has.h
+++ /dev/null
@@ -1,73 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Define target-specific opcode support
- * Copyright (c) 2008 Fabrice Bellard
- * Copyright (c) 2008 Andrzej Zaborowski
- */
-
-#ifndef TCG_TARGET_HAS_H
-#define TCG_TARGET_HAS_H
-
-extern int arm_arch;
-
-#define use_armv7_instructions (__ARM_ARCH >= 7 || arm_arch >= 7)
-
-#ifdef __ARM_ARCH_EXT_IDIV__
-#define use_idiv_instructions 1
-#else
-extern bool use_idiv_instructions;
-#endif
-#ifdef __ARM_NEON__
-#define use_neon_instructions 1
-#else
-extern bool use_neon_instructions;
-#endif
-
-/* optional instructions */
-#define TCG_TARGET_HAS_qemu_ldst_i128 0
-#define TCG_TARGET_HAS_tst 1
-
-#define TCG_TARGET_HAS_v64 use_neon_instructions
-#define TCG_TARGET_HAS_v128 use_neon_instructions
-#define TCG_TARGET_HAS_v256 0
-
-#define TCG_TARGET_HAS_andc_vec 1
-#define TCG_TARGET_HAS_orc_vec 1
-#define TCG_TARGET_HAS_nand_vec 0
-#define TCG_TARGET_HAS_nor_vec 0
-#define TCG_TARGET_HAS_eqv_vec 0
-#define TCG_TARGET_HAS_not_vec 1
-#define TCG_TARGET_HAS_neg_vec 1
-#define TCG_TARGET_HAS_abs_vec 1
-#define TCG_TARGET_HAS_roti_vec 0
-#define TCG_TARGET_HAS_rots_vec 0
-#define TCG_TARGET_HAS_rotv_vec 0
-#define TCG_TARGET_HAS_shi_vec 1
-#define TCG_TARGET_HAS_shs_vec 0
-#define TCG_TARGET_HAS_shv_vec 0
-#define TCG_TARGET_HAS_mul_vec 1
-#define TCG_TARGET_HAS_sat_vec 1
-#define TCG_TARGET_HAS_minmax_vec 1
-#define TCG_TARGET_HAS_bitsel_vec 1
-#define TCG_TARGET_HAS_cmpsel_vec 0
-#define TCG_TARGET_HAS_tst_vec 1
-
-static inline bool
-tcg_target_extract_valid(TCGType type, unsigned ofs, unsigned len)
-{
- if (use_armv7_instructions) {
- return true; /* SBFX or UBFX */
- }
- switch (len) {
- case 8: /* SXTB or UXTB */
- case 16: /* SXTH or UXTH */
- return (ofs % 8) == 0;
- }
- return false;
-}
-
-#define TCG_TARGET_extract_valid tcg_target_extract_valid
-#define TCG_TARGET_sextract_valid tcg_target_extract_valid
-#define TCG_TARGET_deposit_valid(type, ofs, len) use_armv7_instructions
-
-#endif
diff --git a/tcg/arm/tcg-target-mo.h b/tcg/arm/tcg-target-mo.h
deleted file mode 100644
index 12542dfd1c..0000000000
--- a/tcg/arm/tcg-target-mo.h
+++ /dev/null
@@ -1,13 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Define target-specific memory model
- * Copyright (c) 2008 Fabrice Bellard
- * Copyright (c) 2008 Andrzej Zaborowski
- */
-
-#ifndef TCG_TARGET_MO_H
-#define TCG_TARGET_MO_H
-
-#define TCG_TARGET_DEFAULT_MO 0
-
-#endif
diff --git a/tcg/arm/tcg-target-reg-bits.h b/tcg/arm/tcg-target-reg-bits.h
deleted file mode 100644
index 23b7730a8d..0000000000
--- a/tcg/arm/tcg-target-reg-bits.h
+++ /dev/null
@@ -1,12 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Define target-specific register size
- * Copyright (c) 2023 Linaro
- */
-
-#ifndef TCG_TARGET_REG_BITS_H
-#define TCG_TARGET_REG_BITS_H
-
-#define TCG_TARGET_REG_BITS 32
-
-#endif
diff --git a/tcg/arm/tcg-target.h b/tcg/arm/tcg-target.h
deleted file mode 100644
index 4f9f877121..0000000000
--- a/tcg/arm/tcg-target.h
+++ /dev/null
@@ -1,73 +0,0 @@
-/*
- * Tiny Code Generator for QEMU
- *
- * Copyright (c) 2008 Fabrice Bellard
- * Copyright (c) 2008 Andrzej Zaborowski
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and associated documentation files (the "Software"), to deal
- * in the Software without restriction, including without limitation the rights
- * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- * copies of the Software, and to permit persons to whom the Software is
- * furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
- * THE SOFTWARE.
- */
-
-#ifndef ARM_TCG_TARGET_H
-#define ARM_TCG_TARGET_H
-
-#define TCG_TARGET_INSN_UNIT_SIZE 4
-#define MAX_CODE_GEN_BUFFER_SIZE UINT32_MAX
-
-typedef enum {
- TCG_REG_R0 = 0,
- TCG_REG_R1,
- TCG_REG_R2,
- TCG_REG_R3,
- TCG_REG_R4,
- TCG_REG_R5,
- TCG_REG_R6,
- TCG_REG_R7,
- TCG_REG_R8,
- TCG_REG_R9,
- TCG_REG_R10,
- TCG_REG_R11,
- TCG_REG_R12,
- TCG_REG_R13,
- TCG_REG_R14,
- TCG_REG_PC,
-
- TCG_REG_Q0,
- TCG_REG_Q1,
- TCG_REG_Q2,
- TCG_REG_Q3,
- TCG_REG_Q4,
- TCG_REG_Q5,
- TCG_REG_Q6,
- TCG_REG_Q7,
- TCG_REG_Q8,
- TCG_REG_Q9,
- TCG_REG_Q10,
- TCG_REG_Q11,
- TCG_REG_Q12,
- TCG_REG_Q13,
- TCG_REG_Q14,
- TCG_REG_Q15,
-
- TCG_AREG0 = TCG_REG_R6,
- TCG_REG_CALL_STACK = TCG_REG_R13,
-} TCGReg;
-
-#define TCG_TARGET_NB_REGS 32
-
-#endif
diff --git a/disas/disas-host.c b/disas/disas-host.c
index 4b06f41fa6..88e7d8800c 100644
--- a/disas/disas-host.c
+++ b/disas/disas-host.c
@@ -74,9 +74,6 @@ static void initialize_debug_host(CPUDebug *s)
#elif defined(__sparc__)
s->info.print_insn = print_insn_sparc;
s->info.mach = bfd_mach_sparc_v9b;
-#elif defined(__arm__)
- /* TCG only generates code for arm mode. */
- s->info.cap_arch = CS_ARCH_ARM;
#elif defined(__MIPSEB__)
s->info.print_insn = print_insn_big_mips;
#elif defined(__MIPSEL__)
diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index 41de2ef5a0..c1e2defb68 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -60,7 +60,7 @@ static uint32_t virtio_mem_default_thp_size(void)
{
uint32_t default_thp_size = VIRTIO_MEM_MIN_BLOCK_SIZE;
-#if defined(__x86_64__) || defined(__arm__) || defined(__powerpc64__)
+#if defined(__x86_64__) || defined(__powerpc64__)
default_thp_size = 2 * MiB;
#elif defined(__aarch64__)
if (qemu_real_host_page_size() == 4 * KiB) {
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 4bcfaf7894..07175e11d5 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -1296,7 +1296,7 @@ static inline abi_ulong target_shmlba(CPUArchState *cpu_env)
}
#endif
-#if defined(__arm__) || defined(__mips__) || defined(__sparc__)
+#if defined(__mips__) || defined(__sparc__)
#define HOST_FORCE_SHMLBA 1
#else
#define HOST_FORCE_SHMLBA 0
diff --git a/MAINTAINERS b/MAINTAINERS
index de8246c3ff..1a6e5bbafe 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4057,12 +4057,6 @@ S: Maintained
L: qemu-arm@nongnu.org
F: tcg/aarch64/
-ARM TCG target
-M: Richard Henderson <richard.henderson@linaro.org>
-S: Maintained
-L: qemu-arm@nongnu.org
-F: tcg/arm/
-
i386 TCG target
M: Richard Henderson <richard.henderson@linaro.org>
S: Maintained
diff --git a/common-user/host/arm/safe-syscall.inc.S b/common-user/host/arm/safe-syscall.inc.S
deleted file mode 100644
index bbfb89634e..0000000000
--- a/common-user/host/arm/safe-syscall.inc.S
+++ /dev/null
@@ -1,108 +0,0 @@
-/*
- * safe-syscall.inc.S : host-specific assembly fragment
- * to handle signals occurring at the same time as system calls.
- * This is intended to be included by common-user/safe-syscall.S
- *
- * Written by Richard Henderson <rth@twiddle.net>
- * Copyright (C) 2016 Red Hat, Inc.
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
- * See the COPYING file in the top-level directory.
- */
-
- .global safe_syscall_base
- .global safe_syscall_start
- .global safe_syscall_end
- .type safe_syscall_base, %function
-
- .cfi_sections .debug_frame
-
- .text
- .syntax unified
- .arm
- .align 2
-
- /* This is the entry point for making a system call. The calling
- * convention here is that of a C varargs function with the
- * first argument an 'int *' to the signal_pending flag, the
- * second one the system call number (as a 'long'), and all further
- * arguments being syscall arguments (also 'long').
- */
-safe_syscall_base:
- .fnstart
- .cfi_startproc
- mov r12, sp /* save entry stack */
- push { r4, r5, r6, r7, r8, lr }
- .save { r4, r5, r6, r7, r8, lr }
- .cfi_adjust_cfa_offset 24
- .cfi_rel_offset r4, 0
- .cfi_rel_offset r5, 4
- .cfi_rel_offset r6, 8
- .cfi_rel_offset r7, 12
- .cfi_rel_offset r8, 16
- .cfi_rel_offset lr, 20
-
- /* The syscall calling convention isn't the same as the C one:
- * we enter with r0 == &signal_pending
- * r1 == syscall number
- * r2, r3, [sp+0] ... [sp+12] == syscall arguments
- * and return the result in r0
- * and the syscall instruction needs
- * r7 == syscall number
- * r0 ... r6 == syscall arguments
- * and returns the result in r0
- * Shuffle everything around appropriately.
- * Note the 16 bytes that we pushed to save registers.
- */
- mov r8, r0 /* copy signal_pending */
- mov r7, r1 /* syscall number */
- mov r0, r2 /* syscall args */
- mov r1, r3
- ldm r12, { r2, r3, r4, r5, r6 }
-
- /* This next sequence of code works in conjunction with the
- * rewind_if_safe_syscall_function(). If a signal is taken
- * and the interrupted PC is anywhere between 'safe_syscall_start'
- * and 'safe_syscall_end' then we rewind it to 'safe_syscall_start'.
- * The code sequence must therefore be able to cope with this, and
- * the syscall instruction must be the final one in the sequence.
- */
-safe_syscall_start:
- /* if signal_pending is non-zero, don't do the call */
- ldr r12, [r8] /* signal_pending */
- tst r12, r12
- bne 2f
- swi 0
-safe_syscall_end:
-
- /* code path for having successfully executed the syscall */
-#if defined(__linux__)
- /* Linux kernel returns (small) negative errno. */
- cmp r0, #-4096
- neghi r0, r0
- bhi 1f
-#elif defined(__FreeBSD__)
- /* FreeBSD kernel returns positive errno and C bit set. */
- bcs 1f
-#else
-#error "unsupported os"
-#endif
- pop { r4, r5, r6, r7, r8, pc }
-
- /* code path when we didn't execute the syscall */
-2: mov r0, #QEMU_ERESTARTSYS
-
- /* code path setting errno */
-1: pop { r4, r5, r6, r7, r8, lr }
- .cfi_adjust_cfa_offset -24
- .cfi_restore r4
- .cfi_restore r5
- .cfi_restore r6
- .cfi_restore r7
- .cfi_restore r8
- .cfi_restore lr
- b safe_syscall_set_errno_tail
-
- .fnend
- .cfi_endproc
- .size safe_syscall_base, .-safe_syscall_base
diff --git a/configure b/configure
index ba1b207b56..0742f1212d 100755
--- a/configure
+++ b/configure
@@ -418,8 +418,6 @@ elif check_define __riscv ; then
else
cpu="riscv32"
fi
-elif check_define __arm__ ; then
- cpu="arm"
elif check_define __aarch64__ ; then
cpu="aarch64"
elif check_define __loongarch64 ; then
@@ -451,11 +449,6 @@ case "$cpu" in
linux_arch=arm64
;;
- armv*b|armv*l|arm)
- cpu=arm
- host_arch=arm
- ;;
-
i386|i486|i586|i686)
cpu="i386"
host_arch=i386
diff --git a/meson.build b/meson.build
index 082c7a86ca..137b2dcdc7 100644
--- a/meson.build
+++ b/meson.build
@@ -51,7 +51,7 @@ qapi_trace_events = []
bsd_oses = ['gnu/kfreebsd', 'freebsd', 'netbsd', 'openbsd', 'dragonfly', 'darwin']
supported_oses = ['windows', 'freebsd', 'netbsd', 'openbsd', 'darwin', 'sunos', 'linux', 'emscripten']
supported_cpus = ['ppc', 'ppc64', 's390x', 'riscv32', 'riscv64', 'x86', 'x86_64',
- 'arm', 'aarch64', 'loongarch64', 'mips64', 'sparc64', 'wasm64']
+ 'aarch64', 'loongarch64', 'mips64', 'sparc64', 'wasm64']
cpu = host_machine.cpu_family()
@@ -304,9 +304,6 @@ if cpu == 'x86'
xen_targets = ['i386-softmmu']
elif cpu == 'x86_64'
xen_targets = ['i386-softmmu', 'x86_64-softmmu']
-elif cpu == 'arm'
- # i386 emulator provides xenpv machine type for multiple architectures
- xen_targets = ['i386-softmmu']
elif cpu == 'aarch64'
# i386 emulator provides xenpv machine type for multiple architectures
xen_targets = ['i386-softmmu', 'x86_64-softmmu', 'aarch64-softmmu']
@@ -3156,7 +3153,7 @@ endif
config_host_data.set('CONFIG_AVX2_OPT', have_avx2)
config_host_data.set('CONFIG_AVX512BW_OPT', have_avx512bw)
-# For both AArch64 and AArch32, detect if builtins are available.
+# For AArch64, detect if builtins are available.
config_host_data.set('CONFIG_ARM_AES_BUILTIN', cc.compiles('''
#include <arm_neon.h>
#ifndef __ARM_FEATURE_AES
diff --git a/tcg/arm/tcg-target-opc.h.inc b/tcg/arm/tcg-target-opc.h.inc
deleted file mode 100644
index 70394e0282..0000000000
--- a/tcg/arm/tcg-target-opc.h.inc
+++ /dev/null
@@ -1,16 +0,0 @@
-/*
- * Copyright (c) 2019 Linaro
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or
- * (at your option) any later version.
- *
- * See the COPYING file in the top-level directory for details.
- *
- * Target-specific opcodes for host vector expansion. These will be
- * emitted by tcg_expand_vec_op. For those familiar with GCC internals,
- * consider these to be UNSPEC with names.
- */
-
-DEF(arm_sli_vec, 1, 2, 1, TCG_OPF_VECTOR)
-DEF(arm_sshl_vec, 1, 2, 0, TCG_OPF_VECTOR)
-DEF(arm_ushl_vec, 1, 2, 0, TCG_OPF_VECTOR)
diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
deleted file mode 100644
index 87ca66bb02..0000000000
--- a/tcg/arm/tcg-target.c.inc
+++ /dev/null
@@ -1,3489 +0,0 @@
-/*
- * Tiny Code Generator for QEMU
- *
- * Copyright (c) 2008 Andrzej Zaborowski
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and associated documentation files (the "Software"), to deal
- * in the Software without restriction, including without limitation the rights
- * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- * copies of the Software, and to permit persons to whom the Software is
- * furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
- * THE SOFTWARE.
- */
-
-#include "elf.h"
-
-int arm_arch = __ARM_ARCH;
-
-#ifndef use_idiv_instructions
-bool use_idiv_instructions;
-#endif
-#ifndef use_neon_instructions
-bool use_neon_instructions;
-#endif
-
-/* Used for function call generation. */
-#define TCG_TARGET_STACK_ALIGN 8
-#define TCG_TARGET_CALL_STACK_OFFSET 0
-#define TCG_TARGET_CALL_ARG_I32 TCG_CALL_ARG_NORMAL
-#define TCG_TARGET_CALL_ARG_I64 TCG_CALL_ARG_EVEN
-#define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_EVEN
-#define TCG_TARGET_CALL_RET_I128 TCG_CALL_RET_BY_REF
-
-#ifdef CONFIG_DEBUG_TCG
-static const char * const tcg_target_reg_names[TCG_TARGET_NB_REGS] = {
- "%r0", "%r1", "%r2", "%r3", "%r4", "%r5", "%r6", "%r7",
- "%r8", "%r9", "%r10", "%r11", "%r12", "%sp", "%r14", "%pc",
- "%q0", "%q1", "%q2", "%q3", "%q4", "%q5", "%q6", "%q7",
- "%q8", "%q9", "%q10", "%q11", "%q12", "%q13", "%q14", "%q15",
-};
-#endif
-
-static const int tcg_target_reg_alloc_order[] = {
- TCG_REG_R4,
- TCG_REG_R5,
- TCG_REG_R6,
- TCG_REG_R7,
- TCG_REG_R8,
- TCG_REG_R9,
- TCG_REG_R10,
- TCG_REG_R11,
- TCG_REG_R13,
- TCG_REG_R0,
- TCG_REG_R1,
- TCG_REG_R2,
- TCG_REG_R3,
- TCG_REG_R12,
- TCG_REG_R14,
-
- TCG_REG_Q0,
- TCG_REG_Q1,
- TCG_REG_Q2,
- TCG_REG_Q3,
- /* Q4 - Q7 are call-saved, and skipped. */
- TCG_REG_Q8,
- TCG_REG_Q9,
- TCG_REG_Q10,
- TCG_REG_Q11,
- TCG_REG_Q12,
- TCG_REG_Q13,
- TCG_REG_Q14,
- TCG_REG_Q15,
-};
-
-static const int tcg_target_call_iarg_regs[4] = {
- TCG_REG_R0, TCG_REG_R1, TCG_REG_R2, TCG_REG_R3
-};
-
-static TCGReg tcg_target_call_oarg_reg(TCGCallReturnKind kind, int slot)
-{
- tcg_debug_assert(kind == TCG_CALL_RET_NORMAL);
- tcg_debug_assert(slot >= 0 && slot <= 3);
- return TCG_REG_R0 + slot;
-}
-
-#define TCG_REG_TMP TCG_REG_R12
-#define TCG_VEC_TMP TCG_REG_Q15
-#define TCG_REG_GUEST_BASE TCG_REG_R11
-
-typedef enum {
- COND_EQ = 0x0,
- COND_NE = 0x1,
- COND_CS = 0x2, /* Unsigned greater or equal */
- COND_CC = 0x3, /* Unsigned less than */
- COND_MI = 0x4, /* Negative */
- COND_PL = 0x5, /* Zero or greater */
- COND_VS = 0x6, /* Overflow */
- COND_VC = 0x7, /* No overflow */
- COND_HI = 0x8, /* Unsigned greater than */
- COND_LS = 0x9, /* Unsigned less or equal */
- COND_GE = 0xa,
- COND_LT = 0xb,
- COND_GT = 0xc,
- COND_LE = 0xd,
- COND_AL = 0xe,
-} ARMCond;
-
-#define TO_CPSR (1 << 20)
-
-#define SHIFT_IMM_LSL(im) (((im) << 7) | 0x00)
-#define SHIFT_IMM_LSR(im) (((im) << 7) | 0x20)
-#define SHIFT_IMM_ASR(im) (((im) << 7) | 0x40)
-#define SHIFT_IMM_ROR(im) (((im) << 7) | 0x60)
-#define SHIFT_REG_LSL(rs) (((rs) << 8) | 0x10)
-#define SHIFT_REG_LSR(rs) (((rs) << 8) | 0x30)
-#define SHIFT_REG_ASR(rs) (((rs) << 8) | 0x50)
-#define SHIFT_REG_ROR(rs) (((rs) << 8) | 0x70)
-
-typedef enum {
- ARITH_AND = 0x0 << 21,
- ARITH_EOR = 0x1 << 21,
- ARITH_SUB = 0x2 << 21,
- ARITH_RSB = 0x3 << 21,
- ARITH_ADD = 0x4 << 21,
- ARITH_ADC = 0x5 << 21,
- ARITH_SBC = 0x6 << 21,
- ARITH_RSC = 0x7 << 21,
- ARITH_TST = 0x8 << 21 | TO_CPSR,
- ARITH_CMP = 0xa << 21 | TO_CPSR,
- ARITH_CMN = 0xb << 21 | TO_CPSR,
- ARITH_ORR = 0xc << 21,
- ARITH_MOV = 0xd << 21,
- ARITH_BIC = 0xe << 21,
- ARITH_MVN = 0xf << 21,
-
- INSN_B = 0x0a000000,
-
- INSN_CLZ = 0x016f0f10,
- INSN_RBIT = 0x06ff0f30,
-
- INSN_LDMIA = 0x08b00000,
- INSN_STMDB = 0x09200000,
-
- INSN_LDR_IMM = 0x04100000,
- INSN_LDR_REG = 0x06100000,
- INSN_STR_IMM = 0x04000000,
- INSN_STR_REG = 0x06000000,
-
- INSN_LDRH_IMM = 0x005000b0,
- INSN_LDRH_REG = 0x001000b0,
- INSN_LDRSH_IMM = 0x005000f0,
- INSN_LDRSH_REG = 0x001000f0,
- INSN_STRH_IMM = 0x004000b0,
- INSN_STRH_REG = 0x000000b0,
-
- INSN_LDRB_IMM = 0x04500000,
- INSN_LDRB_REG = 0x06500000,
- INSN_LDRSB_IMM = 0x005000d0,
- INSN_LDRSB_REG = 0x001000d0,
- INSN_STRB_IMM = 0x04400000,
- INSN_STRB_REG = 0x06400000,
-
- INSN_LDRD_IMM = 0x004000d0,
- INSN_LDRD_REG = 0x000000d0,
- INSN_STRD_IMM = 0x004000f0,
- INSN_STRD_REG = 0x000000f0,
-
- INSN_DMB_ISH = 0xf57ff05b,
- INSN_DMB_MCR = 0xee070fba,
-
- INSN_MSRI_CPSR = 0x0360f000,
-
- /* Architected nop introduced in v6k. */
- /* ??? This is an MSR (imm) 0,0,0 insn. Anyone know if this
- also Just So Happened to do nothing on pre-v6k so that we
- don't need to conditionalize it? */
- INSN_NOP_v6k = 0xe320f000,
- /* Otherwise the assembler uses mov r0,r0 */
- INSN_NOP_v4 = (COND_AL << 28) | ARITH_MOV,
-
- INSN_VADD = 0xf2000800,
- INSN_VAND = 0xf2000110,
- INSN_VBIC = 0xf2100110,
- INSN_VEOR = 0xf3000110,
- INSN_VORN = 0xf2300110,
- INSN_VORR = 0xf2200110,
- INSN_VSUB = 0xf3000800,
- INSN_VMUL = 0xf2000910,
- INSN_VQADD = 0xf2000010,
- INSN_VQADD_U = 0xf3000010,
- INSN_VQSUB = 0xf2000210,
- INSN_VQSUB_U = 0xf3000210,
- INSN_VMAX = 0xf2000600,
- INSN_VMAX_U = 0xf3000600,
- INSN_VMIN = 0xf2000610,
- INSN_VMIN_U = 0xf3000610,
-
- INSN_VABS = 0xf3b10300,
- INSN_VMVN = 0xf3b00580,
- INSN_VNEG = 0xf3b10380,
-
- INSN_VCEQ0 = 0xf3b10100,
- INSN_VCGT0 = 0xf3b10000,
- INSN_VCGE0 = 0xf3b10080,
- INSN_VCLE0 = 0xf3b10180,
- INSN_VCLT0 = 0xf3b10200,
-
- INSN_VCEQ = 0xf3000810,
- INSN_VCGE = 0xf2000310,
- INSN_VCGT = 0xf2000300,
- INSN_VCGE_U = 0xf3000310,
- INSN_VCGT_U = 0xf3000300,
-
- INSN_VSHLI = 0xf2800510, /* VSHL (immediate) */
- INSN_VSARI = 0xf2800010, /* VSHR.S */
- INSN_VSHRI = 0xf3800010, /* VSHR.U */
- INSN_VSLI = 0xf3800510,
- INSN_VSHL_S = 0xf2000400, /* VSHL.S (register) */
- INSN_VSHL_U = 0xf3000400, /* VSHL.U (register) */
-
- INSN_VBSL = 0xf3100110,
- INSN_VBIT = 0xf3200110,
- INSN_VBIF = 0xf3300110,
-
- INSN_VTST = 0xf2000810,
-
- INSN_VDUP_G = 0xee800b10, /* VDUP (ARM core register) */
- INSN_VDUP_S = 0xf3b00c00, /* VDUP (scalar) */
- INSN_VLDR_D = 0xed100b00, /* VLDR.64 */
- INSN_VLD1 = 0xf4200000, /* VLD1 (multiple single elements) */
- INSN_VLD1R = 0xf4a00c00, /* VLD1 (single element to all lanes) */
- INSN_VST1 = 0xf4000000, /* VST1 (multiple single elements) */
- INSN_VMOVI = 0xf2800010, /* VMOV (immediate) */
-} ARMInsn;
-
-#define INSN_NOP (use_armv7_instructions ? INSN_NOP_v6k : INSN_NOP_v4)
-
-static const uint8_t tcg_cond_to_arm_cond[] = {
- [TCG_COND_EQ] = COND_EQ,
- [TCG_COND_NE] = COND_NE,
- [TCG_COND_LT] = COND_LT,
- [TCG_COND_GE] = COND_GE,
- [TCG_COND_LE] = COND_LE,
- [TCG_COND_GT] = COND_GT,
- /* unsigned */
- [TCG_COND_LTU] = COND_CC,
- [TCG_COND_GEU] = COND_CS,
- [TCG_COND_LEU] = COND_LS,
- [TCG_COND_GTU] = COND_HI,
-};
-
-static int encode_imm(uint32_t imm);
-
-/* TCG private relocation type: add with pc+imm8 */
-#define R_ARM_PC8 11
-
-/* TCG private relocation type: vldr with imm8 << 2 */
-#define R_ARM_PC11 12
-
-static bool reloc_pc24(tcg_insn_unit *src_rw, const tcg_insn_unit *target)
-{
- const tcg_insn_unit *src_rx = tcg_splitwx_to_rx(src_rw);
- ptrdiff_t offset = (tcg_ptr_byte_diff(target, src_rx) - 8) >> 2;
-
- if (offset == sextract32(offset, 0, 24)) {
- *src_rw = deposit32(*src_rw, 0, 24, offset);
- return true;
- }
- return false;
-}
-
-static bool reloc_pc13(tcg_insn_unit *src_rw, const tcg_insn_unit *target)
-{
- const tcg_insn_unit *src_rx = tcg_splitwx_to_rx(src_rw);
- ptrdiff_t offset = tcg_ptr_byte_diff(target, src_rx) - 8;
-
- if (offset >= -0xfff && offset <= 0xfff) {
- tcg_insn_unit insn = *src_rw;
- bool u = (offset >= 0);
- if (!u) {
- offset = -offset;
- }
- insn = deposit32(insn, 23, 1, u);
- insn = deposit32(insn, 0, 12, offset);
- *src_rw = insn;
- return true;
- }
- return false;
-}
-
-static bool reloc_pc11(tcg_insn_unit *src_rw, const tcg_insn_unit *target)
-{
- const tcg_insn_unit *src_rx = tcg_splitwx_to_rx(src_rw);
- ptrdiff_t offset = (tcg_ptr_byte_diff(target, src_rx) - 8) / 4;
-
- if (offset >= -0xff && offset <= 0xff) {
- tcg_insn_unit insn = *src_rw;
- bool u = (offset >= 0);
- if (!u) {
- offset = -offset;
- }
- insn = deposit32(insn, 23, 1, u);
- insn = deposit32(insn, 0, 8, offset);
- *src_rw = insn;
- return true;
- }
- return false;
-}
-
-static bool reloc_pc8(tcg_insn_unit *src_rw, const tcg_insn_unit *target)
-{
- const tcg_insn_unit *src_rx = tcg_splitwx_to_rx(src_rw);
- ptrdiff_t offset = tcg_ptr_byte_diff(target, src_rx) - 8;
- int imm12 = encode_imm(offset);
-
- if (imm12 >= 0) {
- *src_rw = deposit32(*src_rw, 0, 12, imm12);
- return true;
- }
- return false;
-}
-
-static bool patch_reloc(tcg_insn_unit *code_ptr, int type,
- intptr_t value, intptr_t addend)
-{
- tcg_debug_assert(addend == 0);
- switch (type) {
- case R_ARM_PC24:
- return reloc_pc24(code_ptr, (const tcg_insn_unit *)value);
- case R_ARM_PC13:
- return reloc_pc13(code_ptr, (const tcg_insn_unit *)value);
- case R_ARM_PC11:
- return reloc_pc11(code_ptr, (const tcg_insn_unit *)value);
- case R_ARM_PC8:
- return reloc_pc8(code_ptr, (const tcg_insn_unit *)value);
- default:
- g_assert_not_reached();
- }
-}
-
-#define TCG_CT_CONST_ARM 0x100
-#define TCG_CT_CONST_INV 0x200
-#define TCG_CT_CONST_NEG 0x400
-#define TCG_CT_CONST_ZERO 0x800
-#define TCG_CT_CONST_ORRI 0x1000
-#define TCG_CT_CONST_ANDI 0x2000
-
-#define ALL_GENERAL_REGS 0xffffu
-#define ALL_VECTOR_REGS 0xffff0000u
-
-/*
- * r0-r3 will be overwritten when reading the tlb entry (system-mode only);
- * r14 will be overwritten by the BLNE branching to the slow path.
- */
-#define ALL_QLDST_REGS \
- (ALL_GENERAL_REGS & ~((tcg_use_softmmu ? 0xf : 0) | (1 << TCG_REG_R14)))
-
-/*
- * ARM immediates for ALU instructions are made of an unsigned 8-bit
- * right-rotated by an even amount between 0 and 30.
- *
- * Return < 0 if @imm cannot be encoded, else the entire imm12 field.
- */
-static int encode_imm(uint32_t imm)
-{
- uint32_t rot, imm8;
-
- /* Simple case, no rotation required. */
- if ((imm & ~0xff) == 0) {
- return imm;
- }
-
- /* Next, try a simple even shift. */
- rot = ctz32(imm) & ~1;
- imm8 = imm >> rot;
- rot = 32 - rot;
- if ((imm8 & ~0xff) == 0) {
- goto found;
- }
-
- /*
- * Finally, try harder with rotations.
- * The ctz test above will have taken care of rotates >= 8.
- */
- for (rot = 2; rot < 8; rot += 2) {
- imm8 = rol32(imm, rot);
- if ((imm8 & ~0xff) == 0) {
- goto found;
- }
- }
- /* Fail: imm cannot be encoded. */
- return -1;
-
- found:
- /* Note that rot is even, and we discard bit 0 by shifting by 7. */
- return rot << 7 | imm8;
-}
-
-static int encode_imm_nofail(uint32_t imm)
-{
- int ret = encode_imm(imm);
- tcg_debug_assert(ret >= 0);
- return ret;
-}
-
-static bool check_fit_imm(uint32_t imm)
-{
- return encode_imm(imm) >= 0;
-}
-
-/* Return true if v16 is a valid 16-bit shifted immediate. */
-static bool is_shimm16(uint16_t v16, int *cmode, int *imm8)
-{
- if (v16 == (v16 & 0xff)) {
- *cmode = 0x8;
- *imm8 = v16 & 0xff;
- return true;
- } else if (v16 == (v16 & 0xff00)) {
- *cmode = 0xa;
- *imm8 = v16 >> 8;
- return true;
- }
- return false;
-}
-
-/* Return true if v32 is a valid 32-bit shifted immediate. */
-static bool is_shimm32(uint32_t v32, int *cmode, int *imm8)
-{
- if (v32 == (v32 & 0xff)) {
- *cmode = 0x0;
- *imm8 = v32 & 0xff;
- return true;
- } else if (v32 == (v32 & 0xff00)) {
- *cmode = 0x2;
- *imm8 = (v32 >> 8) & 0xff;
- return true;
- } else if (v32 == (v32 & 0xff0000)) {
- *cmode = 0x4;
- *imm8 = (v32 >> 16) & 0xff;
- return true;
- } else if (v32 == (v32 & 0xff000000)) {
- *cmode = 0x6;
- *imm8 = v32 >> 24;
- return true;
- }
- return false;
-}
-
-/* Return true if v32 is a valid 32-bit shifting ones immediate. */
-static bool is_soimm32(uint32_t v32, int *cmode, int *imm8)
-{
- if ((v32 & 0xffff00ff) == 0xff) {
- *cmode = 0xc;
- *imm8 = (v32 >> 8) & 0xff;
- return true;
- } else if ((v32 & 0xff00ffff) == 0xffff) {
- *cmode = 0xd;
- *imm8 = (v32 >> 16) & 0xff;
- return true;
- }
- return false;
-}
-
-/*
- * Return non-zero if v32 can be formed by MOVI+ORR.
- * Place the parameters for MOVI in (cmode, imm8).
- * Return the cmode for ORR; the imm8 can be had via extraction from v32.
- */
-static int is_shimm32_pair(uint32_t v32, int *cmode, int *imm8)
-{
- int i;
-
- for (i = 6; i > 0; i -= 2) {
- /* Mask out one byte we can add with ORR. */
- uint32_t tmp = v32 & ~(0xffu << (i * 4));
- if (is_shimm32(tmp, cmode, imm8) ||
- is_soimm32(tmp, cmode, imm8)) {
- break;
- }
- }
- return i;
-}
-
-/* Return true if V is a valid 16-bit or 32-bit shifted immediate. */
-static bool is_shimm1632(uint32_t v32, int *cmode, int *imm8)
-{
- if (v32 == deposit32(v32, 16, 16, v32)) {
- return is_shimm16(v32, cmode, imm8);
- } else {
- return is_shimm32(v32, cmode, imm8);
- }
-}
-
-/* Test if a constant matches the constraint.
- * TODO: define constraints for:
- *
- * ldr/str offset: between -0xfff and 0xfff
- * ldrh/strh offset: between -0xff and 0xff
- * mov operand2: values represented with x << (2 * y), x < 0x100
- * add, sub, eor...: ditto
- */
-static bool tcg_target_const_match(int64_t val, int ct,
- TCGType type, TCGCond cond, int vece)
-{
- if (ct & TCG_CT_CONST) {
- return 1;
- } else if ((ct & TCG_CT_CONST_ARM) && check_fit_imm(val)) {
- return 1;
- } else if ((ct & TCG_CT_CONST_INV) && check_fit_imm(~val)) {
- return 1;
- } else if ((ct & TCG_CT_CONST_NEG) && check_fit_imm(-val)) {
- return 1;
- } else if ((ct & TCG_CT_CONST_ZERO) && val == 0) {
- return 1;
- }
-
- switch (ct & (TCG_CT_CONST_ORRI | TCG_CT_CONST_ANDI)) {
- case 0:
- break;
- case TCG_CT_CONST_ANDI:
- val = ~val;
- /* fallthru */
- case TCG_CT_CONST_ORRI:
- if (val == deposit64(val, 32, 32, val)) {
- int cmode, imm8;
- return is_shimm1632(val, &cmode, &imm8);
- }
- break;
- default:
- /* Both bits should not be set for the same insn. */
- g_assert_not_reached();
- }
-
- return 0;
-}
-
-static void tcg_out_b_imm(TCGContext *s, ARMCond cond, int32_t offset)
-{
- tcg_out32(s, (cond << 28) | INSN_B |
- (((offset - 8) >> 2) & 0x00ffffff));
-}
-
-static void tcg_out_bl_imm(TCGContext *s, ARMCond cond, int32_t offset)
-{
- tcg_out32(s, (cond << 28) | 0x0b000000 |
- (((offset - 8) >> 2) & 0x00ffffff));
-}
-
-static void tcg_out_blx_reg(TCGContext *s, ARMCond cond, TCGReg rn)
-{
- tcg_out32(s, (cond << 28) | 0x012fff30 | rn);
-}
-
-static void tcg_out_blx_imm(TCGContext *s, int32_t offset)
-{
- tcg_out32(s, 0xfa000000 | ((offset & 2) << 23) |
- (((offset - 8) >> 2) & 0x00ffffff));
-}
-
-static void tcg_out_dat_reg(TCGContext *s, ARMCond cond, ARMInsn opc,
- TCGReg rd, TCGReg rn, TCGReg rm, int shift)
-{
- tcg_out32(s, (cond << 28) | (0 << 25) | opc |
- (rn << 16) | (rd << 12) | shift | rm);
-}
-
-static void tcg_out_mov_reg(TCGContext *s, ARMCond cond, TCGReg rd, TCGReg rm)
-{
- /* Simple reg-reg move, optimising out the 'do nothing' case */
- if (rd != rm) {
- tcg_out_dat_reg(s, cond, ARITH_MOV, rd, 0, rm, SHIFT_IMM_LSL(0));
- }
-}
-
-static void tcg_out_bx_reg(TCGContext *s, ARMCond cond, TCGReg rn)
-{
- tcg_out32(s, (cond << 28) | 0x012fff10 | rn);
-}
-
-static void tcg_out_b_reg(TCGContext *s, ARMCond cond, TCGReg rn)
-{
- /*
- * Unless the C portion of QEMU is compiled as thumb, we don't need
- * true BX semantics; merely a branch to an address held in a register.
- */
- tcg_out_bx_reg(s, cond, rn);
-}
-
-static void tcg_out_dat_imm(TCGContext *s, ARMCond cond, ARMInsn opc,
- TCGReg rd, TCGReg rn, int im)
-{
- tcg_out32(s, (cond << 28) | (1 << 25) | opc |
- (rn << 16) | (rd << 12) | im);
-}
-
-static void tcg_out_ldstm(TCGContext *s, ARMCond cond, ARMInsn opc,
- TCGReg rn, uint16_t mask)
-{
- tcg_out32(s, (cond << 28) | opc | (rn << 16) | mask);
-}
-
-/*
- * Note that this routine is used for both LDR and LDRH formats, so we do
- * not wish to include an immediate shift at this point.
- */
-static void tcg_out_memop_r(TCGContext *s, ARMCond cond, ARMInsn opc, TCGReg rt,
- TCGReg rn, TCGReg rm, bool u, bool p, bool w)
-{
- tcg_out32(s, (cond << 28) | opc | (u << 23) | (p << 24)
- | (w << 21) | (rn << 16) | (rt << 12) | rm);
-}
-
-static void tcg_out_memop_8(TCGContext *s, ARMCond cond, ARMInsn opc, TCGReg rt,
- TCGReg rn, int imm8, bool p, bool w)
-{
- bool u = 1;
- if (imm8 < 0) {
- imm8 = -imm8;
- u = 0;
- }
- tcg_out32(s, (cond << 28) | opc | (u << 23) | (p << 24) | (w << 21) |
- (rn << 16) | (rt << 12) | ((imm8 & 0xf0) << 4) | (imm8 & 0xf));
-}
-
-static void tcg_out_memop_12(TCGContext *s, ARMCond cond, ARMInsn opc,
- TCGReg rt, TCGReg rn, int imm12, bool p, bool w)
-{
- bool u = 1;
- if (imm12 < 0) {
- imm12 = -imm12;
- u = 0;
- }
- tcg_out32(s, (cond << 28) | opc | (u << 23) | (p << 24) | (w << 21) |
- (rn << 16) | (rt << 12) | imm12);
-}
-
-static void tcg_out_ld32_12(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, int imm12)
-{
- tcg_out_memop_12(s, cond, INSN_LDR_IMM, rt, rn, imm12, 1, 0);
-}
-
-static void tcg_out_st32_12(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, int imm12)
-{
- tcg_out_memop_12(s, cond, INSN_STR_IMM, rt, rn, imm12, 1, 0);
-}
-
-static void tcg_out_ld32_r(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, TCGReg rm)
-{
- tcg_out_memop_r(s, cond, INSN_LDR_REG, rt, rn, rm, 1, 1, 0);
-}
-
-static void tcg_out_st32_r(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, TCGReg rm)
-{
- tcg_out_memop_r(s, cond, INSN_STR_REG, rt, rn, rm, 1, 1, 0);
-}
-
-static void tcg_out_ldrd_8(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, int imm8)
-{
- tcg_out_memop_8(s, cond, INSN_LDRD_IMM, rt, rn, imm8, 1, 0);
-}
-
-static void tcg_out_ldrd_r(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, TCGReg rm)
-{
- tcg_out_memop_r(s, cond, INSN_LDRD_REG, rt, rn, rm, 1, 1, 0);
-}
-
-static void tcg_out_strd_8(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, int imm8)
-{
- tcg_out_memop_8(s, cond, INSN_STRD_IMM, rt, rn, imm8, 1, 0);
-}
-
-static void tcg_out_strd_r(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, TCGReg rm)
-{
- tcg_out_memop_r(s, cond, INSN_STRD_REG, rt, rn, rm, 1, 1, 0);
-}
-
-/* Register pre-increment with base writeback. */
-static void tcg_out_ld32_rwb(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, TCGReg rm)
-{
- tcg_out_memop_r(s, cond, INSN_LDR_REG, rt, rn, rm, 1, 1, 1);
-}
-
-static void tcg_out_st32_rwb(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, TCGReg rm)
-{
- tcg_out_memop_r(s, cond, INSN_STR_REG, rt, rn, rm, 1, 1, 1);
-}
-
-static void tcg_out_ld16u_8(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, int imm8)
-{
- tcg_out_memop_8(s, cond, INSN_LDRH_IMM, rt, rn, imm8, 1, 0);
-}
-
-static void tcg_out_st16_8(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, int imm8)
-{
- tcg_out_memop_8(s, cond, INSN_STRH_IMM, rt, rn, imm8, 1, 0);
-}
-
-static void tcg_out_ld16u_r(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, TCGReg rm)
-{
- tcg_out_memop_r(s, cond, INSN_LDRH_REG, rt, rn, rm, 1, 1, 0);
-}
-
-static void tcg_out_st16_r(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, TCGReg rm)
-{
- tcg_out_memop_r(s, cond, INSN_STRH_REG, rt, rn, rm, 1, 1, 0);
-}
-
-static void tcg_out_ld16s_8(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, int imm8)
-{
- tcg_out_memop_8(s, cond, INSN_LDRSH_IMM, rt, rn, imm8, 1, 0);
-}
-
-static void tcg_out_ld16s_r(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, TCGReg rm)
-{
- tcg_out_memop_r(s, cond, INSN_LDRSH_REG, rt, rn, rm, 1, 1, 0);
-}
-
-static void tcg_out_ld8_12(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, int imm12)
-{
- tcg_out_memop_12(s, cond, INSN_LDRB_IMM, rt, rn, imm12, 1, 0);
-}
-
-static void tcg_out_st8_12(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, int imm12)
-{
- tcg_out_memop_12(s, cond, INSN_STRB_IMM, rt, rn, imm12, 1, 0);
-}
-
-static void tcg_out_ld8_r(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, TCGReg rm)
-{
- tcg_out_memop_r(s, cond, INSN_LDRB_REG, rt, rn, rm, 1, 1, 0);
-}
-
-static void tcg_out_st8_r(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, TCGReg rm)
-{
- tcg_out_memop_r(s, cond, INSN_STRB_REG, rt, rn, rm, 1, 1, 0);
-}
-
-static void tcg_out_ld8s_8(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, int imm8)
-{
- tcg_out_memop_8(s, cond, INSN_LDRSB_IMM, rt, rn, imm8, 1, 0);
-}
-
-static void tcg_out_ld8s_r(TCGContext *s, ARMCond cond, TCGReg rt,
- TCGReg rn, TCGReg rm)
-{
- tcg_out_memop_r(s, cond, INSN_LDRSB_REG, rt, rn, rm, 1, 1, 0);
-}
-
-static void tcg_out_movi_pool(TCGContext *s, ARMCond cond,
- TCGReg rd, uint32_t arg)
-{
- new_pool_label(s, arg, R_ARM_PC13, s->code_ptr, 0);
- tcg_out_ld32_12(s, cond, rd, TCG_REG_PC, 0);
-}
-
-static void tcg_out_movi32(TCGContext *s, ARMCond cond,
- TCGReg rd, uint32_t arg)
-{
- int imm12, diff, opc, sh1, sh2;
- uint32_t tt0, tt1, tt2;
-
- /* Check a single MOV/MVN before anything else. */
- imm12 = encode_imm(arg);
- if (imm12 >= 0) {
- tcg_out_dat_imm(s, cond, ARITH_MOV, rd, 0, imm12);
- return;
- }
- imm12 = encode_imm(~arg);
- if (imm12 >= 0) {
- tcg_out_dat_imm(s, cond, ARITH_MVN, rd, 0, imm12);
- return;
- }
-
- /*
- * Check for a pc-relative address. This will usually be the TB,
- * or within the TB, which is immediately before the code block.
- */
- diff = tcg_pcrel_diff(s, (void *)arg) - 8;
- if (diff >= 0) {
- imm12 = encode_imm(diff);
- if (imm12 >= 0) {
- tcg_out_dat_imm(s, cond, ARITH_ADD, rd, TCG_REG_PC, imm12);
- return;
- }
- } else {
- imm12 = encode_imm(-diff);
- if (imm12 >= 0) {
- tcg_out_dat_imm(s, cond, ARITH_SUB, rd, TCG_REG_PC, imm12);
- return;
- }
- }
-
- /* Use movw + movt. */
- if (use_armv7_instructions) {
- /* movw */
- tcg_out32(s, (cond << 28) | 0x03000000 | (rd << 12)
- | ((arg << 4) & 0x000f0000) | (arg & 0xfff));
- if (arg & 0xffff0000) {
- /* movt */
- tcg_out32(s, (cond << 28) | 0x03400000 | (rd << 12)
- | ((arg >> 12) & 0x000f0000) | ((arg >> 16) & 0xfff));
- }
- return;
- }
-
- /*
- * Look for sequences of two insns. If we have lots of 1's, we can
- * shorten the sequence by beginning with mvn and then clearing
- * higher bits with eor.
- */
- tt0 = arg;
- opc = ARITH_MOV;
- if (ctpop32(arg) > 16) {
- tt0 = ~arg;
- opc = ARITH_MVN;
- }
- sh1 = ctz32(tt0) & ~1;
- tt1 = tt0 & ~(0xff << sh1);
- sh2 = ctz32(tt1) & ~1;
- tt2 = tt1 & ~(0xff << sh2);
- if (tt2 == 0) {
- int rot;
-
- rot = ((32 - sh1) << 7) & 0xf00;
- tcg_out_dat_imm(s, cond, opc, rd, 0, ((tt0 >> sh1) & 0xff) | rot);
- rot = ((32 - sh2) << 7) & 0xf00;
- tcg_out_dat_imm(s, cond, ARITH_EOR, rd, rd,
- ((tt0 >> sh2) & 0xff) | rot);
- return;
- }
-
- /* Otherwise, drop it into the constant pool. */
- tcg_out_movi_pool(s, cond, rd, arg);
-}
-
-/*
- * Emit either the reg,imm or reg,reg form of a data-processing insn.
- * rhs must satisfy the "rI" constraint.
- */
-static void tcg_out_dat_rI(TCGContext *s, ARMCond cond, ARMInsn opc,
- TCGReg dst, TCGReg lhs, TCGArg rhs, int rhs_is_const)
-{
- if (rhs_is_const) {
- tcg_out_dat_imm(s, cond, opc, dst, lhs, encode_imm_nofail(rhs));
- } else {
- tcg_out_dat_reg(s, cond, opc, dst, lhs, rhs, SHIFT_IMM_LSL(0));
- }
-}
-
-/*
- * Emit either the reg,imm or reg,reg form of a data-processing insn.
- * rhs must satisfy the "rIK" constraint.
- */
-static void tcg_out_dat_IK(TCGContext *s, ARMCond cond, ARMInsn opc,
- ARMInsn opinv, TCGReg dst, TCGReg lhs, TCGArg rhs)
-{
- int imm12 = encode_imm(rhs);
- if (imm12 < 0) {
- imm12 = encode_imm_nofail(~rhs);
- opc = opinv;
- }
- tcg_out_dat_imm(s, cond, opc, dst, lhs, imm12);
-}
-
-static void tcg_out_dat_rIK(TCGContext *s, ARMCond cond, ARMInsn opc,
- ARMInsn opinv, TCGReg dst, TCGReg lhs, TCGArg rhs,
- bool rhs_is_const)
-{
- if (rhs_is_const) {
- tcg_out_dat_IK(s, cond, opc, opinv, dst, lhs, rhs);
- } else {
- tcg_out_dat_reg(s, cond, opc, dst, lhs, rhs, SHIFT_IMM_LSL(0));
- }
-}
-
-static void tcg_out_dat_IN(TCGContext *s, ARMCond cond, ARMInsn opc,
- ARMInsn opneg, TCGReg dst, TCGReg lhs, TCGArg rhs)
-{
- int imm12 = encode_imm(rhs);
- if (imm12 < 0) {
- imm12 = encode_imm_nofail(-rhs);
- opc = opneg;
- }
- tcg_out_dat_imm(s, cond, opc, dst, lhs, imm12);
-}
-
-static void tcg_out_dat_rIN(TCGContext *s, ARMCond cond, ARMInsn opc,
- ARMInsn opneg, TCGReg dst, TCGReg lhs, TCGArg rhs,
- bool rhs_is_const)
-{
- /* Emit either the reg,imm or reg,reg form of a data-processing insn.
- * rhs must satisfy the "rIN" constraint.
- */
- if (rhs_is_const) {
- tcg_out_dat_IN(s, cond, opc, opneg, dst, lhs, rhs);
- } else {
- tcg_out_dat_reg(s, cond, opc, dst, lhs, rhs, SHIFT_IMM_LSL(0));
- }
-}
-
-static void tcg_out_ext8s(TCGContext *s, TCGType t, TCGReg rd, TCGReg rn)
-{
- /* sxtb */
- tcg_out32(s, 0x06af0070 | (COND_AL << 28) | (rd << 12) | rn);
-}
-
-static void tcg_out_ext8u(TCGContext *s, TCGReg rd, TCGReg rn)
-{
- tcg_out_dat_imm(s, COND_AL, ARITH_AND, rd, rn, 0xff);
-}
-
-static void tcg_out_ext16s(TCGContext *s, TCGType t, TCGReg rd, TCGReg rn)
-{
- /* sxth */
- tcg_out32(s, 0x06bf0070 | (COND_AL << 28) | (rd << 12) | rn);
-}
-
-static void tcg_out_ext16u(TCGContext *s, TCGReg rd, TCGReg rn)
-{
- /* uxth */
- tcg_out32(s, 0x06ff0070 | (COND_AL << 28) | (rd << 12) | rn);
-}
-
-static void tcg_out_ext32s(TCGContext *s, TCGReg rd, TCGReg rn)
-{
- g_assert_not_reached();
-}
-
-static void tcg_out_ext32u(TCGContext *s, TCGReg rd, TCGReg rn)
-{
- g_assert_not_reached();
-}
-
-static void tcg_out_exts_i32_i64(TCGContext *s, TCGReg rd, TCGReg rn)
-{
- g_assert_not_reached();
-}
-
-static void tcg_out_extu_i32_i64(TCGContext *s, TCGReg rd, TCGReg rn)
-{
- g_assert_not_reached();
-}
-
-static void tcg_out_extrl_i64_i32(TCGContext *s, TCGReg rd, TCGReg rn)
-{
- g_assert_not_reached();
-}
-
-static void tgen_deposit(TCGContext *s, TCGType type, TCGReg a0, TCGReg a1,
- TCGReg a2, unsigned ofs, unsigned len)
-{
- /* bfi/bfc */
- tcg_debug_assert(a0 == a1);
- tcg_out32(s, 0x07c00010 | (COND_AL << 28) | (a0 << 12) | a2
- | (ofs << 7) | ((ofs + len - 1) << 16));
-}
-
-static void tgen_depositi(TCGContext *s, TCGType type, TCGReg a0, TCGReg a1,
- tcg_target_long a2, unsigned ofs, unsigned len)
-{
- /* bfi becomes bfc with rn == 15. */
- tgen_deposit(s, type, a0, a1, 15, ofs, len);
-}
-
-static const TCGOutOpDeposit outop_deposit = {
- .base.static_constraint = C_O1_I2(r, 0, rZ),
- .out_rrr = tgen_deposit,
- .out_rri = tgen_depositi,
-};
-
-static void tgen_extract(TCGContext *s, TCGType type, TCGReg rd, TCGReg rn,
- unsigned ofs, unsigned len)
-{
- /* According to gcc, AND can be faster. */
- if (ofs == 0 && len <= 8) {
- tcg_out_dat_imm(s, COND_AL, ARITH_AND, rd, rn,
- encode_imm_nofail((1 << len) - 1));
- return;
- }
-
- if (use_armv7_instructions) {
- /* ubfx */
- tcg_out32(s, 0x07e00050 | (COND_AL << 28) | (rd << 12) | rn
- | (ofs << 7) | ((len - 1) << 16));
- return;
- }
-
- assert(ofs % 8 == 0);
- switch (len) {
- case 8:
- /* uxtb */
- tcg_out32(s, 0x06ef0070 | (COND_AL << 28) |
- (rd << 12) | (ofs << 7) | rn);
- break;
- case 16:
- /* uxth */
- tcg_out32(s, 0x06ff0070 | (COND_AL << 28) |
- (rd << 12) | (ofs << 7) | rn);
- break;
- default:
- g_assert_not_reached();
- }
-}
-
-static const TCGOutOpExtract outop_extract = {
- .base.static_constraint = C_O1_I1(r, r),
- .out_rr = tgen_extract,
-};
-
-static void tgen_sextract(TCGContext *s, TCGType type, TCGReg rd, TCGReg rn,
- unsigned ofs, unsigned len)
-{
- if (use_armv7_instructions) {
- /* sbfx */
- tcg_out32(s, 0x07a00050 | (COND_AL << 28) | (rd << 12) | rn
- | (ofs << 7) | ((len - 1) << 16));
- return;
- }
-
- assert(ofs % 8 == 0);
- switch (len) {
- case 8:
- /* sxtb */
- tcg_out32(s, 0x06af0070 | (COND_AL << 28) |
- (rd << 12) | (ofs << 7) | rn);
- break;
- case 16:
- /* sxth */
- tcg_out32(s, 0x06bf0070 | (COND_AL << 28) |
- (rd << 12) | (ofs << 7) | rn);
- break;
- default:
- g_assert_not_reached();
- }
-}
-
-static const TCGOutOpExtract outop_sextract = {
- .base.static_constraint = C_O1_I1(r, r),
- .out_rr = tgen_sextract,
-};
-
-
-static void tcg_out_ld32u(TCGContext *s, ARMCond cond,
- TCGReg rd, TCGReg rn, int32_t offset)
-{
- if (offset > 0xfff || offset < -0xfff) {
- tcg_out_movi32(s, cond, TCG_REG_TMP, offset);
- tcg_out_ld32_r(s, cond, rd, rn, TCG_REG_TMP);
- } else {
- tcg_out_ld32_12(s, cond, rd, rn, offset);
- }
-}
-
-static void tcg_out_st32(TCGContext *s, ARMCond cond,
- TCGReg rd, TCGReg rn, int32_t offset)
-{
- if (offset > 0xfff || offset < -0xfff) {
- tcg_out_movi32(s, cond, TCG_REG_TMP, offset);
- tcg_out_st32_r(s, cond, rd, rn, TCG_REG_TMP);
- } else {
- tcg_out_st32_12(s, cond, rd, rn, offset);
- }
-}
-
-/*
- * The _goto case is normally between TBs within the same code buffer, and
- * with the code buffer limited to 16MB we wouldn't need the long case.
- * But we also use it for the tail-call to the qemu_ld/st helpers, which does.
- */
-static void tcg_out_goto(TCGContext *s, ARMCond cond, const tcg_insn_unit *addr)
-{
- intptr_t addri = (intptr_t)addr;
- ptrdiff_t disp = tcg_pcrel_diff(s, addr);
- bool arm_mode = !(addri & 1);
-
- if (arm_mode && disp - 8 < 0x01fffffd && disp - 8 > -0x01fffffd) {
- tcg_out_b_imm(s, cond, disp);
- return;
- }
-
- /* LDR is interworking from v5t. */
- tcg_out_movi_pool(s, cond, TCG_REG_PC, addri);
-}
-
-/*
- * The call case is mostly used for helpers - so it's not unreasonable
- * for them to be beyond branch range.
- */
-static void tcg_out_call_int(TCGContext *s, const tcg_insn_unit *addr)
-{
- intptr_t addri = (intptr_t)addr;
- ptrdiff_t disp = tcg_pcrel_diff(s, addr);
- bool arm_mode = !(addri & 1);
-
- if (disp - 8 < 0x02000000 && disp - 8 >= -0x02000000) {
- if (arm_mode) {
- tcg_out_bl_imm(s, COND_AL, disp);
- } else {
- tcg_out_blx_imm(s, disp);
- }
- return;
- }
-
- tcg_out_movi32(s, COND_AL, TCG_REG_TMP, addri);
- tcg_out_blx_reg(s, COND_AL, TCG_REG_TMP);
-}
-
-static void tcg_out_call(TCGContext *s, const tcg_insn_unit *addr,
- const TCGHelperInfo *info)
-{
- tcg_out_call_int(s, addr);
-}
-
-static void tcg_out_goto_label(TCGContext *s, ARMCond cond, TCGLabel *l)
-{
- if (l->has_value) {
- tcg_out_goto(s, cond, l->u.value_ptr);
- } else {
- tcg_out_reloc(s, s->code_ptr, R_ARM_PC24, l, 0);
- tcg_out_b_imm(s, cond, 0);
- }
-}
-
-static void tcg_out_br(TCGContext *s, TCGLabel *l)
-{
- tcg_out_goto_label(s, COND_AL, l);
-}
-
-static void tcg_out_mb(TCGContext *s, unsigned a0)
-{
- if (use_armv7_instructions) {
- tcg_out32(s, INSN_DMB_ISH);
- } else {
- tcg_out32(s, INSN_DMB_MCR);
- }
-}
-
-static TCGCond tgen_cmp(TCGContext *s, TCGCond cond, TCGReg a, TCGReg b)
-{
- if (is_tst_cond(cond)) {
- tcg_out_dat_reg(s, COND_AL, ARITH_TST, 0, a, b, SHIFT_IMM_LSL(0));
- return tcg_tst_eqne_cond(cond);
- }
- tcg_out_dat_reg(s, COND_AL, ARITH_CMP, 0, a, b, SHIFT_IMM_LSL(0));
- return cond;
-}
-
-static TCGCond tgen_cmpi(TCGContext *s, TCGCond cond, TCGReg a, TCGArg b)
-{
- int imm12;
-
- if (!is_tst_cond(cond)) {
- tcg_out_dat_IN(s, COND_AL, ARITH_CMP, ARITH_CMN, 0, a, b);
- return cond;
- }
-
- /*
- * The compare constraints allow rIN, but TST does not support N.
- * Be prepared to load the constant into a scratch register.
- */
- imm12 = encode_imm(b);
- if (imm12 >= 0) {
- tcg_out_dat_imm(s, COND_AL, ARITH_TST, 0, a, imm12);
- } else {
- tcg_out_movi32(s, COND_AL, TCG_REG_TMP, b);
- tcg_out_dat_reg(s, COND_AL, ARITH_TST, 0,
- a, TCG_REG_TMP, SHIFT_IMM_LSL(0));
- }
- return tcg_tst_eqne_cond(cond);
-}
-
-static TCGCond tcg_out_cmp(TCGContext *s, TCGCond cond, TCGReg a,
- TCGArg b, int b_const)
-{
- if (b_const) {
- return tgen_cmpi(s, cond, a, b);
- } else {
- return tgen_cmp(s, cond, a, b);
- }
-}
-
-static TCGCond tcg_out_cmp2(TCGContext *s, TCGCond cond, TCGReg al, TCGReg ah,
- TCGArg bl, bool const_bl, TCGArg bh, bool const_bh)
-{
- switch (cond) {
- case TCG_COND_EQ:
- case TCG_COND_NE:
- case TCG_COND_LTU:
- case TCG_COND_LEU:
- case TCG_COND_GTU:
- case TCG_COND_GEU:
- /*
- * We perform a conditional comparison. If the high half is
- * equal, then overwrite the flags with the comparison of the
- * low half. The resulting flags cover the whole.
- */
- tcg_out_dat_rI(s, COND_AL, ARITH_CMP, 0, ah, bh, const_bh);
- tcg_out_dat_rI(s, COND_EQ, ARITH_CMP, 0, al, bl, const_bl);
- return cond;
-
- case TCG_COND_TSTEQ:
- case TCG_COND_TSTNE:
- /* Similar, but with TST instead of CMP. */
- tcg_out_dat_rI(s, COND_AL, ARITH_TST, 0, ah, bh, const_bh);
- tcg_out_dat_rI(s, COND_EQ, ARITH_TST, 0, al, bl, const_bl);
- return tcg_tst_eqne_cond(cond);
-
- case TCG_COND_LT:
- case TCG_COND_GE:
- /*
- * We perform a double-word subtraction and examine the result.
- * We do not actually need the result of the subtract, so the
- * low part "subtract" is a compare. For the high half we have
- * no choice but to compute into a temporary.
- */
- tcg_out_dat_rI(s, COND_AL, ARITH_CMP, 0, al, bl, const_bl);
- tcg_out_dat_rI(s, COND_AL, ARITH_SBC | TO_CPSR,
- TCG_REG_TMP, ah, bh, const_bh);
- return cond;
-
- case TCG_COND_LE:
- case TCG_COND_GT:
- /* Similar, but with swapped arguments, via reversed subtract. */
- tcg_out_dat_rI(s, COND_AL, ARITH_RSB | TO_CPSR,
- TCG_REG_TMP, al, bl, const_bl);
- tcg_out_dat_rI(s, COND_AL, ARITH_RSC | TO_CPSR,
- TCG_REG_TMP, ah, bh, const_bh);
- return tcg_swap_cond(cond);
-
- default:
- g_assert_not_reached();
- }
-}
-
-/*
- * Note that TCGReg references Q-registers.
- * Q-regno = 2 * D-regno, so shift left by 1 while inserting.
- */
-static uint32_t encode_vd(TCGReg rd)
-{
- tcg_debug_assert(rd >= TCG_REG_Q0);
- return (extract32(rd, 3, 1) << 22) | (extract32(rd, 0, 3) << 13);
-}
-
-static uint32_t encode_vn(TCGReg rn)
-{
- tcg_debug_assert(rn >= TCG_REG_Q0);
- return (extract32(rn, 3, 1) << 7) | (extract32(rn, 0, 3) << 17);
-}
-
-static uint32_t encode_vm(TCGReg rm)
-{
- tcg_debug_assert(rm >= TCG_REG_Q0);
- return (extract32(rm, 3, 1) << 5) | (extract32(rm, 0, 3) << 1);
-}
-
-static void tcg_out_vreg2(TCGContext *s, ARMInsn insn, int q, int vece,
- TCGReg d, TCGReg m)
-{
- tcg_out32(s, insn | (vece << 18) | (q << 6) |
- encode_vd(d) | encode_vm(m));
-}
-
-static void tcg_out_vreg3(TCGContext *s, ARMInsn insn, int q, int vece,
- TCGReg d, TCGReg n, TCGReg m)
-{
- tcg_out32(s, insn | (vece << 20) | (q << 6) |
- encode_vd(d) | encode_vn(n) | encode_vm(m));
-}
-
-static void tcg_out_vmovi(TCGContext *s, TCGReg rd,
- int q, int op, int cmode, uint8_t imm8)
-{
- tcg_out32(s, INSN_VMOVI | encode_vd(rd) | (q << 6) | (op << 5)
- | (cmode << 8) | extract32(imm8, 0, 4)
- | (extract32(imm8, 4, 3) << 16)
- | (extract32(imm8, 7, 1) << 24));
-}
-
-static void tcg_out_vshifti(TCGContext *s, ARMInsn insn, int q,
- TCGReg rd, TCGReg rm, int l_imm6)
-{
- tcg_out32(s, insn | (q << 6) | encode_vd(rd) | encode_vm(rm) |
- (extract32(l_imm6, 6, 1) << 7) |
- (extract32(l_imm6, 0, 6) << 16));
-}
-
-static void tcg_out_vldst(TCGContext *s, ARMInsn insn,
- TCGReg rd, TCGReg rn, int offset)
-{
- if (offset != 0) {
- if (check_fit_imm(offset) || check_fit_imm(-offset)) {
- tcg_out_dat_rIN(s, COND_AL, ARITH_ADD, ARITH_SUB,
- TCG_REG_TMP, rn, offset, true);
- } else {
- tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_TMP, offset);
- tcg_out_dat_reg(s, COND_AL, ARITH_ADD,
- TCG_REG_TMP, TCG_REG_TMP, rn, 0);
- }
- rn = TCG_REG_TMP;
- }
- tcg_out32(s, insn | (rn << 16) | encode_vd(rd) | 0xf);
-}
-
-typedef struct {
- ARMCond cond;
- TCGReg base;
- int index;
- bool index_scratch;
- TCGAtomAlign aa;
-} HostAddress;
-
-bool tcg_target_has_memory_bswap(MemOp memop)
-{
- return false;
-}
-
-static TCGReg ldst_ra_gen(TCGContext *s, const TCGLabelQemuLdst *l, int arg)
-{
- /* We arrive at the slow path via "BLNE", so R14 contains l->raddr. */
- return TCG_REG_R14;
-}
-
-static const TCGLdstHelperParam ldst_helper_param = {
- .ra_gen = ldst_ra_gen,
- .ntmp = 1,
- .tmp = { TCG_REG_TMP },
-};
-
-static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
-{
- MemOp opc = get_memop(lb->oi);
-
- if (!reloc_pc24(lb->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) {
- return false;
- }
-
- tcg_out_ld_helper_args(s, lb, &ldst_helper_param);
- tcg_out_call_int(s, qemu_ld_helpers[opc & MO_SIZE]);
- tcg_out_ld_helper_ret(s, lb, false, &ldst_helper_param);
-
- tcg_out_goto(s, COND_AL, lb->raddr);
- return true;
-}
-
-static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
-{
- MemOp opc = get_memop(lb->oi);
-
- if (!reloc_pc24(lb->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) {
- return false;
- }
-
- tcg_out_st_helper_args(s, lb, &ldst_helper_param);
-
- /* Tail-call to the helper, which will return to the fast path. */
- tcg_out_goto(s, COND_AL, qemu_st_helpers[opc & MO_SIZE]);
- return true;
-}
-
-/* We expect to use a 9-bit sign-magnitude negative offset from ENV. */
-#define MIN_TLB_MASK_TABLE_OFS -256
-
-static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
- TCGReg addr, MemOpIdx oi, bool is_ld)
-{
- TCGLabelQemuLdst *ldst = NULL;
- MemOp opc = get_memop(oi);
- unsigned a_mask;
-
- if (tcg_use_softmmu) {
- *h = (HostAddress){
- .cond = COND_AL,
- .base = addr,
- .index = TCG_REG_R1,
- .index_scratch = true,
- };
- } else {
- *h = (HostAddress){
- .cond = COND_AL,
- .base = addr,
- .index = guest_base ? TCG_REG_GUEST_BASE : -1,
- .index_scratch = false,
- };
- }
-
- h->aa = atom_and_align_for_opc(s, opc, MO_ATOM_IFALIGN, false);
- a_mask = (1 << h->aa.align) - 1;
-
- if (tcg_use_softmmu) {
- int mem_index = get_mmuidx(oi);
- int cmp_off = is_ld ? offsetof(CPUTLBEntry, addr_read)
- : offsetof(CPUTLBEntry, addr_write);
- int fast_off = tlb_mask_table_ofs(s, mem_index);
- unsigned s_mask = (1 << (opc & MO_SIZE)) - 1;
- TCGReg t_addr;
-
- ldst = new_ldst_label(s);
- ldst->is_ld = is_ld;
- ldst->oi = oi;
- ldst->addr_reg = addr;
-
- /* Load CPUTLBDescFast.{mask,table} into {r0,r1}. */
- QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, mask) != 0);
- QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 4);
- tcg_out_ldrd_8(s, COND_AL, TCG_REG_R0, TCG_AREG0, fast_off);
-
- /* Extract the tlb index from the address into R0. */
- tcg_out_dat_reg(s, COND_AL, ARITH_AND, TCG_REG_R0, TCG_REG_R0, addr,
- SHIFT_IMM_LSR(TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS));
-
- /*
- * Add the tlb_table pointer, creating the CPUTLBEntry address in R1.
- * Load the tlb comparator into R2 and the fast path addend into R1.
- */
- if (cmp_off == 0) {
- tcg_out_ld32_rwb(s, COND_AL, TCG_REG_R2, TCG_REG_R1, TCG_REG_R0);
- } else {
- tcg_out_dat_reg(s, COND_AL, ARITH_ADD,
- TCG_REG_R1, TCG_REG_R1, TCG_REG_R0, 0);
- tcg_out_ld32_12(s, COND_AL, TCG_REG_R2, TCG_REG_R1, cmp_off);
- }
-
- /* Load the tlb addend. */
- tcg_out_ld32_12(s, COND_AL, TCG_REG_R1, TCG_REG_R1,
- offsetof(CPUTLBEntry, addend));
-
- /*
- * Check alignment, check comparators.
- * Do this in 2-4 insns. Use MOVW for v7, if possible,
- * to reduce the number of sequential conditional instructions.
- * Almost all guests have at least 4k pages, which means that we need
- * to clear at least 9 bits even for an 8-byte memory operation, so it
- * isn't worth checking for an immediate operand for BIC.
- *
- * For unaligned accesses, test the page of the last unit of alignment.
- * This leaves the least significant alignment bits unchanged, and of
- * course must be zero.
- */
- t_addr = addr;
- if (a_mask < s_mask) {
- t_addr = TCG_REG_R0;
- tcg_out_dat_imm(s, COND_AL, ARITH_ADD, t_addr,
- addr, s_mask - a_mask);
- }
- if (use_armv7_instructions && TARGET_PAGE_BITS <= 16) {
- tcg_out_movi32(s, COND_AL, TCG_REG_TMP, ~(TARGET_PAGE_MASK | a_mask));
- tcg_out_dat_reg(s, COND_AL, ARITH_BIC, TCG_REG_TMP,
- t_addr, TCG_REG_TMP, 0);
- tcg_out_dat_reg(s, COND_AL, ARITH_CMP, 0,
- TCG_REG_R2, TCG_REG_TMP, 0);
- } else {
- if (a_mask) {
- tcg_debug_assert(a_mask <= 0xff);
- tcg_out_dat_imm(s, COND_AL, ARITH_TST, 0, addr, a_mask);
- }
- tcg_out_dat_reg(s, COND_AL, ARITH_MOV, TCG_REG_TMP, 0, t_addr,
- SHIFT_IMM_LSR(TARGET_PAGE_BITS));
- tcg_out_dat_reg(s, (a_mask ? COND_EQ : COND_AL), ARITH_CMP,
- 0, TCG_REG_R2, TCG_REG_TMP,
- SHIFT_IMM_LSL(TARGET_PAGE_BITS));
- }
- } else if (a_mask) {
- ldst = new_ldst_label(s);
- ldst->is_ld = is_ld;
- ldst->oi = oi;
- ldst->addr_reg = addr;
-
- /* We are expecting alignment to max out at 7 */
- tcg_debug_assert(a_mask <= 0xff);
- /* tst addr, #mask */
- tcg_out_dat_imm(s, COND_AL, ARITH_TST, 0, addr, a_mask);
- }
-
- return ldst;
-}
-
-static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg datalo,
- TCGReg datahi, HostAddress h)
-{
- TCGReg base;
-
- /* Byte swapping is left to middle-end expansion. */
- tcg_debug_assert((opc & MO_BSWAP) == 0);
-
- switch (opc & MO_SSIZE) {
- case MO_UB:
- if (h.index < 0) {
- tcg_out_ld8_12(s, h.cond, datalo, h.base, 0);
- } else {
- tcg_out_ld8_r(s, h.cond, datalo, h.base, h.index);
- }
- break;
- case MO_SB:
- if (h.index < 0) {
- tcg_out_ld8s_8(s, h.cond, datalo, h.base, 0);
- } else {
- tcg_out_ld8s_r(s, h.cond, datalo, h.base, h.index);
- }
- break;
- case MO_UW:
- if (h.index < 0) {
- tcg_out_ld16u_8(s, h.cond, datalo, h.base, 0);
- } else {
- tcg_out_ld16u_r(s, h.cond, datalo, h.base, h.index);
- }
- break;
- case MO_SW:
- if (h.index < 0) {
- tcg_out_ld16s_8(s, h.cond, datalo, h.base, 0);
- } else {
- tcg_out_ld16s_r(s, h.cond, datalo, h.base, h.index);
- }
- break;
- case MO_UL:
- if (h.index < 0) {
- tcg_out_ld32_12(s, h.cond, datalo, h.base, 0);
- } else {
- tcg_out_ld32_r(s, h.cond, datalo, h.base, h.index);
- }
- break;
- case MO_UQ:
- /* We used pair allocation for datalo, so it should already be aligned. */
- tcg_debug_assert((datalo & 1) == 0);
- tcg_debug_assert(datahi == datalo + 1);
- /* LDRD requires alignment; double-check that. */
- if (memop_alignment_bits(opc) >= MO_64) {
- if (h.index < 0) {
- tcg_out_ldrd_8(s, h.cond, datalo, h.base, 0);
- break;
- }
- /*
- * Rm (the second address op) must not overlap Rt or Rt + 1.
- * Since datalo is aligned, we can simplify the test via alignment.
- * Flip the two address arguments if that works.
- */
- if ((h.index & ~1) != datalo) {
- tcg_out_ldrd_r(s, h.cond, datalo, h.base, h.index);
- break;
- }
- if ((h.base & ~1) != datalo) {
- tcg_out_ldrd_r(s, h.cond, datalo, h.index, h.base);
- break;
- }
- }
- if (h.index < 0) {
- base = h.base;
- if (datalo == h.base) {
- tcg_out_mov_reg(s, h.cond, TCG_REG_TMP, base);
- base = TCG_REG_TMP;
- }
- } else if (h.index_scratch) {
- tcg_out_ld32_rwb(s, h.cond, datalo, h.index, h.base);
- tcg_out_ld32_12(s, h.cond, datahi, h.index, 4);
- break;
- } else {
- tcg_out_dat_reg(s, h.cond, ARITH_ADD, TCG_REG_TMP,
- h.base, h.index, SHIFT_IMM_LSL(0));
- base = TCG_REG_TMP;
- }
- tcg_out_ld32_12(s, h.cond, datalo, base, 0);
- tcg_out_ld32_12(s, h.cond, datahi, base, 4);
- break;
- default:
- g_assert_not_reached();
- }
-}
-
-static void tgen_qemu_ld(TCGContext *s, TCGType type, TCGReg data,
- TCGReg addr, MemOpIdx oi)
-{
- MemOp opc = get_memop(oi);
- TCGLabelQemuLdst *ldst;
- HostAddress h;
-
- ldst = prepare_host_addr(s, &h, addr, oi, true);
- if (ldst) {
- ldst->type = type;
- ldst->datalo_reg = data;
- ldst->datahi_reg = -1;
-
- /*
- * This is a conditional BL only to load a pointer within this
- * opcode into LR for the slow path. We will not be using
- * the value for a tail call.
- */
- ldst->label_ptr[0] = s->code_ptr;
- tcg_out_bl_imm(s, COND_NE, 0);
- }
-
- tcg_out_qemu_ld_direct(s, opc, data, -1, h);
-
- if (ldst) {
- ldst->raddr = tcg_splitwx_to_rx(s->code_ptr);
- }
-}
-
-static const TCGOutOpQemuLdSt outop_qemu_ld = {
- .base.static_constraint = C_O1_I1(r, q),
- .out = tgen_qemu_ld,
-};
-
-static void tgen_qemu_ld2(TCGContext *s, TCGType type, TCGReg datalo,
- TCGReg datahi, TCGReg addr, MemOpIdx oi)
-{
- MemOp opc = get_memop(oi);
- TCGLabelQemuLdst *ldst;
- HostAddress h;
-
- ldst = prepare_host_addr(s, &h, addr, oi, true);
- if (ldst) {
- ldst->type = type;
- ldst->datalo_reg = datalo;
- ldst->datahi_reg = datahi;
-
- /*
- * This is a conditional BL only to load a pointer within this
- * opcode into LR for the slow path. We will not be using
- * the value for a tail call.
- */
- ldst->label_ptr[0] = s->code_ptr;
- tcg_out_bl_imm(s, COND_NE, 0);
- }
-
- tcg_out_qemu_ld_direct(s, opc, datalo, datahi, h);
-
- if (ldst) {
- ldst->raddr = tcg_splitwx_to_rx(s->code_ptr);
- }
-}
-
-static const TCGOutOpQemuLdSt2 outop_qemu_ld2 = {
- .base.static_constraint = C_O2_I1(e, p, q),
- .out = tgen_qemu_ld2,
-};
-
-static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg datalo,
- TCGReg datahi, HostAddress h)
-{
- /* Byte swapping is left to middle-end expansion. */
- tcg_debug_assert((opc & MO_BSWAP) == 0);
-
- switch (opc & MO_SIZE) {
- case MO_8:
- if (h.index < 0) {
- tcg_out_st8_12(s, h.cond, datalo, h.base, 0);
- } else {
- tcg_out_st8_r(s, h.cond, datalo, h.base, h.index);
- }
- break;
- case MO_16:
- if (h.index < 0) {
- tcg_out_st16_8(s, h.cond, datalo, h.base, 0);
- } else {
- tcg_out_st16_r(s, h.cond, datalo, h.base, h.index);
- }
- break;
- case MO_32:
- if (h.index < 0) {
- tcg_out_st32_12(s, h.cond, datalo, h.base, 0);
- } else {
- tcg_out_st32_r(s, h.cond, datalo, h.base, h.index);
- }
- break;
- case MO_64:
- /* We used pair allocation for datalo, so it should already be aligned. */
- tcg_debug_assert((datalo & 1) == 0);
- tcg_debug_assert(datahi == datalo + 1);
- /* STRD requires alignment; double-check that. */
- if (memop_alignment_bits(opc) >= MO_64) {
- if (h.index < 0) {
- tcg_out_strd_8(s, h.cond, datalo, h.base, 0);
- } else {
- tcg_out_strd_r(s, h.cond, datalo, h.base, h.index);
- }
- } else if (h.index < 0) {
- tcg_out_st32_12(s, h.cond, datalo, h.base, 0);
- tcg_out_st32_12(s, h.cond, datahi, h.base, 4);
- } else if (h.index_scratch) {
- tcg_out_st32_rwb(s, h.cond, datalo, h.index, h.base);
- tcg_out_st32_12(s, h.cond, datahi, h.index, 4);
- } else {
- tcg_out_dat_reg(s, h.cond, ARITH_ADD, TCG_REG_TMP,
- h.base, h.index, SHIFT_IMM_LSL(0));
- tcg_out_st32_12(s, h.cond, datalo, TCG_REG_TMP, 0);
- tcg_out_st32_12(s, h.cond, datahi, TCG_REG_TMP, 4);
- }
- break;
- default:
- g_assert_not_reached();
- }
-}
-
-static void tgen_qemu_st(TCGContext *s, TCGType type, TCGReg data,
- TCGReg addr, MemOpIdx oi)
-{
- MemOp opc = get_memop(oi);
- TCGLabelQemuLdst *ldst;
- HostAddress h;
-
- ldst = prepare_host_addr(s, &h, addr, oi, false);
- if (ldst) {
- ldst->type = type;
- ldst->datalo_reg = data;
- ldst->datahi_reg = -1;
-
- h.cond = COND_EQ;
- tcg_out_qemu_st_direct(s, opc, data, -1, h);
-
- /* The conditional call is last, as we're going to return here. */
- ldst->label_ptr[0] = s->code_ptr;
- tcg_out_bl_imm(s, COND_NE, 0);
- ldst->raddr = tcg_splitwx_to_rx(s->code_ptr);
- } else {
- tcg_out_qemu_st_direct(s, opc, data, -1, h);
- }
-}
-
-static const TCGOutOpQemuLdSt outop_qemu_st = {
- .base.static_constraint = C_O0_I2(q, q),
- .out = tgen_qemu_st,
-};
-
-static void tgen_qemu_st2(TCGContext *s, TCGType type, TCGReg datalo,
- TCGReg datahi, TCGReg addr, MemOpIdx oi)
-{
- MemOp opc = get_memop(oi);
- TCGLabelQemuLdst *ldst;
- HostAddress h;
-
- ldst = prepare_host_addr(s, &h, addr, oi, false);
- if (ldst) {
- ldst->type = type;
- ldst->datalo_reg = datalo;
- ldst->datahi_reg = datahi;
-
- h.cond = COND_EQ;
- tcg_out_qemu_st_direct(s, opc, datalo, datahi, h);
-
- /* The conditional call is last, as we're going to return here. */
- ldst->label_ptr[0] = s->code_ptr;
- tcg_out_bl_imm(s, COND_NE, 0);
- ldst->raddr = tcg_splitwx_to_rx(s->code_ptr);
- } else {
- tcg_out_qemu_st_direct(s, opc, datalo, datahi, h);
- }
-}
-
-static const TCGOutOpQemuLdSt2 outop_qemu_st2 = {
- .base.static_constraint = C_O0_I3(Q, p, q),
- .out = tgen_qemu_st2,
-};
-
-static void tcg_out_epilogue(TCGContext *s);
-
-static void tcg_out_exit_tb(TCGContext *s, uintptr_t arg)
-{
- tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R0, arg);
- tcg_out_epilogue(s);
-}
-
-static void tcg_out_goto_tb(TCGContext *s, int which)
-{
- uintptr_t i_addr;
- intptr_t i_disp;
-
- /* Direct branch will be patched by tb_target_set_jmp_target. */
- set_jmp_insn_offset(s, which);
- tcg_out32(s, INSN_NOP);
-
-    /* When the branch is out of range, fall through to indirect. */
- i_addr = get_jmp_target_addr(s, which);
- i_disp = tcg_pcrel_diff(s, (void *)i_addr) - 8;
- tcg_debug_assert(i_disp < 0);
- if (i_disp >= -0xfff) {
- tcg_out_ld32_12(s, COND_AL, TCG_REG_PC, TCG_REG_PC, i_disp);
- } else {
- /*
- * The TB is close, but outside the 12 bits addressable by
- * the load. We can extend this to 20 bits with a sub of a
- * shifted immediate from pc.
- */
- int h = -i_disp;
- int l = -(h & 0xfff);
-
- h = encode_imm_nofail(h + l);
- tcg_out_dat_imm(s, COND_AL, ARITH_SUB, TCG_REG_R0, TCG_REG_PC, h);
- tcg_out_ld32_12(s, COND_AL, TCG_REG_PC, TCG_REG_R0, l);
- }
- set_jmp_reset_offset(s, which);
-}
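The 20-bit extension in the removed goto_tb path splits a negative pc-relative displacement into a magnitude with the low 12 bits cleared (subtracted from pc) plus a negative 12-bit load offset, so (pc - h) + l == pc + i_disp. A sketch of the arithmetic, with an illustrative helper name:

```c
#include <assert.h>

/* Split negative displacement i_disp (magnitude > 0xfff) into:
 *   h: magnitude with low 12 bits clear, for SUB rd, pc, #h
 *   l: negative 12-bit remainder, for LDR pc, [rd, #l]
 * so that (pc - h) + l == pc + i_disp. */
static void split_neg_disp(int i_disp, int *h, int *l)
{
    int m = -i_disp;       /* positive magnitude */
    *l = -(m & 0xfff);     /* negative 12-bit remainder */
    *h = m + *l;           /* low 12 bits cleared */
}
```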
-
-static void tcg_out_goto_ptr(TCGContext *s, TCGReg a0)
-{
- tcg_out_b_reg(s, COND_AL, a0);
-}
-
-void tb_target_set_jmp_target(const TranslationBlock *tb, int n,
- uintptr_t jmp_rx, uintptr_t jmp_rw)
-{
- uintptr_t addr = tb->jmp_target_addr[n];
- ptrdiff_t offset = addr - (jmp_rx + 8);
- tcg_insn_unit insn;
-
- /* Either directly branch, or fall through to indirect branch. */
- if (offset == sextract64(offset, 0, 26)) {
- /* B <addr> */
- insn = deposit32((COND_AL << 28) | INSN_B, 0, 24, offset >> 2);
- } else {
- insn = INSN_NOP;
- }
-
- qatomic_set((uint32_t *)jmp_rw, insn);
- flush_idcache_range(jmp_rx, jmp_rw, 4);
-}
-
-
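The range check in the removed tb_target_set_jmp_target relies on Arm's B encoding a signed 26-bit byte offset (24-bit immediate scaled by 4). A self-contained sketch; sextract64 here mirrors QEMU's bitops helper of the same name:

```c
#include <assert.h>
#include <stdint.h>

/* Sign-extract a bitfield, as QEMU's sextract64 does. */
static int64_t sextract64(uint64_t value, unsigned start, unsigned length)
{
    return (int64_t)(value << (64 - length - start)) >> (64 - length);
}

/* A direct B is usable iff the byte offset fits in signed 26 bits. */
static int b_in_range(int64_t offset)
{
    return offset == sextract64(offset, 0, 26);
}
```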
-static void tgen_add(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_ADD, a0, a1, a2, SHIFT_IMM_LSL(0));
-}
-
-static void tgen_addi(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_IN(s, COND_AL, ARITH_ADD, ARITH_SUB, a0, a1, a2);
-}
-
-static const TCGOutOpBinary outop_add = {
- .base.static_constraint = C_O1_I2(r, r, rIN),
- .out_rrr = tgen_add,
- .out_rri = tgen_addi,
-};
-
-static void tgen_addco(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_ADD | TO_CPSR,
- a0, a1, a2, SHIFT_IMM_LSL(0));
-}
-
-static void tgen_addco_imm(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_IN(s, COND_AL, ARITH_ADD | TO_CPSR, ARITH_SUB | TO_CPSR,
- a0, a1, a2);
-}
-
-static const TCGOutOpBinary outop_addco = {
- .base.static_constraint = C_O1_I2(r, r, rIN),
- .out_rrr = tgen_addco,
- .out_rri = tgen_addco_imm,
-};
-
-static void tgen_addci(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_ADC, a0, a1, a2, SHIFT_IMM_LSL(0));
-}
-
-static void tgen_addci_imm(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_IK(s, COND_AL, ARITH_ADC, ARITH_SBC, a0, a1, a2);
-}
-
-static const TCGOutOpAddSubCarry outop_addci = {
- .base.static_constraint = C_O1_I2(r, r, rIK),
- .out_rrr = tgen_addci,
- .out_rri = tgen_addci_imm,
-};
-
-static void tgen_addcio(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_ADC | TO_CPSR,
- a0, a1, a2, SHIFT_IMM_LSL(0));
-}
-
-static void tgen_addcio_imm(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_IK(s, COND_AL, ARITH_ADC | TO_CPSR, ARITH_SBC | TO_CPSR,
- a0, a1, a2);
-}
-
-static const TCGOutOpBinary outop_addcio = {
- .base.static_constraint = C_O1_I2(r, r, rIK),
- .out_rrr = tgen_addcio,
- .out_rri = tgen_addcio_imm,
-};
-
-/* Set C to @c; NZVQ all set to 0. */
-static void tcg_out_movi_apsr_c(TCGContext *s, bool c)
-{
- int imm12 = encode_imm_nofail(c << 29);
- tcg_out32(s, (COND_AL << 28) | INSN_MSRI_CPSR | 0x80000 | imm12);
-}
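encode_imm_nofail above depends on Arm's "modified immediate" rule: a constant is encodable if it is an 8-bit value rotated right by an even amount, which both possible APSR.C values (0 and 1 << 29) satisfy. An illustrative predicate, not the backend's encode_imm:

```c
#include <assert.h>
#include <stdint.h>

static uint32_t rol32(uint32_t x, unsigned r)
{
    return r ? (x << r) | (x >> (32 - r)) : x;
}

/* A32 modified immediate: some even left-rotation of v fits in 8 bits. */
static int is_arm_imm(uint32_t v)
{
    for (unsigned r = 0; r < 32; r += 2) {
        if (rol32(v, r) <= 0xff) {
            return 1;
        }
    }
    return 0;
}
```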
-
-static void tcg_out_set_carry(TCGContext *s)
-{
- tcg_out_movi_apsr_c(s, 1);
-}
-
-static void tgen_and(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_AND, a0, a1, a2, SHIFT_IMM_LSL(0));
-}
-
-static void tgen_andi(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_IK(s, COND_AL, ARITH_AND, ARITH_BIC, a0, a1, a2);
-}
-
-static const TCGOutOpBinary outop_and = {
- .base.static_constraint = C_O1_I2(r, r, rIK),
- .out_rrr = tgen_and,
- .out_rri = tgen_andi,
-};
-
-static void tgen_andc(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_BIC, a0, a1, a2, SHIFT_IMM_LSL(0));
-}
-
-static const TCGOutOpBinary outop_andc = {
- .base.static_constraint = C_O1_I2(r, r, r),
- .out_rrr = tgen_andc,
-};
-
-static void tgen_clz(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_imm(s, COND_AL, ARITH_CMP, 0, a1, 0);
- tcg_out_dat_reg(s, COND_NE, INSN_CLZ, a0, 0, a1, 0);
- tcg_out_mov_reg(s, COND_EQ, a0, a2);
-}
-
-static void tgen_clzi(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- if (a2 == 32) {
- tcg_out_dat_reg(s, COND_AL, INSN_CLZ, a0, 0, a1, 0);
- } else {
- tcg_out_dat_imm(s, COND_AL, ARITH_CMP, 0, a1, 0);
- tcg_out_dat_reg(s, COND_NE, INSN_CLZ, a0, 0, a1, 0);
- tcg_out_movi32(s, COND_EQ, a0, a2);
- }
-}
-
-static const TCGOutOpBinary outop_clz = {
- .base.static_constraint = C_O1_I2(r, r, rIK),
- .out_rrr = tgen_clz,
- .out_rri = tgen_clzi,
-};
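The clz lowering being removed substitutes a2 when the input is zero; the a2 == 32 immediate case needs no compare because Arm's CLZ itself yields 32 for 0. A semantic sketch using the GCC/Clang builtin (helper name illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* TCG clz: count leading zeros of x, else the explicit default.
 * __builtin_clz is a GCC/Clang builtin, undefined for x == 0,
 * hence the guard. */
static uint32_t clz_or(uint32_t x, uint32_t zero_val)
{
    return x ? (uint32_t)__builtin_clz(x) : zero_val;
}
```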
-
-static const TCGOutOpUnary outop_ctpop = {
- .base.static_constraint = C_NotImplemented,
-};
-
-static void tgen_ctz(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, INSN_RBIT, TCG_REG_TMP, 0, a1, 0);
- tgen_clz(s, TCG_TYPE_I32, a0, TCG_REG_TMP, a2);
-}
-
-static void tgen_ctzi(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_reg(s, COND_AL, INSN_RBIT, TCG_REG_TMP, 0, a1, 0);
- tgen_clzi(s, TCG_TYPE_I32, a0, TCG_REG_TMP, a2);
-}
-
-static TCGConstraintSetIndex cset_ctz(TCGType type, unsigned flags)
-{
- return use_armv7_instructions ? C_O1_I2(r, r, rIK) : C_NotImplemented;
-}
-
-static const TCGOutOpBinary outop_ctz = {
- .base.static_constraint = C_Dynamic,
- .base.dynamic_constraint = cset_ctz,
- .out_rrr = tgen_ctz,
- .out_rri = tgen_ctzi,
-};
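The ctz lowering above leans on RBIT (ARMv7+, hence the dynamic constraint): reversing the bit order turns count-trailing-zeros into count-leading-zeros. A sketch with an explicit bit-reversal loop standing in for RBIT:

```c
#include <assert.h>
#include <stdint.h>

/* Bit-reverse a 32-bit value, as Arm's RBIT does. */
static uint32_t rbit32(uint32_t x)
{
    uint32_t r = 0;
    for (int i = 0; i < 32; i++) {
        r = (r << 1) | ((x >> i) & 1);
    }
    return r;
}

/* ctz(x) == clz(rbit(x)) for x != 0; else the explicit default. */
static uint32_t ctz_or(uint32_t x, uint32_t zero_val)
{
    return x ? (uint32_t)__builtin_clz(rbit32(x)) : zero_val;
}
```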
-
-static TCGConstraintSetIndex cset_idiv(TCGType type, unsigned flags)
-{
- return use_idiv_instructions ? C_O1_I2(r, r, r) : C_NotImplemented;
-}
-
-static void tgen_divs(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- /* sdiv */
- tcg_out32(s, 0x0710f010 | (COND_AL << 28) | (a0 << 16) | a1 | (a2 << 8));
-}
-
-static const TCGOutOpBinary outop_divs = {
- .base.static_constraint = C_Dynamic,
- .base.dynamic_constraint = cset_idiv,
- .out_rrr = tgen_divs,
-};
-
-static const TCGOutOpDivRem outop_divs2 = {
- .base.static_constraint = C_NotImplemented,
-};
-
-static void tgen_divu(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- /* udiv */
- tcg_out32(s, 0x0730f010 | (COND_AL << 28) | (a0 << 16) | a1 | (a2 << 8));
-}
-
-static const TCGOutOpBinary outop_divu = {
- .base.static_constraint = C_Dynamic,
- .base.dynamic_constraint = cset_idiv,
- .out_rrr = tgen_divu,
-};
-
-static const TCGOutOpDivRem outop_divu2 = {
- .base.static_constraint = C_NotImplemented,
-};
-
-static const TCGOutOpBinary outop_eqv = {
- .base.static_constraint = C_NotImplemented,
-};
-
-static void tgen_mul(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- /* mul */
- tcg_out32(s, (COND_AL << 28) | 0x90 | (a0 << 16) | (a1 << 8) | a2);
-}
-
-static const TCGOutOpBinary outop_mul = {
- .base.static_constraint = C_O1_I2(r, r, r),
- .out_rrr = tgen_mul,
-};
-
-static void tgen_muls2(TCGContext *s, TCGType type,
- TCGReg rd0, TCGReg rd1, TCGReg rn, TCGReg rm)
-{
- /* smull */
- tcg_out32(s, (COND_AL << 28) | 0x00c00090 |
- (rd1 << 16) | (rd0 << 12) | (rm << 8) | rn);
-}
-
-static const TCGOutOpMul2 outop_muls2 = {
- .base.static_constraint = C_O2_I2(r, r, r, r),
- .out_rrrr = tgen_muls2,
-};
-
-static const TCGOutOpBinary outop_mulsh = {
- .base.static_constraint = C_NotImplemented,
-};
-
-static void tgen_mulu2(TCGContext *s, TCGType type,
- TCGReg rd0, TCGReg rd1, TCGReg rn, TCGReg rm)
-{
- /* umull */
- tcg_out32(s, (COND_AL << 28) | 0x00800090 |
- (rd1 << 16) | (rd0 << 12) | (rm << 8) | rn);
-}
-
-static const TCGOutOpMul2 outop_mulu2 = {
- .base.static_constraint = C_O2_I2(r, r, r, r),
- .out_rrrr = tgen_mulu2,
-};
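The UMULL lowering above implements TCG's mulu2: the full 64-bit unsigned product of two 32-bit operands, split into low and high halves. A minimal semantic sketch (helper name illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* mulu2: 32x32 -> 64 unsigned multiply, returning both halves. */
static void mulu2(uint32_t a, uint32_t b, uint32_t *lo, uint32_t *hi)
{
    uint64_t p = (uint64_t)a * b;
    *lo = (uint32_t)p;
    *hi = (uint32_t)(p >> 32);
}
```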
-
-static const TCGOutOpBinary outop_muluh = {
- .base.static_constraint = C_NotImplemented,
-};
-
-static const TCGOutOpBinary outop_nand = {
- .base.static_constraint = C_NotImplemented,
-};
-
-static const TCGOutOpBinary outop_nor = {
- .base.static_constraint = C_NotImplemented,
-};
-
-static void tgen_or(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_ORR, a0, a1, a2, SHIFT_IMM_LSL(0));
-}
-
-static void tgen_ori(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_imm(s, COND_AL, ARITH_ORR, a0, a1, encode_imm_nofail(a2));
-}
-
-static const TCGOutOpBinary outop_or = {
- .base.static_constraint = C_O1_I2(r, r, rI),
- .out_rrr = tgen_or,
- .out_rri = tgen_ori,
-};
-
-static const TCGOutOpBinary outop_orc = {
- .base.static_constraint = C_NotImplemented,
-};
-
-static const TCGOutOpBinary outop_rems = {
- .base.static_constraint = C_NotImplemented,
-};
-
-static const TCGOutOpBinary outop_remu = {
- .base.static_constraint = C_NotImplemented,
-};
-
-static const TCGOutOpBinary outop_rotl = {
- .base.static_constraint = C_NotImplemented,
-};
-
-static void tgen_rotr(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1, SHIFT_REG_ROR(a2));
-}
-
-static void tgen_rotri(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1, SHIFT_IMM_ROR(a2 & 0x1f));
-}
-
-static const TCGOutOpBinary outop_rotr = {
- .base.static_constraint = C_O1_I2(r, r, ri),
- .out_rrr = tgen_rotr,
- .out_rri = tgen_rotri,
-};
-
-static void tgen_sar(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1, SHIFT_REG_ASR(a2));
-}
-
-static void tgen_sari(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1,
- SHIFT_IMM_ASR(a2 & 0x1f));
-}
-
-static const TCGOutOpBinary outop_sar = {
- .base.static_constraint = C_O1_I2(r, r, ri),
- .out_rrr = tgen_sar,
- .out_rri = tgen_sari,
-};
-
-static void tgen_shl(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1, SHIFT_REG_LSL(a2));
-}
-
-static void tgen_shli(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1,
- SHIFT_IMM_LSL(a2 & 0x1f));
-}
-
-static const TCGOutOpBinary outop_shl = {
- .base.static_constraint = C_O1_I2(r, r, ri),
- .out_rrr = tgen_shl,
- .out_rri = tgen_shli,
-};
-
-static void tgen_shr(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1, SHIFT_REG_LSR(a2));
-}
-
-static void tgen_shri(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1,
- SHIFT_IMM_LSR(a2 & 0x1f));
-}
-
-static const TCGOutOpBinary outop_shr = {
- .base.static_constraint = C_O1_I2(r, r, ri),
- .out_rrr = tgen_shr,
- .out_rri = tgen_shri,
-};
-
-static void tgen_sub(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_SUB, a0, a1, a2, SHIFT_IMM_LSL(0));
-}
-
-static void tgen_subfi(TCGContext *s, TCGType type,
- TCGReg a0, tcg_target_long a1, TCGReg a2)
-{
- tcg_out_dat_imm(s, COND_AL, ARITH_RSB, a0, a2, encode_imm_nofail(a1));
-}
-
-static const TCGOutOpSubtract outop_sub = {
- .base.static_constraint = C_O1_I2(r, rI, r),
- .out_rrr = tgen_sub,
- .out_rir = tgen_subfi,
-};
-
-static void tgen_subbo_rrr(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_SUB | TO_CPSR,
- a0, a1, a2, SHIFT_IMM_LSL(0));
-}
-
-static void tgen_subbo_rri(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_IN(s, COND_AL, ARITH_SUB | TO_CPSR, ARITH_ADD | TO_CPSR,
- a0, a1, a2);
-}
-
-static void tgen_subbo_rir(TCGContext *s, TCGType type,
- TCGReg a0, tcg_target_long a1, TCGReg a2)
-{
- tcg_out_dat_imm(s, COND_AL, ARITH_RSB | TO_CPSR,
- a0, a2, encode_imm_nofail(a1));
-}
-
-static void tgen_subbo_rii(TCGContext *s, TCGType type,
- TCGReg a0, tcg_target_long a1, tcg_target_long a2)
-{
- tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_TMP, a2);
- tgen_subbo_rir(s, TCG_TYPE_I32, a0, a1, TCG_REG_TMP);
-}
-
-static const TCGOutOpAddSubCarry outop_subbo = {
- .base.static_constraint = C_O1_I2(r, rI, rIN),
- .out_rrr = tgen_subbo_rrr,
- .out_rri = tgen_subbo_rri,
- .out_rir = tgen_subbo_rir,
- .out_rii = tgen_subbo_rii,
-};
-
-static void tgen_subbi_rrr(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_SBC,
- a0, a1, a2, SHIFT_IMM_LSL(0));
-}
-
-static void tgen_subbi_rri(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_IK(s, COND_AL, ARITH_SBC, ARITH_ADC, a0, a1, a2);
-}
-
-static void tgen_subbi_rir(TCGContext *s, TCGType type,
- TCGReg a0, tcg_target_long a1, TCGReg a2)
-{
- tcg_out_dat_imm(s, COND_AL, ARITH_RSC, a0, a2, encode_imm_nofail(a1));
-}
-
-static void tgen_subbi_rii(TCGContext *s, TCGType type,
- TCGReg a0, tcg_target_long a1, tcg_target_long a2)
-{
- tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_TMP, a2);
- tgen_subbi_rir(s, TCG_TYPE_I32, a0, a1, TCG_REG_TMP);
-}
-
-static const TCGOutOpAddSubCarry outop_subbi = {
- .base.static_constraint = C_O1_I2(r, rI, rIK),
- .out_rrr = tgen_subbi_rrr,
- .out_rri = tgen_subbi_rri,
- .out_rir = tgen_subbi_rir,
- .out_rii = tgen_subbi_rii,
-};
-
-static void tgen_subbio_rrr(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_SBC | TO_CPSR,
- a0, a1, a2, SHIFT_IMM_LSL(0));
-}
-
-static void tgen_subbio_rri(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_IK(s, COND_AL, ARITH_SBC | TO_CPSR, ARITH_ADC | TO_CPSR,
- a0, a1, a2);
-}
-
-static void tgen_subbio_rir(TCGContext *s, TCGType type,
- TCGReg a0, tcg_target_long a1, TCGReg a2)
-{
- tcg_out_dat_imm(s, COND_AL, ARITH_RSC | TO_CPSR,
- a0, a2, encode_imm_nofail(a1));
-}
-
-static void tgen_subbio_rii(TCGContext *s, TCGType type,
- TCGReg a0, tcg_target_long a1, tcg_target_long a2)
-{
- tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_TMP, a2);
- tgen_subbio_rir(s, TCG_TYPE_I32, a0, a1, TCG_REG_TMP);
-}
-
-static const TCGOutOpAddSubCarry outop_subbio = {
- .base.static_constraint = C_O1_I2(r, rI, rIK),
- .out_rrr = tgen_subbio_rrr,
- .out_rri = tgen_subbio_rri,
- .out_rir = tgen_subbio_rir,
- .out_rii = tgen_subbio_rii,
-};
-
-static void tcg_out_set_borrow(TCGContext *s)
-{
- tcg_out_movi_apsr_c(s, 0); /* borrow = !carry */
-}
-
-static void tgen_xor(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_EOR, a0, a1, a2, SHIFT_IMM_LSL(0));
-}
-
-static void tgen_xori(TCGContext *s, TCGType type,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- tcg_out_dat_imm(s, COND_AL, ARITH_EOR, a0, a1, encode_imm_nofail(a2));
-}
-
-static const TCGOutOpBinary outop_xor = {
- .base.static_constraint = C_O1_I2(r, r, rI),
- .out_rrr = tgen_xor,
- .out_rri = tgen_xori,
-};
-
-static void tgen_bswap16(TCGContext *s, TCGType type,
- TCGReg rd, TCGReg rn, unsigned flags)
-{
- if (flags & TCG_BSWAP_OS) {
- /* revsh */
- tcg_out32(s, 0x06ff0fb0 | (COND_AL << 28) | (rd << 12) | rn);
- return;
- }
-
- /* rev16 */
- tcg_out32(s, 0x06bf0fb0 | (COND_AL << 28) | (rd << 12) | rn);
- if ((flags & (TCG_BSWAP_IZ | TCG_BSWAP_OZ)) == TCG_BSWAP_OZ) {
- tcg_out_ext16u(s, rd, rd);
- }
-}
-
-static const TCGOutOpBswap outop_bswap16 = {
- .base.static_constraint = C_O1_I1(r, r),
- .out_rr = tgen_bswap16,
-};
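For reference, the two instructions used above: REVSH byte-swaps the low halfword and sign-extends (the TCG_BSWAP_OS case), while REV16 swaps bytes within each halfword, after which the high half may still need clearing. A semantic sketch of both (helper names illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* REVSH: byte-swap the low halfword, then sign-extend to 32 bits. */
static int32_t revsh(uint32_t x)
{
    return (int16_t)(((x << 8) | ((x >> 8) & 0xff)) & 0xffff);
}

/* REV16: byte-swap within each 16-bit halfword. */
static uint32_t rev16(uint32_t x)
{
    return ((x & 0x00ff00ffu) << 8) | ((x >> 8) & 0x00ff00ffu);
}
```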
-
-static void tgen_bswap32(TCGContext *s, TCGType type,
- TCGReg rd, TCGReg rn, unsigned flags)
-{
- /* rev */
- tcg_out32(s, 0x06bf0f30 | (COND_AL << 28) | (rd << 12) | rn);
-}
-
-static const TCGOutOpBswap outop_bswap32 = {
- .base.static_constraint = C_O1_I1(r, r),
- .out_rr = tgen_bswap32,
-};
-
-static const TCGOutOpUnary outop_bswap64 = {
- .base.static_constraint = C_NotImplemented,
-};
-
-static void tgen_neg(TCGContext *s, TCGType type, TCGReg a0, TCGReg a1)
-{
- tgen_subfi(s, type, a0, 0, a1);
-}
-
-static const TCGOutOpUnary outop_neg = {
- .base.static_constraint = C_O1_I1(r, r),
- .out_rr = tgen_neg,
-};
-
-static void tgen_not(TCGContext *s, TCGType type, TCGReg a0, TCGReg a1)
-{
- tcg_out_dat_reg(s, COND_AL, ARITH_MVN, a0, 0, a1, SHIFT_IMM_LSL(0));
-}
-
-static const TCGOutOpUnary outop_not = {
- .base.static_constraint = C_O1_I1(r, r),
- .out_rr = tgen_not,
-};
-
-static void tgen_brcond(TCGContext *s, TCGType type, TCGCond cond,
- TCGReg a0, TCGReg a1, TCGLabel *l)
-{
- cond = tgen_cmp(s, cond, a0, a1);
- tcg_out_goto_label(s, tcg_cond_to_arm_cond[cond], l);
-}
-
-static void tgen_brcondi(TCGContext *s, TCGType type, TCGCond cond,
- TCGReg a0, tcg_target_long a1, TCGLabel *l)
-{
- cond = tgen_cmpi(s, cond, a0, a1);
- tcg_out_goto_label(s, tcg_cond_to_arm_cond[cond], l);
-}
-
-static const TCGOutOpBrcond outop_brcond = {
- .base.static_constraint = C_O0_I2(r, rIN),
- .out_rr = tgen_brcond,
- .out_ri = tgen_brcondi,
-};
-
-static void finish_setcond(TCGContext *s, TCGCond cond, TCGReg ret, bool neg)
-{
- tcg_out_movi32(s, tcg_cond_to_arm_cond[tcg_invert_cond(cond)], ret, 0);
- tcg_out_movi32(s, tcg_cond_to_arm_cond[cond], ret, neg ? -1 : 1);
-}
-
-static void tgen_setcond(TCGContext *s, TCGType type, TCGCond cond,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- cond = tgen_cmp(s, cond, a1, a2);
- finish_setcond(s, cond, a0, false);
-}
-
-static void tgen_setcondi(TCGContext *s, TCGType type, TCGCond cond,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- cond = tgen_cmpi(s, cond, a1, a2);
- finish_setcond(s, cond, a0, false);
-}
-
-static const TCGOutOpSetcond outop_setcond = {
- .base.static_constraint = C_O1_I2(r, r, rIN),
- .out_rrr = tgen_setcond,
- .out_rri = tgen_setcondi,
-};
-
-static void tgen_negsetcond(TCGContext *s, TCGType type, TCGCond cond,
- TCGReg a0, TCGReg a1, TCGReg a2)
-{
- cond = tgen_cmp(s, cond, a1, a2);
- finish_setcond(s, cond, a0, true);
-}
-
-static void tgen_negsetcondi(TCGContext *s, TCGType type, TCGCond cond,
- TCGReg a0, TCGReg a1, tcg_target_long a2)
-{
- cond = tgen_cmpi(s, cond, a1, a2);
- finish_setcond(s, cond, a0, true);
-}
-
-static const TCGOutOpSetcond outop_negsetcond = {
- .base.static_constraint = C_O1_I2(r, r, rIN),
- .out_rrr = tgen_negsetcond,
- .out_rri = tgen_negsetcondi,
-};
-
-static void tgen_movcond(TCGContext *s, TCGType type, TCGCond cond,
-                         TCGReg ret, TCGReg c1, TCGArg c2, bool const_c2,
-                         TCGArg vt, bool const_vt, TCGArg vf, bool const_vf)
-{
- cond = tcg_out_cmp(s, cond, c1, c2, const_c2);
- tcg_out_dat_rIK(s, tcg_cond_to_arm_cond[cond], ARITH_MOV, ARITH_MVN,
- ret, 0, vt, const_vt);
-}
-
-static const TCGOutOpMovcond outop_movcond = {
- .base.static_constraint = C_O1_I4(r, r, rIN, rIK, 0),
- .out = tgen_movcond,
-};
-
-static void tgen_brcond2(TCGContext *s, TCGCond cond, TCGReg al, TCGReg ah,
- TCGArg bl, bool const_bl, TCGArg bh, bool const_bh,
- TCGLabel *l)
-{
- cond = tcg_out_cmp2(s, cond, al, ah, bl, const_bl, bh, const_bh);
- tcg_out_goto_label(s, tcg_cond_to_arm_cond[cond], l);
-}
-
-static const TCGOutOpBrcond2 outop_brcond2 = {
- .base.static_constraint = C_O0_I4(r, r, rI, rI),
- .out = tgen_brcond2,
-};
-
-static void tgen_setcond2(TCGContext *s, TCGCond cond, TCGReg ret,
- TCGReg al, TCGReg ah,
- TCGArg bl, bool const_bl,
- TCGArg bh, bool const_bh)
-{
- cond = tcg_out_cmp2(s, cond, al, ah, bl, const_bl, bh, const_bh);
- finish_setcond(s, cond, ret, false);
-}
-
-static const TCGOutOpSetcond2 outop_setcond2 = {
- .base.static_constraint = C_O1_I4(r, r, r, rI, rI),
- .out = tgen_setcond2,
-};
-
-static void tgen_extract2(TCGContext *s, TCGType type, TCGReg a0,
- TCGReg a1, TCGReg a2, unsigned shr)
-{
- /* We can do extract2 in 2 insns, vs the 3 required otherwise. */
- tgen_shli(s, TCG_TYPE_I32, TCG_REG_TMP, a2, 32 - shr);
- tcg_out_dat_reg(s, COND_AL, ARITH_ORR, a0, TCG_REG_TMP,
- a1, SHIFT_IMM_LSR(shr));
-}
-
-static const TCGOutOpExtract2 outop_extract2 = {
- .base.static_constraint = C_O1_I2(r, r, r),
- .out_rrr = tgen_extract2,
-};
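The extract2 lowering above takes 32 bits starting at bit shr of the 64-bit concatenation a2:a1, i.e. (a1 >> shr) | (a2 << (32 - shr)), matching the LSL-into-TMP plus ORR-with-LSR pair. A sketch of the semantics (valid for 0 < shr < 32, as TCG guarantees):

```c
#include <assert.h>
#include <stdint.h>

/* extract2: bits [shr, shr+32) of the 64-bit value hi:lo.
 * Requires 0 < shr < 32 to avoid undefined shifts. */
static uint32_t extract2_32(uint32_t lo, uint32_t hi, unsigned shr)
{
    return (lo >> shr) | (hi << (32 - shr));
}
```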
-
-static void tgen_ld8u(TCGContext *s, TCGType type, TCGReg rd,
- TCGReg rn, ptrdiff_t offset)
-{
- if (offset > 0xfff || offset < -0xfff) {
- tcg_out_movi32(s, COND_AL, TCG_REG_TMP, offset);
- tcg_out_ld8_r(s, COND_AL, rd, rn, TCG_REG_TMP);
- } else {
- tcg_out_ld8_12(s, COND_AL, rd, rn, offset);
- }
-}
-
-static const TCGOutOpLoad outop_ld8u = {
- .base.static_constraint = C_O1_I1(r, r),
- .out = tgen_ld8u,
-};
-
-static void tgen_ld8s(TCGContext *s, TCGType type, TCGReg rd,
- TCGReg rn, ptrdiff_t offset)
-{
- if (offset > 0xff || offset < -0xff) {
- tcg_out_movi32(s, COND_AL, TCG_REG_TMP, offset);
- tcg_out_ld8s_r(s, COND_AL, rd, rn, TCG_REG_TMP);
- } else {
- tcg_out_ld8s_8(s, COND_AL, rd, rn, offset);
- }
-}
-
-static const TCGOutOpLoad outop_ld8s = {
- .base.static_constraint = C_O1_I1(r, r),
- .out = tgen_ld8s,
-};
-
-static void tgen_ld16u(TCGContext *s, TCGType type, TCGReg rd,
- TCGReg rn, ptrdiff_t offset)
-{
- if (offset > 0xff || offset < -0xff) {
- tcg_out_movi32(s, COND_AL, TCG_REG_TMP, offset);
- tcg_out_ld16u_r(s, COND_AL, rd, rn, TCG_REG_TMP);
- } else {
- tcg_out_ld16u_8(s, COND_AL, rd, rn, offset);
- }
-}
-
-static const TCGOutOpLoad outop_ld16u = {
- .base.static_constraint = C_O1_I1(r, r),
- .out = tgen_ld16u,
-};
-
-static void tgen_ld16s(TCGContext *s, TCGType type, TCGReg rd,
- TCGReg rn, ptrdiff_t offset)
-{
- if (offset > 0xff || offset < -0xff) {
- tcg_out_movi32(s, COND_AL, TCG_REG_TMP, offset);
- tcg_out_ld16s_r(s, COND_AL, rd, rn, TCG_REG_TMP);
- } else {
- tcg_out_ld16s_8(s, COND_AL, rd, rn, offset);
- }
-}
-
-static const TCGOutOpLoad outop_ld16s = {
- .base.static_constraint = C_O1_I1(r, r),
- .out = tgen_ld16s,
-};
-
-static void tgen_st8(TCGContext *s, TCGType type, TCGReg rd,
- TCGReg rn, ptrdiff_t offset)
-{
- if (offset > 0xfff || offset < -0xfff) {
- tcg_out_movi32(s, COND_AL, TCG_REG_TMP, offset);
- tcg_out_st8_r(s, COND_AL, rd, rn, TCG_REG_TMP);
- } else {
- tcg_out_st8_12(s, COND_AL, rd, rn, offset);
- }
-}
-
-static const TCGOutOpStore outop_st8 = {
- .base.static_constraint = C_O0_I2(r, r),
- .out_r = tgen_st8,
-};
-
-static void tgen_st16(TCGContext *s, TCGType type, TCGReg rd,
- TCGReg rn, ptrdiff_t offset)
-{
- if (offset > 0xff || offset < -0xff) {
- tcg_out_movi32(s, COND_AL, TCG_REG_TMP, offset);
- tcg_out_st16_r(s, COND_AL, rd, rn, TCG_REG_TMP);
- } else {
- tcg_out_st16_8(s, COND_AL, rd, rn, offset);
- }
-}
-
-static const TCGOutOpStore outop_st16 = {
- .base.static_constraint = C_O0_I2(r, r),
- .out_r = tgen_st16,
-};
-
-static const TCGOutOpStore outop_st = {
- .base.static_constraint = C_O0_I2(r, r),
- .out_r = tcg_out_st,
-};
-
-static TCGConstraintSetIndex
-tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
-{
- switch (op) {
- case INDEX_op_st_vec:
- return C_O0_I2(w, r);
- case INDEX_op_ld_vec:
- case INDEX_op_dupm_vec:
- return C_O1_I1(w, r);
- case INDEX_op_dup_vec:
- return C_O1_I1(w, wr);
- case INDEX_op_abs_vec:
- case INDEX_op_neg_vec:
- case INDEX_op_not_vec:
- case INDEX_op_shli_vec:
- case INDEX_op_shri_vec:
- case INDEX_op_sari_vec:
- return C_O1_I1(w, w);
- case INDEX_op_dup2_vec:
- case INDEX_op_add_vec:
- case INDEX_op_mul_vec:
- case INDEX_op_smax_vec:
- case INDEX_op_smin_vec:
- case INDEX_op_ssadd_vec:
- case INDEX_op_sssub_vec:
- case INDEX_op_sub_vec:
- case INDEX_op_umax_vec:
- case INDEX_op_umin_vec:
- case INDEX_op_usadd_vec:
- case INDEX_op_ussub_vec:
- case INDEX_op_xor_vec:
- case INDEX_op_arm_sshl_vec:
- case INDEX_op_arm_ushl_vec:
- return C_O1_I2(w, w, w);
- case INDEX_op_arm_sli_vec:
- return C_O1_I2(w, 0, w);
- case INDEX_op_or_vec:
- case INDEX_op_andc_vec:
- return C_O1_I2(w, w, wO);
- case INDEX_op_and_vec:
- case INDEX_op_orc_vec:
- return C_O1_I2(w, w, wV);
- case INDEX_op_cmp_vec:
- return C_O1_I2(w, w, wZ);
- case INDEX_op_bitsel_vec:
- return C_O1_I3(w, w, w, w);
- default:
- return C_NotImplemented;
- }
-}
-
-static void tcg_target_init(TCGContext *s)
-{
- /*
- * Only probe for the platform and capabilities if we haven't already
- * determined maximum values at compile time.
- */
-#if !defined(use_idiv_instructions) || !defined(use_neon_instructions)
- {
- unsigned long hwcap = qemu_getauxval(AT_HWCAP);
-#ifndef use_idiv_instructions
- use_idiv_instructions = (hwcap & HWCAP_ARM_IDIVA) != 0;
-#endif
-#ifndef use_neon_instructions
- use_neon_instructions = (hwcap & HWCAP_ARM_NEON) != 0;
-#endif
- }
-#endif
-
- if (__ARM_ARCH < 7) {
- const char *pl = (const char *)qemu_getauxval(AT_PLATFORM);
- if (pl != NULL && pl[0] == 'v' && pl[1] >= '4' && pl[1] <= '9') {
- arm_arch = pl[1] - '0';
- }
-
- if (arm_arch < 6) {
- error_report("TCG: ARMv%d is unsupported; exiting", arm_arch);
- exit(EXIT_FAILURE);
- }
- }
-
- tcg_target_available_regs[TCG_TYPE_I32] = ALL_GENERAL_REGS;
-
- tcg_target_call_clobber_regs = 0;
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R0);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R1);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R2);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R3);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R12);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R14);
-
- if (use_neon_instructions) {
- tcg_target_available_regs[TCG_TYPE_V64] = ALL_VECTOR_REGS;
- tcg_target_available_regs[TCG_TYPE_V128] = ALL_VECTOR_REGS;
-
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_Q0);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_Q1);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_Q2);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_Q3);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_Q8);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_Q9);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_Q10);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_Q11);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_Q12);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_Q13);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_Q14);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_Q15);
- }
-
- s->reserved_regs = 0;
- tcg_regset_set_reg(s->reserved_regs, TCG_REG_CALL_STACK);
- tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP);
- tcg_regset_set_reg(s->reserved_regs, TCG_REG_PC);
- tcg_regset_set_reg(s->reserved_regs, TCG_VEC_TMP);
-}
-
-static void tcg_out_ld(TCGContext *s, TCGType type, TCGReg arg,
- TCGReg arg1, intptr_t arg2)
-{
- switch (type) {
- case TCG_TYPE_I32:
- tcg_out_ld32u(s, COND_AL, arg, arg1, arg2);
- return;
- case TCG_TYPE_V64:
- /* regs 1; size 8; align 8 */
- tcg_out_vldst(s, INSN_VLD1 | 0x7d0, arg, arg1, arg2);
- return;
- case TCG_TYPE_V128:
- /*
- * We have only 8-byte alignment for the stack per the ABI.
- * Rather than dynamically re-align the stack, it's easier
- * to simply not request alignment beyond that. So:
- * regs 2; size 8; align 8
- */
- tcg_out_vldst(s, INSN_VLD1 | 0xad0, arg, arg1, arg2);
- return;
- default:
- g_assert_not_reached();
- }
-}
-
-static void tcg_out_st(TCGContext *s, TCGType type, TCGReg arg,
- TCGReg arg1, intptr_t arg2)
-{
- switch (type) {
- case TCG_TYPE_I32:
- tcg_out_st32(s, COND_AL, arg, arg1, arg2);
- return;
- case TCG_TYPE_V64:
- /* regs 1; size 8; align 8 */
- tcg_out_vldst(s, INSN_VST1 | 0x7d0, arg, arg1, arg2);
- return;
- case TCG_TYPE_V128:
- /* See tcg_out_ld re alignment: regs 2; size 8; align 8 */
- tcg_out_vldst(s, INSN_VST1 | 0xad0, arg, arg1, arg2);
- return;
- default:
- g_assert_not_reached();
- }
-}
-
-static bool tcg_out_sti(TCGContext *s, TCGType type, TCGArg val,
- TCGReg base, intptr_t ofs)
-{
- return false;
-}
-
-static bool tcg_out_mov(TCGContext *s, TCGType type, TCGReg ret, TCGReg arg)
-{
- if (ret == arg) {
- return true;
- }
- switch (type) {
- case TCG_TYPE_I32:
- if (ret < TCG_REG_Q0 && arg < TCG_REG_Q0) {
- tcg_out_mov_reg(s, COND_AL, ret, arg);
- return true;
- }
- return false;
-
- case TCG_TYPE_V64:
- case TCG_TYPE_V128:
- /* "VMOV D,N" is an alias for "VORR D,N,N". */
- tcg_out_vreg3(s, INSN_VORR, type - TCG_TYPE_V64, 0, ret, arg, arg);
- return true;
-
- default:
- g_assert_not_reached();
- }
-}
-
-static void tcg_out_movi(TCGContext *s, TCGType type,
- TCGReg ret, tcg_target_long arg)
-{
- tcg_debug_assert(type == TCG_TYPE_I32);
- tcg_debug_assert(ret < TCG_REG_Q0);
- tcg_out_movi32(s, COND_AL, ret, arg);
-}
-
-static bool tcg_out_xchg(TCGContext *s, TCGType type, TCGReg r1, TCGReg r2)
-{
- return false;
-}
-
-static void tcg_out_addi_ptr(TCGContext *s, TCGReg rd, TCGReg rs,
- tcg_target_long imm)
-{
- int enc, opc = ARITH_ADD;
-
- /* All of the easiest immediates to encode are positive. */
- if (imm < 0) {
- imm = -imm;
- opc = ARITH_SUB;
- }
- enc = encode_imm(imm);
- if (enc >= 0) {
- tcg_out_dat_imm(s, COND_AL, opc, rd, rs, enc);
- } else {
- tcg_out_movi32(s, COND_AL, TCG_REG_TMP, imm);
- tcg_out_dat_reg(s, COND_AL, opc, rd, rs,
- TCG_REG_TMP, SHIFT_IMM_LSL(0));
- }
-}
-
-/* Type is always V128, with I64 elements. */
-static void tcg_out_dup2_vec(TCGContext *s, TCGReg rd, TCGReg rl, TCGReg rh)
-{
- /* Move high element into place first. */
- /* VMOV Dd+1, Ds */
- tcg_out_vreg3(s, INSN_VORR | (1 << 12), 0, 0, rd, rh, rh);
- /* Move low element into place; tcg_out_mov will check for nop. */
- tcg_out_mov(s, TCG_TYPE_V64, rd, rl);
-}
-
-static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
- TCGReg rd, TCGReg rs)
-{
- int q = type - TCG_TYPE_V64;
-
- if (vece == MO_64) {
- if (type == TCG_TYPE_V128) {
- tcg_out_dup2_vec(s, rd, rs, rs);
- } else {
- tcg_out_mov(s, TCG_TYPE_V64, rd, rs);
- }
- } else if (rs < TCG_REG_Q0) {
- int b = (vece == MO_8);
- int e = (vece == MO_16);
- tcg_out32(s, INSN_VDUP_G | (b << 22) | (q << 21) | (e << 5) |
- encode_vn(rd) | (rs << 12));
- } else {
- int imm4 = 1 << vece;
- tcg_out32(s, INSN_VDUP_S | (imm4 << 16) | (q << 6) |
- encode_vd(rd) | encode_vm(rs));
- }
- return true;
-}
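The dup lowering above replicates one element of width 8 << vece bits across the vector. A semantic sketch for a 64-bit lane (illustrative helper, not the VDUP encoding itself):

```c
#include <assert.h>
#include <stdint.h>

/* Replicate a (8 << vece)-bit element across 64 bits, vece in [0, 3]. */
static uint64_t dup64(unsigned vece, uint64_t val)
{
    unsigned bits = 8u << vece;
    uint64_t mask = bits == 64 ? ~0ull : (1ull << bits) - 1;
    uint64_t r = val & mask;

    for (unsigned i = bits; i < 64; i <<= 1) {
        r |= r << i;   /* double the replicated width each step */
    }
    return r;
}
```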
-
-static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
- TCGReg rd, TCGReg base, intptr_t offset)
-{
- if (vece == MO_64) {
- tcg_out_ld(s, TCG_TYPE_V64, rd, base, offset);
- if (type == TCG_TYPE_V128) {
- tcg_out_dup2_vec(s, rd, rd, rd);
- }
- } else {
- int q = type - TCG_TYPE_V64;
- tcg_out_vldst(s, INSN_VLD1R | (vece << 6) | (q << 5),
- rd, base, offset);
- }
- return true;
-}
-
-static void tcg_out_dupi_vec(TCGContext *s, TCGType type, unsigned vece,
- TCGReg rd, int64_t v64)
-{
- int q = type - TCG_TYPE_V64;
- int cmode, imm8, i;
-
- /* Test all bytes equal first. */
- if (vece == MO_8) {
- tcg_out_vmovi(s, rd, q, 0, 0xe, v64);
- return;
- }
-
- /*
- * Test all bytes 0x00 or 0xff second. This can match cases that
- * might otherwise take 2 or 3 insns for MO_16 or MO_32 below.
- */
- for (i = imm8 = 0; i < 8; i++) {
- uint8_t byte = v64 >> (i * 8);
- if (byte == 0xff) {
- imm8 |= 1 << i;
- } else if (byte != 0) {
- goto fail_bytes;
- }
- }
- tcg_out_vmovi(s, rd, q, 1, 0xe, imm8);
- return;
- fail_bytes:
-
- /*
- * Tests for various replications. For each element width, if we
- * cannot find an expansion there's no point checking a larger
- * width because we already know by replication it cannot match.
- */
- if (vece == MO_16) {
- uint16_t v16 = v64;
-
- if (is_shimm16(v16, &cmode, &imm8)) {
- tcg_out_vmovi(s, rd, q, 0, cmode, imm8);
- return;
- }
- if (is_shimm16(~v16, &cmode, &imm8)) {
- tcg_out_vmovi(s, rd, q, 1, cmode, imm8);
- return;
- }
-
- /*
- * Otherwise, all remaining constants can be loaded in two insns:
- * rd = v16 & 0xff, rd |= v16 & 0xff00.
- */
- tcg_out_vmovi(s, rd, q, 0, 0x8, v16 & 0xff);
- tcg_out_vmovi(s, rd, q, 0, 0xb, v16 >> 8); /* VORRI */
- return;
- }
-
- if (vece == MO_32) {
- uint32_t v32 = v64;
-
- if (is_shimm32(v32, &cmode, &imm8) ||
- is_soimm32(v32, &cmode, &imm8)) {
- tcg_out_vmovi(s, rd, q, 0, cmode, imm8);
- return;
- }
- if (is_shimm32(~v32, &cmode, &imm8) ||
- is_soimm32(~v32, &cmode, &imm8)) {
- tcg_out_vmovi(s, rd, q, 1, cmode, imm8);
- return;
- }
-
- /*
- * Restrict the set of constants to those we can load with
- * two instructions. Others we load from the pool.
- */
- i = is_shimm32_pair(v32, &cmode, &imm8);
- if (i) {
- tcg_out_vmovi(s, rd, q, 0, cmode, imm8);
- tcg_out_vmovi(s, rd, q, 0, i | 1, extract32(v32, i * 4, 8));
- return;
- }
- i = is_shimm32_pair(~v32, &cmode, &imm8);
- if (i) {
- tcg_out_vmovi(s, rd, q, 1, cmode, imm8);
- tcg_out_vmovi(s, rd, q, 1, i | 1, extract32(~v32, i * 4, 8));
- return;
- }
- }
-
- /*
- * As a last resort, load from the constant pool.
- */
- if (!q || vece == MO_64) {
- new_pool_l2(s, R_ARM_PC11, s->code_ptr, 0, v64, v64 >> 32);
- /* VLDR Dd, [pc + offset] */
- tcg_out32(s, INSN_VLDR_D | encode_vd(rd) | (0xf << 16));
- if (q) {
- tcg_out_dup2_vec(s, rd, rd, rd);
- }
- } else {
- new_pool_label(s, (uint32_t)v64, R_ARM_PC8, s->code_ptr, 0);
- /* add tmp, pc, offset */
- tcg_out_dat_imm(s, COND_AL, ARITH_ADD, TCG_REG_TMP, TCG_REG_PC, 0);
- tcg_out_dupm_vec(s, type, MO_32, rd, TCG_REG_TMP, 0);
- }
-}
-
-static const ARMInsn vec_cmp_insn[16] = {
- [TCG_COND_EQ] = INSN_VCEQ,
- [TCG_COND_GT] = INSN_VCGT,
- [TCG_COND_GE] = INSN_VCGE,
- [TCG_COND_GTU] = INSN_VCGT_U,
- [TCG_COND_GEU] = INSN_VCGE_U,
-};
-
-static const ARMInsn vec_cmp0_insn[16] = {
- [TCG_COND_EQ] = INSN_VCEQ0,
- [TCG_COND_GT] = INSN_VCGT0,
- [TCG_COND_GE] = INSN_VCGE0,
- [TCG_COND_LT] = INSN_VCLT0,
- [TCG_COND_LE] = INSN_VCLE0,
-};
-
-static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
- unsigned vecl, unsigned vece,
- const TCGArg args[TCG_MAX_OP_ARGS],
- const int const_args[TCG_MAX_OP_ARGS])
-{
- TCGType type = vecl + TCG_TYPE_V64;
- unsigned q = vecl;
- TCGArg a0, a1, a2, a3;
- int cmode, imm8;
-
- a0 = args[0];
- a1 = args[1];
- a2 = args[2];
-
- switch (opc) {
- case INDEX_op_ld_vec:
- tcg_out_ld(s, type, a0, a1, a2);
- return;
- case INDEX_op_st_vec:
- tcg_out_st(s, type, a0, a1, a2);
- return;
- case INDEX_op_dupm_vec:
- tcg_out_dupm_vec(s, type, vece, a0, a1, a2);
- return;
- case INDEX_op_dup2_vec:
- tcg_out_dup2_vec(s, a0, a1, a2);
- return;
- case INDEX_op_abs_vec:
- tcg_out_vreg2(s, INSN_VABS, q, vece, a0, a1);
- return;
- case INDEX_op_neg_vec:
- tcg_out_vreg2(s, INSN_VNEG, q, vece, a0, a1);
- return;
- case INDEX_op_not_vec:
- tcg_out_vreg2(s, INSN_VMVN, q, 0, a0, a1);
- return;
- case INDEX_op_add_vec:
- tcg_out_vreg3(s, INSN_VADD, q, vece, a0, a1, a2);
- return;
- case INDEX_op_mul_vec:
- tcg_out_vreg3(s, INSN_VMUL, q, vece, a0, a1, a2);
- return;
- case INDEX_op_smax_vec:
- tcg_out_vreg3(s, INSN_VMAX, q, vece, a0, a1, a2);
- return;
- case INDEX_op_smin_vec:
- tcg_out_vreg3(s, INSN_VMIN, q, vece, a0, a1, a2);
- return;
- case INDEX_op_sub_vec:
- tcg_out_vreg3(s, INSN_VSUB, q, vece, a0, a1, a2);
- return;
- case INDEX_op_ssadd_vec:
- tcg_out_vreg3(s, INSN_VQADD, q, vece, a0, a1, a2);
- return;
- case INDEX_op_sssub_vec:
- tcg_out_vreg3(s, INSN_VQSUB, q, vece, a0, a1, a2);
- return;
- case INDEX_op_umax_vec:
- tcg_out_vreg3(s, INSN_VMAX_U, q, vece, a0, a1, a2);
- return;
- case INDEX_op_umin_vec:
- tcg_out_vreg3(s, INSN_VMIN_U, q, vece, a0, a1, a2);
- return;
- case INDEX_op_usadd_vec:
- tcg_out_vreg3(s, INSN_VQADD_U, q, vece, a0, a1, a2);
- return;
- case INDEX_op_ussub_vec:
- tcg_out_vreg3(s, INSN_VQSUB_U, q, vece, a0, a1, a2);
- return;
- case INDEX_op_xor_vec:
- tcg_out_vreg3(s, INSN_VEOR, q, 0, a0, a1, a2);
- return;
- case INDEX_op_arm_sshl_vec:
- /*
- * Note that Vm is the data and Vn is the shift count,
- * therefore the arguments appear reversed.
- */
- tcg_out_vreg3(s, INSN_VSHL_S, q, vece, a0, a2, a1);
- return;
- case INDEX_op_arm_ushl_vec:
- /* See above. */
- tcg_out_vreg3(s, INSN_VSHL_U, q, vece, a0, a2, a1);
- return;
- case INDEX_op_shli_vec:
- tcg_out_vshifti(s, INSN_VSHLI, q, a0, a1, a2 + (8 << vece));
- return;
- case INDEX_op_shri_vec:
- tcg_out_vshifti(s, INSN_VSHRI, q, a0, a1, (16 << vece) - a2);
- return;
- case INDEX_op_sari_vec:
- tcg_out_vshifti(s, INSN_VSARI, q, a0, a1, (16 << vece) - a2);
- return;
- case INDEX_op_arm_sli_vec:
- tcg_out_vshifti(s, INSN_VSLI, q, a0, a2, args[3] + (8 << vece));
- return;
-
- case INDEX_op_andc_vec:
- if (!const_args[2]) {
- tcg_out_vreg3(s, INSN_VBIC, q, 0, a0, a1, a2);
- return;
- }
- a2 = ~a2;
- /* fall through */
- case INDEX_op_and_vec:
- if (const_args[2]) {
- is_shimm1632(~a2, &cmode, &imm8);
- if (a0 == a1) {
- tcg_out_vmovi(s, a0, q, 1, cmode | 1, imm8); /* VBICI */
- return;
- }
- tcg_out_vmovi(s, a0, q, 1, cmode, imm8); /* VMVNI */
- a2 = a0;
- }
- tcg_out_vreg3(s, INSN_VAND, q, 0, a0, a1, a2);
- return;
-
- case INDEX_op_orc_vec:
- if (!const_args[2]) {
- tcg_out_vreg3(s, INSN_VORN, q, 0, a0, a1, a2);
- return;
- }
- a2 = ~a2;
- /* fall through */
- case INDEX_op_or_vec:
- if (const_args[2]) {
- is_shimm1632(a2, &cmode, &imm8);
- if (a0 == a1) {
- tcg_out_vmovi(s, a0, q, 0, cmode | 1, imm8); /* VORRI */
- return;
- }
- tcg_out_vmovi(s, a0, q, 0, cmode, imm8); /* VMOVI */
- a2 = a0;
- }
- tcg_out_vreg3(s, INSN_VORR, q, 0, a0, a1, a2);
- return;
-
- case INDEX_op_cmp_vec:
- {
- TCGCond cond = args[3];
- ARMInsn insn;
-
- switch (cond) {
- case TCG_COND_NE:
- if (const_args[2]) {
- tcg_out_vreg3(s, INSN_VTST, q, vece, a0, a1, a1);
- } else {
- tcg_out_vreg3(s, INSN_VCEQ, q, vece, a0, a1, a2);
- tcg_out_vreg2(s, INSN_VMVN, q, 0, a0, a0);
- }
- break;
-
- case TCG_COND_TSTNE:
- case TCG_COND_TSTEQ:
- if (const_args[2]) {
- /* (x & 0) == 0 */
- tcg_out_dupi_vec(s, type, MO_8, a0,
- -(cond == TCG_COND_TSTEQ));
- break;
- }
- tcg_out_vreg3(s, INSN_VTST, q, vece, a0, a1, a2);
- if (cond == TCG_COND_TSTEQ) {
- tcg_out_vreg2(s, INSN_VMVN, q, 0, a0, a0);
- }
- break;
-
- default:
- if (const_args[2]) {
- insn = vec_cmp0_insn[cond];
- if (insn) {
- tcg_out_vreg2(s, insn, q, vece, a0, a1);
- return;
- }
- tcg_out_dupi_vec(s, type, MO_8, TCG_VEC_TMP, 0);
- a2 = TCG_VEC_TMP;
- }
- insn = vec_cmp_insn[cond];
- if (insn == 0) {
- TCGArg t;
- t = a1, a1 = a2, a2 = t;
- cond = tcg_swap_cond(cond);
- insn = vec_cmp_insn[cond];
- tcg_debug_assert(insn != 0);
- }
- tcg_out_vreg3(s, insn, q, vece, a0, a1, a2);
- break;
- }
- }
- return;
-
- case INDEX_op_bitsel_vec:
- a3 = args[3];
- if (a0 == a3) {
- tcg_out_vreg3(s, INSN_VBIT, q, 0, a0, a2, a1);
- } else if (a0 == a2) {
- tcg_out_vreg3(s, INSN_VBIF, q, 0, a0, a3, a1);
- } else {
- tcg_out_mov(s, type, a0, a1);
- tcg_out_vreg3(s, INSN_VBSL, q, 0, a0, a2, a3);
- }
- return;
-
- case INDEX_op_mov_vec: /* Always emitted via tcg_out_mov. */
- case INDEX_op_dup_vec: /* Always emitted via tcg_out_dup_vec. */
- default:
- g_assert_not_reached();
- }
-}
-
-int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
-{
- switch (opc) {
- case INDEX_op_add_vec:
- case INDEX_op_sub_vec:
- case INDEX_op_and_vec:
- case INDEX_op_andc_vec:
- case INDEX_op_or_vec:
- case INDEX_op_orc_vec:
- case INDEX_op_xor_vec:
- case INDEX_op_not_vec:
- case INDEX_op_shli_vec:
- case INDEX_op_shri_vec:
- case INDEX_op_sari_vec:
- case INDEX_op_ssadd_vec:
- case INDEX_op_sssub_vec:
- case INDEX_op_usadd_vec:
- case INDEX_op_ussub_vec:
- case INDEX_op_bitsel_vec:
- return 1;
- case INDEX_op_abs_vec:
- case INDEX_op_cmp_vec:
- case INDEX_op_mul_vec:
- case INDEX_op_neg_vec:
- case INDEX_op_smax_vec:
- case INDEX_op_smin_vec:
- case INDEX_op_umax_vec:
- case INDEX_op_umin_vec:
- return vece < MO_64;
- case INDEX_op_shlv_vec:
- case INDEX_op_shrv_vec:
- case INDEX_op_sarv_vec:
- case INDEX_op_rotli_vec:
- case INDEX_op_rotlv_vec:
- case INDEX_op_rotrv_vec:
- return -1;
- default:
- return 0;
- }
-}
-
-void tcg_expand_vec_op(TCGOpcode opc, TCGType type, unsigned vece,
- TCGArg a0, ...)
-{
- va_list va;
- TCGv_vec v0, v1, v2, t1, t2, c1;
- TCGArg a2;
-
- va_start(va, a0);
- v0 = temp_tcgv_vec(arg_temp(a0));
- v1 = temp_tcgv_vec(arg_temp(va_arg(va, TCGArg)));
- a2 = va_arg(va, TCGArg);
- va_end(va);
-
- switch (opc) {
- case INDEX_op_shlv_vec:
- /*
- * Merely propagate shlv_vec to arm_ushl_vec.
- * In this way we don't set TCG_TARGET_HAS_shv_vec
- * because everything is done via expansion.
- */
- v2 = temp_tcgv_vec(arg_temp(a2));
- vec_gen_3(INDEX_op_arm_ushl_vec, type, vece, tcgv_vec_arg(v0),
- tcgv_vec_arg(v1), tcgv_vec_arg(v2));
- break;
-
- case INDEX_op_shrv_vec:
- case INDEX_op_sarv_vec:
- /* Right shifts are negative left shifts for NEON. */
- v2 = temp_tcgv_vec(arg_temp(a2));
- t1 = tcg_temp_new_vec(type);
- tcg_gen_neg_vec(vece, t1, v2);
- if (opc == INDEX_op_shrv_vec) {
- opc = INDEX_op_arm_ushl_vec;
- } else {
- opc = INDEX_op_arm_sshl_vec;
- }
- vec_gen_3(opc, type, vece, tcgv_vec_arg(v0),
- tcgv_vec_arg(v1), tcgv_vec_arg(t1));
- tcg_temp_free_vec(t1);
- break;
-
- case INDEX_op_rotli_vec:
- t1 = tcg_temp_new_vec(type);
- tcg_gen_shri_vec(vece, t1, v1, -a2 & ((8 << vece) - 1));
- vec_gen_4(INDEX_op_arm_sli_vec, type, vece,
- tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(v1), a2);
- tcg_temp_free_vec(t1);
- break;
-
- case INDEX_op_rotlv_vec:
- v2 = temp_tcgv_vec(arg_temp(a2));
- t1 = tcg_temp_new_vec(type);
- c1 = tcg_constant_vec(type, vece, 8 << vece);
- tcg_gen_sub_vec(vece, t1, v2, c1);
- /* Right shifts are negative left shifts for NEON. */
- vec_gen_3(INDEX_op_arm_ushl_vec, type, vece, tcgv_vec_arg(t1),
- tcgv_vec_arg(v1), tcgv_vec_arg(t1));
- vec_gen_3(INDEX_op_arm_ushl_vec, type, vece, tcgv_vec_arg(v0),
- tcgv_vec_arg(v1), tcgv_vec_arg(v2));
- tcg_gen_or_vec(vece, v0, v0, t1);
- tcg_temp_free_vec(t1);
- break;
-
- case INDEX_op_rotrv_vec:
- v2 = temp_tcgv_vec(arg_temp(a2));
- t1 = tcg_temp_new_vec(type);
- t2 = tcg_temp_new_vec(type);
- c1 = tcg_constant_vec(type, vece, 8 << vece);
- tcg_gen_neg_vec(vece, t1, v2);
- tcg_gen_sub_vec(vece, t2, c1, v2);
- /* Right shifts are negative left shifts for NEON. */
- vec_gen_3(INDEX_op_arm_ushl_vec, type, vece, tcgv_vec_arg(t1),
- tcgv_vec_arg(v1), tcgv_vec_arg(t1));
- vec_gen_3(INDEX_op_arm_ushl_vec, type, vece, tcgv_vec_arg(t2),
- tcgv_vec_arg(v1), tcgv_vec_arg(t2));
- tcg_gen_or_vec(vece, v0, t1, t2);
- tcg_temp_free_vec(t1);
- tcg_temp_free_vec(t2);
- break;
-
- default:
- g_assert_not_reached();
- }
-}
-
-static void tcg_out_nop_fill(tcg_insn_unit *p, int count)
-{
- int i;
- for (i = 0; i < count; ++i) {
- p[i] = INSN_NOP;
- }
-}
-
-/* Compute frame size via macros, to share between tcg_target_qemu_prologue
- and tcg_register_jit. */
-
-#define PUSH_SIZE ((11 - 4 + 1 + 1) * sizeof(tcg_target_long))
-
-#define FRAME_SIZE \
- ((PUSH_SIZE \
- + TCG_STATIC_CALL_ARGS_SIZE \
- + CPU_TEMP_BUF_NLONGS * sizeof(long) \
- + TCG_TARGET_STACK_ALIGN - 1) \
- & -TCG_TARGET_STACK_ALIGN)
-
-#define STACK_ADDEND (FRAME_SIZE - PUSH_SIZE)
-
-static void tcg_target_qemu_prologue(TCGContext *s)
-{
- /* Calling convention requires us to save r4-r11 and lr. */
- /* stmdb sp!, { r4 - r11, lr } */
- tcg_out_ldstm(s, COND_AL, INSN_STMDB, TCG_REG_CALL_STACK,
- (1 << TCG_REG_R4) | (1 << TCG_REG_R5) | (1 << TCG_REG_R6) |
- (1 << TCG_REG_R7) | (1 << TCG_REG_R8) | (1 << TCG_REG_R9) |
- (1 << TCG_REG_R10) | (1 << TCG_REG_R11) | (1 << TCG_REG_R14));
-
- /* Reserve callee argument and tcg temp space. */
- tcg_out_dat_rI(s, COND_AL, ARITH_SUB, TCG_REG_CALL_STACK,
- TCG_REG_CALL_STACK, STACK_ADDEND, 1);
- tcg_set_frame(s, TCG_REG_CALL_STACK, TCG_STATIC_CALL_ARGS_SIZE,
- CPU_TEMP_BUF_NLONGS * sizeof(long));
-
- tcg_out_mov(s, TCG_TYPE_PTR, TCG_AREG0, tcg_target_call_iarg_regs[0]);
-
- if (!tcg_use_softmmu && guest_base) {
- tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_GUEST_BASE, guest_base);
- tcg_regset_set_reg(s->reserved_regs, TCG_REG_GUEST_BASE);
- }
-
- tcg_out_b_reg(s, COND_AL, tcg_target_call_iarg_regs[1]);
-
- /*
- * Return path for goto_ptr. Set return value to 0, a-la exit_tb,
- * and fall through to the rest of the epilogue.
- */
- tcg_code_gen_epilogue = tcg_splitwx_to_rx(s->code_ptr);
- tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R0, 0);
- tcg_out_epilogue(s);
-}
-
-static void tcg_out_epilogue(TCGContext *s)
-{
- /* Release local stack frame. */
- tcg_out_dat_rI(s, COND_AL, ARITH_ADD, TCG_REG_CALL_STACK,
- TCG_REG_CALL_STACK, STACK_ADDEND, 1);
-
- /* ldmia sp!, { r4 - r11, pc } */
- tcg_out_ldstm(s, COND_AL, INSN_LDMIA, TCG_REG_CALL_STACK,
- (1 << TCG_REG_R4) | (1 << TCG_REG_R5) | (1 << TCG_REG_R6) |
- (1 << TCG_REG_R7) | (1 << TCG_REG_R8) | (1 << TCG_REG_R9) |
- (1 << TCG_REG_R10) | (1 << TCG_REG_R11) | (1 << TCG_REG_PC));
-}
-
-static void tcg_out_tb_start(TCGContext *s)
-{
- /* nothing to do */
-}
-
-typedef struct {
- DebugFrameHeader h;
- uint8_t fde_def_cfa[4];
- uint8_t fde_reg_ofs[18];
-} DebugFrame;
-
-#define ELF_HOST_MACHINE EM_ARM
-
-/* We're expecting a 2 byte uleb128 encoded value. */
-QEMU_BUILD_BUG_ON(FRAME_SIZE >= (1 << 14));
-
-static const DebugFrame debug_frame = {
- .h.cie.len = sizeof(DebugFrameCIE)-4, /* length after .len member */
- .h.cie.id = -1,
- .h.cie.version = 1,
- .h.cie.code_align = 1,
- .h.cie.data_align = 0x7c, /* sleb128 -4 */
- .h.cie.return_column = 14,
-
- /* Total FDE size does not include the "len" member. */
- .h.fde.len = sizeof(DebugFrame) - offsetof(DebugFrame, h.fde.cie_offset),
-
- .fde_def_cfa = {
- 12, 13, /* DW_CFA_def_cfa sp, ... */
- (FRAME_SIZE & 0x7f) | 0x80, /* ... uleb128 FRAME_SIZE */
- (FRAME_SIZE >> 7)
- },
- .fde_reg_ofs = {
- /* The following must match the stmdb in the prologue. */
- 0x8e, 1, /* DW_CFA_offset, lr, -4 */
- 0x8b, 2, /* DW_CFA_offset, r11, -8 */
- 0x8a, 3, /* DW_CFA_offset, r10, -12 */
- 0x89, 4, /* DW_CFA_offset, r9, -16 */
- 0x88, 5, /* DW_CFA_offset, r8, -20 */
- 0x87, 6, /* DW_CFA_offset, r7, -24 */
- 0x86, 7, /* DW_CFA_offset, r6, -28 */
- 0x85, 8, /* DW_CFA_offset, r5, -32 */
- 0x84, 9, /* DW_CFA_offset, r4, -36 */
- }
-};
-
-void tcg_register_jit(const void *buf, size_t buf_size)
-{
- tcg_register_jit_int(buf, buf_size, &debug_frame, sizeof(debug_frame));
-}
--
2.43.0
^ permalink raw reply related [flat|nested] 57+ messages in thread
* [PULL 07/54] bsd-user: Fix __i386__ test for TARGET_HAS_STAT_TIME_T_EXT
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (5 preceding siblings ...)
2026-01-18 22:03 ` [PULL 06/54] *: Remove arm host support Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-19 8:01 ` Pierrick Bouvier
2026-01-18 22:03 ` [PULL 08/54] *: Remove __i386__ tests Richard Henderson
` (47 subsequent siblings)
54 siblings, 1 reply; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel
Cc: Kyle Evans, Warner Losh, Philippe Mathieu-Daudé,
Pierrick Bouvier
The target test is TARGET_I386, not __i386__.
Cc: Kyle Evans <kevans@freebsd.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
bsd-user/syscall_defs.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/bsd-user/syscall_defs.h b/bsd-user/syscall_defs.h
index 52f84d5dd1..c49be32bdc 100644
--- a/bsd-user/syscall_defs.h
+++ b/bsd-user/syscall_defs.h
@@ -247,7 +247,7 @@ struct target_freebsd11_stat {
unsigned int:(8 / 2) * (16 - (int)sizeof(struct target_freebsd_timespec));
} __packed;
-#if defined(__i386__)
+#if defined(TARGET_I386)
#define TARGET_HAS_STAT_TIME_T_EXT 1
#endif
--
2.43.0
* [PULL 08/54] *: Remove __i386__ tests
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (6 preceding siblings ...)
2026-01-18 22:03 ` [PULL 07/54] bsd-user: Fix __i386__ test for TARGET_HAS_STAT_TIME_T_EXT Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 09/54] *: Remove i386 host support Richard Henderson
` (46 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Remove instances of __i386__, except from tests and imported headers.
Drop a block containing a sanity check and fprintf error message for
i386-on-i386 or x86_64-on-x86_64 emulation. If we really wanted
something like this, we would do it via some form of compile-time check.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/qemu/atomic.h | 4 ++--
include/qemu/cacheflush.h | 2 +-
include/qemu/osdep.h | 4 +---
include/qemu/processor.h | 2 +-
include/qemu/timer.h | 9 ---------
tcg/tci/tcg-target-mo.h | 2 +-
accel/kvm/kvm-all.c | 2 +-
disas/disas-host.c | 6 ------
hw/display/xenfb.c | 10 +---------
linux-user/syscall.c | 9 ---------
target/i386/cpu.c | 10 ----------
util/cacheflush.c | 2 +-
configure | 2 --
13 files changed, 9 insertions(+), 55 deletions(-)
diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
index f80cba24cf..c39dc99f2f 100644
--- a/include/qemu/atomic.h
+++ b/include/qemu/atomic.h
@@ -204,7 +204,7 @@
* the same semantics.
*/
#if !defined(QEMU_SANITIZE_THREAD) && \
- (defined(__i386__) || defined(__x86_64__) || defined(__s390x__))
+ (defined(__x86_64__) || defined(__s390x__))
# define smp_mb__before_rmw() signal_barrier()
# define smp_mb__after_rmw() signal_barrier()
#else
@@ -218,7 +218,7 @@
*/
#if !defined(QEMU_SANITIZE_THREAD) && \
- (defined(__i386__) || defined(__x86_64__) || defined(__s390x__))
+ (defined(__x86_64__) || defined(__s390x__))
# define qatomic_set_mb(ptr, i) \
({ (void)qatomic_xchg(ptr, i); smp_mb__after_rmw(); })
#else
diff --git a/include/qemu/cacheflush.h b/include/qemu/cacheflush.h
index 76eb55d818..8c64b87814 100644
--- a/include/qemu/cacheflush.h
+++ b/include/qemu/cacheflush.h
@@ -19,7 +19,7 @@
* mappings of the same physical page(s).
*/
-#if defined(__i386__) || defined(__x86_64__) || defined(__s390__)
+#if defined(__x86_64__) || defined(__s390__)
static inline void flush_idcache_range(uintptr_t rx, uintptr_t rw, size_t len)
{
diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
index 4cdeda0b9c..b384b5b506 100644
--- a/include/qemu/osdep.h
+++ b/include/qemu/osdep.h
@@ -637,9 +637,7 @@ bool qemu_has_ofd_lock(void);
bool qemu_has_direct_io(void);
-#if defined(__HAIKU__) && defined(__i386__)
-#define FMT_pid "%ld"
-#elif defined(WIN64)
+#ifdef WIN64
#define FMT_pid "%" PRId64
#else
#define FMT_pid "%d"
diff --git a/include/qemu/processor.h b/include/qemu/processor.h
index 9f0dcdf28f..95b3262f8b 100644
--- a/include/qemu/processor.h
+++ b/include/qemu/processor.h
@@ -7,7 +7,7 @@
#ifndef QEMU_PROCESSOR_H
#define QEMU_PROCESSOR_H
-#if defined(__i386__) || defined(__x86_64__)
+#if defined(__x86_64__)
# define cpu_relax() asm volatile("rep; nop" ::: "memory")
#elif defined(__aarch64__)
diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 8b561cd696..7c18da1652 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -866,15 +866,6 @@ static inline int64_t cpu_get_host_ticks(void)
return retval;
}
-#elif defined(__i386__)
-
-static inline int64_t cpu_get_host_ticks(void)
-{
- int64_t val;
- asm volatile ("rdtsc" : "=A" (val));
- return val;
-}
-
#elif defined(__x86_64__)
static inline int64_t cpu_get_host_ticks(void)
diff --git a/tcg/tci/tcg-target-mo.h b/tcg/tci/tcg-target-mo.h
index 779872e39a..b5b389dafc 100644
--- a/tcg/tci/tcg-target-mo.h
+++ b/tcg/tci/tcg-target-mo.h
@@ -8,7 +8,7 @@
#define TCG_TARGET_MO_H
/*
- * We could notice __i386__ or __s390x__ and reduce the barriers depending
+ * We could notice __x86_64__ or __s390x__ and reduce the barriers depending
* on the host. But if you want performance, you use the normal backend.
* We prefer consistency across hosts on this.
*/
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index f85eb42d78..8301a512e7 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -61,7 +61,7 @@
#include <sys/eventfd.h>
#endif
-#if defined(__i386__) || defined(__x86_64__) || defined(__aarch64__)
+#if defined(__x86_64__) || defined(__aarch64__)
# define KVM_HAVE_MCE_INJECTION 1
#endif
diff --git a/disas/disas-host.c b/disas/disas-host.c
index 88e7d8800c..7cf432938e 100644
--- a/disas/disas-host.c
+++ b/disas/disas-host.c
@@ -44,12 +44,6 @@ static void initialize_debug_host(CPUDebug *s)
#endif
#if defined(CONFIG_TCG_INTERPRETER)
s->info.print_insn = print_insn_tci;
-#elif defined(__i386__)
- s->info.mach = bfd_mach_i386_i386;
- s->info.cap_arch = CS_ARCH_X86;
- s->info.cap_mode = CS_MODE_32;
- s->info.cap_insn_unit = 1;
- s->info.cap_insn_split = 8;
#elif defined(__x86_64__)
s->info.mach = bfd_mach_x86_64;
s->info.cap_arch = CS_ARCH_X86;
diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index 164fd0b248..ba886a940e 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -459,10 +459,7 @@ static int xenfb_map_fb(struct XenFB *xenfb)
*/
uint32_t *ptr32 = NULL;
uint32_t *ptr64 = NULL;
-#if defined(__i386__)
- ptr32 = (void*)page->pd;
- ptr64 = ((void*)page->pd) + 4;
-#elif defined(__x86_64__)
+#if defined(__x86_64__)
ptr32 = ((void*)page->pd) - 4;
ptr64 = (void*)page->pd;
#endif
@@ -480,11 +477,6 @@ static int xenfb_map_fb(struct XenFB *xenfb)
/* 64bit dom0, 32bit domU */
mode = 32;
pd = ((void*)page->pd) - 4;
-#elif defined(__i386__)
- } else if (strcmp(protocol, XEN_IO_PROTO_ABI_X86_64) == 0) {
- /* 32bit dom0, 64bit domU */
- mode = 64;
- pd = ((void*)page->pd) + 4;
#endif
}
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index 67ad681098..3601715769 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -7452,15 +7452,6 @@ void syscall_init(void)
~(TARGET_IOC_SIZEMASK << TARGET_IOC_SIZESHIFT)) |
(size << TARGET_IOC_SIZESHIFT);
}
-
- /* automatic consistency check if same arch */
-#if (defined(__i386__) && defined(TARGET_I386) && defined(TARGET_ABI32)) || \
- (defined(__x86_64__) && defined(TARGET_X86_64))
- if (unlikely(ie->target_cmd != ie->host_cmd)) {
- fprintf(stderr, "ERROR: ioctl(%s): target=0x%x host=0x%x\n",
- ie->name, ie->target_cmd, ie->host_cmd);
- }
-#endif
ie++;
}
}
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 37803cd724..0b8cca7cec 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -2251,16 +2251,6 @@ void host_cpuid(uint32_t function, uint32_t count,
: "=a"(vec[0]), "=b"(vec[1]),
"=c"(vec[2]), "=d"(vec[3])
: "0"(function), "c"(count) : "cc");
-#elif defined(__i386__)
- asm volatile("pusha \n\t"
- "cpuid \n\t"
- "mov %%eax, 0(%2) \n\t"
- "mov %%ebx, 4(%2) \n\t"
- "mov %%ecx, 8(%2) \n\t"
- "mov %%edx, 12(%2) \n\t"
- "popa"
- : : "a"(function), "c"(count), "S"(vec)
- : "memory", "cc");
#else
abort();
#endif
diff --git a/util/cacheflush.c b/util/cacheflush.c
index 69c9614e2c..99221a409f 100644
--- a/util/cacheflush.c
+++ b/util/cacheflush.c
@@ -225,7 +225,7 @@ static void __attribute__((constructor)) init_cache_info(void)
* Architecture (+ OS) specific cache flushing mechanisms.
*/
-#if defined(__i386__) || defined(__x86_64__) || defined(__s390__)
+#if defined(__x86_64__) || defined(__s390__)
/* Caches are coherent and do not require flushing; symbol inline. */
diff --git a/configure b/configure
index 0742f1212d..de0f3a8ebe 100755
--- a/configure
+++ b/configure
@@ -382,8 +382,6 @@ fi
if test ! -z "$cpu" ; then
# command line argument
:
-elif check_define __i386__ ; then
- cpu="i386"
elif check_define __x86_64__ ; then
if check_define __ILP32__ ; then
cpu="x32"
--
2.43.0
* [PULL 09/54] *: Remove i386 host support
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (7 preceding siblings ...)
2026-01-18 22:03 ` [PULL 08/54] *: Remove __i386__ tests Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 10/54] host/include/x86_64/bufferiszero: Remove no SSE2 fallback Richard Henderson
` (45 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Thomas Huth, Pierrick Bouvier
Move the files from host/include/i386 to host/include/x86_64,
replacing the stub headers that redirected to i386.
Remove linux-user/include/host/i386.
Remove common-user/host/i386.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
host/include/i386/host/cpuinfo.h | 41 ------
host/include/i386/host/crypto/aes-round.h | 152 -------------------
host/include/i386/host/crypto/clmul.h | 29 ----
host/include/x86_64/host/cpuinfo.h | 42 +++++-
host/include/x86_64/host/crypto/aes-round.h | 153 +++++++++++++++++++-
host/include/x86_64/host/crypto/clmul.h | 30 +++-
linux-user/include/host/i386/host-signal.h | 38 -----
common-user/host/i386/safe-syscall.inc.S | 127 ----------------
host/include/i386/host/bufferiszero.c.inc | 125 ----------------
host/include/x86_64/host/bufferiszero.c.inc | 126 +++++++++++++++-
10 files changed, 347 insertions(+), 516 deletions(-)
delete mode 100644 host/include/i386/host/cpuinfo.h
delete mode 100644 host/include/i386/host/crypto/aes-round.h
delete mode 100644 host/include/i386/host/crypto/clmul.h
delete mode 100644 linux-user/include/host/i386/host-signal.h
delete mode 100644 common-user/host/i386/safe-syscall.inc.S
delete mode 100644 host/include/i386/host/bufferiszero.c.inc
diff --git a/host/include/i386/host/cpuinfo.h b/host/include/i386/host/cpuinfo.h
deleted file mode 100644
index 93d029d499..0000000000
--- a/host/include/i386/host/cpuinfo.h
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * SPDX-License-Identifier: GPL-2.0-or-later
- * Host specific cpu identification for x86.
- */
-
-#ifndef HOST_CPUINFO_H
-#define HOST_CPUINFO_H
-
-/* Digested version of <cpuid.h> */
-
-#define CPUINFO_ALWAYS (1u << 0) /* so cpuinfo is nonzero */
-#define CPUINFO_OSXSAVE (1u << 1)
-#define CPUINFO_MOVBE (1u << 2)
-#define CPUINFO_LZCNT (1u << 3)
-#define CPUINFO_POPCNT (1u << 4)
-#define CPUINFO_BMI1 (1u << 5)
-#define CPUINFO_BMI2 (1u << 6)
-#define CPUINFO_SSE2 (1u << 7)
-#define CPUINFO_AVX1 (1u << 9)
-#define CPUINFO_AVX2 (1u << 10)
-#define CPUINFO_AVX512F (1u << 11)
-#define CPUINFO_AVX512VL (1u << 12)
-#define CPUINFO_AVX512BW (1u << 13)
-#define CPUINFO_AVX512DQ (1u << 14)
-#define CPUINFO_AVX512VBMI2 (1u << 15)
-#define CPUINFO_ATOMIC_VMOVDQA (1u << 16)
-#define CPUINFO_ATOMIC_VMOVDQU (1u << 17)
-#define CPUINFO_AES (1u << 18)
-#define CPUINFO_PCLMUL (1u << 19)
-#define CPUINFO_GFNI (1u << 20)
-
-/* Initialized with a constructor. */
-extern unsigned cpuinfo;
-
-/*
- * We cannot rely on constructor ordering, so other constructors must
- * use the function interface rather than the variable above.
- */
-unsigned cpuinfo_init(void);
-
-#endif /* HOST_CPUINFO_H */
diff --git a/host/include/i386/host/crypto/aes-round.h b/host/include/i386/host/crypto/aes-round.h
deleted file mode 100644
index 59a64130f7..0000000000
--- a/host/include/i386/host/crypto/aes-round.h
+++ /dev/null
@@ -1,152 +0,0 @@
-/*
- * x86 specific aes acceleration.
- * SPDX-License-Identifier: GPL-2.0-or-later
- */
-
-#ifndef X86_HOST_CRYPTO_AES_ROUND_H
-#define X86_HOST_CRYPTO_AES_ROUND_H
-
-#include "host/cpuinfo.h"
-#include <immintrin.h>
-
-#if defined(__AES__) && defined(__SSSE3__)
-# define HAVE_AES_ACCEL true
-# define ATTR_AES_ACCEL
-#else
-# define HAVE_AES_ACCEL likely(cpuinfo & CPUINFO_AES)
-# define ATTR_AES_ACCEL __attribute__((target("aes,ssse3")))
-#endif
-
-static inline __m128i ATTR_AES_ACCEL
-aes_accel_bswap(__m128i x)
-{
- return _mm_shuffle_epi8(x, _mm_set_epi8(0, 1, 2, 3, 4, 5, 6, 7, 8,
- 9, 10, 11, 12, 13, 14, 15));
-}
-
-static inline void ATTR_AES_ACCEL
-aesenc_MC_accel(AESState *ret, const AESState *st, bool be)
-{
- __m128i t = (__m128i)st->v;
- __m128i z = _mm_setzero_si128();
-
- if (be) {
- t = aes_accel_bswap(t);
- t = _mm_aesdeclast_si128(t, z);
- t = _mm_aesenc_si128(t, z);
- t = aes_accel_bswap(t);
- } else {
- t = _mm_aesdeclast_si128(t, z);
- t = _mm_aesenc_si128(t, z);
- }
- ret->v = (AESStateVec)t;
-}
-
-static inline void ATTR_AES_ACCEL
-aesenc_SB_SR_AK_accel(AESState *ret, const AESState *st,
- const AESState *rk, bool be)
-{
- __m128i t = (__m128i)st->v;
- __m128i k = (__m128i)rk->v;
-
- if (be) {
- t = aes_accel_bswap(t);
- k = aes_accel_bswap(k);
- t = _mm_aesenclast_si128(t, k);
- t = aes_accel_bswap(t);
- } else {
- t = _mm_aesenclast_si128(t, k);
- }
- ret->v = (AESStateVec)t;
-}
-
-static inline void ATTR_AES_ACCEL
-aesenc_SB_SR_MC_AK_accel(AESState *ret, const AESState *st,
- const AESState *rk, bool be)
-{
- __m128i t = (__m128i)st->v;
- __m128i k = (__m128i)rk->v;
-
- if (be) {
- t = aes_accel_bswap(t);
- k = aes_accel_bswap(k);
- t = _mm_aesenc_si128(t, k);
- t = aes_accel_bswap(t);
- } else {
- t = _mm_aesenc_si128(t, k);
- }
- ret->v = (AESStateVec)t;
-}
-
-static inline void ATTR_AES_ACCEL
-aesdec_IMC_accel(AESState *ret, const AESState *st, bool be)
-{
- __m128i t = (__m128i)st->v;
-
- if (be) {
- t = aes_accel_bswap(t);
- t = _mm_aesimc_si128(t);
- t = aes_accel_bswap(t);
- } else {
- t = _mm_aesimc_si128(t);
- }
- ret->v = (AESStateVec)t;
-}
-
-static inline void ATTR_AES_ACCEL
-aesdec_ISB_ISR_AK_accel(AESState *ret, const AESState *st,
- const AESState *rk, bool be)
-{
- __m128i t = (__m128i)st->v;
- __m128i k = (__m128i)rk->v;
-
- if (be) {
- t = aes_accel_bswap(t);
- k = aes_accel_bswap(k);
- t = _mm_aesdeclast_si128(t, k);
- t = aes_accel_bswap(t);
- } else {
- t = _mm_aesdeclast_si128(t, k);
- }
- ret->v = (AESStateVec)t;
-}
-
-static inline void ATTR_AES_ACCEL
-aesdec_ISB_ISR_AK_IMC_accel(AESState *ret, const AESState *st,
- const AESState *rk, bool be)
-{
- __m128i t = (__m128i)st->v;
- __m128i k = (__m128i)rk->v;
-
- if (be) {
- t = aes_accel_bswap(t);
- k = aes_accel_bswap(k);
- t = _mm_aesdeclast_si128(t, k);
- t = _mm_aesimc_si128(t);
- t = aes_accel_bswap(t);
- } else {
- t = _mm_aesdeclast_si128(t, k);
- t = _mm_aesimc_si128(t);
- }
- ret->v = (AESStateVec)t;
-}
-
-static inline void ATTR_AES_ACCEL
-aesdec_ISB_ISR_IMC_AK_accel(AESState *ret, const AESState *st,
- const AESState *rk, bool be)
-{
- __m128i t = (__m128i)st->v;
- __m128i k = (__m128i)rk->v;
-
- if (be) {
- t = aes_accel_bswap(t);
- k = aes_accel_bswap(k);
- t = _mm_aesdec_si128(t, k);
- t = aes_accel_bswap(t);
- } else {
- t = _mm_aesdec_si128(t, k);
- }
- ret->v = (AESStateVec)t;
-}
-
-#endif /* X86_HOST_CRYPTO_AES_ROUND_H */
diff --git a/host/include/i386/host/crypto/clmul.h b/host/include/i386/host/crypto/clmul.h
deleted file mode 100644
index dc3c814797..0000000000
--- a/host/include/i386/host/crypto/clmul.h
+++ /dev/null
@@ -1,29 +0,0 @@
-/*
- * x86 specific clmul acceleration.
- * SPDX-License-Identifier: GPL-2.0-or-later
- */
-
-#ifndef X86_HOST_CRYPTO_CLMUL_H
-#define X86_HOST_CRYPTO_CLMUL_H
-
-#include "host/cpuinfo.h"
-#include <immintrin.h>
-
-#if defined(__PCLMUL__)
-# define HAVE_CLMUL_ACCEL true
-# define ATTR_CLMUL_ACCEL
-#else
-# define HAVE_CLMUL_ACCEL likely(cpuinfo & CPUINFO_PCLMUL)
-# define ATTR_CLMUL_ACCEL __attribute__((target("pclmul")))
-#endif
-
-static inline Int128 ATTR_CLMUL_ACCEL
-clmul_64_accel(uint64_t n, uint64_t m)
-{
- union { __m128i v; Int128 s; } u;
-
- u.v = _mm_clmulepi64_si128(_mm_set_epi64x(0, n), _mm_set_epi64x(0, m), 0);
- return u.s;
-}
-
-#endif /* X86_HOST_CRYPTO_CLMUL_H */
diff --git a/host/include/x86_64/host/cpuinfo.h b/host/include/x86_64/host/cpuinfo.h
index 67debab9a0..93d029d499 100644
--- a/host/include/x86_64/host/cpuinfo.h
+++ b/host/include/x86_64/host/cpuinfo.h
@@ -1 +1,41 @@
-#include "host/include/i386/host/cpuinfo.h"
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ * Host specific cpu identification for x86.
+ */
+
+#ifndef HOST_CPUINFO_H
+#define HOST_CPUINFO_H
+
+/* Digested version of <cpuid.h> */
+
+#define CPUINFO_ALWAYS (1u << 0) /* so cpuinfo is nonzero */
+#define CPUINFO_OSXSAVE (1u << 1)
+#define CPUINFO_MOVBE (1u << 2)
+#define CPUINFO_LZCNT (1u << 3)
+#define CPUINFO_POPCNT (1u << 4)
+#define CPUINFO_BMI1 (1u << 5)
+#define CPUINFO_BMI2 (1u << 6)
+#define CPUINFO_SSE2 (1u << 7)
+#define CPUINFO_AVX1 (1u << 9)
+#define CPUINFO_AVX2 (1u << 10)
+#define CPUINFO_AVX512F (1u << 11)
+#define CPUINFO_AVX512VL (1u << 12)
+#define CPUINFO_AVX512BW (1u << 13)
+#define CPUINFO_AVX512DQ (1u << 14)
+#define CPUINFO_AVX512VBMI2 (1u << 15)
+#define CPUINFO_ATOMIC_VMOVDQA (1u << 16)
+#define CPUINFO_ATOMIC_VMOVDQU (1u << 17)
+#define CPUINFO_AES (1u << 18)
+#define CPUINFO_PCLMUL (1u << 19)
+#define CPUINFO_GFNI (1u << 20)
+
+/* Initialized with a constructor. */
+extern unsigned cpuinfo;
+
+/*
+ * We cannot rely on constructor ordering, so other constructors must
+ * use the function interface rather than the variable above.
+ */
+unsigned cpuinfo_init(void);
+
+#endif /* HOST_CPUINFO_H */
diff --git a/host/include/x86_64/host/crypto/aes-round.h b/host/include/x86_64/host/crypto/aes-round.h
index 2773cc9f10..59a64130f7 100644
--- a/host/include/x86_64/host/crypto/aes-round.h
+++ b/host/include/x86_64/host/crypto/aes-round.h
@@ -1 +1,152 @@
-#include "host/include/i386/host/crypto/aes-round.h"
+/*
+ * x86 specific aes acceleration.
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#ifndef X86_HOST_CRYPTO_AES_ROUND_H
+#define X86_HOST_CRYPTO_AES_ROUND_H
+
+#include "host/cpuinfo.h"
+#include <immintrin.h>
+
+#if defined(__AES__) && defined(__SSSE3__)
+# define HAVE_AES_ACCEL true
+# define ATTR_AES_ACCEL
+#else
+# define HAVE_AES_ACCEL likely(cpuinfo & CPUINFO_AES)
+# define ATTR_AES_ACCEL __attribute__((target("aes,ssse3")))
+#endif
+
+static inline __m128i ATTR_AES_ACCEL
+aes_accel_bswap(__m128i x)
+{
+ return _mm_shuffle_epi8(x, _mm_set_epi8(0, 1, 2, 3, 4, 5, 6, 7, 8,
+ 9, 10, 11, 12, 13, 14, 15));
+}
+
+static inline void ATTR_AES_ACCEL
+aesenc_MC_accel(AESState *ret, const AESState *st, bool be)
+{
+ __m128i t = (__m128i)st->v;
+ __m128i z = _mm_setzero_si128();
+
+ if (be) {
+ t = aes_accel_bswap(t);
+ t = _mm_aesdeclast_si128(t, z);
+ t = _mm_aesenc_si128(t, z);
+ t = aes_accel_bswap(t);
+ } else {
+ t = _mm_aesdeclast_si128(t, z);
+ t = _mm_aesenc_si128(t, z);
+ }
+ ret->v = (AESStateVec)t;
+}
+
+static inline void ATTR_AES_ACCEL
+aesenc_SB_SR_AK_accel(AESState *ret, const AESState *st,
+ const AESState *rk, bool be)
+{
+ __m128i t = (__m128i)st->v;
+ __m128i k = (__m128i)rk->v;
+
+ if (be) {
+ t = aes_accel_bswap(t);
+ k = aes_accel_bswap(k);
+ t = _mm_aesenclast_si128(t, k);
+ t = aes_accel_bswap(t);
+ } else {
+ t = _mm_aesenclast_si128(t, k);
+ }
+ ret->v = (AESStateVec)t;
+}
+
+static inline void ATTR_AES_ACCEL
+aesenc_SB_SR_MC_AK_accel(AESState *ret, const AESState *st,
+ const AESState *rk, bool be)
+{
+ __m128i t = (__m128i)st->v;
+ __m128i k = (__m128i)rk->v;
+
+ if (be) {
+ t = aes_accel_bswap(t);
+ k = aes_accel_bswap(k);
+ t = _mm_aesenc_si128(t, k);
+ t = aes_accel_bswap(t);
+ } else {
+ t = _mm_aesenc_si128(t, k);
+ }
+ ret->v = (AESStateVec)t;
+}
+
+static inline void ATTR_AES_ACCEL
+aesdec_IMC_accel(AESState *ret, const AESState *st, bool be)
+{
+ __m128i t = (__m128i)st->v;
+
+ if (be) {
+ t = aes_accel_bswap(t);
+ t = _mm_aesimc_si128(t);
+ t = aes_accel_bswap(t);
+ } else {
+ t = _mm_aesimc_si128(t);
+ }
+ ret->v = (AESStateVec)t;
+}
+
+static inline void ATTR_AES_ACCEL
+aesdec_ISB_ISR_AK_accel(AESState *ret, const AESState *st,
+ const AESState *rk, bool be)
+{
+ __m128i t = (__m128i)st->v;
+ __m128i k = (__m128i)rk->v;
+
+ if (be) {
+ t = aes_accel_bswap(t);
+ k = aes_accel_bswap(k);
+ t = _mm_aesdeclast_si128(t, k);
+ t = aes_accel_bswap(t);
+ } else {
+ t = _mm_aesdeclast_si128(t, k);
+ }
+ ret->v = (AESStateVec)t;
+}
+
+static inline void ATTR_AES_ACCEL
+aesdec_ISB_ISR_AK_IMC_accel(AESState *ret, const AESState *st,
+ const AESState *rk, bool be)
+{
+ __m128i t = (__m128i)st->v;
+ __m128i k = (__m128i)rk->v;
+
+ if (be) {
+ t = aes_accel_bswap(t);
+ k = aes_accel_bswap(k);
+ t = _mm_aesdeclast_si128(t, k);
+ t = _mm_aesimc_si128(t);
+ t = aes_accel_bswap(t);
+ } else {
+ t = _mm_aesdeclast_si128(t, k);
+ t = _mm_aesimc_si128(t);
+ }
+ ret->v = (AESStateVec)t;
+}
+
+static inline void ATTR_AES_ACCEL
+aesdec_ISB_ISR_IMC_AK_accel(AESState *ret, const AESState *st,
+ const AESState *rk, bool be)
+{
+ __m128i t = (__m128i)st->v;
+ __m128i k = (__m128i)rk->v;
+
+ if (be) {
+ t = aes_accel_bswap(t);
+ k = aes_accel_bswap(k);
+ t = _mm_aesdec_si128(t, k);
+ t = aes_accel_bswap(t);
+ } else {
+ t = _mm_aesdec_si128(t, k);
+ }
+ ret->v = (AESStateVec)t;
+}
+
+#endif /* X86_HOST_CRYPTO_AES_ROUND_H */
diff --git a/host/include/x86_64/host/crypto/clmul.h b/host/include/x86_64/host/crypto/clmul.h
index f25eced416..dc3c814797 100644
--- a/host/include/x86_64/host/crypto/clmul.h
+++ b/host/include/x86_64/host/crypto/clmul.h
@@ -1 +1,29 @@
-#include "host/include/i386/host/crypto/clmul.h"
+/*
+ * x86 specific clmul acceleration.
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#ifndef X86_HOST_CRYPTO_CLMUL_H
+#define X86_HOST_CRYPTO_CLMUL_H
+
+#include "host/cpuinfo.h"
+#include <immintrin.h>
+
+#if defined(__PCLMUL__)
+# define HAVE_CLMUL_ACCEL true
+# define ATTR_CLMUL_ACCEL
+#else
+# define HAVE_CLMUL_ACCEL likely(cpuinfo & CPUINFO_PCLMUL)
+# define ATTR_CLMUL_ACCEL __attribute__((target("pclmul")))
+#endif
+
+static inline Int128 ATTR_CLMUL_ACCEL
+clmul_64_accel(uint64_t n, uint64_t m)
+{
+ union { __m128i v; Int128 s; } u;
+
+ u.v = _mm_clmulepi64_si128(_mm_set_epi64x(0, n), _mm_set_epi64x(0, m), 0);
+ return u.s;
+}
+
+#endif /* X86_HOST_CRYPTO_CLMUL_H */
diff --git a/linux-user/include/host/i386/host-signal.h b/linux-user/include/host/i386/host-signal.h
deleted file mode 100644
index e2b64f077f..0000000000
--- a/linux-user/include/host/i386/host-signal.h
+++ /dev/null
@@ -1,38 +0,0 @@
-/*
- * host-signal.h: signal info dependent on the host architecture
- *
- * Copyright (c) 2003-2005 Fabrice Bellard
- * Copyright (c) 2021 Linaro Limited
- *
- * This work is licensed under the terms of the GNU LGPL, version 2.1 or later.
- * See the COPYING file in the top-level directory.
- */
-
-#ifndef I386_HOST_SIGNAL_H
-#define I386_HOST_SIGNAL_H
-
-/* The third argument to a SA_SIGINFO handler is ucontext_t. */
-typedef ucontext_t host_sigcontext;
-
-static inline uintptr_t host_signal_pc(host_sigcontext *uc)
-{
- return uc->uc_mcontext.gregs[REG_EIP];
-}
-
-static inline void host_signal_set_pc(host_sigcontext *uc, uintptr_t pc)
-{
- uc->uc_mcontext.gregs[REG_EIP] = pc;
-}
-
-static inline void *host_signal_mask(host_sigcontext *uc)
-{
- return &uc->uc_sigmask;
-}
-
-static inline bool host_signal_write(siginfo_t *info, host_sigcontext *uc)
-{
- return uc->uc_mcontext.gregs[REG_TRAPNO] == 0xe
- && (uc->uc_mcontext.gregs[REG_ERR] & 0x2);
-}
-
-#endif
diff --git a/common-user/host/i386/safe-syscall.inc.S b/common-user/host/i386/safe-syscall.inc.S
deleted file mode 100644
index db2ed09839..0000000000
--- a/common-user/host/i386/safe-syscall.inc.S
+++ /dev/null
@@ -1,127 +0,0 @@
-/*
- * safe-syscall.inc.S : host-specific assembly fragment
- * to handle signals occurring at the same time as system calls.
- * This is intended to be included by common-user/safe-syscall.S
- *
- * Written by Richard Henderson <rth@twiddle.net>
- * Copyright (C) 2016 Red Hat, Inc.
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
- * See the COPYING file in the top-level directory.
- */
-
- .global safe_syscall_base
- .global safe_syscall_start
- .global safe_syscall_end
- .type safe_syscall_base, @function
-
- /* This is the entry point for making a system call. The calling
- * convention here is that of a C varargs function with the
- * first argument an 'int *' to the signal_pending flag, the
- * second one the system call number (as a 'long'), and all further
- * arguments being syscall arguments (also 'long').
- */
-safe_syscall_base:
- .cfi_startproc
- push %ebp
- .cfi_adjust_cfa_offset 4
- .cfi_rel_offset ebp, 0
- push %esi
- .cfi_adjust_cfa_offset 4
- .cfi_rel_offset esi, 0
- push %edi
- .cfi_adjust_cfa_offset 4
- .cfi_rel_offset edi, 0
- push %ebx
- .cfi_adjust_cfa_offset 4
- .cfi_rel_offset ebx, 0
-
- /* The syscall calling convention isn't the same as the C one:
- * we enter with 0(%esp) == return address
- * 4(%esp) == &signal_pending
- * 8(%esp) == syscall number
- * 12(%esp) ... 32(%esp) == syscall arguments
- * and return the result in eax
- * and the syscall instruction needs
- * eax == syscall number
- * ebx, ecx, edx, esi, edi, ebp == syscall arguments
- * and returns the result in eax
- * Shuffle everything around appropriately.
- * Note the 16 bytes that we pushed to save registers.
- */
- mov 12+16(%esp), %ebx /* the syscall arguments */
- mov 16+16(%esp), %ecx
- mov 20+16(%esp), %edx
- mov 24+16(%esp), %esi
- mov 28+16(%esp), %edi
- mov 32+16(%esp), %ebp
-
- /* This next sequence of code works in conjunction with the
- * rewind_if_safe_syscall_function(). If a signal is taken
- * and the interrupted PC is anywhere between 'safe_syscall_start'
- * and 'safe_syscall_end' then we rewind it to 'safe_syscall_start'.
- * The code sequence must therefore be able to cope with this, and
- * the syscall instruction must be the final one in the sequence.
- */
-safe_syscall_start:
- /* if signal_pending is non-zero, don't do the call */
- mov 4+16(%esp), %eax /* signal_pending */
- cmpl $0, (%eax)
- jnz 2f
- mov 8+16(%esp), %eax /* syscall number */
- int $0x80
-safe_syscall_end:
-
- /* code path for having successfully executed the syscall */
-#if defined(__linux__)
- /* Linux kernel returns (small) negative errno. */
- cmp $-4095, %eax
- jae 0f
-#elif defined(__FreeBSD__)
- /* FreeBSD kernel returns positive errno and C bit set. */
- jc 1f
-#else
-#error "unsupported os"
-#endif
- pop %ebx
- .cfi_remember_state
- .cfi_adjust_cfa_offset -4
- .cfi_restore ebx
- pop %edi
- .cfi_adjust_cfa_offset -4
- .cfi_restore edi
- pop %esi
- .cfi_adjust_cfa_offset -4
- .cfi_restore esi
- pop %ebp
- .cfi_adjust_cfa_offset -4
- .cfi_restore ebp
- ret
- .cfi_restore_state
-
-#if defined(__linux__)
-0: neg %eax
- jmp 1f
-#endif
-
- /* code path when we didn't execute the syscall */
-2: mov $QEMU_ERESTARTSYS, %eax
-
- /* code path setting errno */
-1: pop %ebx
- .cfi_adjust_cfa_offset -4
- .cfi_restore ebx
- pop %edi
- .cfi_adjust_cfa_offset -4
- .cfi_restore edi
- pop %esi
- .cfi_adjust_cfa_offset -4
- .cfi_restore esi
- pop %ebp
- .cfi_adjust_cfa_offset -4
- .cfi_restore ebp
- mov %eax, 4(%esp)
- jmp safe_syscall_set_errno_tail
-
- .cfi_endproc
- .size safe_syscall_base, .-safe_syscall_base
diff --git a/host/include/i386/host/bufferiszero.c.inc b/host/include/i386/host/bufferiszero.c.inc
deleted file mode 100644
index 74ae98580f..0000000000
--- a/host/include/i386/host/bufferiszero.c.inc
+++ /dev/null
@@ -1,125 +0,0 @@
-/*
- * SPDX-License-Identifier: GPL-2.0-or-later
- * buffer_is_zero acceleration, x86 version.
- */
-
-#if defined(CONFIG_AVX2_OPT) || defined(__SSE2__)
-#include <immintrin.h>
-
-/* Helper for preventing the compiler from reassociating
- chains of binary vector operations. */
-#define SSE_REASSOC_BARRIER(vec0, vec1) asm("" : "+x"(vec0), "+x"(vec1))
-
-/* Note that these vectorized functions may assume len >= 256. */
-
-static bool __attribute__((target("sse2")))
-buffer_zero_sse2(const void *buf, size_t len)
-{
- /* Unaligned loads at head/tail. */
- __m128i v = *(__m128i_u *)(buf);
- __m128i w = *(__m128i_u *)(buf + len - 16);
- /* Align head/tail to 16-byte boundaries. */
- const __m128i *p = QEMU_ALIGN_PTR_DOWN(buf + 16, 16);
- const __m128i *e = QEMU_ALIGN_PTR_DOWN(buf + len - 1, 16);
- __m128i zero = { 0 };
-
- /* Collect a partial block at tail end. */
- v |= e[-1]; w |= e[-2];
- SSE_REASSOC_BARRIER(v, w);
- v |= e[-3]; w |= e[-4];
- SSE_REASSOC_BARRIER(v, w);
- v |= e[-5]; w |= e[-6];
- SSE_REASSOC_BARRIER(v, w);
- v |= e[-7]; v |= w;
-
- /*
- * Loop over complete 128-byte blocks.
- * With the head and tail removed, e - p >= 14, so the loop
- * must iterate at least once.
- */
- do {
- v = _mm_cmpeq_epi8(v, zero);
- if (unlikely(_mm_movemask_epi8(v) != 0xFFFF)) {
- return false;
- }
- v = p[0]; w = p[1];
- SSE_REASSOC_BARRIER(v, w);
- v |= p[2]; w |= p[3];
- SSE_REASSOC_BARRIER(v, w);
- v |= p[4]; w |= p[5];
- SSE_REASSOC_BARRIER(v, w);
- v |= p[6]; w |= p[7];
- SSE_REASSOC_BARRIER(v, w);
- v |= w;
- p += 8;
- } while (p < e - 7);
-
- return _mm_movemask_epi8(_mm_cmpeq_epi8(v, zero)) == 0xFFFF;
-}
-
-#ifdef CONFIG_AVX2_OPT
-static bool __attribute__((target("avx2")))
-buffer_zero_avx2(const void *buf, size_t len)
-{
- /* Unaligned loads at head/tail. */
- __m256i v = *(__m256i_u *)(buf);
- __m256i w = *(__m256i_u *)(buf + len - 32);
- /* Align head/tail to 32-byte boundaries. */
- const __m256i *p = QEMU_ALIGN_PTR_DOWN(buf + 32, 32);
- const __m256i *e = QEMU_ALIGN_PTR_DOWN(buf + len - 1, 32);
- __m256i zero = { 0 };
-
- /* Collect a partial block at tail end. */
- v |= e[-1]; w |= e[-2];
- SSE_REASSOC_BARRIER(v, w);
- v |= e[-3]; w |= e[-4];
- SSE_REASSOC_BARRIER(v, w);
- v |= e[-5]; w |= e[-6];
- SSE_REASSOC_BARRIER(v, w);
- v |= e[-7]; v |= w;
-
- /* Loop over complete 256-byte blocks. */
- for (; p < e - 7; p += 8) {
- /* PTEST is not profitable here. */
- v = _mm256_cmpeq_epi8(v, zero);
- if (unlikely(_mm256_movemask_epi8(v) != 0xFFFFFFFF)) {
- return false;
- }
- v = p[0]; w = p[1];
- SSE_REASSOC_BARRIER(v, w);
- v |= p[2]; w |= p[3];
- SSE_REASSOC_BARRIER(v, w);
- v |= p[4]; w |= p[5];
- SSE_REASSOC_BARRIER(v, w);
- v |= p[6]; w |= p[7];
- SSE_REASSOC_BARRIER(v, w);
- v |= w;
- }
-
- return _mm256_movemask_epi8(_mm256_cmpeq_epi8(v, zero)) == 0xFFFFFFFF;
-}
-#endif /* CONFIG_AVX2_OPT */
-
-static biz_accel_fn const accel_table[] = {
- buffer_is_zero_int_ge256,
- buffer_zero_sse2,
-#ifdef CONFIG_AVX2_OPT
- buffer_zero_avx2,
-#endif
-};
-
-static unsigned best_accel(void)
-{
- unsigned info = cpuinfo_init();
-
-#ifdef CONFIG_AVX2_OPT
- if (info & CPUINFO_AVX2) {
- return 2;
- }
-#endif
- return info & CPUINFO_SSE2 ? 1 : 0;
-}
-
-#else
-# include "host/include/generic/host/bufferiszero.c.inc"
-#endif
diff --git a/host/include/x86_64/host/bufferiszero.c.inc b/host/include/x86_64/host/bufferiszero.c.inc
index 1d3f1fd6f5..74ae98580f 100644
--- a/host/include/x86_64/host/bufferiszero.c.inc
+++ b/host/include/x86_64/host/bufferiszero.c.inc
@@ -1 +1,125 @@
-#include "host/include/i386/host/bufferiszero.c.inc"
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ * buffer_is_zero acceleration, x86 version.
+ */
+
+#if defined(CONFIG_AVX2_OPT) || defined(__SSE2__)
+#include <immintrin.h>
+
+/* Helper for preventing the compiler from reassociating
+ chains of binary vector operations. */
+#define SSE_REASSOC_BARRIER(vec0, vec1) asm("" : "+x"(vec0), "+x"(vec1))
+
+/* Note that these vectorized functions may assume len >= 256. */
+
+static bool __attribute__((target("sse2")))
+buffer_zero_sse2(const void *buf, size_t len)
+{
+ /* Unaligned loads at head/tail. */
+ __m128i v = *(__m128i_u *)(buf);
+ __m128i w = *(__m128i_u *)(buf + len - 16);
+ /* Align head/tail to 16-byte boundaries. */
+ const __m128i *p = QEMU_ALIGN_PTR_DOWN(buf + 16, 16);
+ const __m128i *e = QEMU_ALIGN_PTR_DOWN(buf + len - 1, 16);
+ __m128i zero = { 0 };
+
+ /* Collect a partial block at tail end. */
+ v |= e[-1]; w |= e[-2];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= e[-3]; w |= e[-4];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= e[-5]; w |= e[-6];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= e[-7]; v |= w;
+
+ /*
+ * Loop over complete 128-byte blocks.
+ * With the head and tail removed, e - p >= 14, so the loop
+ * must iterate at least once.
+ */
+ do {
+ v = _mm_cmpeq_epi8(v, zero);
+ if (unlikely(_mm_movemask_epi8(v) != 0xFFFF)) {
+ return false;
+ }
+ v = p[0]; w = p[1];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= p[2]; w |= p[3];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= p[4]; w |= p[5];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= p[6]; w |= p[7];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= w;
+ p += 8;
+ } while (p < e - 7);
+
+ return _mm_movemask_epi8(_mm_cmpeq_epi8(v, zero)) == 0xFFFF;
+}
+
+#ifdef CONFIG_AVX2_OPT
+static bool __attribute__((target("avx2")))
+buffer_zero_avx2(const void *buf, size_t len)
+{
+ /* Unaligned loads at head/tail. */
+ __m256i v = *(__m256i_u *)(buf);
+ __m256i w = *(__m256i_u *)(buf + len - 32);
+ /* Align head/tail to 32-byte boundaries. */
+ const __m256i *p = QEMU_ALIGN_PTR_DOWN(buf + 32, 32);
+ const __m256i *e = QEMU_ALIGN_PTR_DOWN(buf + len - 1, 32);
+ __m256i zero = { 0 };
+
+ /* Collect a partial block at tail end. */
+ v |= e[-1]; w |= e[-2];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= e[-3]; w |= e[-4];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= e[-5]; w |= e[-6];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= e[-7]; v |= w;
+
+ /* Loop over complete 256-byte blocks. */
+ for (; p < e - 7; p += 8) {
+ /* PTEST is not profitable here. */
+ v = _mm256_cmpeq_epi8(v, zero);
+ if (unlikely(_mm256_movemask_epi8(v) != 0xFFFFFFFF)) {
+ return false;
+ }
+ v = p[0]; w = p[1];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= p[2]; w |= p[3];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= p[4]; w |= p[5];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= p[6]; w |= p[7];
+ SSE_REASSOC_BARRIER(v, w);
+ v |= w;
+ }
+
+ return _mm256_movemask_epi8(_mm256_cmpeq_epi8(v, zero)) == 0xFFFFFFFF;
+}
+#endif /* CONFIG_AVX2_OPT */
+
+static biz_accel_fn const accel_table[] = {
+ buffer_is_zero_int_ge256,
+ buffer_zero_sse2,
+#ifdef CONFIG_AVX2_OPT
+ buffer_zero_avx2,
+#endif
+};
+
+static unsigned best_accel(void)
+{
+ unsigned info = cpuinfo_init();
+
+#ifdef CONFIG_AVX2_OPT
+ if (info & CPUINFO_AVX2) {
+ return 2;
+ }
+#endif
+ return info & CPUINFO_SSE2 ? 1 : 0;
+}
+
+#else
+# include "host/include/generic/host/bufferiszero.c.inc"
+#endif
--
2.43.0
* [PULL 10/54] host/include/x86_64/bufferiszero: Remove no SSE2 fallback
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (8 preceding siblings ...)
2026-01-18 22:03 ` [PULL 09/54] *: Remove i386 host support Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 11/54] meson: Remove cpu == x86 tests Richard Henderson
` (44 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Thomas Huth, Pierrick Bouvier
Since x86_64 always has SSE2, we can remove the fallback
that was present for i686.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
host/include/x86_64/host/bufferiszero.c.inc | 5 -----
1 file changed, 5 deletions(-)
diff --git a/host/include/x86_64/host/bufferiszero.c.inc b/host/include/x86_64/host/bufferiszero.c.inc
index 74ae98580f..7e9d896a8d 100644
--- a/host/include/x86_64/host/bufferiszero.c.inc
+++ b/host/include/x86_64/host/bufferiszero.c.inc
@@ -3,7 +3,6 @@
* buffer_is_zero acceleration, x86 version.
*/
-#if defined(CONFIG_AVX2_OPT) || defined(__SSE2__)
#include <immintrin.h>
/* Helper for preventing the compiler from reassociating
@@ -119,7 +118,3 @@ static unsigned best_accel(void)
#endif
return info & CPUINFO_SSE2 ? 1 : 0;
}
-
-#else
-# include "host/include/generic/host/bufferiszero.c.inc"
-#endif
--
2.43.0
* [PULL 11/54] meson: Remove cpu == x86 tests
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (9 preceding siblings ...)
2026-01-18 22:03 ` [PULL 10/54] host/include/x86_64/bufferiszero: Remove no SSE2 fallback Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 12/54] *: Remove ppc host support Richard Henderson
` (43 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Thomas Huth, Pierrick Bouvier
The 32-bit x86 host is no longer supported.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
configure | 16 +---------------
meson.build | 49 ++++++++++---------------------------------------
2 files changed, 11 insertions(+), 54 deletions(-)
diff --git a/configure b/configure
index de0f3a8ebe..e9d0b9e2c0 100755
--- a/configure
+++ b/configure
@@ -447,13 +447,6 @@ case "$cpu" in
linux_arch=arm64
;;
- i386|i486|i586|i686)
- cpu="i386"
- host_arch=i386
- linux_arch=x86
- CPU_CFLAGS="-m32"
- ;;
-
loongarch*)
cpu=loongarch64
host_arch=loongarch64
@@ -1944,14 +1937,7 @@ if test "$skip_meson" = no; then
if test "$cross_compile" = "yes"; then
echo "[host_machine]" >> $cross
echo "system = '$host_os'" >> $cross
- case "$cpu" in
- i386)
- echo "cpu_family = 'x86'" >> $cross
- ;;
- *)
- echo "cpu_family = '$cpu'" >> $cross
- ;;
- esac
+ echo "cpu_family = '$cpu'" >> $cross
echo "cpu = '$cpu'" >> $cross
if test "$bigendian" = "yes" ; then
echo "endian = 'big'" >> $cross
diff --git a/meson.build b/meson.build
index 137b2dcdc7..506904c7d7 100644
--- a/meson.build
+++ b/meson.build
@@ -50,7 +50,7 @@ qapi_trace_events = []
bsd_oses = ['gnu/kfreebsd', 'freebsd', 'netbsd', 'openbsd', 'dragonfly', 'darwin']
supported_oses = ['windows', 'freebsd', 'netbsd', 'openbsd', 'darwin', 'sunos', 'linux', 'emscripten']
-supported_cpus = ['ppc', 'ppc64', 's390x', 'riscv32', 'riscv64', 'x86', 'x86_64',
+supported_cpus = ['ppc', 'ppc64', 's390x', 'riscv32', 'riscv64', 'x86_64',
'aarch64', 'loongarch64', 'mips64', 'sparc64', 'wasm64']
cpu = host_machine.cpu_family()
@@ -265,8 +265,6 @@ enable_modules = get_option('modules') \
if cpu not in supported_cpus
host_arch = 'unknown'
-elif cpu == 'x86'
- host_arch = 'i386'
elif cpu == 'mips64'
host_arch = 'mips'
elif cpu in ['riscv32', 'riscv64']
@@ -275,9 +273,7 @@ else
host_arch = cpu
endif
-if cpu == 'x86'
- kvm_targets = ['i386-softmmu']
-elif cpu == 'x86_64'
+if cpu == 'x86_64'
kvm_targets = ['i386-softmmu', 'x86_64-softmmu']
elif cpu == 'aarch64'
kvm_targets = ['aarch64-softmmu']
@@ -300,9 +296,7 @@ else
endif
accelerator_targets = { 'CONFIG_KVM': kvm_targets }
-if cpu == 'x86'
- xen_targets = ['i386-softmmu']
-elif cpu == 'x86_64'
+if cpu == 'x86_64'
xen_targets = ['i386-softmmu', 'x86_64-softmmu']
elif cpu == 'aarch64'
# i386 emulator provides xenpv machine type for multiple architectures
@@ -391,40 +385,17 @@ endif
qemu_isa_flags = []
-# __sync_fetch_and_and requires at least -march=i486. Many toolchains
-# use i686 as default anyway, but for those that don't, an explicit
-# specification is necessary
-if host_arch == 'i386' and not cc.links('''
- static int sfaa(int *ptr)
- {
- return __sync_fetch_and_and(ptr, 0);
- }
-
- int main(void)
- {
- int val = 42;
- val = __sync_val_compare_and_swap(&val, 0, 1);
- sfaa(&val);
- return val;
- }''')
- qemu_isa_flags += ['-march=i486']
-endif
-
# Pick x86-64 baseline version
-if host_arch in ['i386', 'x86_64']
- if get_option('x86_version') == '0' and host_arch == 'x86_64'
+if host_arch == 'x86_64'
+ if get_option('x86_version') == '0'
error('x86_64-v1 required for x86-64 hosts')
endif
# add flags for individual instruction set extensions
if get_option('x86_version') >= '1'
- if host_arch == 'i386'
- qemu_common_flags = ['-mfpmath=sse'] + qemu_common_flags
- else
- # present on basically all processors but technically not part of
- # x86-64-v1, so only include -mneeded for x86-64 version 2 and above
- qemu_isa_flags += ['-mcx16']
- endif
+ # present on basically all processors but technically not part of
+ # x86-64-v1, so only include -mneeded for x86-64 version 2 and above
+ qemu_isa_flags += ['-mcx16']
endif
if get_option('x86_version') >= '2'
qemu_isa_flags += ['-mpopcnt']
@@ -1040,7 +1011,7 @@ have_xen_pci_passthrough = get_option('xen_pci_passthrough') \
error_message: 'Xen PCI passthrough requested but Xen not enabled') \
.require(host_os == 'linux',
error_message: 'Xen PCI passthrough not available on this platform') \
- .require(cpu == 'x86' or cpu == 'x86_64',
+ .require(cpu == 'x86_64',
error_message: 'Xen PCI passthrough not available on this platform') \
.allowed()
@@ -4564,7 +4535,7 @@ if have_tools
libcap_ng, mpathpersist],
install: true)
- if cpu in ['x86', 'x86_64']
+ if cpu == 'x86_64'
executable('qemu-vmsr-helper', files('tools/i386/qemu-vmsr-helper.c'),
dependencies: [authz, crypto, io, qom, qemuutil,
libcap_ng, mpathpersist],
--
2.43.0
* [PULL 12/54] *: Remove ppc host support
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (10 preceding siblings ...)
2026-01-18 22:03 ` [PULL 11/54] meson: Remove cpu == x86 tests Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 13/54] tcg/i386: Remove TCG_TARGET_REG_BITS tests Richard Henderson
` (42 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Thomas Huth, Pierrick Bouvier
Move the files from host/include/ppc to host/include/ppc64,
replacing the stub headers that redirected to ppc.
Remove linux-user/include/host/ppc.
Remove common-user/host/ppc.
Remove cpu == ppc tests from meson.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
host/include/ppc/host/cpuinfo.h | 30 ----
host/include/ppc/host/crypto/aes-round.h | 182 --------------------
host/include/ppc64/host/cpuinfo.h | 31 +++-
host/include/ppc64/host/crypto/aes-round.h | 183 ++++++++++++++++++++-
linux-user/include/host/ppc/host-signal.h | 39 -----
common-user/host/ppc/safe-syscall.inc.S | 107 ------------
meson.build | 4 +-
7 files changed, 213 insertions(+), 363 deletions(-)
delete mode 100644 host/include/ppc/host/cpuinfo.h
delete mode 100644 host/include/ppc/host/crypto/aes-round.h
delete mode 100644 linux-user/include/host/ppc/host-signal.h
delete mode 100644 common-user/host/ppc/safe-syscall.inc.S
diff --git a/host/include/ppc/host/cpuinfo.h b/host/include/ppc/host/cpuinfo.h
deleted file mode 100644
index 38b8eabe2a..0000000000
--- a/host/include/ppc/host/cpuinfo.h
+++ /dev/null
@@ -1,30 +0,0 @@
-/*
- * SPDX-License-Identifier: GPL-2.0-or-later
- * Host specific cpu identification for ppc.
- */
-
-#ifndef HOST_CPUINFO_H
-#define HOST_CPUINFO_H
-
-/* Digested version of <cpuid.h> */
-
-#define CPUINFO_ALWAYS (1u << 0) /* so cpuinfo is nonzero */
-#define CPUINFO_V2_06 (1u << 1)
-#define CPUINFO_V2_07 (1u << 2)
-#define CPUINFO_V3_0 (1u << 3)
-#define CPUINFO_V3_1 (1u << 4)
-#define CPUINFO_ISEL (1u << 5)
-#define CPUINFO_ALTIVEC (1u << 6)
-#define CPUINFO_VSX (1u << 7)
-#define CPUINFO_CRYPTO (1u << 8)
-
-/* Initialized with a constructor. */
-extern unsigned cpuinfo;
-
-/*
- * We cannot rely on constructor ordering, so other constructors must
- * use the function interface rather than the variable above.
- */
-unsigned cpuinfo_init(void);
-
-#endif /* HOST_CPUINFO_H */
diff --git a/host/include/ppc/host/crypto/aes-round.h b/host/include/ppc/host/crypto/aes-round.h
deleted file mode 100644
index 8062d2a537..0000000000
--- a/host/include/ppc/host/crypto/aes-round.h
+++ /dev/null
@@ -1,182 +0,0 @@
-/*
- * Power v2.07 specific aes acceleration.
- * SPDX-License-Identifier: GPL-2.0-or-later
- */
-
-#ifndef PPC_HOST_CRYPTO_AES_ROUND_H
-#define PPC_HOST_CRYPTO_AES_ROUND_H
-
-#ifdef __ALTIVEC__
-#include "host/cpuinfo.h"
-
-#ifdef __CRYPTO__
-# define HAVE_AES_ACCEL true
-#else
-# define HAVE_AES_ACCEL likely(cpuinfo & CPUINFO_CRYPTO)
-#endif
-#define ATTR_AES_ACCEL
-
-/*
- * While there is <altivec.h>, both gcc and clang "aid" with the
- * endianness issues in different ways. Just use inline asm instead.
- */
-
-/* Bytes in memory are host-endian; bytes in register are @be. */
-static inline AESStateVec aes_accel_ld(const AESState *p, bool be)
-{
- AESStateVec r;
-
- if (be) {
- asm("lvx %0, 0, %1" : "=v"(r) : "r"(p), "m"(*p));
- } else if (HOST_BIG_ENDIAN) {
- AESStateVec rev = {
- 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0,
- };
- asm("lvx %0, 0, %1\n\t"
- "vperm %0, %0, %0, %2"
- : "=v"(r) : "r"(p), "v"(rev), "m"(*p));
- } else {
-#ifdef __POWER9_VECTOR__
- asm("lxvb16x %x0, 0, %1" : "=v"(r) : "r"(p), "m"(*p));
-#else
- asm("lxvd2x %x0, 0, %1\n\t"
- "xxpermdi %x0, %x0, %x0, 2"
- : "=v"(r) : "r"(p), "m"(*p));
-#endif
- }
- return r;
-}
-
-static void aes_accel_st(AESState *p, AESStateVec r, bool be)
-{
- if (be) {
- asm("stvx %1, 0, %2" : "=m"(*p) : "v"(r), "r"(p));
- } else if (HOST_BIG_ENDIAN) {
- AESStateVec rev = {
- 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0,
- };
- asm("vperm %1, %1, %1, %2\n\t"
- "stvx %1, 0, %3"
- : "=m"(*p), "+v"(r) : "v"(rev), "r"(p));
- } else {
-#ifdef __POWER9_VECTOR__
- asm("stxvb16x %x1, 0, %2" : "=m"(*p) : "v"(r), "r"(p));
-#else
- asm("xxpermdi %x1, %x1, %x1, 2\n\t"
- "stxvd2x %x1, 0, %2"
- : "=m"(*p), "+v"(r) : "r"(p));
-#endif
- }
-}
-
-static inline AESStateVec aes_accel_vcipher(AESStateVec d, AESStateVec k)
-{
- asm("vcipher %0, %0, %1" : "+v"(d) : "v"(k));
- return d;
-}
-
-static inline AESStateVec aes_accel_vncipher(AESStateVec d, AESStateVec k)
-{
- asm("vncipher %0, %0, %1" : "+v"(d) : "v"(k));
- return d;
-}
-
-static inline AESStateVec aes_accel_vcipherlast(AESStateVec d, AESStateVec k)
-{
- asm("vcipherlast %0, %0, %1" : "+v"(d) : "v"(k));
- return d;
-}
-
-static inline AESStateVec aes_accel_vncipherlast(AESStateVec d, AESStateVec k)
-{
- asm("vncipherlast %0, %0, %1" : "+v"(d) : "v"(k));
- return d;
-}
-
-static inline void
-aesenc_MC_accel(AESState *ret, const AESState *st, bool be)
-{
- AESStateVec t, z = { };
-
- t = aes_accel_ld(st, be);
- t = aes_accel_vncipherlast(t, z);
- t = aes_accel_vcipher(t, z);
- aes_accel_st(ret, t, be);
-}
-
-static inline void
-aesenc_SB_SR_AK_accel(AESState *ret, const AESState *st,
- const AESState *rk, bool be)
-{
- AESStateVec t, k;
-
- t = aes_accel_ld(st, be);
- k = aes_accel_ld(rk, be);
- t = aes_accel_vcipherlast(t, k);
- aes_accel_st(ret, t, be);
-}
-
-static inline void
-aesenc_SB_SR_MC_AK_accel(AESState *ret, const AESState *st,
- const AESState *rk, bool be)
-{
- AESStateVec t, k;
-
- t = aes_accel_ld(st, be);
- k = aes_accel_ld(rk, be);
- t = aes_accel_vcipher(t, k);
- aes_accel_st(ret, t, be);
-}
-
-static inline void
-aesdec_IMC_accel(AESState *ret, const AESState *st, bool be)
-{
- AESStateVec t, z = { };
-
- t = aes_accel_ld(st, be);
- t = aes_accel_vcipherlast(t, z);
- t = aes_accel_vncipher(t, z);
- aes_accel_st(ret, t, be);
-}
-
-static inline void
-aesdec_ISB_ISR_AK_accel(AESState *ret, const AESState *st,
- const AESState *rk, bool be)
-{
- AESStateVec t, k;
-
- t = aes_accel_ld(st, be);
- k = aes_accel_ld(rk, be);
- t = aes_accel_vncipherlast(t, k);
- aes_accel_st(ret, t, be);
-}
-
-static inline void
-aesdec_ISB_ISR_AK_IMC_accel(AESState *ret, const AESState *st,
- const AESState *rk, bool be)
-{
- AESStateVec t, k;
-
- t = aes_accel_ld(st, be);
- k = aes_accel_ld(rk, be);
- t = aes_accel_vncipher(t, k);
- aes_accel_st(ret, t, be);
-}
-
-static inline void
-aesdec_ISB_ISR_IMC_AK_accel(AESState *ret, const AESState *st,
- const AESState *rk, bool be)
-{
- AESStateVec t, k, z = { };
-
- t = aes_accel_ld(st, be);
- k = aes_accel_ld(rk, be);
- t = aes_accel_vncipher(t, z);
- aes_accel_st(ret, t ^ k, be);
-}
-#else
-/* Without ALTIVEC, we can't even write inline assembly. */
-#include "host/include/generic/host/crypto/aes-round.h"
-#endif
-
-#endif /* PPC_HOST_CRYPTO_AES_ROUND_H */
diff --git a/host/include/ppc64/host/cpuinfo.h b/host/include/ppc64/host/cpuinfo.h
index 2f036a0627..38b8eabe2a 100644
--- a/host/include/ppc64/host/cpuinfo.h
+++ b/host/include/ppc64/host/cpuinfo.h
@@ -1 +1,30 @@
-#include "host/include/ppc/host/cpuinfo.h"
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ * Host specific cpu identification for ppc.
+ */
+
+#ifndef HOST_CPUINFO_H
+#define HOST_CPUINFO_H
+
+/* Digested version of <cpuid.h> */
+
+#define CPUINFO_ALWAYS (1u << 0) /* so cpuinfo is nonzero */
+#define CPUINFO_V2_06 (1u << 1)
+#define CPUINFO_V2_07 (1u << 2)
+#define CPUINFO_V3_0 (1u << 3)
+#define CPUINFO_V3_1 (1u << 4)
+#define CPUINFO_ISEL (1u << 5)
+#define CPUINFO_ALTIVEC (1u << 6)
+#define CPUINFO_VSX (1u << 7)
+#define CPUINFO_CRYPTO (1u << 8)
+
+/* Initialized with a constructor. */
+extern unsigned cpuinfo;
+
+/*
+ * We cannot rely on constructor ordering, so other constructors must
+ * use the function interface rather than the variable above.
+ */
+unsigned cpuinfo_init(void);
+
+#endif /* HOST_CPUINFO_H */
diff --git a/host/include/ppc64/host/crypto/aes-round.h b/host/include/ppc64/host/crypto/aes-round.h
index 5eeba6dcb7..8062d2a537 100644
--- a/host/include/ppc64/host/crypto/aes-round.h
+++ b/host/include/ppc64/host/crypto/aes-round.h
@@ -1 +1,182 @@
-#include "host/include/ppc/host/crypto/aes-round.h"
+/*
+ * Power v2.07 specific aes acceleration.
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#ifndef PPC_HOST_CRYPTO_AES_ROUND_H
+#define PPC_HOST_CRYPTO_AES_ROUND_H
+
+#ifdef __ALTIVEC__
+#include "host/cpuinfo.h"
+
+#ifdef __CRYPTO__
+# define HAVE_AES_ACCEL true
+#else
+# define HAVE_AES_ACCEL likely(cpuinfo & CPUINFO_CRYPTO)
+#endif
+#define ATTR_AES_ACCEL
+
+/*
+ * While there is <altivec.h>, both gcc and clang "aid" with the
+ * endianness issues in different ways. Just use inline asm instead.
+ */
+
+/* Bytes in memory are host-endian; bytes in register are @be. */
+static inline AESStateVec aes_accel_ld(const AESState *p, bool be)
+{
+ AESStateVec r;
+
+ if (be) {
+ asm("lvx %0, 0, %1" : "=v"(r) : "r"(p), "m"(*p));
+ } else if (HOST_BIG_ENDIAN) {
+ AESStateVec rev = {
+ 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0,
+ };
+ asm("lvx %0, 0, %1\n\t"
+ "vperm %0, %0, %0, %2"
+ : "=v"(r) : "r"(p), "v"(rev), "m"(*p));
+ } else {
+#ifdef __POWER9_VECTOR__
+ asm("lxvb16x %x0, 0, %1" : "=v"(r) : "r"(p), "m"(*p));
+#else
+ asm("lxvd2x %x0, 0, %1\n\t"
+ "xxpermdi %x0, %x0, %x0, 2"
+ : "=v"(r) : "r"(p), "m"(*p));
+#endif
+ }
+ return r;
+}
+
+static void aes_accel_st(AESState *p, AESStateVec r, bool be)
+{
+ if (be) {
+ asm("stvx %1, 0, %2" : "=m"(*p) : "v"(r), "r"(p));
+ } else if (HOST_BIG_ENDIAN) {
+ AESStateVec rev = {
+ 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0,
+ };
+ asm("vperm %1, %1, %1, %2\n\t"
+ "stvx %1, 0, %3"
+ : "=m"(*p), "+v"(r) : "v"(rev), "r"(p));
+ } else {
+#ifdef __POWER9_VECTOR__
+ asm("stxvb16x %x1, 0, %2" : "=m"(*p) : "v"(r), "r"(p));
+#else
+ asm("xxpermdi %x1, %x1, %x1, 2\n\t"
+ "stxvd2x %x1, 0, %2"
+ : "=m"(*p), "+v"(r) : "r"(p));
+#endif
+ }
+}
+
+static inline AESStateVec aes_accel_vcipher(AESStateVec d, AESStateVec k)
+{
+ asm("vcipher %0, %0, %1" : "+v"(d) : "v"(k));
+ return d;
+}
+
+static inline AESStateVec aes_accel_vncipher(AESStateVec d, AESStateVec k)
+{
+ asm("vncipher %0, %0, %1" : "+v"(d) : "v"(k));
+ return d;
+}
+
+static inline AESStateVec aes_accel_vcipherlast(AESStateVec d, AESStateVec k)
+{
+ asm("vcipherlast %0, %0, %1" : "+v"(d) : "v"(k));
+ return d;
+}
+
+static inline AESStateVec aes_accel_vncipherlast(AESStateVec d, AESStateVec k)
+{
+ asm("vncipherlast %0, %0, %1" : "+v"(d) : "v"(k));
+ return d;
+}
+
+static inline void
+aesenc_MC_accel(AESState *ret, const AESState *st, bool be)
+{
+ AESStateVec t, z = { };
+
+ t = aes_accel_ld(st, be);
+ t = aes_accel_vncipherlast(t, z);
+ t = aes_accel_vcipher(t, z);
+ aes_accel_st(ret, t, be);
+}
+
+static inline void
+aesenc_SB_SR_AK_accel(AESState *ret, const AESState *st,
+ const AESState *rk, bool be)
+{
+ AESStateVec t, k;
+
+ t = aes_accel_ld(st, be);
+ k = aes_accel_ld(rk, be);
+ t = aes_accel_vcipherlast(t, k);
+ aes_accel_st(ret, t, be);
+}
+
+static inline void
+aesenc_SB_SR_MC_AK_accel(AESState *ret, const AESState *st,
+ const AESState *rk, bool be)
+{
+ AESStateVec t, k;
+
+ t = aes_accel_ld(st, be);
+ k = aes_accel_ld(rk, be);
+ t = aes_accel_vcipher(t, k);
+ aes_accel_st(ret, t, be);
+}
+
+static inline void
+aesdec_IMC_accel(AESState *ret, const AESState *st, bool be)
+{
+ AESStateVec t, z = { };
+
+ t = aes_accel_ld(st, be);
+ t = aes_accel_vcipherlast(t, z);
+ t = aes_accel_vncipher(t, z);
+ aes_accel_st(ret, t, be);
+}
+
+static inline void
+aesdec_ISB_ISR_AK_accel(AESState *ret, const AESState *st,
+ const AESState *rk, bool be)
+{
+ AESStateVec t, k;
+
+ t = aes_accel_ld(st, be);
+ k = aes_accel_ld(rk, be);
+ t = aes_accel_vncipherlast(t, k);
+ aes_accel_st(ret, t, be);
+}
+
+static inline void
+aesdec_ISB_ISR_AK_IMC_accel(AESState *ret, const AESState *st,
+ const AESState *rk, bool be)
+{
+ AESStateVec t, k;
+
+ t = aes_accel_ld(st, be);
+ k = aes_accel_ld(rk, be);
+ t = aes_accel_vncipher(t, k);
+ aes_accel_st(ret, t, be);
+}
+
+static inline void
+aesdec_ISB_ISR_IMC_AK_accel(AESState *ret, const AESState *st,
+ const AESState *rk, bool be)
+{
+ AESStateVec t, k, z = { };
+
+ t = aes_accel_ld(st, be);
+ k = aes_accel_ld(rk, be);
+ t = aes_accel_vncipher(t, z);
+ aes_accel_st(ret, t ^ k, be);
+}
+#else
+/* Without ALTIVEC, we can't even write inline assembly. */
+#include "host/include/generic/host/crypto/aes-round.h"
+#endif
+
+#endif /* PPC_HOST_CRYPTO_AES_ROUND_H */
diff --git a/linux-user/include/host/ppc/host-signal.h b/linux-user/include/host/ppc/host-signal.h
deleted file mode 100644
index de25c803f5..0000000000
--- a/linux-user/include/host/ppc/host-signal.h
+++ /dev/null
@@ -1,39 +0,0 @@
-/*
- * host-signal.h: signal info dependent on the host architecture
- *
- * Copyright (c) 2022 Linaro Ltd.
- *
- * This work is licensed under the terms of the GNU LGPL, version 2.1 or later.
- * See the COPYING file in the top-level directory.
- */
-
-#ifndef PPC_HOST_SIGNAL_H
-#define PPC_HOST_SIGNAL_H
-
-#include <asm/ptrace.h>
-
-/* The third argument to a SA_SIGINFO handler is ucontext_t. */
-typedef ucontext_t host_sigcontext;
-
-static inline uintptr_t host_signal_pc(host_sigcontext *uc)
-{
- return uc->uc_mcontext.regs->nip;
-}
-
-static inline void host_signal_set_pc(host_sigcontext *uc, uintptr_t pc)
-{
- uc->uc_mcontext.regs->nip = pc;
-}
-
-static inline void *host_signal_mask(host_sigcontext *uc)
-{
- return &uc->uc_sigmask;
-}
-
-static inline bool host_signal_write(siginfo_t *info, host_sigcontext *uc)
-{
- return uc->uc_mcontext.regs->trap != 0x400
- && (uc->uc_mcontext.regs->dsisr & 0x02000000);
-}
-
-#endif
diff --git a/common-user/host/ppc/safe-syscall.inc.S b/common-user/host/ppc/safe-syscall.inc.S
deleted file mode 100644
index 0851f6c0b8..0000000000
--- a/common-user/host/ppc/safe-syscall.inc.S
+++ /dev/null
@@ -1,107 +0,0 @@
-/*
- * safe-syscall.inc.S : host-specific assembly fragment
- * to handle signals occurring at the same time as system calls.
- * This is intended to be included by common-user/safe-syscall.S
- *
- * Copyright (C) 2022 Linaro, Ltd.
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
- * See the COPYING file in the top-level directory.
- */
-
-/*
- * Standardize on the _CALL_FOO symbols used by GCC:
- * Apple XCode does not define _CALL_DARWIN.
- * Clang defines _CALL_ELF (64-bit) but not _CALL_SYSV (32-bit).
- */
-#if !defined(_CALL_SYSV) && \
- !defined(_CALL_DARWIN) && \
- !defined(_CALL_AIX) && \
- !defined(_CALL_ELF)
-# if defined(__APPLE__)
-# define _CALL_DARWIN
-# elif defined(__ELF__) && TCG_TARGET_REG_BITS == 32
-# define _CALL_SYSV
-# else
-# error "Unknown ABI"
-# endif
-#endif
-
-#ifndef _CALL_SYSV
-# error "Unsupported ABI"
-#endif
-
-
- .global safe_syscall_base
- .global safe_syscall_start
- .global safe_syscall_end
- .type safe_syscall_base, @function
-
- .text
-
- /*
- * This is the entry point for making a system call. The calling
- * convention here is that of a C varargs function with the
- * first argument an 'int *' to the signal_pending flag, the
- * second one the system call number (as a 'long'), and all further
- * arguments being syscall arguments (also 'long').
- */
-safe_syscall_base:
- .cfi_startproc
- stwu 1, -8(1)
- .cfi_def_cfa_offset 8
- stw 30, 4(1)
- .cfi_offset 30, -4
-
- /*
- * We enter with r3 == &signal_pending
- * r4 == syscall number
- * r5 ... r10 == syscall arguments
- * and return the result in r3
- * and the syscall instruction needs
- * r0 == syscall number
- * r3 ... r8 == syscall arguments
- * and returns the result in r3
- * Shuffle everything around appropriately.
- */
- mr 30, 3 /* signal_pending */
- mr 0, 4 /* syscall number */
- mr 3, 5 /* syscall arguments */
- mr 4, 6
- mr 5, 7
- mr 6, 8
- mr 7, 9
- mr 8, 10
-
- /*
- * This next sequence of code works in conjunction with the
- * rewind_if_safe_syscall_function(). If a signal is taken
- * and the interrupted PC is anywhere between 'safe_syscall_start'
- * and 'safe_syscall_end' then we rewind it to 'safe_syscall_start'.
- * The code sequence must therefore be able to cope with this, and
- * the syscall instruction must be the final one in the sequence.
- */
-safe_syscall_start:
- /* if signal_pending is non-zero, don't do the call */
- lwz 12, 0(30)
- cmpwi 0, 12, 0
- bne- 2f
- sc
-safe_syscall_end:
- /* code path when we did execute the syscall */
- lwz 30, 4(1) /* restore r30 */
- addi 1, 1, 8 /* restore stack */
- .cfi_restore 30
- .cfi_def_cfa_offset 0
- bnslr+ /* return on success */
- b safe_syscall_set_errno_tail
-
- /* code path when we didn't execute the syscall */
-2: lwz 30, 4(1)
- addi 1, 1, 8
- addi 3, 0, QEMU_ERESTARTSYS
- b safe_syscall_set_errno_tail
-
- .cfi_endproc
-
- .size safe_syscall_base, .-safe_syscall_base
diff --git a/meson.build b/meson.build
index 506904c7d7..7993e4cfb9 100644
--- a/meson.build
+++ b/meson.build
@@ -50,7 +50,7 @@ qapi_trace_events = []
bsd_oses = ['gnu/kfreebsd', 'freebsd', 'netbsd', 'openbsd', 'dragonfly', 'darwin']
supported_oses = ['windows', 'freebsd', 'netbsd', 'openbsd', 'darwin', 'sunos', 'linux', 'emscripten']
-supported_cpus = ['ppc', 'ppc64', 's390x', 'riscv32', 'riscv64', 'x86_64',
+supported_cpus = ['ppc64', 's390x', 'riscv32', 'riscv64', 'x86_64',
'aarch64', 'loongarch64', 'mips64', 'sparc64', 'wasm64']
cpu = host_machine.cpu_family()
@@ -279,8 +279,6 @@ elif cpu == 'aarch64'
kvm_targets = ['aarch64-softmmu']
elif cpu == 's390x'
kvm_targets = ['s390x-softmmu']
-elif cpu == 'ppc'
- kvm_targets = ['ppc-softmmu']
elif cpu == 'ppc64'
kvm_targets = ['ppc-softmmu', 'ppc64-softmmu']
elif cpu == 'mips64'
--
2.43.0
* [PULL 13/54] tcg/i386: Remove TCG_TARGET_REG_BITS tests
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Thomas Huth
We now only support 64-bit code generation.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/i386/tcg-target-has.h | 8 +-
tcg/i386/tcg-target-reg-bits.h | 2 +-
tcg/i386/tcg-target.h | 13 +-
tcg/i386/tcg-target.c.inc | 552 ++++++---------------------------
4 files changed, 97 insertions(+), 478 deletions(-)
diff --git a/tcg/i386/tcg-target-has.h b/tcg/i386/tcg-target-has.h
index 42647fabbd..d249c1b3e7 100644
--- a/tcg/i386/tcg-target-has.h
+++ b/tcg/i386/tcg-target-has.h
@@ -26,13 +26,10 @@
#define have_avx512vbmi2 ((cpuinfo & CPUINFO_AVX512VBMI2) && have_avx512vl)
/* optional instructions */
-#if TCG_TARGET_REG_BITS == 64
/* Keep 32-bit values zero-extended in a register. */
#define TCG_TARGET_HAS_extr_i64_i32 1
-#endif
-#define TCG_TARGET_HAS_qemu_ldst_i128 \
- (TCG_TARGET_REG_BITS == 64 && (cpuinfo & CPUINFO_ATOMIC_VMOVDQA))
+#define TCG_TARGET_HAS_qemu_ldst_i128 (cpuinfo & CPUINFO_ATOMIC_VMOVDQA)
#define TCG_TARGET_HAS_tst 1
@@ -63,8 +60,7 @@
#define TCG_TARGET_HAS_tst_vec have_avx512bw
#define TCG_TARGET_deposit_valid(type, ofs, len) \
- (((ofs) == 0 && ((len) == 8 || (len) == 16)) || \
- (TCG_TARGET_REG_BITS == 32 && (ofs) == 8 && (len) == 8))
+ ((ofs) == 0 && ((len) == 8 || (len) == 16))
/*
* Check for the possibility of low byte/word extraction, high-byte extraction
diff --git a/tcg/i386/tcg-target-reg-bits.h b/tcg/i386/tcg-target-reg-bits.h
index aa386050eb..fc3377e829 100644
--- a/tcg/i386/tcg-target-reg-bits.h
+++ b/tcg/i386/tcg-target-reg-bits.h
@@ -10,7 +10,7 @@
#ifdef __x86_64__
# define TCG_TARGET_REG_BITS 64
#else
-# define TCG_TARGET_REG_BITS 32
+# error
#endif
#endif
diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h
index 3cbdfbca52..7ebae56a7d 100644
--- a/tcg/i386/tcg-target.h
+++ b/tcg/i386/tcg-target.h
@@ -27,13 +27,8 @@
#define TCG_TARGET_INSN_UNIT_SIZE 1
-#ifdef __x86_64__
-# define TCG_TARGET_NB_REGS 32
-# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
-#else
-# define TCG_TARGET_NB_REGS 24
-# define MAX_CODE_GEN_BUFFER_SIZE UINT32_MAX
-#endif
+#define TCG_TARGET_NB_REGS 32
+#define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
typedef enum {
TCG_REG_EAX = 0,
@@ -45,8 +40,6 @@ typedef enum {
TCG_REG_ESI,
TCG_REG_EDI,
- /* 64-bit registers; always define the symbols to avoid
- too much if-deffing. */
TCG_REG_R8,
TCG_REG_R9,
TCG_REG_R10,
@@ -64,8 +57,6 @@ typedef enum {
TCG_REG_XMM5,
TCG_REG_XMM6,
TCG_REG_XMM7,
-
- /* 64-bit registers; likewise always define. */
TCG_REG_XMM8,
TCG_REG_XMM9,
TCG_REG_XMM10,
diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc
index ee27266861..92251f8327 100644
--- a/tcg/i386/tcg-target.c.inc
+++ b/tcg/i386/tcg-target.c.inc
@@ -34,32 +34,22 @@
#if defined(_WIN64)
# define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_BY_REF
# define TCG_TARGET_CALL_RET_I128 TCG_CALL_RET_BY_VEC
-#elif TCG_TARGET_REG_BITS == 64
-# define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_NORMAL
-# define TCG_TARGET_CALL_RET_I128 TCG_CALL_RET_NORMAL
#else
# define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_NORMAL
-# define TCG_TARGET_CALL_RET_I128 TCG_CALL_RET_BY_REF
+# define TCG_TARGET_CALL_RET_I128 TCG_CALL_RET_NORMAL
#endif
#ifdef CONFIG_DEBUG_TCG
static const char * const tcg_target_reg_names[TCG_TARGET_NB_REGS] = {
-#if TCG_TARGET_REG_BITS == 64
"%rax", "%rcx", "%rdx", "%rbx", "%rsp", "%rbp", "%rsi", "%rdi",
-#else
- "%eax", "%ecx", "%edx", "%ebx", "%esp", "%ebp", "%esi", "%edi",
-#endif
"%r8", "%r9", "%r10", "%r11", "%r12", "%r13", "%r14", "%r15",
"%xmm0", "%xmm1", "%xmm2", "%xmm3", "%xmm4", "%xmm5", "%xmm6", "%xmm7",
-#if TCG_TARGET_REG_BITS == 64
"%xmm8", "%xmm9", "%xmm10", "%xmm11",
"%xmm12", "%xmm13", "%xmm14", "%xmm15",
-#endif
};
#endif
static const int tcg_target_reg_alloc_order[] = {
-#if TCG_TARGET_REG_BITS == 64
TCG_REG_RBP,
TCG_REG_RBX,
TCG_REG_R12,
@@ -75,15 +65,6 @@ static const int tcg_target_reg_alloc_order[] = {
TCG_REG_RSI,
TCG_REG_RDI,
TCG_REG_RAX,
-#else
- TCG_REG_EBX,
- TCG_REG_ESI,
- TCG_REG_EDI,
- TCG_REG_EBP,
- TCG_REG_ECX,
- TCG_REG_EDX,
- TCG_REG_EAX,
-#endif
TCG_REG_XMM0,
TCG_REG_XMM1,
TCG_REG_XMM2,
@@ -95,7 +76,6 @@ static const int tcg_target_reg_alloc_order[] = {
any of them. Therefore only allow xmm0-xmm5 to be allocated. */
TCG_REG_XMM6,
TCG_REG_XMM7,
-#if TCG_TARGET_REG_BITS == 64
TCG_REG_XMM8,
TCG_REG_XMM9,
TCG_REG_XMM10,
@@ -105,13 +85,11 @@ static const int tcg_target_reg_alloc_order[] = {
TCG_REG_XMM14,
TCG_REG_XMM15,
#endif
-#endif
};
#define TCG_TMP_VEC TCG_REG_XMM5
static const int tcg_target_call_iarg_regs[] = {
-#if TCG_TARGET_REG_BITS == 64
#if defined(_WIN64)
TCG_REG_RCX,
TCG_REG_RDX,
@@ -123,9 +101,6 @@ static const int tcg_target_call_iarg_regs[] = {
#endif
TCG_REG_R8,
TCG_REG_R9,
-#else
- /* 32 bit mode uses stack based calling convention (GCC default). */
-#endif
};
static TCGReg tcg_target_call_oarg_reg(TCGCallReturnKind kind, int slot)
@@ -152,26 +127,13 @@ static TCGReg tcg_target_call_oarg_reg(TCGCallReturnKind kind, int slot)
#define TCG_CT_CONST_TST 0x1000
#define TCG_CT_CONST_ZERO 0x2000
-/* Registers used with L constraint, which are the first argument
- registers on x86_64, and two random call clobbered registers on
- i386. */
-#if TCG_TARGET_REG_BITS == 64
-# define TCG_REG_L0 tcg_target_call_iarg_regs[0]
-# define TCG_REG_L1 tcg_target_call_iarg_regs[1]
-#else
-# define TCG_REG_L0 TCG_REG_EAX
-# define TCG_REG_L1 TCG_REG_EDX
-#endif
+/* Registers used with L constraint. */
+#define TCG_REG_L0 tcg_target_call_iarg_regs[0]
+#define TCG_REG_L1 tcg_target_call_iarg_regs[1]
-#if TCG_TARGET_REG_BITS == 64
-# define ALL_GENERAL_REGS 0x0000ffffu
-# define ALL_VECTOR_REGS 0xffff0000u
-# define ALL_BYTEL_REGS ALL_GENERAL_REGS
-#else
-# define ALL_GENERAL_REGS 0x000000ffu
-# define ALL_VECTOR_REGS 0x00ff0000u
-# define ALL_BYTEL_REGS 0x0000000fu
-#endif
+#define ALL_GENERAL_REGS 0x0000ffffu
+#define ALL_VECTOR_REGS 0xffff0000u
+#define ALL_BYTEL_REGS ALL_GENERAL_REGS
#define SOFTMMU_RESERVE_REGS \
(tcg_use_softmmu ? (1 << TCG_REG_L0) | (1 << TCG_REG_L1) : 0)
@@ -184,14 +146,12 @@ static bool patch_reloc(tcg_insn_unit *code_ptr, int type,
intptr_t value, intptr_t addend)
{
value += addend;
- switch(type) {
+ switch (type) {
case R_386_PC32:
value -= (uintptr_t)tcg_splitwx_to_rx(code_ptr);
if (value != (int32_t)value) {
return false;
}
- /* FALLTHRU */
- case R_386_32:
tcg_patch32(code_ptr, value);
break;
case R_386_PC8:
@@ -256,17 +216,10 @@ static bool tcg_target_const_match(int64_t val, int ct,
#define P_EXT38 0x200 /* 0x0f 0x38 opcode prefix */
#define P_DATA16 0x400 /* 0x66 opcode prefix */
#define P_VEXW 0x1000 /* Set VEX.W = 1 */
-#if TCG_TARGET_REG_BITS == 64
-# define P_REXW P_VEXW /* Set REX.W = 1; match VEXW */
-# define P_REXB_R 0x2000 /* REG field as byte register */
-# define P_REXB_RM 0x4000 /* R/M field as byte register */
-# define P_GS 0x8000 /* gs segment override */
-#else
-# define P_REXW 0
-# define P_REXB_R 0
-# define P_REXB_RM 0
-# define P_GS 0
-#endif
+#define P_REXW P_VEXW /* Set REX.W = 1; match VEXW */
+#define P_REXB_R 0x2000 /* REG field as byte register */
+#define P_REXB_RM 0x4000 /* R/M field as byte register */
+#define P_GS 0x8000 /* gs segment override */
#define P_EXT3A 0x10000 /* 0x0f 0x3a opcode prefix */
#define P_SIMDF3 0x20000 /* 0xf3 opcode prefix */
#define P_SIMDF2 0x40000 /* 0xf2 opcode prefix */
@@ -571,7 +524,6 @@ static const uint8_t tcg_cond_to_jcc[] = {
[TCG_COND_TSTNE] = JCC_JNE,
};
-#if TCG_TARGET_REG_BITS == 64
static void tcg_out_opc(TCGContext *s, int opc, int r, int rm, int x)
{
int rex;
@@ -619,32 +571,6 @@ static void tcg_out_opc(TCGContext *s, int opc, int r, int rm, int x)
tcg_out8(s, opc);
}
-#else
-static void tcg_out_opc(TCGContext *s, int opc)
-{
- if (opc & P_DATA16) {
- tcg_out8(s, 0x66);
- }
- if (opc & P_SIMDF3) {
- tcg_out8(s, 0xf3);
- } else if (opc & P_SIMDF2) {
- tcg_out8(s, 0xf2);
- }
- if (opc & (P_EXT | P_EXT38 | P_EXT3A)) {
- tcg_out8(s, 0x0f);
- if (opc & P_EXT38) {
- tcg_out8(s, 0x38);
- } else if (opc & P_EXT3A) {
- tcg_out8(s, 0x3a);
- }
- }
- tcg_out8(s, opc);
-}
-/* Discard the register arguments to tcg_out_opc early, so as not to penalize
- the 32-bit compilation paths. This method works with all versions of gcc,
- whereas relying on optimization may not be able to exclude them. */
-#define tcg_out_opc(s, opc, r, rm, x) (tcg_out_opc)(s, opc)
-#endif
static void tcg_out_modrm(TCGContext *s, int opc, int r, int rm)
{
@@ -790,35 +716,32 @@ static void tcg_out_sib_offset(TCGContext *s, int r, int rm, int index,
int mod, len;
if (index < 0 && rm < 0) {
- if (TCG_TARGET_REG_BITS == 64) {
- /* Try for a rip-relative addressing mode. This has replaced
- the 32-bit-mode absolute addressing encoding. */
- intptr_t pc = (intptr_t)s->code_ptr + 5 + ~rm;
- intptr_t disp = offset - pc;
- if (disp == (int32_t)disp) {
- tcg_out8(s, (LOWREGMASK(r) << 3) | 5);
- tcg_out32(s, disp);
- return;
- }
+ /*
+ * Try for a rip-relative addressing mode. This has replaced
+ * the 32-bit-mode absolute addressing encoding.
+ */
+ intptr_t pc = (intptr_t)s->code_ptr + 5 + ~rm;
+ intptr_t disp = offset - pc;
+ if (disp == (int32_t)disp) {
+ tcg_out8(s, (LOWREGMASK(r) << 3) | 5);
+ tcg_out32(s, disp);
+ return;
+ }
- /* Try for an absolute address encoding. This requires the
- use of the MODRM+SIB encoding and is therefore larger than
- rip-relative addressing. */
- if (offset == (int32_t)offset) {
- tcg_out8(s, (LOWREGMASK(r) << 3) | 4);
- tcg_out8(s, (4 << 3) | 5);
- tcg_out32(s, offset);
- return;
- }
-
- /* ??? The memory isn't directly addressable. */
- g_assert_not_reached();
- } else {
- /* Absolute address. */
- tcg_out8(s, (r << 3) | 5);
+ /*
+ * Try for an absolute address encoding. This requires the
+ * use of the MODRM+SIB encoding and is therefore larger than
+ * rip-relative addressing.
+ */
+ if (offset == (int32_t)offset) {
+ tcg_out8(s, (LOWREGMASK(r) << 3) | 4);
+ tcg_out8(s, (4 << 3) | 5);
tcg_out32(s, offset);
return;
}
+
+ /* ??? The memory isn't directly addressable. */
+ g_assert_not_reached();
}
/* Find the length of the immediate addend. Note that the encoding
@@ -1045,27 +968,14 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type, unsigned vece,
return;
}
- if (TCG_TARGET_REG_BITS == 32 && vece < MO_64) {
- if (have_avx2) {
- tcg_out_vex_modrm_pool(s, OPC_VPBROADCASTD + vex_l, ret);
- } else {
- tcg_out_vex_modrm_pool(s, OPC_VBROADCASTSS, ret);
- }
- new_pool_label(s, arg, R_386_32, s->code_ptr - 4, 0);
+ if (type == TCG_TYPE_V64) {
+ tcg_out_vex_modrm_pool(s, OPC_MOVQ_VqWq, ret);
+ } else if (have_avx2) {
+ tcg_out_vex_modrm_pool(s, OPC_VPBROADCASTQ + vex_l, ret);
} else {
- if (type == TCG_TYPE_V64) {
- tcg_out_vex_modrm_pool(s, OPC_MOVQ_VqWq, ret);
- } else if (have_avx2) {
- tcg_out_vex_modrm_pool(s, OPC_VPBROADCASTQ + vex_l, ret);
- } else {
- tcg_out_vex_modrm_pool(s, OPC_MOVDDUP, ret);
- }
- if (TCG_TARGET_REG_BITS == 64) {
- new_pool_label(s, arg, R_386_PC32, s->code_ptr - 4, -4);
- } else {
- new_pool_l2(s, R_386_32, s->code_ptr - 4, 0, arg, arg >> 32);
- }
+ tcg_out_vex_modrm_pool(s, OPC_MOVDDUP, ret);
}
+ new_pool_label(s, arg, R_386_PC32, s->code_ptr - 4, -4);
}
static void tcg_out_movi_vec(TCGContext *s, TCGType type,
@@ -1082,11 +992,7 @@ static void tcg_out_movi_vec(TCGContext *s, TCGType type,
int rexw = (type == TCG_TYPE_I32 ? 0 : P_REXW);
tcg_out_vex_modrm_pool(s, OPC_MOVD_VyEy + rexw, ret);
- if (TCG_TARGET_REG_BITS == 64) {
- new_pool_label(s, arg, R_386_PC32, s->code_ptr - 4, -4);
- } else {
- new_pool_label(s, arg, R_386_32, s->code_ptr - 4, 0);
- }
+ new_pool_label(s, arg, R_386_PC32, s->code_ptr - 4, -4);
}
static void tcg_out_movi_int(TCGContext *s, TCGType type,
@@ -1127,9 +1033,7 @@ static void tcg_out_movi(TCGContext *s, TCGType type,
{
switch (type) {
case TCG_TYPE_I32:
-#if TCG_TARGET_REG_BITS == 64
case TCG_TYPE_I64:
-#endif
if (ret < 16) {
tcg_out_movi_int(s, type, ret, arg);
} else {
@@ -1292,7 +1196,7 @@ static bool tcg_out_sti(TCGContext *s, TCGType type, TCGArg val,
TCGReg base, intptr_t ofs)
{
int rexw = 0;
- if (TCG_TARGET_REG_BITS == 64 && type == TCG_TYPE_I64) {
+ if (type == TCG_TYPE_I64) {
if (val != (int32_t)val) {
return false;
}
@@ -1331,31 +1235,12 @@ static inline void tcg_out_rolw_8(TCGContext *s, int reg)
static void tcg_out_ext8u(TCGContext *s, TCGReg dest, TCGReg src)
{
- if (TCG_TARGET_REG_BITS == 32 && src >= 4) {
- tcg_out_mov(s, TCG_TYPE_I32, dest, src);
- if (dest >= 4) {
- tcg_out_modrm(s, OPC_ARITH_EvIz, ARITH_AND, dest);
- tcg_out32(s, 0xff);
- return;
- }
- src = dest;
- }
tcg_out_modrm(s, OPC_MOVZBL + P_REXB_RM, dest, src);
}
static void tcg_out_ext8s(TCGContext *s, TCGType type, TCGReg dest, TCGReg src)
{
int rexw = type == TCG_TYPE_I32 ? 0 : P_REXW;
-
- if (TCG_TARGET_REG_BITS == 32 && src >= 4) {
- tcg_out_mov(s, TCG_TYPE_I32, dest, src);
- if (dest >= 4) {
- tcg_out_shifti(s, SHIFT_SHL, dest, 24);
- tcg_out_shifti(s, SHIFT_SAR, dest, 24);
- return;
- }
- src = dest;
- }
tcg_out_modrm(s, OPC_MOVSBL + P_REXB_RM + rexw, dest, src);
}
@@ -1380,7 +1265,6 @@ static void tcg_out_ext32u(TCGContext *s, TCGReg dest, TCGReg src)
static void tcg_out_ext32s(TCGContext *s, TCGReg dest, TCGReg src)
{
- tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
tcg_out_modrm(s, OPC_MOVSLQ, dest, src);
}
@@ -1409,12 +1293,9 @@ static inline void tcg_out_bswap64(TCGContext *s, int reg)
static void tgen_arithi(TCGContext *s, int c, int r0,
tcg_target_long val, int cf)
{
- int rexw = 0;
+ int rexw = c & -8;
- if (TCG_TARGET_REG_BITS == 64) {
- rexw = c & -8;
- c &= 7;
- }
+ c &= 7;
switch (c) {
case ARITH_ADD:
@@ -1427,16 +1308,12 @@ static void tgen_arithi(TCGContext *s, int c, int r0,
*/
if (val == 1 || val == -1) {
int is_inc = (c == ARITH_ADD) ^ (val < 0);
- if (TCG_TARGET_REG_BITS == 64) {
- /*
- * The single-byte increment encodings are re-tasked
- * as the REX prefixes. Use the MODRM encoding.
- */
- tcg_out_modrm(s, OPC_GRP5 + rexw,
- (is_inc ? EXT5_INC_Ev : EXT5_DEC_Ev), r0);
- } else {
- tcg_out8(s, (is_inc ? OPC_INC_r32 : OPC_DEC_r32) + r0);
- }
+ /*
+ * The single-byte increment encodings are re-tasked
+ * as the REX prefixes. Use the MODRM encoding.
+ */
+ tcg_out_modrm(s, OPC_GRP5 + rexw,
+ (is_inc ? EXT5_INC_Ev : EXT5_DEC_Ev), r0);
return;
}
if (val == 128) {
@@ -1451,17 +1328,15 @@ static void tgen_arithi(TCGContext *s, int c, int r0,
break;
case ARITH_AND:
- if (TCG_TARGET_REG_BITS == 64) {
- if (val == 0xffffffffu) {
- tcg_out_ext32u(s, r0, r0);
- return;
- }
- if (val == (uint32_t)val) {
- /* AND with no high bits set can use a 32-bit operation. */
- rexw = 0;
- }
+ if (val == 0xffffffffu) {
+ tcg_out_ext32u(s, r0, r0);
+ return;
}
- if (val == 0xffu && (r0 < 4 || TCG_TARGET_REG_BITS == 64)) {
+ if (val == (uint32_t)val) {
+ /* AND with no high bits set can use a 32-bit operation. */
+ rexw = 0;
+ }
+ if (val == 0xffu) {
tcg_out_ext8u(s, r0, r0);
return;
}
@@ -1473,8 +1348,7 @@ static void tgen_arithi(TCGContext *s, int c, int r0,
case ARITH_OR:
case ARITH_XOR:
- if (val >= 0x80 && val <= 0xff
- && (r0 < 4 || TCG_TARGET_REG_BITS == 64)) {
+ if (val >= 0x80 && val <= 0xff) {
tcg_out_modrm(s, OPC_ARITH_EbIb + P_REXB_RM, c, r0);
tcg_out8(s, val);
return;
@@ -1577,7 +1451,7 @@ static int tcg_out_cmp(TCGContext *s, TCGCond cond, TCGArg arg1,
return jz;
}
- if (arg2 <= 0xff && (TCG_TARGET_REG_BITS == 64 || arg1 < 4)) {
+ if (arg2 <= 0xff) {
if (arg2 == 0x80) {
tcg_out_modrm(s, OPC_TESTB | P_REXB_R, arg1, arg1);
return js;
@@ -1669,53 +1543,6 @@ static const TCGOutOpBrcond outop_brcond = {
.out_ri = tgen_brcondi,
};
-static void tcg_out_brcond2(TCGContext *s, TCGCond cond, TCGReg al,
- TCGReg ah, TCGArg bl, bool blconst,
- TCGArg bh, bool bhconst,
- TCGLabel *label_this, bool small)
-{
- TCGLabel *label_next = gen_new_label();
-
- switch (cond) {
- case TCG_COND_EQ:
- case TCG_COND_TSTEQ:
- tcg_out_brcond(s, 0, tcg_invert_cond(cond),
- al, bl, blconst, label_next, true);
- tcg_out_brcond(s, 0, cond, ah, bh, bhconst, label_this, small);
- break;
-
- case TCG_COND_NE:
- case TCG_COND_TSTNE:
- tcg_out_brcond(s, 0, cond, al, bl, blconst, label_this, small);
- tcg_out_brcond(s, 0, cond, ah, bh, bhconst, label_this, small);
- break;
-
- default:
- tcg_out_brcond(s, 0, tcg_high_cond(cond),
- ah, bh, bhconst, label_this, small);
- tcg_out_jxx(s, JCC_JNE, label_next, 1);
- tcg_out_brcond(s, 0, tcg_unsigned_cond(cond),
- al, bl, blconst, label_this, small);
- break;
- }
- tcg_out_label(s, label_next);
-}
-
-static void tgen_brcond2(TCGContext *s, TCGCond cond, TCGReg al,
- TCGReg ah, TCGArg bl, bool blconst,
- TCGArg bh, bool bhconst, TCGLabel *l)
-{
- tcg_out_brcond2(s, cond, al, ah, bl, blconst, bh, bhconst, l, false);
-}
-
-#if TCG_TARGET_REG_BITS != 32
-__attribute__((unused))
-#endif
-static const TCGOutOpBrcond2 outop_brcond2 = {
- .base.static_constraint = C_O0_I4(r, r, ri, ri),
- .out = tgen_brcond2,
-};
-
static void tcg_out_setcond(TCGContext *s, TCGType type, TCGCond cond,
TCGReg dest, TCGReg arg1, TCGArg arg2,
bool const_arg2, bool neg)
@@ -1867,54 +1694,6 @@ static const TCGOutOpSetcond outop_negsetcond = {
.out_rri = tgen_negsetcondi,
};
-static void tgen_setcond2(TCGContext *s, TCGCond cond, TCGReg ret,
- TCGReg al, TCGReg ah,
- TCGArg bl, bool const_bl,
- TCGArg bh, bool const_bh)
-{
- TCGLabel *label_over = gen_new_label();
-
- if (ret == al || ret == ah
- || (!const_bl && ret == bl)
- || (!const_bh && ret == bh)) {
- /*
- * When the destination overlaps with one of the argument
- * registers, don't do anything tricky.
- */
- TCGLabel *label_true = gen_new_label();
-
- tcg_out_brcond2(s, cond, al, ah, bl, const_bl,
- bh, const_bh, label_true, true);
-
- tcg_out_movi(s, TCG_TYPE_I32, ret, 0);
- tcg_out_jxx(s, JCC_JMP, label_over, 1);
- tcg_out_label(s, label_true);
-
- tcg_out_movi(s, TCG_TYPE_I32, ret, 1);
- } else {
- /*
- * When the destination does not overlap one of the arguments,
- * clear the destination first, jump if cond false, and emit an
- * increment in the true case. This results in smaller code.
- */
- tcg_out_movi(s, TCG_TYPE_I32, ret, 0);
-
- tcg_out_brcond2(s, tcg_invert_cond(cond), al, ah, bl, const_bl,
- bh, const_bh, label_over, true);
-
- tgen_arithi(s, ARITH_ADD, ret, 1, 0);
- }
- tcg_out_label(s, label_over);
-}
-
-#if TCG_TARGET_REG_BITS != 32
-__attribute__((unused))
-#endif
-static const TCGOutOpSetcond2 outop_setcond2 = {
- .base.static_constraint = C_O1_I4(r, r, r, ri, ri),
- .out = tgen_setcond2,
-};
-
static void tcg_out_cmov(TCGContext *s, int jcc, int rexw,
TCGReg dest, TCGReg v1)
{
@@ -1959,22 +1738,6 @@ static void tcg_out_call(TCGContext *s, const tcg_insn_unit *dest,
const TCGHelperInfo *info)
{
tcg_out_branch(s, 1, dest);
-
-#ifndef _WIN32
- if (TCG_TARGET_REG_BITS == 32 && info->out_kind == TCG_CALL_RET_BY_REF) {
- /*
- * The sysv i386 abi for struct return places a reference as the
- * first argument of the stack, and pops that argument with the
- * return statement. Since we want to retain the aligned stack
- * pointer for the callee, we do not want to actually push that
- * argument before the call but rely on the normal store to the
- * stack slot. But we do need to compensate for the pop in order
- * to reset our correct stack pointer value.
- * Pushing a garbage value back onto the stack is quickest.
- */
- tcg_out_push(s, TCG_REG_EAX);
- }
-#endif
}
static void tcg_out_jmp(TCGContext *s, const tcg_insn_unit *dest)
@@ -2025,15 +1788,13 @@ bool tcg_target_has_memory_bswap(MemOp memop)
}
/*
- * Because i686 has no register parameters and because x86_64 has xchg
- * to handle addr/data register overlap, we have placed all input arguments
- * before we need might need a scratch reg.
+ * Because x86_64 has xchg to handle addr/data register overlap, we have
+ * placed all input arguments before we might need a scratch reg.
*
* Even then, a scratch is only needed for l->raddr. Rather than expose
* a general-purpose scratch when we don't actually know it's available,
* use the ra_gen hook to load into RAX if needed.
*/
-#if TCG_TARGET_REG_BITS == 64
static TCGReg ldst_ra_gen(TCGContext *s, const TCGLabelQemuLdst *l, int arg)
{
if (arg < 0) {
@@ -2042,12 +1803,10 @@ static TCGReg ldst_ra_gen(TCGContext *s, const TCGLabelQemuLdst *l, int arg)
tcg_out_movi(s, TCG_TYPE_PTR, arg, (uintptr_t)l->raddr);
return arg;
}
+
static const TCGLdstHelperParam ldst_helper_param = {
.ra_gen = ldst_ra_gen
};
-#else
-static const TCGLdstHelperParam ldst_helper_param = { };
-#endif
static void tcg_out_vec_to_pair(TCGContext *s, TCGType type,
TCGReg l, TCGReg h, TCGReg v)
@@ -2121,7 +1880,7 @@ static HostAddress x86_guest_base = {
.index = -1
};
-#if defined(__x86_64__) && defined(__linux__)
+#if defined(__linux__)
# include <asm/prctl.h>
# include <sys/prctl.h>
int arch_prctl(int code, unsigned long addr);
@@ -2133,8 +1892,7 @@ static inline int setup_guest_base_seg(void)
return 0;
}
#define setup_guest_base_seg setup_guest_base_seg
-#elif defined(__x86_64__) && \
- (defined (__FreeBSD__) || defined (__FreeBSD_kernel__))
+#elif defined (__FreeBSD__) || defined (__FreeBSD_kernel__)
# include <machine/sysarch.h>
static inline int setup_guest_base_seg(void)
{
@@ -2195,14 +1953,12 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst->oi = oi;
ldst->addr_reg = addr;
- if (TCG_TARGET_REG_BITS == 64) {
- ttype = s->addr_type;
- trexw = (ttype == TCG_TYPE_I32 ? 0 : P_REXW);
- if (TCG_TYPE_PTR == TCG_TYPE_I64) {
- hrexw = P_REXW;
- tlbtype = TCG_TYPE_I64;
- tlbrexw = P_REXW;
- }
+ ttype = s->addr_type;
+ trexw = (ttype == TCG_TYPE_I32 ? 0 : P_REXW);
+ if (TCG_TYPE_PTR == TCG_TYPE_I64) {
+ hrexw = P_REXW;
+ tlbtype = TCG_TYPE_I64;
+ tlbrexw = P_REXW;
}
tcg_out_mov(s, tlbtype, TCG_REG_L0, addr);
@@ -2314,7 +2070,6 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
tcg_out_modrm_sib_offset(s, movop + h.seg, datalo,
h.base, h.index, 0, h.ofs);
break;
-#if TCG_TARGET_REG_BITS == 64
case MO_SL:
if (use_movbe) {
tcg_out_modrm_sib_offset(s, OPC_MOVBE_GyMy + h.seg, datalo,
@@ -2325,34 +2080,12 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
h.base, h.index, 0, h.ofs);
}
break;
-#endif
case MO_UQ:
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datalo,
- h.base, h.index, 0, h.ofs);
- break;
- }
- if (use_movbe) {
- TCGReg t = datalo;
- datalo = datahi;
- datahi = t;
- }
- if (h.base == datalo || h.index == datalo) {
- tcg_out_modrm_sib_offset(s, OPC_LEA, datahi,
- h.base, h.index, 0, h.ofs);
- tcg_out_modrm_offset(s, movop + h.seg, datalo, datahi, 0);
- tcg_out_modrm_offset(s, movop + h.seg, datahi, datahi, 4);
- } else {
- tcg_out_modrm_sib_offset(s, movop + h.seg, datalo,
- h.base, h.index, 0, h.ofs);
- tcg_out_modrm_sib_offset(s, movop + h.seg, datahi,
- h.base, h.index, 0, h.ofs + 4);
- }
+ tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datalo,
+ h.base, h.index, 0, h.ofs);
break;
case MO_128:
- tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
-
/*
* Without 16-byte atomicity, use integer regs.
* That is where we want the data, and it allows bswaps.
@@ -2483,8 +2216,6 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
switch (memop & MO_SIZE) {
case MO_8:
- /* This is handled with constraints in cset_qemu_st(). */
- tcg_debug_assert(TCG_TARGET_REG_BITS == 64 || datalo < 4);
tcg_out_modrm_sib_offset(s, OPC_MOVB_EvGv + P_REXB_R + h.seg,
datalo, h.base, h.index, 0, h.ofs);
break;
@@ -2497,25 +2228,11 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
h.base, h.index, 0, h.ofs);
break;
case MO_64:
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datalo,
- h.base, h.index, 0, h.ofs);
- } else {
- if (use_movbe) {
- TCGReg t = datalo;
- datalo = datahi;
- datahi = t;
- }
- tcg_out_modrm_sib_offset(s, movop + h.seg, datalo,
- h.base, h.index, 0, h.ofs);
- tcg_out_modrm_sib_offset(s, movop + h.seg, datahi,
- h.base, h.index, 0, h.ofs + 4);
- }
+ tcg_out_modrm_sib_offset(s, movop + P_REXW + h.seg, datalo,
+ h.base, h.index, 0, h.ofs);
break;
case MO_128:
- tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
-
/*
* Without 16-byte atomicity, use integer regs.
* That is where we have the data, and it allows bswaps.
@@ -2592,16 +2309,8 @@ static void tgen_qemu_st(TCGContext *s, TCGType type, TCGReg data,
}
}
-static TCGConstraintSetIndex cset_qemu_st(TCGType type, unsigned flags)
-{
- return flags == MO_8 ? C_O0_I2(s, L) : C_O0_I2(L, L);
-}
-
static const TCGOutOpQemuLdSt outop_qemu_st = {
- .base.static_constraint =
- TCG_TARGET_REG_BITS == 32 ? C_Dynamic : C_O0_I2(L, L),
- .base.dynamic_constraint =
- TCG_TARGET_REG_BITS == 32 ? cset_qemu_st : NULL,
+ .base.static_constraint = C_O0_I2(L, L),
.out = tgen_qemu_st,
};
@@ -2958,7 +2667,6 @@ static const TCGOutOpBinary outop_eqv = {
.base.static_constraint = C_NotImplemented,
};
-#if TCG_TARGET_REG_BITS == 64
static void tgen_extrh_i64_i32(TCGContext *s, TCGType t, TCGReg a0, TCGReg a1)
{
tcg_out_shifti(s, SHIFT_SHR + P_REXW, a0, 32);
@@ -2968,7 +2676,6 @@ static const TCGOutOpUnary outop_extrh_i64_i32 = {
.base.static_constraint = C_O1_I1(r, 0),
.out_rr = tgen_extrh_i64_i32,
};
-#endif /* TCG_TARGET_REG_BITS == 64 */
static void tgen_mul(TCGContext *s, TCGType type,
TCGReg a0, TCGReg a1, TCGReg a2)
@@ -3320,7 +3027,6 @@ static const TCGOutOpBswap outop_bswap32 = {
.out_rr = tgen_bswap32,
};
-#if TCG_TARGET_REG_BITS == 64
static void tgen_bswap64(TCGContext *s, TCGType type, TCGReg a0, TCGReg a1)
{
tcg_out_bswap64(s, a0);
@@ -3330,7 +3036,6 @@ static const TCGOutOpUnary outop_bswap64 = {
.base.static_constraint = C_O1_I1(r, 0),
.out_rr = tgen_bswap64,
};
-#endif
static void tgen_neg(TCGContext *s, TCGType type, TCGReg a0, TCGReg a1)
{
@@ -3361,8 +3066,6 @@ static void tgen_deposit(TCGContext *s, TCGType type, TCGReg a0, TCGReg a1,
tcg_out_modrm(s, OPC_MOVB_EvGv | P_REXB_R | P_REXB_RM, a2, a0);
} else if (ofs == 0 && len == 16) {
tcg_out_modrm(s, OPC_MOVL_EvGv | P_DATA16, a2, a0);
- } else if (TCG_TARGET_REG_BITS == 32 && ofs == 8 && len == 8) {
- tcg_out_modrm(s, OPC_MOVB_EvGv, a2, a0 + 4);
} else {
g_assert_not_reached();
}
@@ -3377,9 +3080,6 @@ static void tgen_depositi(TCGContext *s, TCGType type, TCGReg a0, TCGReg a1,
} else if (ofs == 0 && len == 16) {
tcg_out_opc(s, OPC_MOVL_Iv | P_DATA16 | LOWREGMASK(a0), 0, a0, 0);
tcg_out16(s, a2);
- } else if (TCG_TARGET_REG_BITS == 32 && ofs == 8 && len == 8) {
- tcg_out8(s, OPC_MOVB_Ib + a0 + 4);
- tcg_out8(s, a2);
} else {
g_assert_not_reached();
}
@@ -3406,7 +3106,7 @@ static void tgen_extract(TCGContext *s, TCGType type, TCGReg a0, TCGReg a1,
tcg_out_ext32u(s, a0, a1);
return;
}
- } else if (TCG_TARGET_REG_BITS == 64 && ofs + len == 32) {
+ } else if (ofs + len == 32) {
/* This is a 32-bit zero-extending right shift. */
tcg_out_mov(s, TCG_TYPE_I32, a0, a1);
tcg_out_shifti(s, SHIFT_SHR, a0, ofs);
@@ -3417,7 +3117,7 @@ static void tgen_extract(TCGContext *s, TCGType type, TCGReg a0, TCGReg a1,
* Otherwise we emit the same ext16 + shift pattern that we
* would have gotten from the normal tcg-op.c expansion.
*/
- if (a1 < 4 && (TCG_TARGET_REG_BITS == 32 || a0 < 8)) {
+ if (a1 < 4 && a0 < 8) {
tcg_out_modrm(s, OPC_MOVZBL, a0, a1 + 4);
} else {
tcg_out_ext16u(s, a0, a1);
@@ -3526,7 +3226,6 @@ static const TCGOutOpLoad outop_ld16s = {
.out = tgen_ld16s,
};
-#if TCG_TARGET_REG_BITS == 64
static void tgen_ld32u(TCGContext *s, TCGType type, TCGReg dest,
TCGReg base, ptrdiff_t offset)
{
@@ -3548,7 +3247,6 @@ static const TCGOutOpLoad outop_ld32s = {
.base.static_constraint = C_O1_I1(r, r),
.out = tgen_ld32s,
};
-#endif
static void tgen_st8_r(TCGContext *s, TCGType type, TCGReg data,
TCGReg base, ptrdiff_t offset)
@@ -3990,16 +3688,6 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
a1 = a2;
a2 = args[3];
goto gen_simd;
-#if TCG_TARGET_REG_BITS == 32
- case INDEX_op_dup2_vec:
- /* First merge the two 32-bit inputs to a single 64-bit element. */
- tcg_out_vex_modrm(s, OPC_PUNPCKLDQ, a0, a1, a2);
- /* Then replicate the 64-bit elements across the rest of the vector. */
- if (type != TCG_TYPE_V64) {
- tcg_out_dup_vec(s, type, MO_64, a0, a0);
- }
- break;
-#endif
case INDEX_op_abs_vec:
insn = abs_insn[vece];
a2 = a1;
@@ -4194,9 +3882,6 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_x86_punpckh_vec:
case INDEX_op_x86_vpshldi_vec:
case INDEX_op_x86_vgf2p8affineqb_vec:
-#if TCG_TARGET_REG_BITS == 32
- case INDEX_op_dup2_vec:
-#endif
return C_O1_I2(x, x, x);
case INDEX_op_abs_vec:
@@ -4732,7 +4417,6 @@ void tcg_expand_vec_op(TCGOpcode opc, TCGType type, unsigned vece,
}
static const int tcg_target_callee_save_regs[] = {
-#if TCG_TARGET_REG_BITS == 64
TCG_REG_RBP,
TCG_REG_RBX,
#if defined(_WIN64)
@@ -4743,20 +4427,13 @@ static const int tcg_target_callee_save_regs[] = {
TCG_REG_R13,
TCG_REG_R14, /* Currently used for the global env. */
TCG_REG_R15,
-#else
- TCG_REG_EBP, /* Currently used for the global env. */
- TCG_REG_EBX,
- TCG_REG_ESI,
- TCG_REG_EDI,
-#endif
};
/* Compute frame size via macros, to share between tcg_target_qemu_prologue
and tcg_register_jit. */
#define PUSH_SIZE \
- ((1 + ARRAY_SIZE(tcg_target_callee_save_regs)) \
- * (TCG_TARGET_REG_BITS / 8))
+ ((1 + ARRAY_SIZE(tcg_target_callee_save_regs)) * sizeof(tcg_target_long))
#define FRAME_SIZE \
((PUSH_SIZE \
@@ -4789,7 +4466,6 @@ static void tcg_target_qemu_prologue(TCGContext *s)
} else if (guest_base == (int32_t)guest_base) {
x86_guest_base.ofs = guest_base;
} else {
- assert(TCG_TARGET_REG_BITS == 64);
/* Choose R12 because, as a base, it requires a SIB byte. */
x86_guest_base.index = TCG_REG_R12;
tcg_out_movi(s, TCG_TYPE_PTR, x86_guest_base.index, guest_base);
@@ -4797,20 +4473,10 @@ static void tcg_target_qemu_prologue(TCGContext *s)
}
}
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_out_ld(s, TCG_TYPE_PTR, TCG_AREG0, TCG_REG_ESP,
- (ARRAY_SIZE(tcg_target_callee_save_regs) + 1) * 4);
- tcg_out_addi(s, TCG_REG_ESP, -stack_addend);
- /* jmp *tb. */
- tcg_out_modrm_offset(s, OPC_GRP5, EXT5_JMPN_Ev, TCG_REG_ESP,
- (ARRAY_SIZE(tcg_target_callee_save_regs) + 2) * 4
- + stack_addend);
- } else {
- tcg_out_mov(s, TCG_TYPE_PTR, TCG_AREG0, tcg_target_call_iarg_regs[0]);
- tcg_out_addi(s, TCG_REG_ESP, -stack_addend);
- /* jmp *tb. */
- tcg_out_modrm(s, OPC_GRP5, EXT5_JMPN_Ev, tcg_target_call_iarg_regs[1]);
- }
+ tcg_out_mov(s, TCG_TYPE_PTR, TCG_AREG0, tcg_target_call_iarg_regs[0]);
+ tcg_out_addi(s, TCG_REG_ESP, -stack_addend);
+ /* jmp *tb. */
+ tcg_out_modrm(s, OPC_GRP5, EXT5_JMPN_Ev, tcg_target_call_iarg_regs[1]);
/*
* Return path for goto_ptr. Set return value to 0, a-la exit_tb,
@@ -4846,9 +4512,7 @@ static void tcg_out_nop_fill(tcg_insn_unit *p, int count)
static void tcg_target_init(TCGContext *s)
{
tcg_target_available_regs[TCG_TYPE_I32] = ALL_GENERAL_REGS;
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_target_available_regs[TCG_TYPE_I64] = ALL_GENERAL_REGS;
- }
+ tcg_target_available_regs[TCG_TYPE_I64] = ALL_GENERAL_REGS;
if (have_avx1) {
tcg_target_available_regs[TCG_TYPE_V64] = ALL_VECTOR_REGS;
tcg_target_available_regs[TCG_TYPE_V128] = ALL_VECTOR_REGS;
@@ -4861,16 +4525,14 @@ static void tcg_target_init(TCGContext *s)
tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_EAX);
tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_EDX);
tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_ECX);
- if (TCG_TARGET_REG_BITS == 64) {
#if !defined(_WIN64)
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_RDI);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_RSI);
+ tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_RDI);
+ tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_RSI);
#endif
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R8);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R9);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R10);
- tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R11);
- }
+ tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R8);
+ tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R9);
+ tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R10);
+ tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_R11);
s->reserved_regs = 0;
tcg_regset_set_reg(s->reserved_regs, TCG_REG_CALL_STACK);
@@ -4899,10 +4561,9 @@ typedef struct {
/* We're expecting a 2 byte uleb128 encoded value. */
QEMU_BUILD_BUG_ON(FRAME_SIZE >= (1 << 14));
-#if !defined(__ELF__)
- /* Host machine without ELF. */
-#elif TCG_TARGET_REG_BITS == 64
+#ifdef __ELF__
#define ELF_HOST_MACHINE EM_X86_64
+
static const DebugFrame debug_frame = {
.h.cie.len = sizeof(DebugFrameCIE)-4, /* length after .len member */
.h.cie.id = -1,
@@ -4930,36 +4591,7 @@ static const DebugFrame debug_frame = {
0x8f, 7, /* DW_CFA_offset, %r15, -56 */
}
};
-#else
-#define ELF_HOST_MACHINE EM_386
-static const DebugFrame debug_frame = {
- .h.cie.len = sizeof(DebugFrameCIE)-4, /* length after .len member */
- .h.cie.id = -1,
- .h.cie.version = 1,
- .h.cie.code_align = 1,
- .h.cie.data_align = 0x7c, /* sleb128 -4 */
- .h.cie.return_column = 8,
- /* Total FDE size does not include the "len" member. */
- .h.fde.len = sizeof(DebugFrame) - offsetof(DebugFrame, h.fde.cie_offset),
-
- .fde_def_cfa = {
- 12, 4, /* DW_CFA_def_cfa %esp, ... */
- (FRAME_SIZE & 0x7f) | 0x80, /* ... uleb128 FRAME_SIZE */
- (FRAME_SIZE >> 7)
- },
- .fde_reg_ofs = {
- 0x88, 1, /* DW_CFA_offset, %eip, -4 */
- /* The following ordering must match tcg_target_callee_save_regs. */
- 0x85, 2, /* DW_CFA_offset, %ebp, -8 */
- 0x83, 3, /* DW_CFA_offset, %ebx, -12 */
- 0x86, 4, /* DW_CFA_offset, %esi, -16 */
- 0x87, 5, /* DW_CFA_offset, %edi, -20 */
- }
-};
-#endif
-
-#if defined(ELF_HOST_MACHINE)
void tcg_register_jit(const void *buf, size_t buf_size)
{
tcg_register_jit_int(buf, buf_size, &debug_frame, sizeof(debug_frame));
--
2.43.0

* [PULL 14/54] tcg/x86_64: Rename from i386
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Pierrick Bouvier
Emphasize that we're generating 64-bit code.
Drop the explicit rename from meson's cpu.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/{i386 => x86_64}/tcg-target-con-set.h | 0
tcg/{i386 => x86_64}/tcg-target-con-str.h | 0
tcg/{i386 => x86_64}/tcg-target-has.h | 0
tcg/{i386 => x86_64}/tcg-target-mo.h | 0
tcg/{i386 => x86_64}/tcg-target-reg-bits.h | 0
tcg/{i386 => x86_64}/tcg-target.h | 0
MAINTAINERS | 4 ++--
meson.build | 2 --
tcg/{i386 => x86_64}/tcg-target-opc.h.inc | 0
tcg/{i386 => x86_64}/tcg-target.c.inc | 0
10 files changed, 2 insertions(+), 4 deletions(-)
rename tcg/{i386 => x86_64}/tcg-target-con-set.h (100%)
rename tcg/{i386 => x86_64}/tcg-target-con-str.h (100%)
rename tcg/{i386 => x86_64}/tcg-target-has.h (100%)
rename tcg/{i386 => x86_64}/tcg-target-mo.h (100%)
rename tcg/{i386 => x86_64}/tcg-target-reg-bits.h (100%)
rename tcg/{i386 => x86_64}/tcg-target.h (100%)
rename tcg/{i386 => x86_64}/tcg-target-opc.h.inc (100%)
rename tcg/{i386 => x86_64}/tcg-target.c.inc (100%)
diff --git a/tcg/i386/tcg-target-con-set.h b/tcg/x86_64/tcg-target-con-set.h
similarity index 100%
rename from tcg/i386/tcg-target-con-set.h
rename to tcg/x86_64/tcg-target-con-set.h
diff --git a/tcg/i386/tcg-target-con-str.h b/tcg/x86_64/tcg-target-con-str.h
similarity index 100%
rename from tcg/i386/tcg-target-con-str.h
rename to tcg/x86_64/tcg-target-con-str.h
diff --git a/tcg/i386/tcg-target-has.h b/tcg/x86_64/tcg-target-has.h
similarity index 100%
rename from tcg/i386/tcg-target-has.h
rename to tcg/x86_64/tcg-target-has.h
diff --git a/tcg/i386/tcg-target-mo.h b/tcg/x86_64/tcg-target-mo.h
similarity index 100%
rename from tcg/i386/tcg-target-mo.h
rename to tcg/x86_64/tcg-target-mo.h
diff --git a/tcg/i386/tcg-target-reg-bits.h b/tcg/x86_64/tcg-target-reg-bits.h
similarity index 100%
rename from tcg/i386/tcg-target-reg-bits.h
rename to tcg/x86_64/tcg-target-reg-bits.h
diff --git a/tcg/i386/tcg-target.h b/tcg/x86_64/tcg-target.h
similarity index 100%
rename from tcg/i386/tcg-target.h
rename to tcg/x86_64/tcg-target.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 1a6e5bbafe..c39a8f54e8 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4057,10 +4057,10 @@ S: Maintained
L: qemu-arm@nongnu.org
F: tcg/aarch64/
-i386 TCG target
+X86 TCG target
M: Richard Henderson <richard.henderson@linaro.org>
S: Maintained
-F: tcg/i386/
+F: tcg/x86_64/
LoongArch64 TCG target
M: WANG Xuerui <git@xen0n.name>
diff --git a/meson.build b/meson.build
index 7993e4cfb9..594e7d42c0 100644
--- a/meson.build
+++ b/meson.build
@@ -907,8 +907,6 @@ if have_tcg
endif
if get_option('tcg_interpreter')
tcg_arch = 'tci'
- elif host_arch == 'x86_64'
- tcg_arch = 'i386'
elif host_arch == 'ppc64'
tcg_arch = 'ppc'
endif
diff --git a/tcg/i386/tcg-target-opc.h.inc b/tcg/x86_64/tcg-target-opc.h.inc
similarity index 100%
rename from tcg/i386/tcg-target-opc.h.inc
rename to tcg/x86_64/tcg-target-opc.h.inc
diff --git a/tcg/i386/tcg-target.c.inc b/tcg/x86_64/tcg-target.c.inc
similarity index 100%
rename from tcg/i386/tcg-target.c.inc
rename to tcg/x86_64/tcg-target.c.inc
--
2.43.0
* [PULL 15/54] tcg/ppc64: Rename from ppc
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Pierrick Bouvier
Emphasize that we're generating 64-bit code.
Drop the explicit rename from meson's cpu.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/{ppc => ppc64}/tcg-target-con-set.h | 0
tcg/{ppc => ppc64}/tcg-target-con-str.h | 0
tcg/{ppc => ppc64}/tcg-target-has.h | 0
tcg/{ppc => ppc64}/tcg-target-mo.h | 0
tcg/{ppc => ppc64}/tcg-target-reg-bits.h | 0
tcg/{ppc => ppc64}/tcg-target.h | 0
MAINTAINERS | 2 +-
meson.build | 2 --
tcg/{ppc => ppc64}/tcg-target-opc.h.inc | 0
tcg/{ppc => ppc64}/tcg-target.c.inc | 0
10 files changed, 1 insertion(+), 3 deletions(-)
rename tcg/{ppc => ppc64}/tcg-target-con-set.h (100%)
rename tcg/{ppc => ppc64}/tcg-target-con-str.h (100%)
rename tcg/{ppc => ppc64}/tcg-target-has.h (100%)
rename tcg/{ppc => ppc64}/tcg-target-mo.h (100%)
rename tcg/{ppc => ppc64}/tcg-target-reg-bits.h (100%)
rename tcg/{ppc => ppc64}/tcg-target.h (100%)
rename tcg/{ppc => ppc64}/tcg-target-opc.h.inc (100%)
rename tcg/{ppc => ppc64}/tcg-target.c.inc (100%)
diff --git a/tcg/ppc/tcg-target-con-set.h b/tcg/ppc64/tcg-target-con-set.h
similarity index 100%
rename from tcg/ppc/tcg-target-con-set.h
rename to tcg/ppc64/tcg-target-con-set.h
diff --git a/tcg/ppc/tcg-target-con-str.h b/tcg/ppc64/tcg-target-con-str.h
similarity index 100%
rename from tcg/ppc/tcg-target-con-str.h
rename to tcg/ppc64/tcg-target-con-str.h
diff --git a/tcg/ppc/tcg-target-has.h b/tcg/ppc64/tcg-target-has.h
similarity index 100%
rename from tcg/ppc/tcg-target-has.h
rename to tcg/ppc64/tcg-target-has.h
diff --git a/tcg/ppc/tcg-target-mo.h b/tcg/ppc64/tcg-target-mo.h
similarity index 100%
rename from tcg/ppc/tcg-target-mo.h
rename to tcg/ppc64/tcg-target-mo.h
diff --git a/tcg/ppc/tcg-target-reg-bits.h b/tcg/ppc64/tcg-target-reg-bits.h
similarity index 100%
rename from tcg/ppc/tcg-target-reg-bits.h
rename to tcg/ppc64/tcg-target-reg-bits.h
diff --git a/tcg/ppc/tcg-target.h b/tcg/ppc64/tcg-target.h
similarity index 100%
rename from tcg/ppc/tcg-target.h
rename to tcg/ppc64/tcg-target.h
diff --git a/MAINTAINERS b/MAINTAINERS
index c39a8f54e8..c58fa93fd5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4079,7 +4079,7 @@ F: tcg/mips/
PPC TCG target
M: Richard Henderson <richard.henderson@linaro.org>
S: Odd Fixes
-F: tcg/ppc/
+F: tcg/ppc64/
RISC-V TCG target
M: Palmer Dabbelt <palmer@dabbelt.com>
diff --git a/meson.build b/meson.build
index 594e7d42c0..0647ca0c89 100644
--- a/meson.build
+++ b/meson.build
@@ -907,8 +907,6 @@ if have_tcg
endif
if get_option('tcg_interpreter')
tcg_arch = 'tci'
- elif host_arch == 'ppc64'
- tcg_arch = 'ppc'
endif
add_project_arguments('-iquote', meson.current_source_dir() / 'tcg' / tcg_arch,
language: all_languages)
diff --git a/tcg/ppc/tcg-target-opc.h.inc b/tcg/ppc64/tcg-target-opc.h.inc
similarity index 100%
rename from tcg/ppc/tcg-target-opc.h.inc
rename to tcg/ppc64/tcg-target-opc.h.inc
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc64/tcg-target.c.inc
similarity index 100%
rename from tcg/ppc/tcg-target.c.inc
rename to tcg/ppc64/tcg-target.c.inc
--
2.43.0
* [PULL 16/54] meson: Drop host_arch rename for mips64
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Pierrick Bouvier
This requires renaming several directories:
tcg/mips, linux-user/include/host/mips, and
common-user/host/mips.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/include/host/{mips => mips64}/host-signal.h | 0
tcg/{mips => mips64}/tcg-target-con-set.h | 0
tcg/{mips => mips64}/tcg-target-con-str.h | 0
tcg/{mips => mips64}/tcg-target-has.h | 0
tcg/{mips => mips64}/tcg-target-mo.h | 0
tcg/{mips => mips64}/tcg-target-reg-bits.h | 0
tcg/{mips => mips64}/tcg-target.h | 0
MAINTAINERS | 2 +-
common-user/host/{mips => mips64}/safe-syscall.inc.S | 0
configure | 8 +++-----
meson.build | 2 --
tcg/{mips => mips64}/tcg-target-opc.h.inc | 0
tcg/{mips => mips64}/tcg-target.c.inc | 0
13 files changed, 4 insertions(+), 8 deletions(-)
rename linux-user/include/host/{mips => mips64}/host-signal.h (100%)
rename tcg/{mips => mips64}/tcg-target-con-set.h (100%)
rename tcg/{mips => mips64}/tcg-target-con-str.h (100%)
rename tcg/{mips => mips64}/tcg-target-has.h (100%)
rename tcg/{mips => mips64}/tcg-target-mo.h (100%)
rename tcg/{mips => mips64}/tcg-target-reg-bits.h (100%)
rename tcg/{mips => mips64}/tcg-target.h (100%)
rename common-user/host/{mips => mips64}/safe-syscall.inc.S (100%)
rename tcg/{mips => mips64}/tcg-target-opc.h.inc (100%)
rename tcg/{mips => mips64}/tcg-target.c.inc (100%)
diff --git a/linux-user/include/host/mips/host-signal.h b/linux-user/include/host/mips64/host-signal.h
similarity index 100%
rename from linux-user/include/host/mips/host-signal.h
rename to linux-user/include/host/mips64/host-signal.h
diff --git a/tcg/mips/tcg-target-con-set.h b/tcg/mips64/tcg-target-con-set.h
similarity index 100%
rename from tcg/mips/tcg-target-con-set.h
rename to tcg/mips64/tcg-target-con-set.h
diff --git a/tcg/mips/tcg-target-con-str.h b/tcg/mips64/tcg-target-con-str.h
similarity index 100%
rename from tcg/mips/tcg-target-con-str.h
rename to tcg/mips64/tcg-target-con-str.h
diff --git a/tcg/mips/tcg-target-has.h b/tcg/mips64/tcg-target-has.h
similarity index 100%
rename from tcg/mips/tcg-target-has.h
rename to tcg/mips64/tcg-target-has.h
diff --git a/tcg/mips/tcg-target-mo.h b/tcg/mips64/tcg-target-mo.h
similarity index 100%
rename from tcg/mips/tcg-target-mo.h
rename to tcg/mips64/tcg-target-mo.h
diff --git a/tcg/mips/tcg-target-reg-bits.h b/tcg/mips64/tcg-target-reg-bits.h
similarity index 100%
rename from tcg/mips/tcg-target-reg-bits.h
rename to tcg/mips64/tcg-target-reg-bits.h
diff --git a/tcg/mips/tcg-target.h b/tcg/mips64/tcg-target.h
similarity index 100%
rename from tcg/mips/tcg-target.h
rename to tcg/mips64/tcg-target.h
diff --git a/MAINTAINERS b/MAINTAINERS
index c58fa93fd5..d3e6041186 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4074,7 +4074,7 @@ R: Huacai Chen <chenhuacai@kernel.org>
R: Jiaxun Yang <jiaxun.yang@flygoat.com>
R: Aleksandar Rikalo <arikalo@gmail.com>
S: Odd Fixes
-F: tcg/mips/
+F: tcg/mips64/
PPC TCG target
M: Richard Henderson <richard.henderson@linaro.org>
diff --git a/common-user/host/mips/safe-syscall.inc.S b/common-user/host/mips64/safe-syscall.inc.S
similarity index 100%
rename from common-user/host/mips/safe-syscall.inc.S
rename to common-user/host/mips64/safe-syscall.inc.S
diff --git a/configure b/configure
index e9d0b9e2c0..04d0b214b6 100755
--- a/configure
+++ b/configure
@@ -400,10 +400,8 @@ elif check_define _ARCH_PPC64 ; then
else
cpu="ppc64"
fi
-elif check_define __mips__ ; then
- if check_define __mips64 ; then
- cpu="mips64"
- fi
+elif check_define __mips64 ; then
+ cpu="mips64"
elif check_define __s390__ ; then
if check_define __s390x__ ; then
cpu="s390x"
@@ -455,7 +453,7 @@ case "$cpu" in
mips64*|mipsisa64*)
cpu=mips64
- host_arch=mips
+ host_arch=mips64
linux_arch=mips
;;
diff --git a/meson.build b/meson.build
index 0647ca0c89..c36f2f6962 100644
--- a/meson.build
+++ b/meson.build
@@ -265,8 +265,6 @@ enable_modules = get_option('modules') \
if cpu not in supported_cpus
host_arch = 'unknown'
-elif cpu == 'mips64'
- host_arch = 'mips'
elif cpu in ['riscv32', 'riscv64']
host_arch = 'riscv'
else
diff --git a/tcg/mips/tcg-target-opc.h.inc b/tcg/mips64/tcg-target-opc.h.inc
similarity index 100%
rename from tcg/mips/tcg-target-opc.h.inc
rename to tcg/mips64/tcg-target-opc.h.inc
diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips64/tcg-target.c.inc
similarity index 100%
rename from tcg/mips/tcg-target.c.inc
rename to tcg/mips64/tcg-target.c.inc
--
2.43.0
* [PULL 17/54] meson: Drop host_arch rename for riscv64
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Pierrick Bouvier
This requires renaming several directories:
tcg/riscv, linux-user/include/host/riscv, and
common-user/host/riscv.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
host/include/{riscv => riscv64}/host/cpuinfo.h | 0
linux-user/include/host/{riscv => riscv64}/host-signal.h | 0
tcg/{riscv => riscv64}/tcg-target-con-set.h | 0
tcg/{riscv => riscv64}/tcg-target-con-str.h | 0
tcg/{riscv => riscv64}/tcg-target-has.h | 0
tcg/{riscv => riscv64}/tcg-target-mo.h | 0
tcg/{riscv => riscv64}/tcg-target-reg-bits.h | 0
tcg/{riscv => riscv64}/tcg-target.h | 0
MAINTAINERS | 2 +-
common-user/host/{riscv => riscv64}/safe-syscall.inc.S | 0
configure | 4 ++--
meson.build | 2 --
tcg/{riscv => riscv64}/tcg-target-opc.h.inc | 0
tcg/{riscv => riscv64}/tcg-target.c.inc | 0
14 files changed, 3 insertions(+), 5 deletions(-)
rename host/include/{riscv => riscv64}/host/cpuinfo.h (100%)
rename linux-user/include/host/{riscv => riscv64}/host-signal.h (100%)
rename tcg/{riscv => riscv64}/tcg-target-con-set.h (100%)
rename tcg/{riscv => riscv64}/tcg-target-con-str.h (100%)
rename tcg/{riscv => riscv64}/tcg-target-has.h (100%)
rename tcg/{riscv => riscv64}/tcg-target-mo.h (100%)
rename tcg/{riscv => riscv64}/tcg-target-reg-bits.h (100%)
rename tcg/{riscv => riscv64}/tcg-target.h (100%)
rename common-user/host/{riscv => riscv64}/safe-syscall.inc.S (100%)
rename tcg/{riscv => riscv64}/tcg-target-opc.h.inc (100%)
rename tcg/{riscv => riscv64}/tcg-target.c.inc (100%)
diff --git a/host/include/riscv/host/cpuinfo.h b/host/include/riscv64/host/cpuinfo.h
similarity index 100%
rename from host/include/riscv/host/cpuinfo.h
rename to host/include/riscv64/host/cpuinfo.h
diff --git a/linux-user/include/host/riscv/host-signal.h b/linux-user/include/host/riscv64/host-signal.h
similarity index 100%
rename from linux-user/include/host/riscv/host-signal.h
rename to linux-user/include/host/riscv64/host-signal.h
diff --git a/tcg/riscv/tcg-target-con-set.h b/tcg/riscv64/tcg-target-con-set.h
similarity index 100%
rename from tcg/riscv/tcg-target-con-set.h
rename to tcg/riscv64/tcg-target-con-set.h
diff --git a/tcg/riscv/tcg-target-con-str.h b/tcg/riscv64/tcg-target-con-str.h
similarity index 100%
rename from tcg/riscv/tcg-target-con-str.h
rename to tcg/riscv64/tcg-target-con-str.h
diff --git a/tcg/riscv/tcg-target-has.h b/tcg/riscv64/tcg-target-has.h
similarity index 100%
rename from tcg/riscv/tcg-target-has.h
rename to tcg/riscv64/tcg-target-has.h
diff --git a/tcg/riscv/tcg-target-mo.h b/tcg/riscv64/tcg-target-mo.h
similarity index 100%
rename from tcg/riscv/tcg-target-mo.h
rename to tcg/riscv64/tcg-target-mo.h
diff --git a/tcg/riscv/tcg-target-reg-bits.h b/tcg/riscv64/tcg-target-reg-bits.h
similarity index 100%
rename from tcg/riscv/tcg-target-reg-bits.h
rename to tcg/riscv64/tcg-target-reg-bits.h
diff --git a/tcg/riscv/tcg-target.h b/tcg/riscv64/tcg-target.h
similarity index 100%
rename from tcg/riscv/tcg-target.h
rename to tcg/riscv64/tcg-target.h
diff --git a/MAINTAINERS b/MAINTAINERS
index d3e6041186..c1e586c58f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4086,7 +4086,7 @@ M: Palmer Dabbelt <palmer@dabbelt.com>
M: Alistair Francis <Alistair.Francis@wdc.com>
L: qemu-riscv@nongnu.org
S: Maintained
-F: tcg/riscv/
+F: tcg/riscv64/
F: disas/riscv.[ch]
S390 TCG target
diff --git a/common-user/host/riscv/safe-syscall.inc.S b/common-user/host/riscv64/safe-syscall.inc.S
similarity index 100%
rename from common-user/host/riscv/safe-syscall.inc.S
rename to common-user/host/riscv64/safe-syscall.inc.S
diff --git a/configure b/configure
index 04d0b214b6..ee09f90125 100755
--- a/configure
+++ b/configure
@@ -469,8 +469,8 @@ case "$cpu" in
CPU_CFLAGS="-m64 -mlittle-endian"
;;
- riscv32 | riscv64)
- host_arch=riscv
+ riscv64)
+ host_arch=riscv64
linux_arch=riscv
;;
diff --git a/meson.build b/meson.build
index c36f2f6962..e1ac764793 100644
--- a/meson.build
+++ b/meson.build
@@ -265,8 +265,6 @@ enable_modules = get_option('modules') \
if cpu not in supported_cpus
host_arch = 'unknown'
-elif cpu in ['riscv32', 'riscv64']
- host_arch = 'riscv'
else
host_arch = cpu
endif
diff --git a/tcg/riscv/tcg-target-opc.h.inc b/tcg/riscv64/tcg-target-opc.h.inc
similarity index 100%
rename from tcg/riscv/tcg-target-opc.h.inc
rename to tcg/riscv64/tcg-target-opc.h.inc
diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv64/tcg-target.c.inc
similarity index 100%
rename from tcg/riscv/tcg-target.c.inc
rename to tcg/riscv64/tcg-target.c.inc
--
2.43.0
* [PULL 18/54] meson: Remove cpu == riscv32 tests
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (16 preceding siblings ...)
2026-01-18 22:03 ` [PULL 17/54] meson: Drop host_arch rename for riscv64 Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 19/54] tcg: Make TCG_TARGET_REG_BITS common Richard Henderson
` (36 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Thomas Huth, Philippe Mathieu-Daudé, Pierrick Bouvier
The 32-bit RISC-V host is no longer supported.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
configure | 10 +++-------
meson.build | 4 +---
2 files changed, 4 insertions(+), 10 deletions(-)
diff --git a/configure b/configure
index ee09f90125..e69b3e474e 100755
--- a/configure
+++ b/configure
@@ -408,12 +408,8 @@ elif check_define __s390__ ; then
else
cpu="s390"
fi
-elif check_define __riscv ; then
- if check_define _LP64 ; then
- cpu="riscv64"
- else
- cpu="riscv32"
- fi
+elif check_define __riscv && check_define _LP64 ; then
+ cpu="riscv64"
elif check_define __aarch64__ ; then
cpu="aarch64"
elif check_define __loongarch64 ; then
@@ -1280,7 +1276,7 @@ EOF
test "$bigendian" = no && rust_arch=${rust_arch}el
;;
- riscv32|riscv64)
+ riscv64)
# e.g. riscv64gc-unknown-linux-gnu, but riscv64-linux-android
test "$android" = no && rust_arch=${rust_arch}gc
;;
diff --git a/meson.build b/meson.build
index e1ac764793..0189d8fd44 100644
--- a/meson.build
+++ b/meson.build
@@ -50,7 +50,7 @@ qapi_trace_events = []
bsd_oses = ['gnu/kfreebsd', 'freebsd', 'netbsd', 'openbsd', 'dragonfly', 'darwin']
supported_oses = ['windows', 'freebsd', 'netbsd', 'openbsd', 'darwin', 'sunos', 'linux', 'emscripten']
-supported_cpus = ['ppc64', 's390x', 'riscv32', 'riscv64', 'x86_64',
+supported_cpus = ['ppc64', 's390x', 'riscv64', 'x86_64',
'aarch64', 'loongarch64', 'mips64', 'sparc64', 'wasm64']
cpu = host_machine.cpu_family()
@@ -279,8 +279,6 @@ elif cpu == 'ppc64'
kvm_targets = ['ppc-softmmu', 'ppc64-softmmu']
elif cpu == 'mips64'
kvm_targets = ['mips-softmmu', 'mipsel-softmmu', 'mips64-softmmu', 'mips64el-softmmu']
-elif cpu == 'riscv32'
- kvm_targets = ['riscv32-softmmu']
elif cpu == 'riscv64'
kvm_targets = ['riscv64-softmmu']
elif cpu == 'loongarch64'
--
2.43.0
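The configure hunk above collapses the riscv case to a single `check_define __riscv && check_define _LP64` test. As background, this is a hedged sketch of how a `check_define`-style probe works: it compiles a tiny file that fails with `#error` when the macro is not predefined by the compiler. The file name `conftest.c` and the `CC` fallback are illustrative, not taken from QEMU's configure.

```shell
# Probe whether the host compiler predefines a macro, configure-style.
check_define() {
    cat > conftest.c <<EOF
#if !defined($1)
#error $1 not defined
#endif
int main(void) { return 0; }
EOF
    ${CC:-cc} -c conftest.c -o conftest.o 2>/dev/null
    rc=$?
    rm -f conftest.c conftest.o
    return $rc
}

# With such a probe, only an LP64 riscv host is accepted:
if check_define __riscv && check_define _LP64; then
    echo "cpu=riscv64"
fi
```

On a non-RISC-V host the `if` simply does nothing, mirroring how configure falls through to the next `elif`.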
* [PULL 19/54] tcg: Make TCG_TARGET_REG_BITS common
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (17 preceding siblings ...)
2026-01-18 22:03 ` [PULL 18/54] meson: Remove cpu == riscv32 tests Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 20/54] tcg: Replace TCG_TARGET_REG_BITS / 8 Richard Henderson
` (35 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Pierrick Bouvier
Since we only support 64-bit hosts, there's no real need
to parameterize TCG_TARGET_REG_BITS. It seems worth holding
on to the identifier though, for documentation purposes.
Move one tcg/*/tcg-target-reg-bits.h to tcg/target-reg-bits.h
and remove the others.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/tcg/helper-info.h | 2 +-
.../tcg/target-reg-bits.h | 8 +++----
include/tcg/tcg.h | 2 +-
tcg/aarch64/tcg-target-reg-bits.h | 12 -----------
tcg/loongarch64/tcg-target-reg-bits.h | 21 -------------------
tcg/mips64/tcg-target-reg-bits.h | 16 --------------
tcg/riscv64/tcg-target-reg-bits.h | 19 -----------------
tcg/s390x/tcg-target-reg-bits.h | 17 ---------------
tcg/sparc64/tcg-target-reg-bits.h | 12 -----------
tcg/tci/tcg-target-reg-bits.h | 18 ----------------
tcg/x86_64/tcg-target-reg-bits.h | 16 --------------
11 files changed, 6 insertions(+), 137 deletions(-)
rename tcg/ppc64/tcg-target-reg-bits.h => include/tcg/target-reg-bits.h (71%)
delete mode 100644 tcg/aarch64/tcg-target-reg-bits.h
delete mode 100644 tcg/loongarch64/tcg-target-reg-bits.h
delete mode 100644 tcg/mips64/tcg-target-reg-bits.h
delete mode 100644 tcg/riscv64/tcg-target-reg-bits.h
delete mode 100644 tcg/s390x/tcg-target-reg-bits.h
delete mode 100644 tcg/sparc64/tcg-target-reg-bits.h
delete mode 100644 tcg/tci/tcg-target-reg-bits.h
delete mode 100644 tcg/x86_64/tcg-target-reg-bits.h
diff --git a/include/tcg/helper-info.h b/include/tcg/helper-info.h
index 49a27e4eae..d5bda83a2e 100644
--- a/include/tcg/helper-info.h
+++ b/include/tcg/helper-info.h
@@ -24,7 +24,7 @@
#include <ffi.h>
#pragma GCC diagnostic pop
#endif
-#include "tcg-target-reg-bits.h"
+#include "tcg/target-reg-bits.h"
#define MAX_CALL_IARGS 7
diff --git a/tcg/ppc64/tcg-target-reg-bits.h b/include/tcg/target-reg-bits.h
similarity index 71%
rename from tcg/ppc64/tcg-target-reg-bits.h
rename to include/tcg/target-reg-bits.h
index 3a15d7bee4..8f4ad3ed99 100644
--- a/tcg/ppc64/tcg-target-reg-bits.h
+++ b/include/tcg/target-reg-bits.h
@@ -7,10 +7,10 @@
#ifndef TCG_TARGET_REG_BITS_H
#define TCG_TARGET_REG_BITS_H
-#ifndef _ARCH_PPC64
-# error Expecting 64-bit host architecture
-#endif
-
+/*
+ * We only support 64-bit hosts now.
+ * Retain the identifier for documentation.
+ */
#define TCG_TARGET_REG_BITS 64
#endif
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index a6d9aa50d4..067150c542 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -31,7 +31,7 @@
#include "qemu/plugin.h"
#include "qemu/queue.h"
#include "tcg/tcg-mo.h"
-#include "tcg-target-reg-bits.h"
+#include "tcg/target-reg-bits.h"
#include "tcg-target.h"
#include "tcg/tcg-cond.h"
#include "tcg/insn-start-words.h"
diff --git a/tcg/aarch64/tcg-target-reg-bits.h b/tcg/aarch64/tcg-target-reg-bits.h
deleted file mode 100644
index 3b57a1aafb..0000000000
--- a/tcg/aarch64/tcg-target-reg-bits.h
+++ /dev/null
@@ -1,12 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * Define target-specific register size
- * Copyright (c) 2023 Linaro
- */
-
-#ifndef TCG_TARGET_REG_BITS_H
-#define TCG_TARGET_REG_BITS_H
-
-#define TCG_TARGET_REG_BITS 64
-
-#endif
diff --git a/tcg/loongarch64/tcg-target-reg-bits.h b/tcg/loongarch64/tcg-target-reg-bits.h
deleted file mode 100644
index 51373ad70a..0000000000
--- a/tcg/loongarch64/tcg-target-reg-bits.h
+++ /dev/null
@@ -1,21 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Define target-specific register size
- * Copyright (c) 2021 WANG Xuerui <git@xen0n.name>
- */
-
-#ifndef TCG_TARGET_REG_BITS_H
-#define TCG_TARGET_REG_BITS_H
-
-/*
- * Loongson removed the (incomplete) 32-bit support from kernel and toolchain
- * for the initial upstreaming of this architecture, so don't bother and just
- * support the LP64* ABI for now.
- */
-#if defined(__loongarch64)
-# define TCG_TARGET_REG_BITS 64
-#else
-# error unsupported LoongArch register size
-#endif
-
-#endif
diff --git a/tcg/mips64/tcg-target-reg-bits.h b/tcg/mips64/tcg-target-reg-bits.h
deleted file mode 100644
index ee346a3f25..0000000000
--- a/tcg/mips64/tcg-target-reg-bits.h
+++ /dev/null
@@ -1,16 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Define target-specific register size
- * Copyright (c) 2008-2009 Arnaud Patard <arnaud.patard@rtp-net.org>
- */
-
-#ifndef TCG_TARGET_REG_BITS_H
-#define TCG_TARGET_REG_BITS_H
-
-#if !defined(_MIPS_SIM) || _MIPS_SIM != _ABI64
-# error "Unknown ABI"
-#endif
-
-#define TCG_TARGET_REG_BITS 64
-
-#endif
diff --git a/tcg/riscv64/tcg-target-reg-bits.h b/tcg/riscv64/tcg-target-reg-bits.h
deleted file mode 100644
index 761ca0d774..0000000000
--- a/tcg/riscv64/tcg-target-reg-bits.h
+++ /dev/null
@@ -1,19 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Define target-specific register size
- * Copyright (c) 2018 SiFive, Inc
- */
-
-#ifndef TCG_TARGET_REG_BITS_H
-#define TCG_TARGET_REG_BITS_H
-
-/*
- * We don't support oversize guests.
- * Since we will only build tcg once, this in turn requires a 64-bit host.
- */
-#if __riscv_xlen != 64
-#error "unsupported code generation mode"
-#endif
-#define TCG_TARGET_REG_BITS 64
-
-#endif
diff --git a/tcg/s390x/tcg-target-reg-bits.h b/tcg/s390x/tcg-target-reg-bits.h
deleted file mode 100644
index b01414e09d..0000000000
--- a/tcg/s390x/tcg-target-reg-bits.h
+++ /dev/null
@@ -1,17 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Define target-specific register size
- * Copyright (c) 2009 Ulrich Hecht <uli@suse.de>
- */
-
-#ifndef TCG_TARGET_REG_BITS_H
-#define TCG_TARGET_REG_BITS_H
-
-/* We only support generating code for 64-bit mode. */
-#if UINTPTR_MAX == UINT64_MAX
-# define TCG_TARGET_REG_BITS 64
-#else
-# error "unsupported code generation mode"
-#endif
-
-#endif
diff --git a/tcg/sparc64/tcg-target-reg-bits.h b/tcg/sparc64/tcg-target-reg-bits.h
deleted file mode 100644
index 34a6711013..0000000000
--- a/tcg/sparc64/tcg-target-reg-bits.h
+++ /dev/null
@@ -1,12 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Define target-specific register size
- * Copyright (c) 2023 Linaro
- */
-
-#ifndef TCG_TARGET_REG_BITS_H
-#define TCG_TARGET_REG_BITS_H
-
-#define TCG_TARGET_REG_BITS 64
-
-#endif
diff --git a/tcg/tci/tcg-target-reg-bits.h b/tcg/tci/tcg-target-reg-bits.h
deleted file mode 100644
index dcb1a203f8..0000000000
--- a/tcg/tci/tcg-target-reg-bits.h
+++ /dev/null
@@ -1,18 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Define target-specific register size
- * Copyright (c) 2009, 2011 Stefan Weil
- */
-
-#ifndef TCG_TARGET_REG_BITS_H
-#define TCG_TARGET_REG_BITS_H
-
-#if UINTPTR_MAX == UINT32_MAX
-# define TCG_TARGET_REG_BITS 32
-#elif UINTPTR_MAX == UINT64_MAX
-# define TCG_TARGET_REG_BITS 64
-#else
-# error Unknown pointer size for tci target
-#endif
-
-#endif
diff --git a/tcg/x86_64/tcg-target-reg-bits.h b/tcg/x86_64/tcg-target-reg-bits.h
deleted file mode 100644
index fc3377e829..0000000000
--- a/tcg/x86_64/tcg-target-reg-bits.h
+++ /dev/null
@@ -1,16 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Define target-specific register size
- * Copyright (c) 2008 Fabrice Bellard
- */
-
-#ifndef TCG_TARGET_REG_BITS_H
-#define TCG_TARGET_REG_BITS_H
-
-#ifdef __x86_64__
-# define TCG_TARGET_REG_BITS 64
-#else
-# error
-#endif
-
-#endif
--
2.43.0
* [PULL 20/54] tcg: Replace TCG_TARGET_REG_BITS / 8
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (18 preceding siblings ...)
2026-01-18 22:03 ` [PULL 19/54] tcg: Make TCG_TARGET_REG_BITS common Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 21/54] *: Drop TCG_TARGET_REG_BITS test for prefer_i64 Richard Henderson
` (34 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Thomas Huth, Pierrick Bouvier
Use sizeof(tcg_target_long) instead of division.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/tcg-op-gvec.c | 2 +-
tcg/loongarch64/tcg-target.c.inc | 4 ++--
tcg/ppc64/tcg-target.c.inc | 2 +-
tcg/riscv64/tcg-target.c.inc | 4 ++--
4 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index 2d184547ba..9c33430638 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -607,7 +607,7 @@ static void do_dup(unsigned vece, TCGv_ptr dbase, uint32_t dofs,
}
/* Otherwise, inline with an integer type, unless "large". */
- if (check_size_impl(oprsz, TCG_TARGET_REG_BITS / 8)) {
+ if (check_size_impl(oprsz, sizeof(tcg_target_long))) {
t_64 = NULL;
t_32 = NULL;
diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
index 10c69211ac..c3350c90fc 100644
--- a/tcg/loongarch64/tcg-target.c.inc
+++ b/tcg/loongarch64/tcg-target.c.inc
@@ -2604,7 +2604,7 @@ static const int tcg_target_callee_save_regs[] = {
};
/* Stack frame parameters. */
-#define REG_SIZE (TCG_TARGET_REG_BITS / 8)
+#define REG_SIZE ((int)sizeof(tcg_target_long))
#define SAVE_SIZE ((int)ARRAY_SIZE(tcg_target_callee_save_regs) * REG_SIZE)
#define TEMP_SIZE (CPU_TEMP_BUF_NLONGS * (int)sizeof(long))
#define FRAME_SIZE ((TCG_STATIC_CALL_ARGS_SIZE + TEMP_SIZE + SAVE_SIZE \
@@ -2731,7 +2731,7 @@ static const DebugFrame debug_frame = {
.h.cie.id = -1,
.h.cie.version = 1,
.h.cie.code_align = 1,
- .h.cie.data_align = -(TCG_TARGET_REG_BITS / 8) & 0x7f, /* sleb128 */
+ .h.cie.data_align = -sizeof(tcg_target_long) & 0x7f, /* sleb128 */
.h.cie.return_column = TCG_REG_RA,
/* Total FDE size does not include the "len" member. */
diff --git a/tcg/ppc64/tcg-target.c.inc b/tcg/ppc64/tcg-target.c.inc
index 3c36b26f25..b54afa0b6d 100644
--- a/tcg/ppc64/tcg-target.c.inc
+++ b/tcg/ppc64/tcg-target.c.inc
@@ -70,7 +70,7 @@
#define SZP ((int)sizeof(void *))
/* Shorthand for size of a register. */
-#define SZR (TCG_TARGET_REG_BITS / 8)
+#define SZR ((int)sizeof(tcg_target_long))
#define TCG_CT_CONST_S16 0x00100
#define TCG_CT_CONST_U16 0x00200
diff --git a/tcg/riscv64/tcg-target.c.inc b/tcg/riscv64/tcg-target.c.inc
index 0967a445a3..76dd4fca97 100644
--- a/tcg/riscv64/tcg-target.c.inc
+++ b/tcg/riscv64/tcg-target.c.inc
@@ -2934,7 +2934,7 @@ static const int tcg_target_callee_save_regs[] = {
};
/* Stack frame parameters. */
-#define REG_SIZE (TCG_TARGET_REG_BITS / 8)
+#define REG_SIZE ((int)sizeof(tcg_target_long))
#define SAVE_SIZE ((int)ARRAY_SIZE(tcg_target_callee_save_regs) * REG_SIZE)
#define TEMP_SIZE (CPU_TEMP_BUF_NLONGS * (int)sizeof(long))
#define FRAME_SIZE ((TCG_STATIC_CALL_ARGS_SIZE + TEMP_SIZE + SAVE_SIZE \
@@ -3114,7 +3114,7 @@ static const DebugFrame debug_frame = {
.h.cie.id = -1,
.h.cie.version = 1,
.h.cie.code_align = 1,
- .h.cie.data_align = -(TCG_TARGET_REG_BITS / 8) & 0x7f, /* sleb128 */
+ .h.cie.data_align = -sizeof(tcg_target_long) & 0x7f, /* sleb128 */
.h.cie.return_column = TCG_REG_RA,
/* Total FDE size does not include the "len" member. */
--
2.43.0
* [PULL 21/54] *: Drop TCG_TARGET_REG_BITS test for prefer_i64
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (19 preceding siblings ...)
2026-01-18 22:03 ` [PULL 20/54] tcg: Replace TCG_TARGET_REG_BITS / 8 Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 22/54] tcg: Remove INDEX_op_brcond2_i32 Richard Henderson
` (33 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Pierrick Bouvier
Mechanically via sed -i.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/gengvec.c | 32 ++++++-------
target/arm/tcg/gengvec64.c | 4 +-
target/arm/tcg/translate-sve.c | 26 +++++------
tcg/tcg-op-gvec.c | 62 ++++++++++++-------------
target/i386/tcg/emit.c.inc | 2 +-
target/riscv/insn_trans/trans_rvv.c.inc | 2 +-
6 files changed, 64 insertions(+), 64 deletions(-)
diff --git a/target/arm/tcg/gengvec.c b/target/arm/tcg/gengvec.c
index 01867f8ace..f97d63549c 100644
--- a/target/arm/tcg/gengvec.c
+++ b/target/arm/tcg/gengvec.c
@@ -165,7 +165,7 @@ void gen_gvec_ssra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
{ .fni8 = gen_ssra64_i64,
.fniv = gen_ssra_vec,
.fno = gen_helper_gvec_ssra_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.opt_opc = vecop_list,
.load_dest = true,
.vece = MO_64 },
@@ -241,7 +241,7 @@ void gen_gvec_usra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
{ .fni8 = gen_usra64_i64,
.fniv = gen_usra_vec,
.fno = gen_helper_gvec_usra_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.load_dest = true,
.opt_opc = vecop_list,
.vece = MO_64, },
@@ -349,7 +349,7 @@ void gen_gvec_srshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
{ .fni8 = gen_srshr64_i64,
.fniv = gen_srshr_vec,
.fno = gen_helper_gvec_srshr_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.opt_opc = vecop_list,
.vece = MO_64 },
};
@@ -439,7 +439,7 @@ void gen_gvec_srsra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
{ .fni8 = gen_srsra64_i64,
.fniv = gen_srsra_vec,
.fno = gen_helper_gvec_srsra_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.opt_opc = vecop_list,
.load_dest = true,
.vece = MO_64 },
@@ -543,7 +543,7 @@ void gen_gvec_urshr(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
{ .fni8 = gen_urshr64_i64,
.fniv = gen_urshr_vec,
.fno = gen_helper_gvec_urshr_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.opt_opc = vecop_list,
.vece = MO_64 },
};
@@ -652,7 +652,7 @@ void gen_gvec_ursra(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
{ .fni8 = gen_ursra64_i64,
.fniv = gen_ursra_vec,
.fno = gen_helper_gvec_ursra_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.opt_opc = vecop_list,
.load_dest = true,
.vece = MO_64 },
@@ -736,7 +736,7 @@ void gen_gvec_sri(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
{ .fni8 = gen_shr64_ins_i64,
.fniv = gen_shr_ins_vec,
.fno = gen_helper_gvec_sri_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.load_dest = true,
.opt_opc = vecop_list,
.vece = MO_64 },
@@ -823,7 +823,7 @@ void gen_gvec_sli(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
{ .fni8 = gen_shl64_ins_i64,
.fniv = gen_shl_ins_vec,
.fno = gen_helper_gvec_sli_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.load_dest = true,
.opt_opc = vecop_list,
.vece = MO_64 },
@@ -927,7 +927,7 @@ void gen_gvec_mla(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
.vece = MO_32 },
{ .fni8 = gen_mla64_i64,
.fniv = gen_mla_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.load_dest = true,
.opt_opc = vecop_list,
.vece = MO_64 },
@@ -959,7 +959,7 @@ void gen_gvec_mls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
.vece = MO_32 },
{ .fni8 = gen_mls64_i64,
.fniv = gen_mls_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.load_dest = true,
.opt_opc = vecop_list,
.vece = MO_64 },
@@ -1002,7 +1002,7 @@ void gen_gvec_cmtst(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
.vece = MO_32 },
{ .fni8 = gen_cmtst_i64,
.fniv = gen_cmtst_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.opt_opc = vecop_list,
.vece = MO_64 },
};
@@ -1691,7 +1691,7 @@ void gen_gvec_sabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
{ .fni8 = gen_sabd_i64,
.fniv = gen_sabd_vec,
.fno = gen_helper_gvec_sabd_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.opt_opc = vecop_list,
.vece = MO_64 },
};
@@ -1748,7 +1748,7 @@ void gen_gvec_uabd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
{ .fni8 = gen_uabd_i64,
.fniv = gen_uabd_vec,
.fno = gen_helper_gvec_uabd_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.opt_opc = vecop_list,
.vece = MO_64 },
};
@@ -1803,7 +1803,7 @@ void gen_gvec_saba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
{ .fni8 = gen_saba_i64,
.fniv = gen_saba_vec,
.fno = gen_helper_gvec_saba_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.opt_opc = vecop_list,
.load_dest = true,
.vece = MO_64 },
@@ -1859,7 +1859,7 @@ void gen_gvec_uaba(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
{ .fni8 = gen_uaba_i64,
.fniv = gen_uaba_vec,
.fno = gen_helper_gvec_uaba_d,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.opt_opc = vecop_list,
.load_dest = true,
.vece = MO_64 },
@@ -2429,7 +2429,7 @@ void gen_gvec_rev32(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
static const GVecGen2 g = {
.fni8 = gen_bswap32_i64,
.fni4 = tcg_gen_bswap32_i32,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_32
};
diff --git a/target/arm/tcg/gengvec64.c b/target/arm/tcg/gengvec64.c
index 2429cab1b8..c425d2b149 100644
--- a/target/arm/tcg/gengvec64.c
+++ b/target/arm/tcg/gengvec64.c
@@ -157,7 +157,7 @@ void gen_gvec_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
.fniv = gen_eor3_vec,
.fno = gen_helper_sve2_eor3,
.vece = MO_64,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
}
@@ -183,7 +183,7 @@ void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
.fniv = gen_bcax_vec,
.fno = gen_helper_sve2_bcax,
.vece = MO_64,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
}
diff --git a/target/arm/tcg/translate-sve.c b/target/arm/tcg/translate-sve.c
index 07b827fa8e..64adb5c1ce 100644
--- a/target/arm/tcg/translate-sve.c
+++ b/target/arm/tcg/translate-sve.c
@@ -623,7 +623,7 @@ static void gen_bsl1n(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
.fniv = gen_bsl1n_vec,
.fno = gen_helper_sve2_bsl1n,
.vece = MO_64,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
}
@@ -661,7 +661,7 @@ static void gen_bsl2n(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
.fniv = gen_bsl2n_vec,
.fno = gen_helper_sve2_bsl2n,
.vece = MO_64,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
}
@@ -690,7 +690,7 @@ static void gen_nbsl(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
.fniv = gen_nbsl_vec,
.fno = gen_helper_sve2_nbsl,
.vece = MO_64,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
}
@@ -1367,7 +1367,7 @@ static bool trans_AND_pppp(DisasContext *s, arg_rprr_s *a)
.fni8 = gen_and_pg_i64,
.fniv = gen_and_pg_vec,
.fno = gen_helper_sve_and_pppp,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (!dc_isar_feature(aa64_sve, s)) {
@@ -1405,7 +1405,7 @@ static bool trans_BIC_pppp(DisasContext *s, arg_rprr_s *a)
.fni8 = gen_bic_pg_i64,
.fniv = gen_bic_pg_vec,
.fno = gen_helper_sve_bic_pppp,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (!dc_isar_feature(aa64_sve, s)) {
@@ -1436,7 +1436,7 @@ static bool trans_EOR_pppp(DisasContext *s, arg_rprr_s *a)
.fni8 = gen_eor_pg_i64,
.fniv = gen_eor_pg_vec,
.fno = gen_helper_sve_eor_pppp,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (!dc_isar_feature(aa64_sve, s)) {
@@ -1483,7 +1483,7 @@ static bool trans_ORR_pppp(DisasContext *s, arg_rprr_s *a)
.fni8 = gen_orr_pg_i64,
.fniv = gen_orr_pg_vec,
.fno = gen_helper_sve_orr_pppp,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (!dc_isar_feature(aa64_sve, s)) {
@@ -1514,7 +1514,7 @@ static bool trans_ORN_pppp(DisasContext *s, arg_rprr_s *a)
.fni8 = gen_orn_pg_i64,
.fniv = gen_orn_pg_vec,
.fno = gen_helper_sve_orn_pppp,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (!dc_isar_feature(aa64_sve, s)) {
@@ -1542,7 +1542,7 @@ static bool trans_NOR_pppp(DisasContext *s, arg_rprr_s *a)
.fni8 = gen_nor_pg_i64,
.fniv = gen_nor_pg_vec,
.fno = gen_helper_sve_nor_pppp,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (!dc_isar_feature(aa64_sve, s)) {
@@ -1570,7 +1570,7 @@ static bool trans_NAND_pppp(DisasContext *s, arg_rprr_s *a)
.fni8 = gen_nand_pg_i64,
.fniv = gen_nand_pg_vec,
.fno = gen_helper_sve_nand_pppp,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (!dc_isar_feature(aa64_sve, s)) {
@@ -3680,7 +3680,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
.fniv = tcg_gen_sub_vec,
.fno = gen_helper_sve_subri_d,
.opt_opc = vecop_list,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64,
.scalar_first = true }
};
@@ -8024,7 +8024,7 @@ static void gen_sclamp(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
.fno = gen_helper_gvec_sclamp_d,
.opt_opc = vecop,
.vece = MO_64,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64 }
+ .prefer_i64 = true }
};
tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &ops[vece]);
}
@@ -8075,7 +8075,7 @@ static void gen_uclamp(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
.fno = gen_helper_gvec_uclamp_d,
.opt_opc = vecop,
.vece = MO_64,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64 }
+ .prefer_i64 = true }
};
tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &ops[vece]);
}
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index 9c33430638..2cfc7e9409 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -1754,7 +1754,7 @@ void tcg_gen_gvec_mov_var(unsigned vece, TCGv_ptr dbase, uint32_t dofs,
.fni8 = tcg_gen_mov_i64,
.fniv = vec_mov2,
.fno = gen_helper_gvec_mov,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (dofs == aofs && dbase == abase) {
@@ -1917,7 +1917,7 @@ void tcg_gen_gvec_not(unsigned vece, uint32_t dofs, uint32_t aofs,
.fni8 = tcg_gen_not_i64,
.fniv = tcg_gen_not_vec,
.fno = gen_helper_gvec_not,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
tcg_gen_gvec_2(dofs, aofs, oprsz, maxsz, &g);
}
@@ -2030,7 +2030,7 @@ void tcg_gen_gvec_add_var(unsigned vece, TCGv_ptr dbase, uint32_t dofs,
.fniv = tcg_gen_add_vec,
.fno = gen_helper_gvec_add64,
.opt_opc = vecop_list_add,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -2069,7 +2069,7 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_add_vec,
.fno = gen_helper_gvec_adds64,
.opt_opc = vecop_list_add,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -2109,7 +2109,7 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_sub_vec,
.fno = gen_helper_gvec_subs64,
.opt_opc = vecop_list_sub,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -2221,7 +2221,7 @@ void tcg_gen_gvec_sub_var(unsigned vece, TCGv_ptr dbase, uint32_t dofs,
.fniv = tcg_gen_sub_vec,
.fno = gen_helper_gvec_sub64,
.opt_opc = vecop_list_sub,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -2260,7 +2260,7 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_mul_vec,
.fno = gen_helper_gvec_mul64,
.opt_opc = vecop_list_mul,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -2289,7 +2289,7 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_mul_vec,
.fno = gen_helper_gvec_muls64,
.opt_opc = vecop_list_mul,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -2618,7 +2618,7 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_neg_vec,
.fno = gen_helper_gvec_neg64,
.opt_opc = vecop_list,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -2682,7 +2682,7 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_abs_vec,
.fno = gen_helper_gvec_abs64,
.opt_opc = vecop_list,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -2697,7 +2697,7 @@ void tcg_gen_gvec_and(unsigned vece, uint32_t dofs, uint32_t aofs,
.fni8 = tcg_gen_and_i64,
.fniv = tcg_gen_and_vec,
.fno = gen_helper_gvec_and,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (aofs == bofs) {
@@ -2714,7 +2714,7 @@ void tcg_gen_gvec_or(unsigned vece, uint32_t dofs, uint32_t aofs,
.fni8 = tcg_gen_or_i64,
.fniv = tcg_gen_or_vec,
.fno = gen_helper_gvec_or,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (aofs == bofs) {
@@ -2731,7 +2731,7 @@ void tcg_gen_gvec_xor(unsigned vece, uint32_t dofs, uint32_t aofs,
.fni8 = tcg_gen_xor_i64,
.fniv = tcg_gen_xor_vec,
.fno = gen_helper_gvec_xor,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (aofs == bofs) {
@@ -2748,7 +2748,7 @@ void tcg_gen_gvec_andc(unsigned vece, uint32_t dofs, uint32_t aofs,
.fni8 = tcg_gen_andc_i64,
.fniv = tcg_gen_andc_vec,
.fno = gen_helper_gvec_andc,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (aofs == bofs) {
@@ -2765,7 +2765,7 @@ void tcg_gen_gvec_orc(unsigned vece, uint32_t dofs, uint32_t aofs,
.fni8 = tcg_gen_orc_i64,
.fniv = tcg_gen_orc_vec,
.fno = gen_helper_gvec_orc,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (aofs == bofs) {
@@ -2782,7 +2782,7 @@ void tcg_gen_gvec_nand(unsigned vece, uint32_t dofs, uint32_t aofs,
.fni8 = tcg_gen_nand_i64,
.fniv = tcg_gen_nand_vec,
.fno = gen_helper_gvec_nand,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (aofs == bofs) {
@@ -2799,7 +2799,7 @@ void tcg_gen_gvec_nor(unsigned vece, uint32_t dofs, uint32_t aofs,
.fni8 = tcg_gen_nor_i64,
.fniv = tcg_gen_nor_vec,
.fno = gen_helper_gvec_nor,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (aofs == bofs) {
@@ -2816,7 +2816,7 @@ void tcg_gen_gvec_eqv(unsigned vece, uint32_t dofs, uint32_t aofs,
.fni8 = tcg_gen_eqv_i64,
.fniv = tcg_gen_eqv_vec,
.fno = gen_helper_gvec_eqv,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
};
if (aofs == bofs) {
@@ -2830,7 +2830,7 @@ static const GVecGen2s gop_ands = {
.fni8 = tcg_gen_and_i64,
.fniv = tcg_gen_and_vec,
.fno = gen_helper_gvec_ands,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64
};
@@ -2857,7 +2857,7 @@ void tcg_gen_gvec_andcs(unsigned vece, uint32_t dofs, uint32_t aofs,
.fni8 = tcg_gen_andc_i64,
.fniv = tcg_gen_andc_vec,
.fno = gen_helper_gvec_andcs,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64
};
@@ -2871,7 +2871,7 @@ static const GVecGen2s gop_xors = {
.fni8 = tcg_gen_xor_i64,
.fniv = tcg_gen_xor_vec,
.fno = gen_helper_gvec_xors,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64
};
@@ -2895,7 +2895,7 @@ static const GVecGen2s gop_ors = {
.fni8 = tcg_gen_or_i64,
.fniv = tcg_gen_or_vec,
.fno = gen_helper_gvec_ors,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64
};
@@ -2967,7 +2967,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_shli_vec,
.fno = gen_helper_gvec_shl64i,
.opt_opc = vecop_list,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -3032,7 +3032,7 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_shri_vec,
.fno = gen_helper_gvec_shr64i,
.opt_opc = vecop_list,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -3125,7 +3125,7 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_sari_vec,
.fno = gen_helper_gvec_sar64i,
.opt_opc = vecop_list,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -3184,7 +3184,7 @@ void tcg_gen_gvec_rotli(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_rotli_vec,
.fno = gen_helper_gvec_rotl64i,
.opt_opc = vecop_list,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -3513,7 +3513,7 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_shlv_mod_vec,
.fno = gen_helper_gvec_shl64v,
.opt_opc = vecop_list,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -3576,7 +3576,7 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_shrv_mod_vec,
.fno = gen_helper_gvec_shr64v,
.opt_opc = vecop_list,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -3639,7 +3639,7 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_sarv_mod_vec,
.fno = gen_helper_gvec_sar64v,
.opt_opc = vecop_list,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -3702,7 +3702,7 @@ void tcg_gen_gvec_rotlv(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_rotlv_mod_vec,
.fno = gen_helper_gvec_rotl64v,
.opt_opc = vecop_list,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
@@ -3761,7 +3761,7 @@ void tcg_gen_gvec_rotrv(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = tcg_gen_rotrv_mod_vec,
.fno = gen_helper_gvec_rotr64v,
.opt_opc = vecop_list,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index bc3a07f972..41bf047b8d 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -3006,7 +3006,7 @@ static void gen_PMOVMSKB(DisasContext *s, X86DecodedInsn *decode)
.fniv = gen_pmovmskb_vec,
.opt_opc = vecop_list,
.vece = MO_64,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64
+ .prefer_i64 = true
};
MemOp ot = decode->op[2].ot;
int vec_len = vector_len(s, decode);
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index 2a487179f6..caefd38216 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -1489,7 +1489,7 @@ static void tcg_gen_gvec_rsubs(unsigned vece, uint32_t dofs, uint32_t aofs,
.fniv = gen_rsub_vec,
.fno = gen_helper_vec_rsubs64,
.opt_opc = vecop_list,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .prefer_i64 = true,
.vece = MO_64 },
};
--
2.43.0
* [PULL 22/54] tcg: Remove INDEX_op_brcond2_i32
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (20 preceding siblings ...)
2026-01-18 22:03 ` [PULL 21/54] *: Drop TCG_TARGET_REG_BITS test for prefer_i64 Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 23/54] tcg: Remove INDEX_op_setcond2_i32 Richard Henderson
` (32 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Thomas Huth, Pierrick Bouvier
This opcode was exclusively for 32-bit hosts.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/tcg/tcg-opc.h | 1 -
tcg/optimize.c | 99 ----------------------------------------
tcg/tcg-op.c | 32 ++-----------
tcg/tcg.c | 34 --------------
docs/devel/tcg-ops.rst | 7 +--
tcg/tci/tcg-target.c.inc | 17 -------
6 files changed, 4 insertions(+), 186 deletions(-)
diff --git a/include/tcg/tcg-opc.h b/include/tcg/tcg-opc.h
index e988edd93a..55283af326 100644
--- a/include/tcg/tcg-opc.h
+++ b/include/tcg/tcg-opc.h
@@ -103,7 +103,6 @@ DEF(subb1o, 1, 2, 0, TCG_OPF_INT | TCG_OPF_CARRY_OUT)
DEF(subbi, 1, 2, 0, TCG_OPF_INT | TCG_OPF_CARRY_IN)
DEF(subbio, 1, 2, 0, TCG_OPF_INT | TCG_OPF_CARRY_IN | TCG_OPF_CARRY_OUT)
-DEF(brcond2_i32, 0, 4, 2, TCG_OPF_BB_END | TCG_OPF_COND_BRANCH)
DEF(setcond2_i32, 1, 4, 1, 0)
/* size changing ops */
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 5ae26e4a10..a544c055b8 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1597,102 +1597,6 @@ static bool fold_brcond(OptContext *ctx, TCGOp *op)
return true;
}
-static bool fold_brcond2(OptContext *ctx, TCGOp *op)
-{
- TCGCond cond;
- TCGArg label;
- int i, inv = 0;
-
- i = do_constant_folding_cond2(ctx, op, &op->args[0]);
- cond = op->args[4];
- label = op->args[5];
- if (i >= 0) {
- goto do_brcond_const;
- }
-
- switch (cond) {
- case TCG_COND_LT:
- case TCG_COND_GE:
- /*
- * Simplify LT/GE comparisons vs zero to a single compare
- * vs the high word of the input.
- */
- if (arg_is_const_val(op->args[2], 0) &&
- arg_is_const_val(op->args[3], 0)) {
- goto do_brcond_high;
- }
- break;
-
- case TCG_COND_NE:
- inv = 1;
- QEMU_FALLTHROUGH;
- case TCG_COND_EQ:
- /*
- * Simplify EQ/NE comparisons where one of the pairs
- * can be simplified.
- */
- i = do_constant_folding_cond(TCG_TYPE_I32, op->args[0],
- op->args[2], cond);
- switch (i ^ inv) {
- case 0:
- goto do_brcond_const;
- case 1:
- goto do_brcond_high;
- }
-
- i = do_constant_folding_cond(TCG_TYPE_I32, op->args[1],
- op->args[3], cond);
- switch (i ^ inv) {
- case 0:
- goto do_brcond_const;
- case 1:
- goto do_brcond_low;
- }
- break;
-
- case TCG_COND_TSTEQ:
- case TCG_COND_TSTNE:
- if (arg_is_const_val(op->args[2], 0)) {
- goto do_brcond_high;
- }
- if (arg_is_const_val(op->args[3], 0)) {
- goto do_brcond_low;
- }
- break;
-
- default:
- break;
-
- do_brcond_low:
- op->opc = INDEX_op_brcond;
- op->args[1] = op->args[2];
- op->args[2] = cond;
- op->args[3] = label;
- return fold_brcond(ctx, op);
-
- do_brcond_high:
- op->opc = INDEX_op_brcond;
- op->args[0] = op->args[1];
- op->args[1] = op->args[3];
- op->args[2] = cond;
- op->args[3] = label;
- return fold_brcond(ctx, op);
-
- do_brcond_const:
- if (i == 0) {
- tcg_op_remove(ctx->tcg, op);
- return true;
- }
- op->opc = INDEX_op_br;
- op->args[0] = label;
- finish_ebb(ctx);
- return true;
- }
-
- finish_bb(ctx);
- return true;
-}
-
static bool fold_bswap(OptContext *ctx, TCGOp *op)
{
uint64_t z_mask, o_mask, s_mask;
@@ -3163,9 +3067,6 @@ void tcg_optimize(TCGContext *s)
case INDEX_op_brcond:
done = fold_brcond(&ctx, op);
break;
- case INDEX_op_brcond2_i32:
- done = fold_brcond2(&ctx, op);
- break;
case INDEX_op_bswap16:
case INDEX_op_bswap32:
case INDEX_op_bswap64:
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index ab7b409be6..61f6fd9095 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -265,14 +265,6 @@ static void DNI tcg_gen_op6i_i64(TCGOpcode opc, TCGv_i64 a1, TCGv_i64 a2,
tcgv_i64_arg(a3), tcgv_i64_arg(a4), tcgv_i64_arg(a5), a6);
}
-static TCGOp * DNI tcg_gen_op6ii_i32(TCGOpcode opc, TCGv_i32 a1, TCGv_i32 a2,
- TCGv_i32 a3, TCGv_i32 a4,
- TCGArg a5, TCGArg a6)
-{
- return tcg_gen_op6(opc, TCG_TYPE_I32, tcgv_i32_arg(a1), tcgv_i32_arg(a2),
- tcgv_i32_arg(a3), tcgv_i32_arg(a4), a5, a6);
-}
-
/* Generic ops. */
void gen_set_label(TCGLabel *l)
@@ -1873,33 +1865,15 @@ void tcg_gen_brcond_i64(TCGCond cond, TCGv_i64 arg1, TCGv_i64 arg2, TCGLabel *l)
if (cond == TCG_COND_ALWAYS) {
tcg_gen_br(l);
} else if (cond != TCG_COND_NEVER) {
- TCGOp *op;
- if (TCG_TARGET_REG_BITS == 32) {
- op = tcg_gen_op6ii_i32(INDEX_op_brcond2_i32, TCGV_LOW(arg1),
- TCGV_HIGH(arg1), TCGV_LOW(arg2),
- TCGV_HIGH(arg2), cond, label_arg(l));
- } else {
- op = tcg_gen_op4ii_i64(INDEX_op_brcond, arg1, arg2, cond,
- label_arg(l));
- }
+ TCGOp *op = tcg_gen_op4ii_i64(INDEX_op_brcond, arg1, arg2, cond,
+ label_arg(l));
add_as_label_use(l, op);
}
}
void tcg_gen_brcondi_i64(TCGCond cond, TCGv_i64 arg1, int64_t arg2, TCGLabel *l)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_brcond_i64(cond, arg1, tcg_constant_i64(arg2), l);
- } else if (cond == TCG_COND_ALWAYS) {
- tcg_gen_br(l);
- } else if (cond != TCG_COND_NEVER) {
- TCGOp *op = tcg_gen_op6ii_i32(INDEX_op_brcond2_i32,
- TCGV_LOW(arg1), TCGV_HIGH(arg1),
- tcg_constant_i32(arg2),
- tcg_constant_i32(arg2 >> 32),
- cond, label_arg(l));
- add_as_label_use(l, op);
- }
+ tcg_gen_brcond_i64(cond, arg1, tcg_constant_i64(arg2), l);
}
void tcg_gen_setcond_i64(TCGCond cond, TCGv_i64 ret,
diff --git a/tcg/tcg.c b/tcg/tcg.c
index fbf09f5c82..0521767c46 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -1010,13 +1010,6 @@ typedef struct TCGOutOpBrcond {
TCGReg a1, tcg_target_long a2, TCGLabel *label);
} TCGOutOpBrcond;
-typedef struct TCGOutOpBrcond2 {
- TCGOutOp base;
- void (*out)(TCGContext *s, TCGCond cond, TCGReg al, TCGReg ah,
- TCGArg bl, bool const_bl,
- TCGArg bh, bool const_bh, TCGLabel *l);
-} TCGOutOpBrcond2;
-
typedef struct TCGOutOpBswap {
TCGOutOp base;
void (*out_rr)(TCGContext *s, TCGType type,
@@ -1248,7 +1241,6 @@ static const TCGOutOp * const all_outop[NB_OPS] = {
[INDEX_op_goto_ptr] = &outop_goto_ptr,
#if TCG_TARGET_REG_BITS == 32
- OUTOP(INDEX_op_brcond2_i32, TCGOutOpBrcond2, outop_brcond2),
OUTOP(INDEX_op_setcond2_i32, TCGOutOpSetcond2, outop_setcond2),
#else
OUTOP(INDEX_op_bswap64, TCGOutOpUnary, outop_bswap64),
@@ -2490,7 +2482,6 @@ bool tcg_op_supported(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_xor:
return has_type;
- case INDEX_op_brcond2_i32:
case INDEX_op_setcond2_i32:
return TCG_TARGET_REG_BITS == 32;
@@ -3022,7 +3013,6 @@ void tcg_dump_ops(TCGContext *s, FILE *f, bool have_prefs)
case INDEX_op_setcond:
case INDEX_op_negsetcond:
case INDEX_op_movcond:
- case INDEX_op_brcond2_i32:
case INDEX_op_setcond2_i32:
case INDEX_op_cmp_vec:
case INDEX_op_cmpsel_vec:
@@ -3106,7 +3096,6 @@ void tcg_dump_ops(TCGContext *s, FILE *f, bool have_prefs)
case INDEX_op_set_label:
case INDEX_op_br:
case INDEX_op_brcond:
- case INDEX_op_brcond2_i32:
col += ne_fprintf(f, "%s$L%d", k ? "," : "",
arg_label(op->args[k])->id);
i++, k++;
@@ -3563,9 +3552,6 @@ void tcg_op_remove(TCGContext *s, TCGOp *op)
case INDEX_op_brcond:
remove_label_use(op, 3);
break;
- case INDEX_op_brcond2_i32:
- remove_label_use(op, 5);
- break;
default:
break;
}
@@ -3664,9 +3650,6 @@ static void move_label_uses(TCGLabel *to, TCGLabel *from)
case INDEX_op_brcond:
op->args[3] = label_arg(to);
break;
- case INDEX_op_brcond2_i32:
- op->args[5] = label_arg(to);
- break;
default:
g_assert_not_reached();
}
@@ -5285,9 +5268,6 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
case INDEX_op_cmp_vec:
op_cond = op->args[3];
break;
- case INDEX_op_brcond2_i32:
- op_cond = op->args[4];
- break;
case INDEX_op_movcond:
case INDEX_op_setcond2_i32:
case INDEX_op_cmpsel_vec:
@@ -5890,19 +5870,6 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
break;
#if TCG_TARGET_REG_BITS == 32
- case INDEX_op_brcond2_i32:
- {
- const TCGOutOpBrcond2 *out = &outop_brcond2;
- TCGCond cond = new_args[4];
- TCGLabel *label = arg_label(new_args[5]);
-
- tcg_debug_assert(!const_args[0]);
- tcg_debug_assert(!const_args[1]);
- out->out(s, cond, new_args[0], new_args[1],
- new_args[2], const_args[2],
- new_args[3], const_args[3], label);
- }
- break;
case INDEX_op_setcond2_i32:
{
const TCGOutOpSetcond2 *out = &outop_setcond2;
@@ -5915,7 +5882,6 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
}
break;
#else
- case INDEX_op_brcond2_i32:
case INDEX_op_setcond2_i32:
g_assert_not_reached();
#endif
diff --git a/docs/devel/tcg-ops.rst b/docs/devel/tcg-ops.rst
index f26b837a30..10d5edb4ca 100644
--- a/docs/devel/tcg-ops.rst
+++ b/docs/devel/tcg-ops.rst
@@ -705,11 +705,6 @@ They are emitted as needed by inline functions within ``tcg-op.h``.
.. list-table::
- * - brcond2_i32 *t0_low*, *t0_high*, *t1_low*, *t1_high*, *cond*, *label*
-
- - | Similar to brcond, except that the 64-bit values *t0* and *t1*
- are formed from two 32-bit arguments.
-
* - setcond2_i32 *dest*, *t1_low*, *t1_high*, *t2_low*, *t2_high*, *cond*
- | Similar to setcond, except that the 64-bit values *t1* and *t2* are
@@ -940,7 +935,7 @@ The target word size (``TCG_TARGET_REG_BITS``) is expected to be 32 bit or
On a 32 bit target, all 64 bit operations are converted to 32 bits.
A few specific operations must be implemented to allow it
-(see brcond2_i32, setcond2_i32).
+(see setcond2_i32).
On a 64 bit target, the values are transferred between 32 and 64-bit
registers using the following ops:
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
index 532f87262c..1756ffc59c 100644
--- a/tcg/tci/tcg-target.c.inc
+++ b/tcg/tci/tcg-target.c.inc
@@ -1047,23 +1047,6 @@ static const TCGOutOpMovcond outop_movcond = {
.out = tgen_movcond,
};
-static void tgen_brcond2(TCGContext *s, TCGCond cond, TCGReg al, TCGReg ah,
- TCGArg bl, bool const_bl,
- TCGArg bh, bool const_bh, TCGLabel *l)
-{
- tcg_out_op_rrrrrc(s, INDEX_op_setcond2_i32, TCG_REG_TMP,
- al, ah, bl, bh, cond);
- tcg_out_op_rl(s, INDEX_op_brcond, TCG_REG_TMP, l);
-}
-
-#if TCG_TARGET_REG_BITS != 32
-__attribute__((unused))
-#endif
-static const TCGOutOpBrcond2 outop_brcond2 = {
- .base.static_constraint = C_O0_I4(r, r, r, r),
- .out = tgen_brcond2,
-};
-
static void tgen_setcond2(TCGContext *s, TCGCond cond, TCGReg ret,
TCGReg al, TCGReg ah,
TCGArg bl, bool const_bl,
--
2.43.0
* [PULL 23/54] tcg: Remove INDEX_op_setcond2_i32
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (21 preceding siblings ...)
2026-01-18 22:03 ` [PULL 22/54] tcg: Remove INDEX_op_brcond2_i32 Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 24/54] tcg: Remove INDEX_op_dup2_vec Richard Henderson
` (31 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Thomas Huth, Pierrick Bouvier
This opcode was exclusively for 32-bit hosts.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/tcg/tcg-opc.h | 2 -
tcg/optimize.c | 205 ---------------------------------------
tcg/tcg-op.c | 47 +--------
tcg/tcg.c | 32 ------
tcg/tci.c | 10 --
docs/devel/tcg-ops.rst | 27 +-----
tcg/tci/tcg-target.c.inc | 16 ---
7 files changed, 8 insertions(+), 331 deletions(-)
diff --git a/include/tcg/tcg-opc.h b/include/tcg/tcg-opc.h
index 55283af326..fc1270f01e 100644
--- a/include/tcg/tcg-opc.h
+++ b/include/tcg/tcg-opc.h
@@ -103,8 +103,6 @@ DEF(subb1o, 1, 2, 0, TCG_OPF_INT | TCG_OPF_CARRY_OUT)
DEF(subbi, 1, 2, 0, TCG_OPF_INT | TCG_OPF_CARRY_IN)
DEF(subbio, 1, 2, 0, TCG_OPF_INT | TCG_OPF_CARRY_IN | TCG_OPF_CARRY_OUT)
-DEF(setcond2_i32, 1, 4, 1, 0)
-
/* size changing ops */
DEF(ext_i32_i64, 1, 1, 0, 0)
DEF(extu_i32_i64, 1, 1, 0, 0)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index a544c055b8..d845c7eef2 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -764,22 +764,6 @@ static bool swap_commutative(TCGArg dest, TCGArg *p1, TCGArg *p2)
return false;
}
-static bool swap_commutative2(TCGArg *p1, TCGArg *p2)
-{
- int sum = 0;
- sum += pref_commutative(arg_info(p1[0]));
- sum += pref_commutative(arg_info(p1[1]));
- sum -= pref_commutative(arg_info(p2[0]));
- sum -= pref_commutative(arg_info(p2[1]));
- if (sum > 0) {
- TCGArg t;
- t = p1[0], p1[0] = p2[0], p2[0] = t;
- t = p1[1], p1[1] = p2[1], p2[1] = t;
- return true;
- }
- return false;
-}
-
/*
* Return -1 if the condition can't be simplified,
* and the result of the condition (0 or 1) if it can.
@@ -844,108 +828,6 @@ static int do_constant_folding_cond1(OptContext *ctx, TCGOp *op, TCGArg dest,
return -1;
}
-static int do_constant_folding_cond2(OptContext *ctx, TCGOp *op, TCGArg *args)
-{
- TCGArg al, ah, bl, bh;
- TCGCond c;
- bool swap;
- int r;
-
- swap = swap_commutative2(args, args + 2);
- c = args[4];
- if (swap) {
- args[4] = c = tcg_swap_cond(c);
- }
-
- al = args[0];
- ah = args[1];
- bl = args[2];
- bh = args[3];
-
- if (arg_is_const(bl) && arg_is_const(bh)) {
- tcg_target_ulong blv = arg_const_val(bl);
- tcg_target_ulong bhv = arg_const_val(bh);
- uint64_t b = deposit64(blv, 32, 32, bhv);
-
- if (arg_is_const(al) && arg_is_const(ah)) {
- tcg_target_ulong alv = arg_const_val(al);
- tcg_target_ulong ahv = arg_const_val(ah);
- uint64_t a = deposit64(alv, 32, 32, ahv);
-
- r = do_constant_folding_cond_64(a, b, c);
- if (r >= 0) {
- return r;
- }
- }
-
- if (b == 0) {
- switch (c) {
- case TCG_COND_LTU:
- case TCG_COND_TSTNE:
- return 0;
- case TCG_COND_GEU:
- case TCG_COND_TSTEQ:
- return 1;
- default:
- break;
- }
- }
-
- /* TSTNE x,-1 -> NE x,0 */
- if (b == -1 && is_tst_cond(c)) {
- args[3] = args[2] = arg_new_constant(ctx, 0);
- args[4] = tcg_tst_eqne_cond(c);
- return -1;
- }
-
- /* TSTNE x,sign -> LT x,0 */
- if (b == INT64_MIN && is_tst_cond(c)) {
- /* bl must be 0, so copy that to bh */
- args[3] = bl;
- args[4] = tcg_tst_ltge_cond(c);
- return -1;
- }
- }
-
- if (args_are_copies(al, bl) && args_are_copies(ah, bh)) {
- r = do_constant_folding_cond_eq(c);
- if (r >= 0) {
- return r;
- }
-
- /* TSTNE x,x -> NE x,0 */
- if (is_tst_cond(c)) {
- args[3] = args[2] = arg_new_constant(ctx, 0);
- args[4] = tcg_tst_eqne_cond(c);
- return -1;
- }
- }
-
- /* Expand to AND with a temporary if no backend support. */
- if (!TCG_TARGET_HAS_tst && is_tst_cond(c)) {
- TCGOp *op1 = opt_insert_before(ctx, op, INDEX_op_and, 3);
- TCGOp *op2 = opt_insert_before(ctx, op, INDEX_op_and, 3);
- TCGArg t1 = arg_new_temp(ctx);
- TCGArg t2 = arg_new_temp(ctx);
-
- op1->args[0] = t1;
- op1->args[1] = al;
- op1->args[2] = bl;
- fold_and(ctx, op1);
-
- op2->args[0] = t2;
- op2->args[1] = ah;
- op2->args[2] = bh;
- fold_and(ctx, op1);
-
- args[0] = t1;
- args[1] = t2;
- args[3] = args[2] = arg_new_constant(ctx, 0);
- args[4] = tcg_tst_eqne_cond(c);
- }
- return -1;
-}
-
static void init_arguments(OptContext *ctx, TCGOp *op, int nb_args)
{
for (int i = 0; i < nb_args; i++) {
@@ -2503,90 +2385,6 @@ static bool fold_negsetcond(OptContext *ctx, TCGOp *op)
return fold_masks_s(ctx, op, -1);
}
-static bool fold_setcond2(OptContext *ctx, TCGOp *op)
-{
- TCGCond cond;
- int i, inv = 0;
-
- i = do_constant_folding_cond2(ctx, op, &op->args[1]);
- cond = op->args[5];
- if (i >= 0) {
- goto do_setcond_const;
- }
-
- switch (cond) {
- case TCG_COND_LT:
- case TCG_COND_GE:
- /*
- * Simplify LT/GE comparisons vs zero to a single compare
- * vs the high word of the input.
- */
- if (arg_is_const_val(op->args[3], 0) &&
- arg_is_const_val(op->args[4], 0)) {
- goto do_setcond_high;
- }
- break;
-
- case TCG_COND_NE:
- inv = 1;
- QEMU_FALLTHROUGH;
- case TCG_COND_EQ:
- /*
- * Simplify EQ/NE comparisons where one of the pairs
- * can be simplified.
- */
- i = do_constant_folding_cond(TCG_TYPE_I32, op->args[1],
- op->args[3], cond);
- switch (i ^ inv) {
- case 0:
- goto do_setcond_const;
- case 1:
- goto do_setcond_high;
- }
-
- i = do_constant_folding_cond(TCG_TYPE_I32, op->args[2],
- op->args[4], cond);
- switch (i ^ inv) {
- case 0:
- goto do_setcond_const;
- case 1:
- goto do_setcond_low;
- }
- break;
-
- case TCG_COND_TSTEQ:
- case TCG_COND_TSTNE:
- if (arg_is_const_val(op->args[3], 0)) {
- goto do_setcond_high;
- }
- if (arg_is_const_val(op->args[4], 0)) {
- goto do_setcond_low;
- }
- break;
-
- default:
- break;
-
- do_setcond_low:
- op->args[2] = op->args[3];
- op->args[3] = cond;
- op->opc = INDEX_op_setcond;
- return fold_setcond(ctx, op);
-
- do_setcond_high:
- op->args[1] = op->args[2];
- op->args[2] = op->args[4];
- op->args[3] = cond;
- op->opc = INDEX_op_setcond;
- return fold_setcond(ctx, op);
- }
-
- return fold_masks_z(ctx, op, 1);
-
- do_setcond_const:
- return tcg_opt_gen_movi(ctx, op, op->args[0], i);
-}
-
static bool fold_sextract(OptContext *ctx, TCGOp *op)
{
uint64_t z_mask, o_mask, s_mask, a_mask;
@@ -3202,9 +3000,6 @@ void tcg_optimize(TCGContext *s)
case INDEX_op_negsetcond:
done = fold_negsetcond(&ctx, op);
break;
- case INDEX_op_setcond2_i32:
- done = fold_setcond2(&ctx, op);
- break;
case INDEX_op_cmp_vec:
done = fold_cmp_vec(&ctx, op);
break;
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 61f6fd9095..d20888dd8f 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -1884,33 +1884,14 @@ void tcg_gen_setcond_i64(TCGCond cond, TCGv_i64 ret,
} else if (cond == TCG_COND_NEVER) {
tcg_gen_movi_i64(ret, 0);
} else {
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_op6i_i32(INDEX_op_setcond2_i32, TCGV_LOW(ret),
- TCGV_LOW(arg1), TCGV_HIGH(arg1),
- TCGV_LOW(arg2), TCGV_HIGH(arg2), cond);
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- } else {
- tcg_gen_op4i_i64(INDEX_op_setcond, ret, arg1, arg2, cond);
- }
+ tcg_gen_op4i_i64(INDEX_op_setcond, ret, arg1, arg2, cond);
}
}
void tcg_gen_setcondi_i64(TCGCond cond, TCGv_i64 ret,
TCGv_i64 arg1, int64_t arg2)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_setcond_i64(cond, ret, arg1, tcg_constant_i64(arg2));
- } else if (cond == TCG_COND_ALWAYS) {
- tcg_gen_movi_i64(ret, 1);
- } else if (cond == TCG_COND_NEVER) {
- tcg_gen_movi_i64(ret, 0);
- } else {
- tcg_gen_op6i_i32(INDEX_op_setcond2_i32, TCGV_LOW(ret),
- TCGV_LOW(arg1), TCGV_HIGH(arg1),
- tcg_constant_i32(arg2),
- tcg_constant_i32(arg2 >> 32), cond);
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- }
+ tcg_gen_setcond_i64(cond, ret, arg1, tcg_constant_i64(arg2));
}
void tcg_gen_negsetcondi_i64(TCGCond cond, TCGv_i64 ret,
@@ -1926,14 +1907,8 @@ void tcg_gen_negsetcond_i64(TCGCond cond, TCGv_i64 ret,
tcg_gen_movi_i64(ret, -1);
} else if (cond == TCG_COND_NEVER) {
tcg_gen_movi_i64(ret, 0);
- } else if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_op4i_i64(INDEX_op_negsetcond, ret, arg1, arg2, cond);
} else {
- tcg_gen_op6i_i32(INDEX_op_setcond2_i32, TCGV_LOW(ret),
- TCGV_LOW(arg1), TCGV_HIGH(arg1),
- TCGV_LOW(arg2), TCGV_HIGH(arg2), cond);
- tcg_gen_neg_i32(TCGV_LOW(ret), TCGV_LOW(ret));
- tcg_gen_mov_i32(TCGV_HIGH(ret), TCGV_LOW(ret));
+ tcg_gen_op4i_i64(INDEX_op_negsetcond, ret, arg1, arg2, cond);
}
}
@@ -2777,22 +2752,8 @@ void tcg_gen_movcond_i64(TCGCond cond, TCGv_i64 ret, TCGv_i64 c1,
tcg_gen_mov_i64(ret, v1);
} else if (cond == TCG_COND_NEVER) {
tcg_gen_mov_i64(ret, v2);
- } else if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_op6i_i64(INDEX_op_movcond, ret, c1, c2, v1, v2, cond);
} else {
- TCGv_i32 t0 = tcg_temp_ebb_new_i32();
- TCGv_i32 zero = tcg_constant_i32(0);
-
- tcg_gen_op6i_i32(INDEX_op_setcond2_i32, t0,
- TCGV_LOW(c1), TCGV_HIGH(c1),
- TCGV_LOW(c2), TCGV_HIGH(c2), cond);
-
- tcg_gen_movcond_i32(TCG_COND_NE, TCGV_LOW(ret), t0, zero,
- TCGV_LOW(v1), TCGV_LOW(v2));
- tcg_gen_movcond_i32(TCG_COND_NE, TCGV_HIGH(ret), t0, zero,
- TCGV_HIGH(v1), TCGV_HIGH(v2));
-
- tcg_temp_free_i32(t0);
+ tcg_gen_op6i_i64(INDEX_op_movcond, ret, c1, c2, v1, v2, cond);
}
}
diff --git a/tcg/tcg.c b/tcg/tcg.c
index 0521767c46..b6a65fe224 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -1088,12 +1088,6 @@ typedef struct TCGOutOpSetcond {
TCGReg ret, TCGReg a1, tcg_target_long a2);
} TCGOutOpSetcond;
-typedef struct TCGOutOpSetcond2 {
- TCGOutOp base;
- void (*out)(TCGContext *s, TCGCond cond, TCGReg ret, TCGReg al, TCGReg ah,
- TCGArg bl, bool const_bl, TCGArg bh, bool const_bh);
-} TCGOutOpSetcond2;
-
typedef struct TCGOutOpStore {
TCGOutOp base;
void (*out_r)(TCGContext *s, TCGType type, TCGReg data,
@@ -1240,9 +1234,6 @@ static const TCGOutOp * const all_outop[NB_OPS] = {
[INDEX_op_goto_ptr] = &outop_goto_ptr,
-#if TCG_TARGET_REG_BITS == 32
- OUTOP(INDEX_op_setcond2_i32, TCGOutOpSetcond2, outop_setcond2),
-#else
OUTOP(INDEX_op_bswap64, TCGOutOpUnary, outop_bswap64),
OUTOP(INDEX_op_ext_i32_i64, TCGOutOpUnary, outop_exts_i32_i64),
OUTOP(INDEX_op_extu_i32_i64, TCGOutOpUnary, outop_extu_i32_i64),
@@ -1251,7 +1242,6 @@ static const TCGOutOp * const all_outop[NB_OPS] = {
OUTOP(INDEX_op_ld32u, TCGOutOpLoad, outop_ld32u),
OUTOP(INDEX_op_ld32s, TCGOutOpLoad, outop_ld32s),
OUTOP(INDEX_op_st32, TCGOutOpStore, outop_st),
-#endif
};
#undef OUTOP
@@ -2482,9 +2472,6 @@ bool tcg_op_supported(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_xor:
return has_type;
- case INDEX_op_setcond2_i32:
- return TCG_TARGET_REG_BITS == 32;
-
case INDEX_op_ld32u:
case INDEX_op_ld32s:
case INDEX_op_st32:
@@ -3013,7 +3000,6 @@ void tcg_dump_ops(TCGContext *s, FILE *f, bool have_prefs)
case INDEX_op_setcond:
case INDEX_op_negsetcond:
case INDEX_op_movcond:
- case INDEX_op_setcond2_i32:
case INDEX_op_cmp_vec:
case INDEX_op_cmpsel_vec:
if (op->args[k] < ARRAY_SIZE(cond_name)
@@ -5269,7 +5255,6 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
op_cond = op->args[3];
break;
case INDEX_op_movcond:
- case INDEX_op_setcond2_i32:
case INDEX_op_cmpsel_vec:
op_cond = op->args[5];
break;
@@ -5869,23 +5854,6 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
}
break;
-#if TCG_TARGET_REG_BITS == 32
- case INDEX_op_setcond2_i32:
- {
- const TCGOutOpSetcond2 *out = &outop_setcond2;
- TCGCond cond = new_args[5];
-
- tcg_debug_assert(!const_args[1]);
- tcg_debug_assert(!const_args[2]);
- out->out(s, cond, new_args[0], new_args[1], new_args[2],
- new_args[3], const_args[3], new_args[4], const_args[4]);
- }
- break;
-#else
- case INDEX_op_setcond2_i32:
- g_assert_not_reached();
-#endif
-
case INDEX_op_goto_ptr:
tcg_debug_assert(!const_args[0]);
tcg_out_goto_ptr(s, new_args[0]);
diff --git a/tcg/tci.c b/tcg/tci.c
index e15d4e8e08..7f3ba9b5da 100644
--- a/tcg/tci.c
+++ b/tcg/tci.c
@@ -418,14 +418,6 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
tci_args_l(insn, tb_ptr, &ptr);
tb_ptr = ptr;
continue;
-#if TCG_TARGET_REG_BITS == 32
- case INDEX_op_setcond2_i32:
- tci_args_rrrrrc(insn, &r0, &r1, &r2, &r3, &r4, &condition);
- regs[r0] = tci_compare64(tci_uint64(regs[r2], regs[r1]),
- tci_uint64(regs[r4], regs[r3]),
- condition);
- break;
-#elif TCG_TARGET_REG_BITS == 64
case INDEX_op_setcond:
tci_args_rrrc(insn, &r0, &r1, &r2, &condition);
regs[r0] = tci_compare64(regs[r1], regs[r2], condition);
@@ -435,7 +427,6 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
tmp32 = tci_compare64(regs[r1], regs[r2], condition);
regs[r0] = regs[tmp32 ? r3 : r4];
break;
-#endif
case INDEX_op_mov:
tci_args_rr(insn, &r0, &r1);
regs[r0] = regs[r1];
@@ -1040,7 +1031,6 @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
case INDEX_op_tci_movcond32:
case INDEX_op_movcond:
- case INDEX_op_setcond2_i32:
tci_args_rrrrrc(insn, &r0, &r1, &r2, &r3, &r4, &c);
info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s, %s, %s",
op_name, str_r(r0), str_r(r1), str_r(r2),
diff --git a/docs/devel/tcg-ops.rst b/docs/devel/tcg-ops.rst
index 10d5edb4ca..fd3a50bf4c 100644
--- a/docs/devel/tcg-ops.rst
+++ b/docs/devel/tcg-ops.rst
@@ -696,21 +696,6 @@ Memory Barrier support
| Please see :ref:`atomics-ref` for more information on memory barriers.
-64-bit guest on 32-bit host support
------------------------------------
-
-The following opcodes are internal to TCG. Thus they are to be implemented by
-32-bit host code generators, but are not to be emitted by guest translators.
-They are emitted as needed by inline functions within ``tcg-op.h``.
-
-.. list-table::
-
- * - setcond2_i32 *dest*, *t1_low*, *t1_high*, *t2_low*, *t2_high*, *cond*
-
- - | Similar to setcond, except that the 64-bit values *t1* and *t2* are
- formed from two 32-bit arguments. The result is a 32-bit value.
-
-
QEMU specific operations
------------------------
@@ -930,15 +915,11 @@ than being a standalone C file.
Assumptions
-----------
-The target word size (``TCG_TARGET_REG_BITS``) is expected to be 32 bit or
-64 bit. It is expected that the pointer has the same size as the word.
+The target word size (``TCG_TARGET_REG_BITS``) is expected to be 64 bit.
+It is expected that the pointer has the same size as the word.
-On a 32 bit target, all 64 bit operations are converted to 32 bits.
-A few specific operations must be implemented to allow it
-(see setcond2_i32).
-
-On a 64 bit target, the values are transferred between 32 and 64-bit
-registers using the following ops:
+Values are transferred between 32 and 64-bit registers using the
+following ops:
- extrl_i64_i32
- extrh_i64_i32
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
index 1756ffc59c..8bd8db4401 100644
--- a/tcg/tci/tcg-target.c.inc
+++ b/tcg/tci/tcg-target.c.inc
@@ -1047,22 +1047,6 @@ static const TCGOutOpMovcond outop_movcond = {
.out = tgen_movcond,
};
-static void tgen_setcond2(TCGContext *s, TCGCond cond, TCGReg ret,
- TCGReg al, TCGReg ah,
- TCGArg bl, bool const_bl,
- TCGArg bh, bool const_bh)
-{
- tcg_out_op_rrrrrc(s, INDEX_op_setcond2_i32, ret, al, ah, bl, bh, cond);
-}
-
-#if TCG_TARGET_REG_BITS != 32
-__attribute__((unused))
-#endif
-static const TCGOutOpSetcond2 outop_setcond2 = {
- .base.static_constraint = C_O1_I4(r, r, r, r, r),
- .out = tgen_setcond2,
-};
-
static void tcg_out_mb(TCGContext *s, unsigned a0)
{
tcg_out_op_v(s, INDEX_op_mb);
--
2.43.0
* [PULL 24/54] tcg: Remove INDEX_op_dup2_vec
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (22 preceding siblings ...)
2026-01-18 22:03 ` [PULL 23/54] tcg: Remove INDEX_op_setcond2_i32 Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 25/54] tcg/tci: Drop TCG_TARGET_REG_BITS tests Richard Henderson
` (30 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
This opcode was exclusively for 32-bit hosts.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/tcg/tcg-opc.h | 1 -
tcg/optimize.c | 18 ---------
tcg/tcg-op-vec.c | 14 +------
tcg/tcg.c | 94 -------------------------------------------
4 files changed, 2 insertions(+), 125 deletions(-)
diff --git a/include/tcg/tcg-opc.h b/include/tcg/tcg-opc.h
index fc1270f01e..28806057c5 100644
--- a/include/tcg/tcg-opc.h
+++ b/include/tcg/tcg-opc.h
@@ -130,7 +130,6 @@ DEF(qemu_st2, 0, 3, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS | TCG_OPF_INT
DEF(mov_vec, 1, 1, 0, TCG_OPF_VECTOR | TCG_OPF_NOT_PRESENT)
DEF(dup_vec, 1, 1, 0, TCG_OPF_VECTOR)
-DEF(dup2_vec, 1, 2, 0, TCG_OPF_VECTOR)
DEF(ld_vec, 1, 1, 1, TCG_OPF_VECTOR)
DEF(st_vec, 0, 2, 1, TCG_OPF_VECTOR)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index d845c7eef2..801a0a2c68 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1716,21 +1716,6 @@ static bool fold_dup(OptContext *ctx, TCGOp *op)
return finish_folding(ctx, op);
}
-static bool fold_dup2(OptContext *ctx, TCGOp *op)
-{
- if (arg_is_const(op->args[1]) && arg_is_const(op->args[2])) {
- uint64_t t = deposit64(arg_const_val(op->args[1]), 32, 32,
- arg_const_val(op->args[2]));
- return tcg_opt_gen_movi(ctx, op, op->args[0], t);
- }
-
- if (args_are_copies(op->args[1], op->args[2])) {
- op->opc = INDEX_op_dup_vec;
- TCGOP_VECE(op) = MO_32;
- }
- return finish_folding(ctx, op);
-}
-
static bool fold_eqv(OptContext *ctx, TCGOp *op)
{
uint64_t z_mask, o_mask, s_mask;
@@ -2887,9 +2872,6 @@ void tcg_optimize(TCGContext *s)
case INDEX_op_dup_vec:
done = fold_dup(&ctx, op);
break;
- case INDEX_op_dup2_vec:
- done = fold_dup2(&ctx, op);
- break;
case INDEX_op_eqv:
case INDEX_op_eqv_vec:
done = fold_eqv(&ctx, op);
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index 893d68e7d8..67e837174b 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -75,7 +75,6 @@ bool tcg_can_emit_vecop_list(const TCGOpcode *list,
case INDEX_op_xor_vec:
case INDEX_op_mov_vec:
case INDEX_op_dup_vec:
- case INDEX_op_dup2_vec:
case INDEX_op_ld_vec:
case INDEX_op_st_vec:
case INDEX_op_bitsel_vec:
@@ -228,20 +227,11 @@ void tcg_gen_dupi_vec(unsigned vece, TCGv_vec r, uint64_t a)
void tcg_gen_dup_i64_vec(unsigned vece, TCGv_vec r, TCGv_i64 a)
{
TCGArg ri = tcgv_vec_arg(r);
+ TCGArg ai = tcgv_i64_arg(a);
TCGTemp *rt = arg_temp(ri);
TCGType type = rt->base_type;
- if (TCG_TARGET_REG_BITS == 64) {
- TCGArg ai = tcgv_i64_arg(a);
- vec_gen_2(INDEX_op_dup_vec, type, vece, ri, ai);
- } else if (vece == MO_64) {
- TCGArg al = tcgv_i32_arg(TCGV_LOW(a));
- TCGArg ah = tcgv_i32_arg(TCGV_HIGH(a));
- vec_gen_3(INDEX_op_dup2_vec, type, MO_64, ri, al, ah);
- } else {
- TCGArg ai = tcgv_i32_arg(TCGV_LOW(a));
- vec_gen_2(INDEX_op_dup_vec, type, vece, ri, ai);
- }
+ vec_gen_2(INDEX_op_dup_vec, type, vece, ri, ai);
}
void tcg_gen_dup_i32_vec(unsigned vece, TCGv_vec r, TCGv_i32 a)
diff --git a/tcg/tcg.c b/tcg/tcg.c
index b6a65fe224..2b3bcbe750 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -2493,8 +2493,6 @@ bool tcg_op_supported(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_xor_vec:
case INDEX_op_cmp_vec:
return has_type;
- case INDEX_op_dup2_vec:
- return has_type && TCG_TARGET_REG_BITS == 32;
case INDEX_op_not_vec:
return has_type && TCG_TARGET_HAS_not_vec;
case INDEX_op_neg_vec:
@@ -5888,93 +5886,6 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
}
}
-static bool tcg_reg_alloc_dup2(TCGContext *s, const TCGOp *op)
-{
- const TCGLifeData arg_life = op->life;
- TCGTemp *ots, *itsl, *itsh;
- TCGType vtype = TCGOP_TYPE(op);
-
- /* This opcode is only valid for 32-bit hosts, for 64-bit elements. */
- tcg_debug_assert(TCG_TARGET_REG_BITS == 32);
- tcg_debug_assert(TCGOP_VECE(op) == MO_64);
-
- ots = arg_temp(op->args[0]);
- itsl = arg_temp(op->args[1]);
- itsh = arg_temp(op->args[2]);
-
- /* ENV should not be modified. */
- tcg_debug_assert(!temp_readonly(ots));
-
- /* Allocate the output register now. */
- if (ots->val_type != TEMP_VAL_REG) {
- TCGRegSet allocated_regs = s->reserved_regs;
- TCGRegSet dup_out_regs = opcode_args_ct(op)[0].regs;
- TCGReg oreg;
-
- /* Make sure to not spill the input registers. */
- if (!IS_DEAD_ARG(1) && itsl->val_type == TEMP_VAL_REG) {
- tcg_regset_set_reg(allocated_regs, itsl->reg);
- }
- if (!IS_DEAD_ARG(2) && itsh->val_type == TEMP_VAL_REG) {
- tcg_regset_set_reg(allocated_regs, itsh->reg);
- }
-
- oreg = tcg_reg_alloc(s, dup_out_regs, allocated_regs,
- output_pref(op, 0), ots->indirect_base);
- set_temp_val_reg(s, ots, oreg);
- }
-
- /* Promote dup2 of immediates to dupi_vec. */
- if (itsl->val_type == TEMP_VAL_CONST && itsh->val_type == TEMP_VAL_CONST) {
- uint64_t val = deposit64(itsl->val, 32, 32, itsh->val);
- MemOp vece = MO_64;
-
- if (val == dup_const(MO_8, val)) {
- vece = MO_8;
- } else if (val == dup_const(MO_16, val)) {
- vece = MO_16;
- } else if (val == dup_const(MO_32, val)) {
- vece = MO_32;
- }
-
- tcg_out_dupi_vec(s, vtype, vece, ots->reg, val);
- goto done;
- }
-
- /* If the two inputs form one 64-bit value, try dupm_vec. */
- if (itsl->temp_subindex == HOST_BIG_ENDIAN &&
- itsh->temp_subindex == !HOST_BIG_ENDIAN &&
- itsl == itsh + (HOST_BIG_ENDIAN ? 1 : -1)) {
- TCGTemp *its = itsl - HOST_BIG_ENDIAN;
-
- temp_sync(s, its + 0, s->reserved_regs, 0, 0);
- temp_sync(s, its + 1, s->reserved_regs, 0, 0);
-
- if (tcg_out_dupm_vec(s, vtype, MO_64, ots->reg,
- its->mem_base->reg, its->mem_offset)) {
- goto done;
- }
- }
-
- /* Fall back to generic expansion. */
- return false;
-
- done:
- ots->mem_coherent = 0;
- if (IS_DEAD_ARG(1)) {
- temp_dead(s, itsl);
- }
- if (IS_DEAD_ARG(2)) {
- temp_dead(s, itsh);
- }
- if (NEED_SYNC_ARG(0)) {
- temp_sync(s, ots, s->reserved_regs, 0, IS_DEAD_ARG(0));
- } else if (IS_DEAD_ARG(0)) {
- temp_dead(s, ots);
- }
- return true;
-}
-
static void load_arg_reg(TCGContext *s, TCGReg reg, TCGTemp *ts,
TCGRegSet allocated_regs)
{
@@ -6939,11 +6850,6 @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb, uint64_t pc_start)
case INDEX_op_mb:
tcg_out_mb(s, op->args[0]);
break;
- case INDEX_op_dup2_vec:
- if (tcg_reg_alloc_dup2(s, op)) {
- break;
- }
- /* fall through */
default:
do_default:
/* Sanity check that we've not introduced any unhandled opcodes. */
--
2.43.0
^ permalink raw reply related [flat|nested] 57+ messages in thread
* [PULL 25/54] tcg/tci: Drop TCG_TARGET_REG_BITS tests
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (23 preceding siblings ...)
2026-01-18 22:03 ` [PULL 24/54] tcg: Remove INDEX_op_dup2_vec Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 26/54] tcg/tci: Remove glue TCG_TARGET_REG_BITS renames Richard Henderson
` (29 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/tci/tcg-target-has.h | 2 --
tcg/tci.c | 50 ++-------------------------------------
tcg/tci/tcg-target.c.inc | 51 ++++++----------------------------------
3 files changed, 9 insertions(+), 94 deletions(-)
diff --git a/tcg/tci/tcg-target-has.h b/tcg/tci/tcg-target-has.h
index ab07ce1fcb..64742cf0b7 100644
--- a/tcg/tci/tcg-target-has.h
+++ b/tcg/tci/tcg-target-has.h
@@ -7,9 +7,7 @@
#ifndef TCG_TARGET_HAS_H
#define TCG_TARGET_HAS_H
-#if TCG_TARGET_REG_BITS == 64
#define TCG_TARGET_HAS_extr_i64_i32 0
-#endif /* TCG_TARGET_REG_BITS == 64 */
#define TCG_TARGET_HAS_qemu_ldst_i128 0
diff --git a/tcg/tci.c b/tcg/tci.c
index 7f3ba9b5da..f71993c287 100644
--- a/tcg/tci.c
+++ b/tcg/tci.c
@@ -43,19 +43,6 @@
__thread uintptr_t tci_tb_ptr;
-static void tci_write_reg64(tcg_target_ulong *regs, uint32_t high_index,
- uint32_t low_index, uint64_t value)
-{
- regs[low_index] = (uint32_t)value;
- regs[high_index] = value >> 32;
-}
-
-/* Create a 64 bit value from two 32 bit values. */
-static uint64_t tci_uint64(uint32_t high, uint32_t low)
-{
- return ((uint64_t)high << 32) + low;
-}
-
/*
* Load sets of arguments all at once. The naming convention is:
* tci_args_<arguments>
@@ -352,7 +339,7 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
TCGCond condition;
uint8_t pos, len;
uint32_t tmp32;
- uint64_t tmp64, taddr;
+ uint64_t taddr;
MemOpIdx oi;
int32_t ofs;
void *ptr;
@@ -400,10 +387,6 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
}
break;
case 2: /* uint64_t */
- /*
- * For TCG_TARGET_REG_BITS == 32, the register pair
- * must stay in host memory order.
- */
memcpy(&regs[TCG_REG_R0], stack, 8);
break;
case 3: /* Int128 */
@@ -586,21 +569,11 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
break;
case INDEX_op_muls2:
tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
-#if TCG_TARGET_REG_BITS == 32
- tmp64 = (int64_t)(int32_t)regs[r2] * (int32_t)regs[r3];
- tci_write_reg64(regs, r1, r0, tmp64);
-#else
muls64(&regs[r0], &regs[r1], regs[r2], regs[r3]);
-#endif
break;
case INDEX_op_mulu2:
tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
-#if TCG_TARGET_REG_BITS == 32
- tmp64 = (uint64_t)(uint32_t)regs[r2] * (uint32_t)regs[r3];
- tci_write_reg64(regs, r1, r0, tmp64);
-#else
mulu64(&regs[r0], &regs[r1], regs[r2], regs[r3]);
-#endif
break;
/* Arithmetic operations (32 bit). */
@@ -690,7 +663,7 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
tci_args_rr(insn, &r0, &r1);
regs[r0] = bswap32(regs[r1]);
break;
-#if TCG_TARGET_REG_BITS == 64
+
/* Load/store operations (64 bit). */
case INDEX_op_ld32u:
@@ -758,7 +731,6 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
tci_args_rr(insn, &r0, &r1);
regs[r0] = bswap64(regs[r1]);
break;
-#endif /* TCG_TARGET_REG_BITS == 64 */
/* QEMU specific operations. */
@@ -804,24 +776,6 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
tci_qemu_st(env, taddr, regs[r0], oi, tb_ptr);
break;
- case INDEX_op_qemu_ld2:
- tcg_debug_assert(TCG_TARGET_REG_BITS == 32);
- tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
- taddr = regs[r2];
- oi = regs[r3];
- tmp64 = tci_qemu_ld(env, taddr, oi, tb_ptr);
- tci_write_reg64(regs, r1, r0, tmp64);
- break;
-
- case INDEX_op_qemu_st2:
- tcg_debug_assert(TCG_TARGET_REG_BITS == 32);
- tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
- tmp64 = tci_uint64(regs[r1], regs[r0]);
- taddr = regs[r2];
- oi = regs[r3];
- tci_qemu_st(env, taddr, tmp64, oi, tb_ptr);
- break;
-
case INDEX_op_mb:
/* Ensure ordering for all kinds */
smp_mb();
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
index 8bd8db4401..1b22c70616 100644
--- a/tcg/tci/tcg-target.c.inc
+++ b/tcg/tci/tcg-target.c.inc
@@ -25,15 +25,9 @@
/* Used for function call generation. */
#define TCG_TARGET_CALL_STACK_OFFSET 0
#define TCG_TARGET_STACK_ALIGN 8
-#if TCG_TARGET_REG_BITS == 32
-# define TCG_TARGET_CALL_ARG_I32 TCG_CALL_ARG_EVEN
-# define TCG_TARGET_CALL_ARG_I64 TCG_CALL_ARG_EVEN
-# define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_EVEN
-#else
-# define TCG_TARGET_CALL_ARG_I32 TCG_CALL_ARG_NORMAL
-# define TCG_TARGET_CALL_ARG_I64 TCG_CALL_ARG_NORMAL
-# define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_NORMAL
-#endif
+#define TCG_TARGET_CALL_ARG_I32 TCG_CALL_ARG_NORMAL
+#define TCG_TARGET_CALL_ARG_I64 TCG_CALL_ARG_NORMAL
+#define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_NORMAL
#define TCG_TARGET_CALL_RET_I128 TCG_CALL_RET_NORMAL
static TCGConstraintSetIndex
@@ -320,7 +314,7 @@ static void tcg_out_ld(TCGContext *s, TCGType type, TCGReg val, TCGReg base,
{
TCGOpcode op = INDEX_op_ld;
- if (TCG_TARGET_REG_BITS == 64 && type == TCG_TYPE_I32) {
+ if (type == TCG_TYPE_I32) {
op = INDEX_op_ld32u;
}
tcg_out_ldst(s, op, val, base, offset);
@@ -337,11 +331,9 @@ static void tcg_out_movi(TCGContext *s, TCGType type,
{
switch (type) {
case TCG_TYPE_I32:
-#if TCG_TARGET_REG_BITS == 64
arg = (int32_t)arg;
/* fall through */
case TCG_TYPE_I64:
-#endif
break;
default:
g_assert_not_reached();
@@ -407,13 +399,11 @@ static void tcg_out_ext16u(TCGContext *s, TCGReg rd, TCGReg rs)
static void tcg_out_ext32s(TCGContext *s, TCGReg rd, TCGReg rs)
{
- tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
tcg_out_sextract(s, TCG_TYPE_I64, rd, rs, 0, 32);
}
static void tcg_out_ext32u(TCGContext *s, TCGReg rd, TCGReg rs)
{
- tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
tcg_out_extract(s, TCG_TYPE_I64, rd, rs, 0, 32);
}
@@ -429,7 +419,6 @@ static void tcg_out_extu_i32_i64(TCGContext *s, TCGReg rd, TCGReg rs)
static void tcg_out_extrl_i64_i32(TCGContext *s, TCGReg rd, TCGReg rs)
{
- tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
tcg_out_mov(s, TCG_TYPE_I32, rd, rs);
}
@@ -654,7 +643,6 @@ static const TCGOutOpBinary outop_eqv = {
.out_rrr = tgen_eqv,
};
-#if TCG_TARGET_REG_BITS == 64
static void tgen_extrh_i64_i32(TCGContext *s, TCGType t, TCGReg a0, TCGReg a1)
{
tcg_out_extract(s, TCG_TYPE_I64, a0, a1, 32, 32);
@@ -664,7 +652,6 @@ static const TCGOutOpUnary outop_extrh_i64_i32 = {
.base.static_constraint = C_O1_I1(r, r),
.out_rr = tgen_extrh_i64_i32,
};
-#endif
static void tgen_mul(TCGContext *s, TCGType type,
TCGReg a0, TCGReg a1, TCGReg a2)
@@ -962,7 +949,6 @@ static const TCGOutOpBswap outop_bswap32 = {
.out_rr = tgen_bswap32,
};
-#if TCG_TARGET_REG_BITS == 64
static void tgen_bswap64(TCGContext *s, TCGType type, TCGReg a0, TCGReg a1)
{
tcg_out_op_rr(s, INDEX_op_bswap64, a0, a1);
@@ -972,7 +958,6 @@ static const TCGOutOpUnary outop_bswap64 = {
.base.static_constraint = C_O1_I1(r, r),
.out_rr = tgen_bswap64,
};
-#endif
static void tgen_neg(TCGContext *s, TCGType type, TCGReg a0, TCGReg a1)
{
@@ -1101,7 +1086,6 @@ static const TCGOutOpLoad outop_ld16s = {
.out = tgen_ld16s,
};
-#if TCG_TARGET_REG_BITS == 64
static void tgen_ld32u(TCGContext *s, TCGType type, TCGReg dest,
TCGReg base, ptrdiff_t offset)
{
@@ -1123,7 +1107,6 @@ static const TCGOutOpLoad outop_ld32s = {
.base.static_constraint = C_O1_I1(r, r),
.out = tgen_ld32s,
};
-#endif
static void tgen_st8(TCGContext *s, TCGType type, TCGReg data,
TCGReg base, ptrdiff_t offset)
@@ -1168,18 +1151,8 @@ static const TCGOutOpQemuLdSt outop_qemu_ld = {
.out = tgen_qemu_ld,
};
-static void tgen_qemu_ld2(TCGContext *s, TCGType type, TCGReg datalo,
- TCGReg datahi, TCGReg addr, MemOpIdx oi)
-{
- tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_TMP, oi);
- tcg_out_op_rrrr(s, INDEX_op_qemu_ld2, datalo, datahi, addr, TCG_REG_TMP);
-}
-
static const TCGOutOpQemuLdSt2 outop_qemu_ld2 = {
- .base.static_constraint =
- TCG_TARGET_REG_BITS == 64 ? C_NotImplemented : C_O2_I1(r, r, r),
- .out =
- TCG_TARGET_REG_BITS == 64 ? NULL : tgen_qemu_ld2,
+ .base.static_constraint = C_NotImplemented,
};
static void tgen_qemu_st(TCGContext *s, TCGType type, TCGReg data,
@@ -1198,18 +1171,8 @@ static const TCGOutOpQemuLdSt outop_qemu_st = {
.out = tgen_qemu_st,
};
-static void tgen_qemu_st2(TCGContext *s, TCGType type, TCGReg datalo,
- TCGReg datahi, TCGReg addr, MemOpIdx oi)
-{
- tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_TMP, oi);
- tcg_out_op_rrrr(s, INDEX_op_qemu_st2, datalo, datahi, addr, TCG_REG_TMP);
-}
-
static const TCGOutOpQemuLdSt2 outop_qemu_st2 = {
- .base.static_constraint =
- TCG_TARGET_REG_BITS == 64 ? C_NotImplemented : C_O0_I3(r, r, r),
- .out =
- TCG_TARGET_REG_BITS == 64 ? NULL : tgen_qemu_st2,
+ .base.static_constraint = C_NotImplemented,
};
static void tcg_out_st(TCGContext *s, TCGType type, TCGReg val, TCGReg base,
@@ -1217,7 +1180,7 @@ static void tcg_out_st(TCGContext *s, TCGType type, TCGReg val, TCGReg base,
{
TCGOpcode op = INDEX_op_st;
- if (TCG_TARGET_REG_BITS == 64 && type == TCG_TYPE_I32) {
+ if (type == TCG_TYPE_I32) {
op = INDEX_op_st32;
}
tcg_out_ldst(s, op, val, base, offset);
--
2.43.0
* [PULL 26/54] tcg/tci: Remove glue TCG_TARGET_REG_BITS renames
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (24 preceding siblings ...)
2026-01-18 22:03 ` [PULL 25/54] tcg/tci: Drop TCG_TARGET_REG_BITS tests Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 27/54] tcg: Drop TCG_TARGET_REG_BITS test in region.c Richard Henderson
` (28 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Pierrick Bouvier
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/tci.c | 13 ++++---------
1 file changed, 4 insertions(+), 9 deletions(-)
diff --git a/tcg/tci.c b/tcg/tci.c
index f71993c287..29ecb39929 100644
--- a/tcg/tci.c
+++ b/tcg/tci.c
@@ -26,11 +26,6 @@
#include <ffi.h>
-#define ctpop_tr glue(ctpop, TCG_TARGET_REG_BITS)
-#define deposit_tr glue(deposit, TCG_TARGET_REG_BITS)
-#define extract_tr glue(extract, TCG_TARGET_REG_BITS)
-#define sextract_tr glue(sextract, TCG_TARGET_REG_BITS)
-
/*
* Enable TCI assertions only when debugging TCG (and without NDEBUG defined).
* Without assertions, the interpreter runs much faster.
@@ -525,7 +520,7 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
break;
case INDEX_op_ctpop:
tci_args_rr(insn, &r0, &r1);
- regs[r0] = ctpop_tr(regs[r1]);
+ regs[r0] = ctpop64(regs[r1]);
break;
case INDEX_op_addco:
tci_args_rrr(insn, &r0, &r1, &r2);
@@ -639,15 +634,15 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
break;
case INDEX_op_deposit:
tci_args_rrrbb(insn, &r0, &r1, &r2, &pos, &len);
- regs[r0] = deposit_tr(regs[r1], pos, len, regs[r2]);
+ regs[r0] = deposit64(regs[r1], pos, len, regs[r2]);
break;
case INDEX_op_extract:
tci_args_rrbb(insn, &r0, &r1, &pos, &len);
- regs[r0] = extract_tr(regs[r1], pos, len);
+ regs[r0] = extract64(regs[r1], pos, len);
break;
case INDEX_op_sextract:
tci_args_rrbb(insn, &r0, &r1, &pos, &len);
- regs[r0] = sextract_tr(regs[r1], pos, len);
+ regs[r0] = sextract64(regs[r1], pos, len);
break;
case INDEX_op_brcond:
tci_args_rl(insn, tb_ptr, &r0, &ptr);
--
2.43.0
* [PULL 27/54] tcg: Drop TCG_TARGET_REG_BITS test in region.c
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (25 preceding siblings ...)
2026-01-18 22:03 ` [PULL 26/54] tcg/tci: Remove glue TCG_TARGET_REG_BITS renames Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 28/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op.c Richard Henderson
` (27 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/region.c | 12 ------------
1 file changed, 12 deletions(-)
diff --git a/tcg/region.c b/tcg/region.c
index 2181267e48..5d4be1453b 100644
--- a/tcg/region.c
+++ b/tcg/region.c
@@ -464,17 +464,6 @@ static size_t tcg_n_regions(size_t tb_size, unsigned max_threads)
*/
#define MIN_CODE_GEN_BUFFER_SIZE (1 * MiB)
-#if TCG_TARGET_REG_BITS == 32
-#define DEFAULT_CODE_GEN_BUFFER_SIZE_1 (32 * MiB)
-#ifdef CONFIG_USER_ONLY
-/*
- * For user mode on smaller 32 bit systems we may run into trouble
- * allocating big chunks of data in the right place. On these systems
- * we utilise a static code generation buffer directly in the binary.
- */
-#define USE_STATIC_CODE_GEN_BUFFER
-#endif
-#else /* TCG_TARGET_REG_BITS == 64 */
#ifdef CONFIG_USER_ONLY
/*
* As user-mode emulation typically means running multiple instances
@@ -490,7 +479,6 @@ static size_t tcg_n_regions(size_t tb_size, unsigned max_threads)
*/
#define DEFAULT_CODE_GEN_BUFFER_SIZE_1 (1 * GiB)
#endif
-#endif
#define DEFAULT_CODE_GEN_BUFFER_SIZE \
(DEFAULT_CODE_GEN_BUFFER_SIZE_1 < MAX_CODE_GEN_BUFFER_SIZE \
--
2.43.0
* [PULL 28/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op.c
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (26 preceding siblings ...)
2026-01-18 22:03 ` [PULL 27/54] tcg: Drop TCG_TARGET_REG_BITS test in region.c Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 29/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op-gvec.c Richard Henderson
` (26 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/tcg-op.c | 686 +++++++--------------------------------------------
1 file changed, 90 insertions(+), 596 deletions(-)
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index d20888dd8f..8d67acc4fc 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -1154,7 +1154,7 @@ void tcg_gen_mulu2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
tcg_gen_op3_i32(INDEX_op_muluh, rh, arg1, arg2);
tcg_gen_mov_i32(rl, t);
tcg_temp_free_i32(t);
- } else if (TCG_TARGET_REG_BITS == 64) {
+ } else {
TCGv_i64 t0 = tcg_temp_ebb_new_i64();
TCGv_i64 t1 = tcg_temp_ebb_new_i64();
tcg_gen_extu_i32_i64(t0, arg1);
@@ -1163,8 +1163,6 @@ void tcg_gen_mulu2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
tcg_gen_extr_i64_i32(rl, rh, t0);
tcg_temp_free_i64(t0);
tcg_temp_free_i64(t1);
- } else {
- g_assert_not_reached();
}
}
@@ -1178,24 +1176,6 @@ void tcg_gen_muls2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
tcg_gen_op3_i32(INDEX_op_mulsh, rh, arg1, arg2);
tcg_gen_mov_i32(rl, t);
tcg_temp_free_i32(t);
- } else if (TCG_TARGET_REG_BITS == 32) {
- TCGv_i32 t0 = tcg_temp_ebb_new_i32();
- TCGv_i32 t1 = tcg_temp_ebb_new_i32();
- TCGv_i32 t2 = tcg_temp_ebb_new_i32();
- TCGv_i32 t3 = tcg_temp_ebb_new_i32();
- tcg_gen_mulu2_i32(t0, t1, arg1, arg2);
- /* Adjust for negative inputs. */
- tcg_gen_sari_i32(t2, arg1, 31);
- tcg_gen_sari_i32(t3, arg2, 31);
- tcg_gen_and_i32(t2, t2, arg2);
- tcg_gen_and_i32(t3, t3, arg1);
- tcg_gen_sub_i32(rh, t1, t2);
- tcg_gen_sub_i32(rh, rh, t3);
- tcg_gen_mov_i32(rl, t0);
- tcg_temp_free_i32(t0);
- tcg_temp_free_i32(t1);
- tcg_temp_free_i32(t2);
- tcg_temp_free_i32(t3);
} else {
TCGv_i64 t0 = tcg_temp_ebb_new_i64();
TCGv_i64 t1 = tcg_temp_ebb_new_i64();
@@ -1210,29 +1190,14 @@ void tcg_gen_muls2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
void tcg_gen_mulsu2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
{
- if (TCG_TARGET_REG_BITS == 32) {
- TCGv_i32 t0 = tcg_temp_ebb_new_i32();
- TCGv_i32 t1 = tcg_temp_ebb_new_i32();
- TCGv_i32 t2 = tcg_temp_ebb_new_i32();
- tcg_gen_mulu2_i32(t0, t1, arg1, arg2);
- /* Adjust for negative input for the signed arg1. */
- tcg_gen_sari_i32(t2, arg1, 31);
- tcg_gen_and_i32(t2, t2, arg2);
- tcg_gen_sub_i32(rh, t1, t2);
- tcg_gen_mov_i32(rl, t0);
- tcg_temp_free_i32(t0);
- tcg_temp_free_i32(t1);
- tcg_temp_free_i32(t2);
- } else {
- TCGv_i64 t0 = tcg_temp_ebb_new_i64();
- TCGv_i64 t1 = tcg_temp_ebb_new_i64();
- tcg_gen_ext_i32_i64(t0, arg1);
- tcg_gen_extu_i32_i64(t1, arg2);
- tcg_gen_mul_i64(t0, t0, t1);
- tcg_gen_extr_i64_i32(rl, rh, t0);
- tcg_temp_free_i64(t0);
- tcg_temp_free_i64(t1);
- }
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+ tcg_gen_ext_i32_i64(t0, arg1);
+ tcg_gen_extu_i32_i64(t1, arg2);
+ tcg_gen_mul_i64(t0, t0, t1);
+ tcg_gen_extr_i64_i32(rl, rh, t0);
+ tcg_temp_free_i64(t0);
+ tcg_temp_free_i64(t1);
}
void tcg_gen_ext8s_i32(TCGv_i32 ret, TCGv_i32 arg)
@@ -1414,263 +1379,119 @@ void tcg_gen_st_i32(TCGv_i32 arg1, TCGv_ptr arg2, tcg_target_long offset)
void tcg_gen_discard_i64(TCGv_i64 arg)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_op1_i64(INDEX_op_discard, TCG_TYPE_I64, arg);
- } else {
- tcg_gen_discard_i32(TCGV_LOW(arg));
- tcg_gen_discard_i32(TCGV_HIGH(arg));
- }
+ tcg_gen_op1_i64(INDEX_op_discard, TCG_TYPE_I64, arg);
}
void tcg_gen_mov_i64(TCGv_i64 ret, TCGv_i64 arg)
{
- if (ret == arg) {
- return;
- }
- if (TCG_TARGET_REG_BITS == 64) {
+ if (ret != arg) {
tcg_gen_op2_i64(INDEX_op_mov, ret, arg);
- } else {
- TCGTemp *ts = tcgv_i64_temp(arg);
-
- /* Canonicalize TCGv_i64 TEMP_CONST into TCGv_i32 TEMP_CONST. */
- if (ts->kind == TEMP_CONST) {
- tcg_gen_movi_i64(ret, ts->val);
- } else {
- tcg_gen_mov_i32(TCGV_LOW(ret), TCGV_LOW(arg));
- tcg_gen_mov_i32(TCGV_HIGH(ret), TCGV_HIGH(arg));
- }
}
}
void tcg_gen_movi_i64(TCGv_i64 ret, int64_t arg)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_mov_i64(ret, tcg_constant_i64(arg));
- } else {
- tcg_gen_movi_i32(TCGV_LOW(ret), arg);
- tcg_gen_movi_i32(TCGV_HIGH(ret), arg >> 32);
- }
+ tcg_gen_mov_i64(ret, tcg_constant_i64(arg));
}
void tcg_gen_ld8u_i64(TCGv_i64 ret, TCGv_ptr arg2, tcg_target_long offset)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_ldst_op_i64(INDEX_op_ld8u, ret, arg2, offset);
- } else {
- tcg_gen_ld8u_i32(TCGV_LOW(ret), arg2, offset);
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- }
+ tcg_gen_ldst_op_i64(INDEX_op_ld8u, ret, arg2, offset);
}
void tcg_gen_ld8s_i64(TCGv_i64 ret, TCGv_ptr arg2, tcg_target_long offset)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_ldst_op_i64(INDEX_op_ld8s, ret, arg2, offset);
- } else {
- tcg_gen_ld8s_i32(TCGV_LOW(ret), arg2, offset);
- tcg_gen_sari_i32(TCGV_HIGH(ret), TCGV_LOW(ret), 31);
- }
+ tcg_gen_ldst_op_i64(INDEX_op_ld8s, ret, arg2, offset);
}
void tcg_gen_ld16u_i64(TCGv_i64 ret, TCGv_ptr arg2, tcg_target_long offset)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_ldst_op_i64(INDEX_op_ld16u, ret, arg2, offset);
- } else {
- tcg_gen_ld16u_i32(TCGV_LOW(ret), arg2, offset);
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- }
+ tcg_gen_ldst_op_i64(INDEX_op_ld16u, ret, arg2, offset);
}
void tcg_gen_ld16s_i64(TCGv_i64 ret, TCGv_ptr arg2, tcg_target_long offset)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_ldst_op_i64(INDEX_op_ld16s, ret, arg2, offset);
- } else {
- tcg_gen_ld16s_i32(TCGV_LOW(ret), arg2, offset);
- tcg_gen_sari_i32(TCGV_HIGH(ret), TCGV_LOW(ret), 31);
- }
+ tcg_gen_ldst_op_i64(INDEX_op_ld16s, ret, arg2, offset);
}
void tcg_gen_ld32u_i64(TCGv_i64 ret, TCGv_ptr arg2, tcg_target_long offset)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_ldst_op_i64(INDEX_op_ld32u, ret, arg2, offset);
- } else {
- tcg_gen_ld_i32(TCGV_LOW(ret), arg2, offset);
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- }
+ tcg_gen_ldst_op_i64(INDEX_op_ld32u, ret, arg2, offset);
}
void tcg_gen_ld32s_i64(TCGv_i64 ret, TCGv_ptr arg2, tcg_target_long offset)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_ldst_op_i64(INDEX_op_ld32s, ret, arg2, offset);
- } else {
- tcg_gen_ld_i32(TCGV_LOW(ret), arg2, offset);
- tcg_gen_sari_i32(TCGV_HIGH(ret), TCGV_LOW(ret), 31);
- }
+ tcg_gen_ldst_op_i64(INDEX_op_ld32s, ret, arg2, offset);
}
void tcg_gen_ld_i64(TCGv_i64 ret, TCGv_ptr arg2, tcg_target_long offset)
{
- /*
- * For 32-bit host, since arg2 and ret have different types,
- * they cannot be the same temporary -- no chance of overlap.
- */
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_ldst_op_i64(INDEX_op_ld, ret, arg2, offset);
- } else if (HOST_BIG_ENDIAN) {
- tcg_gen_ld_i32(TCGV_HIGH(ret), arg2, offset);
- tcg_gen_ld_i32(TCGV_LOW(ret), arg2, offset + 4);
- } else {
- tcg_gen_ld_i32(TCGV_LOW(ret), arg2, offset);
- tcg_gen_ld_i32(TCGV_HIGH(ret), arg2, offset + 4);
- }
+ tcg_gen_ldst_op_i64(INDEX_op_ld, ret, arg2, offset);
}
void tcg_gen_st8_i64(TCGv_i64 arg1, TCGv_ptr arg2, tcg_target_long offset)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_ldst_op_i64(INDEX_op_st8, arg1, arg2, offset);
- } else {
- tcg_gen_st8_i32(TCGV_LOW(arg1), arg2, offset);
- }
+ tcg_gen_ldst_op_i64(INDEX_op_st8, arg1, arg2, offset);
}
void tcg_gen_st16_i64(TCGv_i64 arg1, TCGv_ptr arg2, tcg_target_long offset)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_ldst_op_i64(INDEX_op_st16, arg1, arg2, offset);
- } else {
- tcg_gen_st16_i32(TCGV_LOW(arg1), arg2, offset);
- }
+ tcg_gen_ldst_op_i64(INDEX_op_st16, arg1, arg2, offset);
}
void tcg_gen_st32_i64(TCGv_i64 arg1, TCGv_ptr arg2, tcg_target_long offset)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_ldst_op_i64(INDEX_op_st32, arg1, arg2, offset);
- } else {
- tcg_gen_st_i32(TCGV_LOW(arg1), arg2, offset);
- }
+ tcg_gen_ldst_op_i64(INDEX_op_st32, arg1, arg2, offset);
}
void tcg_gen_st_i64(TCGv_i64 arg1, TCGv_ptr arg2, tcg_target_long offset)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_ldst_op_i64(INDEX_op_st, arg1, arg2, offset);
- } else if (HOST_BIG_ENDIAN) {
- tcg_gen_st_i32(TCGV_HIGH(arg1), arg2, offset);
- tcg_gen_st_i32(TCGV_LOW(arg1), arg2, offset + 4);
- } else {
- tcg_gen_st_i32(TCGV_LOW(arg1), arg2, offset);
- tcg_gen_st_i32(TCGV_HIGH(arg1), arg2, offset + 4);
- }
+ tcg_gen_ldst_op_i64(INDEX_op_st, arg1, arg2, offset);
}
void tcg_gen_add_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_op3_i64(INDEX_op_add, ret, arg1, arg2);
- } else {
- tcg_gen_add2_i32(TCGV_LOW(ret), TCGV_HIGH(ret), TCGV_LOW(arg1),
- TCGV_HIGH(arg1), TCGV_LOW(arg2), TCGV_HIGH(arg2));
- }
+ tcg_gen_op3_i64(INDEX_op_add, ret, arg1, arg2);
}
void tcg_gen_sub_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_op3_i64(INDEX_op_sub, ret, arg1, arg2);
- } else {
- tcg_gen_sub2_i32(TCGV_LOW(ret), TCGV_HIGH(ret), TCGV_LOW(arg1),
- TCGV_HIGH(arg1), TCGV_LOW(arg2), TCGV_HIGH(arg2));
- }
+ tcg_gen_op3_i64(INDEX_op_sub, ret, arg1, arg2);
}
void tcg_gen_and_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_op3_i64(INDEX_op_and, ret, arg1, arg2);
- } else {
- tcg_gen_and_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2));
- tcg_gen_and_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2));
- }
+ tcg_gen_op3_i64(INDEX_op_and, ret, arg1, arg2);
}
void tcg_gen_or_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_op3_i64(INDEX_op_or, ret, arg1, arg2);
- } else {
- tcg_gen_or_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2));
- tcg_gen_or_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2));
- }
+ tcg_gen_op3_i64(INDEX_op_or, ret, arg1, arg2);
}
void tcg_gen_xor_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_op3_i64(INDEX_op_xor, ret, arg1, arg2);
- } else {
- tcg_gen_xor_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2));
- tcg_gen_xor_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2));
- }
+ tcg_gen_op3_i64(INDEX_op_xor, ret, arg1, arg2);
}
void tcg_gen_shl_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_op3_i64(INDEX_op_shl, ret, arg1, arg2);
- } else {
- gen_helper_shl_i64(ret, arg1, arg2);
- }
+ tcg_gen_op3_i64(INDEX_op_shl, ret, arg1, arg2);
}
void tcg_gen_shr_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_op3_i64(INDEX_op_shr, ret, arg1, arg2);
- } else {
- gen_helper_shr_i64(ret, arg1, arg2);
- }
+ tcg_gen_op3_i64(INDEX_op_shr, ret, arg1, arg2);
}
void tcg_gen_sar_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_op3_i64(INDEX_op_sar, ret, arg1, arg2);
- } else {
- gen_helper_sar_i64(ret, arg1, arg2);
- }
+ tcg_gen_op3_i64(INDEX_op_sar, ret, arg1, arg2);
}
void tcg_gen_mul_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- TCGv_i64 t0;
- TCGv_i32 t1;
-
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_op3_i64(INDEX_op_mul, ret, arg1, arg2);
- return;
- }
-
-
- t0 = tcg_temp_ebb_new_i64();
- t1 = tcg_temp_ebb_new_i32();
-
- tcg_gen_mulu2_i32(TCGV_LOW(t0), TCGV_HIGH(t0),
- TCGV_LOW(arg1), TCGV_LOW(arg2));
-
- tcg_gen_mul_i32(t1, TCGV_LOW(arg1), TCGV_HIGH(arg2));
- tcg_gen_add_i32(TCGV_HIGH(t0), TCGV_HIGH(t0), t1);
- tcg_gen_mul_i32(t1, TCGV_HIGH(arg1), TCGV_LOW(arg2));
- tcg_gen_add_i32(TCGV_HIGH(t0), TCGV_HIGH(t0), t1);
-
- tcg_gen_mov_i64(ret, t0);
- tcg_temp_free_i64(t0);
- tcg_temp_free_i32(t1);
+ tcg_gen_op3_i64(INDEX_op_mul, ret, arg1, arg2);
}
void tcg_gen_addi_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
@@ -1678,12 +1499,8 @@ void tcg_gen_addi_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
/* some cases can be optimized here */
if (arg2 == 0) {
tcg_gen_mov_i64(ret, arg1);
- } else if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_add_i64(ret, arg1, tcg_constant_i64(arg2));
} else {
- tcg_gen_add2_i32(TCGV_LOW(ret), TCGV_HIGH(ret),
- TCGV_LOW(arg1), TCGV_HIGH(arg1),
- tcg_constant_i32(arg2), tcg_constant_i32(arg2 >> 32));
+ tcg_gen_add_i64(ret, arg1, tcg_constant_i64(arg2));
}
}
@@ -1691,12 +1508,8 @@ void tcg_gen_subfi_i64(TCGv_i64 ret, int64_t arg1, TCGv_i64 arg2)
{
if (arg1 == 0) {
tcg_gen_neg_i64(ret, arg2);
- } else if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_sub_i64(ret, tcg_constant_i64(arg1), arg2);
} else {
- tcg_gen_sub2_i32(TCGV_LOW(ret), TCGV_HIGH(ret),
- tcg_constant_i32(arg1), tcg_constant_i32(arg1 >> 32),
- TCGV_LOW(arg2), TCGV_HIGH(arg2));
+ tcg_gen_sub_i64(ret, tcg_constant_i64(arg1), arg2);
}
}
@@ -1707,23 +1520,11 @@ void tcg_gen_subi_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
void tcg_gen_neg_i64(TCGv_i64 ret, TCGv_i64 arg)
{
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_gen_op2_i64(INDEX_op_neg, ret, arg);
- } else {
- TCGv_i32 zero = tcg_constant_i32(0);
- tcg_gen_sub2_i32(TCGV_LOW(ret), TCGV_HIGH(ret),
- zero, zero, TCGV_LOW(arg), TCGV_HIGH(arg));
- }
+ tcg_gen_op2_i64(INDEX_op_neg, ret, arg);
}
void tcg_gen_andi_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_andi_i32(TCGV_LOW(ret), TCGV_LOW(arg1), arg2);
- tcg_gen_andi_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), arg2 >> 32);
- return;
- }
-
/* Some cases can be optimized here. */
switch (arg2) {
case 0:
@@ -1754,11 +1555,6 @@ void tcg_gen_andi_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
void tcg_gen_ori_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_ori_i32(TCGV_LOW(ret), TCGV_LOW(arg1), arg2);
- tcg_gen_ori_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), arg2 >> 32);
- return;
- }
/* Some cases can be optimized here. */
if (arg2 == -1) {
tcg_gen_movi_i64(ret, -1);
@@ -1771,11 +1567,6 @@ void tcg_gen_ori_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
void tcg_gen_xori_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_xori_i32(TCGV_LOW(ret), TCGV_LOW(arg1), arg2);
- tcg_gen_xori_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), arg2 >> 32);
- return;
- }
/* Some cases can be optimized here. */
if (arg2 == 0) {
tcg_gen_mov_i64(ret, arg1);
@@ -1788,48 +1579,10 @@ void tcg_gen_xori_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
}
}
-static inline void tcg_gen_shifti_i64(TCGv_i64 ret, TCGv_i64 arg1,
- unsigned c, bool right, bool arith)
-{
- tcg_debug_assert(c < 64);
- if (c == 0) {
- tcg_gen_mov_i32(TCGV_LOW(ret), TCGV_LOW(arg1));
- tcg_gen_mov_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1));
- } else if (c >= 32) {
- c -= 32;
- if (right) {
- if (arith) {
- tcg_gen_sari_i32(TCGV_LOW(ret), TCGV_HIGH(arg1), c);
- tcg_gen_sari_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), 31);
- } else {
- tcg_gen_shri_i32(TCGV_LOW(ret), TCGV_HIGH(arg1), c);
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- }
- } else {
- tcg_gen_shli_i32(TCGV_HIGH(ret), TCGV_LOW(arg1), c);
- tcg_gen_movi_i32(TCGV_LOW(ret), 0);
- }
- } else if (right) {
- tcg_gen_extract2_i32(TCGV_LOW(ret), TCGV_LOW(arg1),
- TCGV_HIGH(arg1), c);
- if (arith) {
- tcg_gen_sari_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), c);
- } else {
- tcg_gen_shri_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), c);
- }
- } else {
- tcg_gen_extract2_i32(TCGV_HIGH(ret), TCGV_LOW(arg1),
- TCGV_HIGH(arg1), 32 - c);
- tcg_gen_shli_i32(TCGV_LOW(ret), TCGV_LOW(arg1), c);
- }
-}
-
void tcg_gen_shli_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
{
tcg_debug_assert(arg2 >= 0 && arg2 < 64);
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_shifti_i64(ret, arg1, arg2, 0, 0);
- } else if (arg2 == 0) {
+ if (arg2 == 0) {
tcg_gen_mov_i64(ret, arg1);
} else {
tcg_gen_shl_i64(ret, arg1, tcg_constant_i64(arg2));
@@ -1839,9 +1592,7 @@ void tcg_gen_shli_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
void tcg_gen_shri_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
{
tcg_debug_assert(arg2 >= 0 && arg2 < 64);
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_shifti_i64(ret, arg1, arg2, 1, 0);
- } else if (arg2 == 0) {
+ if (arg2 == 0) {
tcg_gen_mov_i64(ret, arg1);
} else {
tcg_gen_shr_i64(ret, arg1, tcg_constant_i64(arg2));
@@ -1851,9 +1602,7 @@ void tcg_gen_shri_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
void tcg_gen_sari_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
{
tcg_debug_assert(arg2 >= 0 && arg2 < 64);
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_shifti_i64(ret, arg1, arg2, 1, 1);
- } else if (arg2 == 0) {
+ if (arg2 == 0) {
tcg_gen_mov_i64(ret, arg1);
} else {
tcg_gen_sar_i64(ret, arg1, tcg_constant_i64(arg2));
@@ -2034,14 +1783,7 @@ void tcg_gen_bswap16_i64(TCGv_i64 ret, TCGv_i64 arg, int flags)
/* Only one extension flag may be present. */
tcg_debug_assert(!(flags & TCG_BSWAP_OS) || !(flags & TCG_BSWAP_OZ));
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_bswap16_i32(TCGV_LOW(ret), TCGV_LOW(arg), flags);
- if (flags & TCG_BSWAP_OS) {
- tcg_gen_sari_i32(TCGV_HIGH(ret), TCGV_LOW(ret), 31);
- } else {
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- }
- } else if (tcg_op_supported(INDEX_op_bswap16, TCG_TYPE_I64, 0)) {
+ if (tcg_op_supported(INDEX_op_bswap16, TCG_TYPE_I64, 0)) {
tcg_gen_op3i_i64(INDEX_op_bswap16, ret, arg, flags);
} else {
TCGv_i64 t0 = tcg_temp_ebb_new_i64();
@@ -2084,14 +1826,7 @@ void tcg_gen_bswap32_i64(TCGv_i64 ret, TCGv_i64 arg, int flags)
/* Only one extension flag may be present. */
tcg_debug_assert(!(flags & TCG_BSWAP_OS) || !(flags & TCG_BSWAP_OZ));
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_bswap32_i32(TCGV_LOW(ret), TCGV_LOW(arg));
- if (flags & TCG_BSWAP_OS) {
- tcg_gen_sari_i32(TCGV_HIGH(ret), TCGV_LOW(ret), 31);
- } else {
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- }
- } else if (tcg_op_supported(INDEX_op_bswap32, TCG_TYPE_I64, 0)) {
+ if (tcg_op_supported(INDEX_op_bswap32, TCG_TYPE_I64, 0)) {
tcg_gen_op3i_i64(INDEX_op_bswap32, ret, arg, flags);
} else {
TCGv_i64 t0 = tcg_temp_ebb_new_i64();
@@ -2127,18 +1862,7 @@ void tcg_gen_bswap32_i64(TCGv_i64 ret, TCGv_i64 arg, int flags)
*/
void tcg_gen_bswap64_i64(TCGv_i64 ret, TCGv_i64 arg)
{
- if (TCG_TARGET_REG_BITS == 32) {
- TCGv_i32 t0, t1;
- t0 = tcg_temp_ebb_new_i32();
- t1 = tcg_temp_ebb_new_i32();
-
- tcg_gen_bswap32_i32(t0, TCGV_LOW(arg));
- tcg_gen_bswap32_i32(t1, TCGV_HIGH(arg));
- tcg_gen_mov_i32(TCGV_LOW(ret), t1);
- tcg_gen_mov_i32(TCGV_HIGH(ret), t0);
- tcg_temp_free_i32(t0);
- tcg_temp_free_i32(t1);
- } else if (tcg_op_supported(INDEX_op_bswap64, TCG_TYPE_I64, 0)) {
+ if (tcg_op_supported(INDEX_op_bswap64, TCG_TYPE_I64, 0)) {
tcg_gen_op3i_i64(INDEX_op_bswap64, ret, arg, 0);
} else {
TCGv_i64 t0 = tcg_temp_ebb_new_i64();
@@ -2207,10 +1931,7 @@ void tcg_gen_wswap_i64(TCGv_i64 ret, TCGv_i64 arg)
void tcg_gen_not_i64(TCGv_i64 ret, TCGv_i64 arg)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_not_i32(TCGV_LOW(ret), TCGV_LOW(arg));
- tcg_gen_not_i32(TCGV_HIGH(ret), TCGV_HIGH(arg));
- } else if (tcg_op_supported(INDEX_op_not, TCG_TYPE_I64, 0)) {
+ if (tcg_op_supported(INDEX_op_not, TCG_TYPE_I64, 0)) {
tcg_gen_op2_i64(INDEX_op_not, ret, arg);
} else {
tcg_gen_xori_i64(ret, arg, -1);
@@ -2219,10 +1940,7 @@ void tcg_gen_not_i64(TCGv_i64 ret, TCGv_i64 arg)
void tcg_gen_andc_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_andc_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2));
- tcg_gen_andc_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2));
- } else if (tcg_op_supported(INDEX_op_andc, TCG_TYPE_I64, 0)) {
+ if (tcg_op_supported(INDEX_op_andc, TCG_TYPE_I64, 0)) {
tcg_gen_op3_i64(INDEX_op_andc, ret, arg1, arg2);
} else {
TCGv_i64 t0 = tcg_temp_ebb_new_i64();
@@ -2234,10 +1952,7 @@ void tcg_gen_andc_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
void tcg_gen_eqv_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_eqv_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2));
- tcg_gen_eqv_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2));
- } else if (tcg_op_supported(INDEX_op_eqv, TCG_TYPE_I64, 0)) {
+ if (tcg_op_supported(INDEX_op_eqv, TCG_TYPE_I64, 0)) {
tcg_gen_op3_i64(INDEX_op_eqv, ret, arg1, arg2);
} else {
tcg_gen_xor_i64(ret, arg1, arg2);
@@ -2247,10 +1962,7 @@ void tcg_gen_eqv_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
void tcg_gen_nand_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_nand_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2));
- tcg_gen_nand_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2));
- } else if (tcg_op_supported(INDEX_op_nand, TCG_TYPE_I64, 0)) {
+ if (tcg_op_supported(INDEX_op_nand, TCG_TYPE_I64, 0)) {
tcg_gen_op3_i64(INDEX_op_nand, ret, arg1, arg2);
} else {
tcg_gen_and_i64(ret, arg1, arg2);
@@ -2260,10 +1972,7 @@ void tcg_gen_nand_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
void tcg_gen_nor_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_nor_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2));
- tcg_gen_nor_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2));
- } else if (tcg_op_supported(INDEX_op_nor, TCG_TYPE_I64, 0)) {
+ if (tcg_op_supported(INDEX_op_nor, TCG_TYPE_I64, 0)) {
tcg_gen_op3_i64(INDEX_op_nor, ret, arg1, arg2);
} else {
tcg_gen_or_i64(ret, arg1, arg2);
@@ -2273,10 +1982,7 @@ void tcg_gen_nor_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
void tcg_gen_orc_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_orc_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2));
- tcg_gen_orc_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2));
- } else if (tcg_op_supported(INDEX_op_orc, TCG_TYPE_I64, 0)) {
+ if (tcg_op_supported(INDEX_op_orc, TCG_TYPE_I64, 0)) {
tcg_gen_op3_i64(INDEX_op_orc, ret, arg1, arg2);
} else {
TCGv_i64 t0 = tcg_temp_ebb_new_i64();
@@ -2297,18 +2003,7 @@ void tcg_gen_clz_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
void tcg_gen_clzi_i64(TCGv_i64 ret, TCGv_i64 arg1, uint64_t arg2)
{
- if (TCG_TARGET_REG_BITS == 32
- && arg2 <= 0xffffffffu
- && tcg_op_supported(INDEX_op_clz, TCG_TYPE_I32, 0)) {
- TCGv_i32 t = tcg_temp_ebb_new_i32();
- tcg_gen_clzi_i32(t, TCGV_LOW(arg1), arg2 - 32);
- tcg_gen_addi_i32(t, t, 32);
- tcg_gen_clz_i32(TCGV_LOW(ret), TCGV_HIGH(arg1), t);
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- tcg_temp_free_i32(t);
- } else {
- tcg_gen_clz_i64(ret, arg1, tcg_constant_i64(arg2));
- }
+ tcg_gen_clz_i64(ret, arg1, tcg_constant_i64(arg2));
}
void tcg_gen_ctz_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
@@ -2342,18 +2037,9 @@ void tcg_gen_ctz_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
void tcg_gen_ctzi_i64(TCGv_i64 ret, TCGv_i64 arg1, uint64_t arg2)
{
- if (TCG_TARGET_REG_BITS == 32
- && arg2 <= 0xffffffffu
- && tcg_op_supported(INDEX_op_ctz, TCG_TYPE_I32, 0)) {
- TCGv_i32 t32 = tcg_temp_ebb_new_i32();
- tcg_gen_ctzi_i32(t32, TCGV_HIGH(arg1), arg2 - 32);
- tcg_gen_addi_i32(t32, t32, 32);
- tcg_gen_ctz_i32(TCGV_LOW(ret), TCGV_LOW(arg1), t32);
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- tcg_temp_free_i32(t32);
- } else if (arg2 == 64
- && !tcg_op_supported(INDEX_op_ctz, TCG_TYPE_I64, 0)
- && tcg_op_supported(INDEX_op_ctpop, TCG_TYPE_I64, 0)) {
+ if (arg2 == 64
+ && !tcg_op_supported(INDEX_op_ctz, TCG_TYPE_I64, 0)
+ && tcg_op_supported(INDEX_op_ctpop, TCG_TYPE_I64, 0)) {
/* This equivalence has the advantage of not requiring a fixup. */
TCGv_i64 t = tcg_temp_ebb_new_i64();
tcg_gen_subi_i64(t, arg1, 1);
@@ -2381,21 +2067,11 @@ void tcg_gen_clrsb_i64(TCGv_i64 ret, TCGv_i64 arg)
void tcg_gen_ctpop_i64(TCGv_i64 ret, TCGv_i64 arg1)
{
- if (TCG_TARGET_REG_BITS == 64) {
- if (tcg_op_supported(INDEX_op_ctpop, TCG_TYPE_I64, 0)) {
- tcg_gen_op2_i64(INDEX_op_ctpop, ret, arg1);
- return;
- }
+ if (tcg_op_supported(INDEX_op_ctpop, TCG_TYPE_I64, 0)) {
+ tcg_gen_op2_i64(INDEX_op_ctpop, ret, arg1);
} else {
- if (tcg_op_supported(INDEX_op_ctpop, TCG_TYPE_I32, 0)) {
- tcg_gen_ctpop_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1));
- tcg_gen_ctpop_i32(TCGV_LOW(ret), TCGV_LOW(arg1));
- tcg_gen_add_i32(TCGV_LOW(ret), TCGV_LOW(ret), TCGV_HIGH(ret));
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- return;
- }
+ gen_helper_ctpop_i64(ret, arg1);
}
- gen_helper_ctpop_i64(ret, arg1);
}
void tcg_gen_rotl_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
@@ -2485,24 +2161,9 @@ void tcg_gen_deposit_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2,
return;
}
- if (TCG_TARGET_REG_BITS == 64) {
- if (TCG_TARGET_deposit_valid(TCG_TYPE_I64, ofs, len)) {
- tcg_gen_op5ii_i64(INDEX_op_deposit, ret, arg1, arg2, ofs, len);
- return;
- }
- } else {
- if (ofs >= 32) {
- tcg_gen_deposit_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1),
- TCGV_LOW(arg2), ofs - 32, len);
- tcg_gen_mov_i32(TCGV_LOW(ret), TCGV_LOW(arg1));
- return;
- }
- if (ofs + len <= 32) {
- tcg_gen_deposit_i32(TCGV_LOW(ret), TCGV_LOW(arg1),
- TCGV_LOW(arg2), ofs, len);
- tcg_gen_mov_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1));
- return;
- }
+ if (TCG_TARGET_deposit_valid(TCG_TYPE_I64, ofs, len)) {
+ tcg_gen_op5ii_i64(INDEX_op_deposit, ret, arg1, arg2, ofs, len);
+ return;
}
t1 = tcg_temp_ebb_new_i64();
@@ -2545,24 +2206,10 @@ void tcg_gen_deposit_z_i64(TCGv_i64 ret, TCGv_i64 arg,
tcg_gen_shli_i64(ret, arg, ofs);
} else if (ofs == 0) {
tcg_gen_andi_i64(ret, arg, (1ull << len) - 1);
- } else if (TCG_TARGET_REG_BITS == 64 &&
- TCG_TARGET_deposit_valid(TCG_TYPE_I64, ofs, len)) {
+ } else if (TCG_TARGET_deposit_valid(TCG_TYPE_I64, ofs, len)) {
TCGv_i64 zero = tcg_constant_i64(0);
tcg_gen_op5ii_i64(INDEX_op_deposit, ret, zero, arg, ofs, len);
} else {
- if (TCG_TARGET_REG_BITS == 32) {
- if (ofs >= 32) {
- tcg_gen_deposit_z_i32(TCGV_HIGH(ret), TCGV_LOW(arg),
- ofs - 32, len);
- tcg_gen_movi_i32(TCGV_LOW(ret), 0);
- return;
- }
- if (ofs + len <= 32) {
- tcg_gen_deposit_z_i32(TCGV_LOW(ret), TCGV_LOW(arg), ofs, len);
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- return;
- }
- }
/*
* To help two-operand hosts we prefer to zero-extend first,
* which allows ARG to stay live.
@@ -2597,32 +2244,6 @@ void tcg_gen_extract_i64(TCGv_i64 ret, TCGv_i64 arg,
return;
}
- if (TCG_TARGET_REG_BITS == 32) {
- /* Look for a 32-bit extract within one of the two words. */
- if (ofs >= 32) {
- tcg_gen_extract_i32(TCGV_LOW(ret), TCGV_HIGH(arg), ofs - 32, len);
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- return;
- }
- if (ofs + len <= 32) {
- tcg_gen_extract_i32(TCGV_LOW(ret), TCGV_LOW(arg), ofs, len);
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- return;
- }
-
- /* The field is split across two words. */
- tcg_gen_extract2_i32(TCGV_LOW(ret), TCGV_LOW(arg),
- TCGV_HIGH(arg), ofs);
- if (len <= 32) {
- tcg_gen_extract_i32(TCGV_LOW(ret), TCGV_LOW(ret), 0, len);
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- } else {
- tcg_gen_extract_i32(TCGV_HIGH(ret), TCGV_HIGH(arg),
- ofs, len - 32);
- }
- return;
- }
-
if (TCG_TARGET_extract_valid(TCG_TYPE_I64, ofs, len)) {
tcg_gen_op4ii_i64(INDEX_op_extract, ret, arg, ofs, len);
return;
@@ -2668,38 +2289,6 @@ void tcg_gen_sextract_i64(TCGv_i64 ret, TCGv_i64 arg,
return;
}
- if (TCG_TARGET_REG_BITS == 32) {
- /* Look for a 32-bit extract within one of the two words. */
- if (ofs >= 32) {
- tcg_gen_sextract_i32(TCGV_LOW(ret), TCGV_HIGH(arg), ofs - 32, len);
- } else if (ofs + len <= 32) {
- tcg_gen_sextract_i32(TCGV_LOW(ret), TCGV_LOW(arg), ofs, len);
- } else if (ofs == 0) {
- tcg_gen_mov_i32(TCGV_LOW(ret), TCGV_LOW(arg));
- tcg_gen_sextract_i32(TCGV_HIGH(ret), TCGV_HIGH(arg), 0, len - 32);
- return;
- } else if (len > 32) {
- TCGv_i32 t = tcg_temp_ebb_new_i32();
- /* Extract the bits for the high word normally. */
- tcg_gen_sextract_i32(t, TCGV_HIGH(arg), ofs + 32, len - 32);
- /* Shift the field down for the low part. */
- tcg_gen_shri_i64(ret, arg, ofs);
- /* Overwrite the shift into the high part. */
- tcg_gen_mov_i32(TCGV_HIGH(ret), t);
- tcg_temp_free_i32(t);
- return;
- } else {
- /* Shift the field down for the low part, such that the
- field sits at the MSB. */
- tcg_gen_shri_i64(ret, arg, ofs + len - 32);
- /* Shift the field down from the MSB, sign extending. */
- tcg_gen_sari_i32(TCGV_LOW(ret), TCGV_LOW(ret), 32 - len);
- }
- /* Sign-extend the field from 32 bits. */
- tcg_gen_sari_i32(TCGV_HIGH(ret), TCGV_LOW(ret), 31);
- return;
- }
-
if (TCG_TARGET_sextract_valid(TCG_TYPE_I64, ofs, len)) {
tcg_gen_op4ii_i64(INDEX_op_sextract, ret, arg, ofs, len);
return;
@@ -2763,20 +2352,8 @@ void tcg_gen_add2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 al,
if (tcg_op_supported(INDEX_op_addci, TCG_TYPE_REG, 0)) {
TCGv_i64 t0 = tcg_temp_ebb_new_i64();
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_op3_i32(INDEX_op_addco, TCGV_LOW(t0),
- TCGV_LOW(al), TCGV_LOW(bl));
- tcg_gen_op3_i32(INDEX_op_addcio, TCGV_HIGH(t0),
- TCGV_HIGH(al), TCGV_HIGH(bl));
- tcg_gen_op3_i32(INDEX_op_addcio, TCGV_LOW(rh),
- TCGV_LOW(ah), TCGV_LOW(bh));
- tcg_gen_op3_i32(INDEX_op_addci, TCGV_HIGH(rh),
- TCGV_HIGH(ah), TCGV_HIGH(bh));
- } else {
- tcg_gen_op3_i64(INDEX_op_addco, t0, al, bl);
- tcg_gen_op3_i64(INDEX_op_addci, rh, ah, bh);
- }
-
+ tcg_gen_op3_i64(INDEX_op_addco, t0, al, bl);
+ tcg_gen_op3_i64(INDEX_op_addci, rh, ah, bh);
tcg_gen_mov_i64(rl, t0);
tcg_temp_free_i64(t0);
} else {
@@ -2795,68 +2372,27 @@ void tcg_gen_add2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 al,
void tcg_gen_addcio_i64(TCGv_i64 r, TCGv_i64 co,
TCGv_i64 a, TCGv_i64 b, TCGv_i64 ci)
{
- if (TCG_TARGET_REG_BITS == 64) {
- if (tcg_op_supported(INDEX_op_addci, TCG_TYPE_I64, 0)) {
- TCGv_i64 discard = tcg_temp_ebb_new_i64();
- TCGv_i64 zero = tcg_constant_i64(0);
- TCGv_i64 mone = tcg_constant_i64(-1);
+ if (tcg_op_supported(INDEX_op_addci, TCG_TYPE_I64, 0)) {
+ TCGv_i64 discard = tcg_temp_ebb_new_i64();
+ TCGv_i64 zero = tcg_constant_i64(0);
+ TCGv_i64 mone = tcg_constant_i64(-1);
- tcg_gen_op3_i64(INDEX_op_addco, discard, ci, mone);
- tcg_gen_op3_i64(INDEX_op_addcio, r, a, b);
- tcg_gen_op3_i64(INDEX_op_addci, co, zero, zero);
- tcg_temp_free_i64(discard);
- } else {
- TCGv_i64 t0 = tcg_temp_ebb_new_i64();
- TCGv_i64 t1 = tcg_temp_ebb_new_i64();
-
- tcg_gen_add_i64(t0, a, b);
- tcg_gen_setcond_i64(TCG_COND_LTU, t1, t0, a);
- tcg_gen_add_i64(r, t0, ci);
- tcg_gen_setcond_i64(TCG_COND_LTU, t0, r, t0);
- tcg_gen_or_i64(co, t0, t1);
-
- tcg_temp_free_i64(t0);
- tcg_temp_free_i64(t1);
- }
+ tcg_gen_op3_i64(INDEX_op_addco, discard, ci, mone);
+ tcg_gen_op3_i64(INDEX_op_addcio, r, a, b);
+ tcg_gen_op3_i64(INDEX_op_addci, co, zero, zero);
+ tcg_temp_free_i64(discard);
} else {
- if (tcg_op_supported(INDEX_op_addci, TCG_TYPE_I32, 0)) {
- TCGv_i32 discard = tcg_temp_ebb_new_i32();
- TCGv_i32 zero = tcg_constant_i32(0);
- TCGv_i32 mone = tcg_constant_i32(-1);
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
- tcg_gen_op3_i32(INDEX_op_addco, discard, TCGV_LOW(ci), mone);
- tcg_gen_op3_i32(INDEX_op_addcio, discard, TCGV_HIGH(ci), mone);
- tcg_gen_op3_i32(INDEX_op_addcio, TCGV_LOW(r),
- TCGV_LOW(a), TCGV_LOW(b));
- tcg_gen_op3_i32(INDEX_op_addcio, TCGV_HIGH(r),
- TCGV_HIGH(a), TCGV_HIGH(b));
- tcg_gen_op3_i32(INDEX_op_addci, TCGV_LOW(co), zero, zero);
- tcg_temp_free_i32(discard);
- } else {
- TCGv_i32 t0 = tcg_temp_ebb_new_i32();
- TCGv_i32 c0 = tcg_temp_ebb_new_i32();
- TCGv_i32 c1 = tcg_temp_ebb_new_i32();
+ tcg_gen_add_i64(t0, a, b);
+ tcg_gen_setcond_i64(TCG_COND_LTU, t1, t0, a);
+ tcg_gen_add_i64(r, t0, ci);
+ tcg_gen_setcond_i64(TCG_COND_LTU, t0, r, t0);
+ tcg_gen_or_i64(co, t0, t1);
- tcg_gen_or_i32(c1, TCGV_LOW(ci), TCGV_HIGH(ci));
- tcg_gen_setcondi_i32(TCG_COND_NE, c1, c1, 0);
-
- tcg_gen_add_i32(t0, TCGV_LOW(a), TCGV_LOW(b));
- tcg_gen_setcond_i32(TCG_COND_LTU, c0, t0, TCGV_LOW(a));
- tcg_gen_add_i32(TCGV_LOW(r), t0, c1);
- tcg_gen_setcond_i32(TCG_COND_LTU, c1, TCGV_LOW(r), c1);
- tcg_gen_or_i32(c1, c1, c0);
-
- tcg_gen_add_i32(t0, TCGV_HIGH(a), TCGV_HIGH(b));
- tcg_gen_setcond_i32(TCG_COND_LTU, c0, t0, TCGV_HIGH(a));
- tcg_gen_add_i32(TCGV_HIGH(r), t0, c1);
- tcg_gen_setcond_i32(TCG_COND_LTU, c1, TCGV_HIGH(r), c1);
- tcg_gen_or_i32(TCGV_LOW(co), c0, c1);
-
- tcg_temp_free_i32(t0);
- tcg_temp_free_i32(c0);
- tcg_temp_free_i32(c1);
- }
- tcg_gen_movi_i32(TCGV_HIGH(co), 0);
+ tcg_temp_free_i64(t0);
+ tcg_temp_free_i64(t1);
}
}
@@ -2866,20 +2402,8 @@ void tcg_gen_sub2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 al,
if (tcg_op_supported(INDEX_op_subbi, TCG_TYPE_REG, 0)) {
TCGv_i64 t0 = tcg_temp_ebb_new_i64();
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_op3_i32(INDEX_op_subbo, TCGV_LOW(t0),
- TCGV_LOW(al), TCGV_LOW(bl));
- tcg_gen_op3_i32(INDEX_op_subbio, TCGV_HIGH(t0),
- TCGV_HIGH(al), TCGV_HIGH(bl));
- tcg_gen_op3_i32(INDEX_op_subbio, TCGV_LOW(rh),
- TCGV_LOW(ah), TCGV_LOW(bh));
- tcg_gen_op3_i32(INDEX_op_subbi, TCGV_HIGH(rh),
- TCGV_HIGH(ah), TCGV_HIGH(bh));
- } else {
- tcg_gen_op3_i64(INDEX_op_subbo, t0, al, bl);
- tcg_gen_op3_i64(INDEX_op_subbi, rh, ah, bh);
- }
-
+ tcg_gen_op3_i64(INDEX_op_subbo, t0, al, bl);
+ tcg_gen_op3_i64(INDEX_op_subbi, rh, ah, bh);
tcg_gen_mov_i64(rl, t0);
tcg_temp_free_i64(t0);
} else {
@@ -3002,57 +2526,32 @@ void tcg_gen_abs_i64(TCGv_i64 ret, TCGv_i64 a)
void tcg_gen_extrl_i64_i32(TCGv_i32 ret, TCGv_i64 arg)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_mov_i32(ret, TCGV_LOW(arg));
- } else {
- tcg_gen_op2(INDEX_op_extrl_i64_i32, TCG_TYPE_I32,
- tcgv_i32_arg(ret), tcgv_i64_arg(arg));
- }
+ tcg_gen_op2(INDEX_op_extrl_i64_i32, TCG_TYPE_I32,
+ tcgv_i32_arg(ret), tcgv_i64_arg(arg));
}
void tcg_gen_extrh_i64_i32(TCGv_i32 ret, TCGv_i64 arg)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_mov_i32(ret, TCGV_HIGH(arg));
- } else {
- tcg_gen_op2(INDEX_op_extrh_i64_i32, TCG_TYPE_I32,
- tcgv_i32_arg(ret), tcgv_i64_arg(arg));
- }
+ tcg_gen_op2(INDEX_op_extrh_i64_i32, TCG_TYPE_I32,
+ tcgv_i32_arg(ret), tcgv_i64_arg(arg));
}
void tcg_gen_extu_i32_i64(TCGv_i64 ret, TCGv_i32 arg)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_mov_i32(TCGV_LOW(ret), arg);
- tcg_gen_movi_i32(TCGV_HIGH(ret), 0);
- } else {
- tcg_gen_op2(INDEX_op_extu_i32_i64, TCG_TYPE_I64,
- tcgv_i64_arg(ret), tcgv_i32_arg(arg));
- }
+ tcg_gen_op2(INDEX_op_extu_i32_i64, TCG_TYPE_I64,
+ tcgv_i64_arg(ret), tcgv_i32_arg(arg));
}
void tcg_gen_ext_i32_i64(TCGv_i64 ret, TCGv_i32 arg)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_mov_i32(TCGV_LOW(ret), arg);
- tcg_gen_sari_i32(TCGV_HIGH(ret), TCGV_LOW(ret), 31);
- } else {
- tcg_gen_op2(INDEX_op_ext_i32_i64, TCG_TYPE_I64,
- tcgv_i64_arg(ret), tcgv_i32_arg(arg));
- }
+ tcg_gen_op2(INDEX_op_ext_i32_i64, TCG_TYPE_I64,
+ tcgv_i64_arg(ret), tcgv_i32_arg(arg));
}
void tcg_gen_concat_i32_i64(TCGv_i64 dest, TCGv_i32 low, TCGv_i32 high)
{
- TCGv_i64 tmp;
+ TCGv_i64 tmp = tcg_temp_ebb_new_i64();
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_mov_i32(TCGV_LOW(dest), low);
- tcg_gen_mov_i32(TCGV_HIGH(dest), high);
- return;
- }
-
- tmp = tcg_temp_ebb_new_i64();
/* These extensions are only needed for type correctness.
We may be able to do better given target specific information. */
tcg_gen_extu_i32_i64(tmp, high);
@@ -3070,13 +2569,8 @@ void tcg_gen_concat_i32_i64(TCGv_i64 dest, TCGv_i32 low, TCGv_i32 high)
void tcg_gen_extr_i64_i32(TCGv_i32 lo, TCGv_i32 hi, TCGv_i64 arg)
{
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_mov_i32(lo, TCGV_LOW(arg));
- tcg_gen_mov_i32(hi, TCGV_HIGH(arg));
- } else {
- tcg_gen_extrl_i64_i32(lo, arg);
- tcg_gen_extrh_i64_i32(hi, arg);
- }
+ tcg_gen_extrl_i64_i32(lo, arg);
+ tcg_gen_extrh_i64_i32(hi, arg);
}
void tcg_gen_extr32_i64(TCGv_i64 lo, TCGv_i64 hi, TCGv_i64 arg)
--
2.43.0
^ permalink raw reply related [flat|nested] 57+ messages in thread
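The tcg_gen_addcio_i64 fallback retained by the patch above computes a wide add-with-carry using two unsigned comparisons (the setcond LTU pair). A minimal standalone C sketch of that logic, with illustrative names that are not QEMU API:

```c
#include <stdint.h>

/*
 * Sketch of the non-addci fallback in tcg_gen_addcio_i64:
 * r = a + b + ci, with *co receiving the carry out.
 * Mirrors the setcond(TCG_COND_LTU) sequence in the patch.
 */
static void addcio64(uint64_t a, uint64_t b, uint64_t ci,
                     uint64_t *r, uint64_t *co)
{
    uint64_t t0 = a + b;       /* first addition */
    uint64_t c1 = t0 < a;      /* carry out of a + b */
    uint64_t res = t0 + ci;    /* fold in the carry input */
    uint64_t c2 = res < t0;    /* carry out of + ci */

    *r = res;
    *co = c1 | c2;             /* the two carries cannot both be set */
}
```

Note that ORing the two partial carries is safe: if a + b already overflowed, adding a carry-in of at most 1 cannot overflow again, so at most one comparison fires.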
* [PULL 29/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op-gvec.c
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (27 preceding siblings ...)
2026-01-18 22:03 ` [PULL 28/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op.c Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 30/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op-ldst.c Richard Henderson
` (25 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/tcg-op-gvec.c | 49 +++++++++++++++++++++++------------------------
1 file changed, 24 insertions(+), 25 deletions(-)
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index 2cfc7e9409..bc323e2500 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -586,12 +586,11 @@ static void do_dup(unsigned vece, TCGv_ptr dbase, uint32_t dofs,
}
}
- /* Implement inline with a vector type, if possible.
- * Prefer integer when 64-bit host and no variable dup.
+ /*
+ * Implement inline with a vector type, if possible;
+ * prefer_i64 with 64-bit variable dup.
*/
- type = choose_vector_type(NULL, vece, oprsz,
- (TCG_TARGET_REG_BITS == 64 && in_32 == NULL
- && (in_64 == NULL || vece == MO_64)));
+ type = choose_vector_type(NULL, vece, oprsz, vece == MO_64 && in_64);
if (type != 0) {
TCGv_vec t_vec = tcg_temp_new_vec(type);
@@ -612,11 +611,11 @@ static void do_dup(unsigned vece, TCGv_ptr dbase, uint32_t dofs,
t_32 = NULL;
if (in_32) {
- /* We are given a 32-bit variable input. For a 64-bit host,
- use a 64-bit operation unless the 32-bit operation would
- be simple enough. */
- if (TCG_TARGET_REG_BITS == 64
- && (vece != MO_32 || !check_size_impl(oprsz, 4))) {
+ /*
+ * We are given a 32-bit variable input. Use a 64-bit operation
+ * unless the 32-bit operation would be simple enough.
+ */
+ if (vece != MO_32 || !check_size_impl(oprsz, 4)) {
t_64 = tcg_temp_ebb_new_i64();
tcg_gen_extu_i32_i64(t_64, in_32);
tcg_gen_dup_i64(vece, t_64, t_64);
@@ -629,14 +628,16 @@ static void do_dup(unsigned vece, TCGv_ptr dbase, uint32_t dofs,
t_64 = tcg_temp_ebb_new_i64();
tcg_gen_dup_i64(vece, t_64, in_64);
} else {
- /* We are given a constant input. */
- /* For 64-bit hosts, use 64-bit constants for "simple" constants
- or when we'd need too many 32-bit stores, or when a 64-bit
- constant is really required. */
+ /*
+ * We are given a constant input.
+ * Use 64-bit constants for "simple" constants or when we'd
+ * need too many 32-bit stores, or when a 64-bit constant
+ * is really required.
+ */
if (vece == MO_64
- || (TCG_TARGET_REG_BITS == 64
- && (in_c == 0 || in_c == -1
- || !check_size_impl(oprsz, 4)))) {
+ || in_c == 0
+ || in_c == -1
+ || !check_size_impl(oprsz, 4)) {
t_64 = tcg_constant_i64(in_c);
} else {
t_32 = tcg_constant_i32(in_c);
@@ -3872,12 +3873,11 @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, uint32_t dofs,
}
/*
- * Implement inline with a vector type, if possible.
- * Prefer integer when 64-bit host and 64-bit comparison.
+ * Implement inline with a vector type, if possible;
+ * prefer_i64 for a 64-bit comparison.
*/
hold_list = tcg_swap_vecop_list(cmp_list);
- type = choose_vector_type(cmp_list, vece, oprsz,
- TCG_TARGET_REG_BITS == 64 && vece == MO_64);
+ type = choose_vector_type(cmp_list, vece, oprsz, vece == MO_64);
switch (type) {
case TCG_TYPE_V256:
/* Recall that ARM SVE allows vector sizes that are not a
@@ -3992,11 +3992,10 @@ void tcg_gen_gvec_cmps(TCGCond cond, unsigned vece, uint32_t dofs,
}
/*
- * Implement inline with a vector type, if possible.
- * Prefer integer when 64-bit host and 64-bit comparison.
+ * Implement inline with a vector type, if possible;
+ * prefer_i64 for a 64-bit comparison.
*/
- type = choose_vector_type(cmp_list, vece, oprsz,
- TCG_TARGET_REG_BITS == 64 && vece == MO_64);
+ type = choose_vector_type(cmp_list, vece, oprsz, vece == MO_64);
if (type != 0) {
const TCGOpcode *hold_list = tcg_swap_vecop_list(cmp_list);
TCGv_vec t_vec = tcg_temp_new_vec(type);
--
2.43.0
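The do_dup path above leans on tcg_gen_dup_i64 to replicate a small element across a 64-bit value before storing it repeatedly. As a rough illustration of that replication step (a standalone helper under our own naming, not QEMU's API):

```c
#include <stdint.h>

/*
 * Replicate the low vece_bits of x across a 64-bit value,
 * e.g. dup_const64(8, 0xab) == 0xabababababababab.
 * Doubling the replicated width each iteration needs
 * log2(64 / vece_bits) steps.
 */
static uint64_t dup_const64(unsigned vece_bits, uint64_t x)
{
    x &= (vece_bits == 64) ? ~0ull : (1ull << vece_bits) - 1;
    for (unsigned i = vece_bits; i < 64; i <<= 1) {
        x |= x << i;
    }
    return x;
}
```

With the element widened to 64 bits like this, the generator can fill the destination with 64-bit stores regardless of the original element size, which is why the patch can use the 64-bit path unconditionally once 32-bit hosts are gone.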
* [PULL 30/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op-ldst.c
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (28 preceding siblings ...)
2026-01-18 22:03 ` [PULL 29/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op-gvec.c Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 31/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg.c Richard Henderson
` (24 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/tcg-op-ldst.c | 113 +++++++++++-----------------------------------
1 file changed, 27 insertions(+), 86 deletions(-)
diff --git a/tcg/tcg-op-ldst.c b/tcg/tcg-op-ldst.c
index 7716c3ad7c..55bfbf3a20 100644
--- a/tcg/tcg-op-ldst.c
+++ b/tcg/tcg-op-ldst.c
@@ -106,24 +106,12 @@ static void gen_ldst2(TCGOpcode opc, TCGType type, TCGTemp *vl, TCGTemp *vh,
static void gen_ld_i64(TCGv_i64 v, TCGTemp *addr, MemOpIdx oi)
{
- if (TCG_TARGET_REG_BITS == 32) {
- gen_ldst2(INDEX_op_qemu_ld2, TCG_TYPE_I64,
- tcgv_i32_temp(TCGV_LOW(v)), tcgv_i32_temp(TCGV_HIGH(v)),
- addr, oi);
- } else {
- gen_ldst1(INDEX_op_qemu_ld, TCG_TYPE_I64, tcgv_i64_temp(v), addr, oi);
- }
+ gen_ldst1(INDEX_op_qemu_ld, TCG_TYPE_I64, tcgv_i64_temp(v), addr, oi);
}
static void gen_st_i64(TCGv_i64 v, TCGTemp *addr, MemOpIdx oi)
{
- if (TCG_TARGET_REG_BITS == 32) {
- gen_ldst2(INDEX_op_qemu_st2, TCG_TYPE_I64,
- tcgv_i32_temp(TCGV_LOW(v)), tcgv_i32_temp(TCGV_HIGH(v)),
- addr, oi);
- } else {
- gen_ldst1(INDEX_op_qemu_st, TCG_TYPE_I64, tcgv_i64_temp(v), addr, oi);
- }
+ gen_ldst1(INDEX_op_qemu_st, TCG_TYPE_I64, tcgv_i64_temp(v), addr, oi);
}
static void tcg_gen_req_mo(TCGBar type)
@@ -143,7 +131,7 @@ static TCGTemp *tci_extend_addr(TCGTemp *addr)
* Compare to the extension performed by tcg_out_{ld,st}_helper_args
* for native code generation.
*/
- if (TCG_TARGET_REG_BITS == 64 && tcg_ctx->addr_type == TCG_TYPE_I32) {
+ if (tcg_ctx->addr_type == TCG_TYPE_I32) {
TCGv_i64 temp = tcg_temp_ebb_new_i64();
tcg_gen_extu_i32_i64(temp, temp_tcgv_i32(addr));
return tcgv_i64_temp(temp);
@@ -356,16 +344,6 @@ static void tcg_gen_qemu_ld_i64_int(TCGv_i64 val, TCGTemp *addr,
TCGv_i64 copy_addr;
TCGTemp *addr_new;
- if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
- tcg_gen_qemu_ld_i32_int(TCGV_LOW(val), addr, idx, memop);
- if (memop & MO_SIGN) {
- tcg_gen_sari_i32(TCGV_HIGH(val), TCGV_LOW(val), 31);
- } else {
- tcg_gen_movi_i32(TCGV_HIGH(val), 0);
- }
- return;
- }
-
tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
orig_memop = memop = tcg_canonicalize_memop(memop, 1, 0);
orig_oi = oi = make_memop_idx(memop, idx);
@@ -421,11 +399,6 @@ static void tcg_gen_qemu_st_i64_int(TCGv_i64 val, TCGTemp *addr,
MemOpIdx orig_oi, oi;
TCGTemp *addr_new;
- if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
- tcg_gen_qemu_st_i32_int(TCGV_LOW(val), addr, idx, memop);
- return;
- }
-
tcg_gen_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
memop = tcg_canonicalize_memop(memop, 1, 1);
orig_oi = oi = make_memop_idx(memop, idx);
@@ -577,7 +550,7 @@ static void tcg_gen_qemu_ld_i128_int(TCGv_i128 val, TCGTemp *addr,
orig_oi = make_memop_idx(memop, idx);
/* TODO: For now, force 32-bit hosts to use the helper. */
- if (TCG_TARGET_HAS_qemu_ldst_i128 && TCG_TARGET_REG_BITS == 64) {
+ if (TCG_TARGET_HAS_qemu_ldst_i128) {
TCGv_i64 lo, hi;
bool need_bswap = false;
MemOpIdx oi = orig_oi;
@@ -691,7 +664,7 @@ static void tcg_gen_qemu_st_i128_int(TCGv_i128 val, TCGTemp *addr,
/* TODO: For now, force 32-bit hosts to use the helper. */
- if (TCG_TARGET_HAS_qemu_ldst_i128 && TCG_TARGET_REG_BITS == 64) {
+ if (TCG_TARGET_HAS_qemu_ldst_i128) {
TCGv_i64 lo, hi;
MemOpIdx oi = orig_oi;
bool need_bswap = false;
@@ -950,17 +923,6 @@ static void tcg_gen_nonatomic_cmpxchg_i64_int(TCGv_i64 retv, TCGTemp *addr,
{
TCGv_i64 t1, t2;
- if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
- tcg_gen_nonatomic_cmpxchg_i32_int(TCGV_LOW(retv), addr, TCGV_LOW(cmpv),
- TCGV_LOW(newv), idx, memop);
- if (memop & MO_SIGN) {
- tcg_gen_sari_i32(TCGV_HIGH(retv), TCGV_LOW(retv), 31);
- } else {
- tcg_gen_movi_i32(TCGV_HIGH(retv), 0);
- }
- return;
- }
-
t1 = tcg_temp_ebb_new_i64();
t2 = tcg_temp_ebb_new_i64();
@@ -1019,17 +981,6 @@ static void tcg_gen_atomic_cmpxchg_i64_int(TCGv_i64 retv, TCGTemp *addr,
* is removed.
*/
tcg_gen_movi_i64(retv, 0);
- return;
- }
-
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_gen_atomic_cmpxchg_i32_int(TCGV_LOW(retv), addr, TCGV_LOW(cmpv),
- TCGV_LOW(newv), idx, memop);
- if (memop & MO_SIGN) {
- tcg_gen_sari_i32(TCGV_HIGH(retv), TCGV_LOW(retv), 31);
- } else {
- tcg_gen_movi_i32(TCGV_HIGH(retv), 0);
- }
} else {
TCGv_i32 c32 = tcg_temp_ebb_new_i32();
TCGv_i32 n32 = tcg_temp_ebb_new_i32();
@@ -1064,43 +1015,33 @@ static void tcg_gen_nonatomic_cmpxchg_i128_int(TCGv_i128 retv, TCGTemp *addr,
TCGv_i128 cmpv, TCGv_i128 newv,
TCGArg idx, MemOp memop)
{
- if (TCG_TARGET_REG_BITS == 32) {
- /* Inline expansion below is simply too large for 32-bit hosts. */
- MemOpIdx oi = make_memop_idx(memop, idx);
- TCGv_i64 a64 = maybe_extend_addr64(addr);
+ TCGv_i128 oldv = tcg_temp_ebb_new_i128();
+ TCGv_i128 tmpv = tcg_temp_ebb_new_i128();
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+ TCGv_i64 z = tcg_constant_i64(0);
- gen_helper_nonatomic_cmpxchgo(retv, tcg_env, a64, cmpv, newv,
- tcg_constant_i32(oi));
- maybe_free_addr64(a64);
- } else {
- TCGv_i128 oldv = tcg_temp_ebb_new_i128();
- TCGv_i128 tmpv = tcg_temp_ebb_new_i128();
- TCGv_i64 t0 = tcg_temp_ebb_new_i64();
- TCGv_i64 t1 = tcg_temp_ebb_new_i64();
- TCGv_i64 z = tcg_constant_i64(0);
+ tcg_gen_qemu_ld_i128_int(oldv, addr, idx, memop);
- tcg_gen_qemu_ld_i128_int(oldv, addr, idx, memop);
+ /* Compare i128 */
+ tcg_gen_xor_i64(t0, TCGV128_LOW(oldv), TCGV128_LOW(cmpv));
+ tcg_gen_xor_i64(t1, TCGV128_HIGH(oldv), TCGV128_HIGH(cmpv));
+ tcg_gen_or_i64(t0, t0, t1);
- /* Compare i128 */
- tcg_gen_xor_i64(t0, TCGV128_LOW(oldv), TCGV128_LOW(cmpv));
- tcg_gen_xor_i64(t1, TCGV128_HIGH(oldv), TCGV128_HIGH(cmpv));
- tcg_gen_or_i64(t0, t0, t1);
+ /* tmpv = equal ? newv : oldv */
+ tcg_gen_movcond_i64(TCG_COND_EQ, TCGV128_LOW(tmpv), t0, z,
+ TCGV128_LOW(newv), TCGV128_LOW(oldv));
+ tcg_gen_movcond_i64(TCG_COND_EQ, TCGV128_HIGH(tmpv), t0, z,
+ TCGV128_HIGH(newv), TCGV128_HIGH(oldv));
- /* tmpv = equal ? newv : oldv */
- tcg_gen_movcond_i64(TCG_COND_EQ, TCGV128_LOW(tmpv), t0, z,
- TCGV128_LOW(newv), TCGV128_LOW(oldv));
- tcg_gen_movcond_i64(TCG_COND_EQ, TCGV128_HIGH(tmpv), t0, z,
- TCGV128_HIGH(newv), TCGV128_HIGH(oldv));
+ /* Unconditional writeback. */
+ tcg_gen_qemu_st_i128_int(tmpv, addr, idx, memop);
+ tcg_gen_mov_i128(retv, oldv);
- /* Unconditional writeback. */
- tcg_gen_qemu_st_i128_int(tmpv, addr, idx, memop);
- tcg_gen_mov_i128(retv, oldv);
-
- tcg_temp_free_i64(t0);
- tcg_temp_free_i64(t1);
- tcg_temp_free_i128(tmpv);
- tcg_temp_free_i128(oldv);
- }
+ tcg_temp_free_i64(t0);
+ tcg_temp_free_i64(t1);
+ tcg_temp_free_i128(tmpv);
+ tcg_temp_free_i128(oldv);
}
void tcg_gen_nonatomic_cmpxchg_i128_chk(TCGv_i128 retv, TCGTemp *addr,
--
2.43.0
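The nonatomic i128 cmpxchg expansion that the patch above keeps (and now uses unconditionally) compares the halves with xor/or, selects new-or-old with movcond, and writes back unconditionally. A C sketch of the same data flow, using an illustrative two-word struct rather than QEMU's TCGv_i128:

```c
#include <stdint.h>

typedef struct { uint64_t lo, hi; } u128;

/*
 * Sketch of tcg_gen_nonatomic_cmpxchg_i128_int:
 * load old value, compare against cmpv (xor both halves, OR the
 * results: zero iff equal), select newv or oldv movcond-style,
 * store back unconditionally, return the old value.
 */
static u128 cmpxchg128_sketch(u128 *mem, u128 cmpv, u128 newv)
{
    u128 oldv = *mem;
    uint64_t t0 = (oldv.lo ^ cmpv.lo) | (oldv.hi ^ cmpv.hi);
    u128 tmpv;

    tmpv.lo = (t0 == 0) ? newv.lo : oldv.lo;
    tmpv.hi = (t0 == 0) ? newv.hi : oldv.hi;
    *mem = tmpv;       /* unconditional writeback */
    return oldv;       /* value seen before the operation */
}
```

The unconditional writeback is what lets the expansion avoid a branch: on a failed compare it simply stores the old value back, which is observationally a no-op for this nonatomic helper.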
* [PULL 31/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg.c
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (29 preceding siblings ...)
2026-01-18 22:03 ` [PULL 30/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op-ldst.c Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:03 ` [PULL 32/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-internal.h Richard Henderson
` (23 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/tcg.c | 216 +++++++++---------------------------------------------
1 file changed, 36 insertions(+), 180 deletions(-)
diff --git a/tcg/tcg.c b/tcg/tcg.c
index 2b3bcbe750..e7bf4dad4e 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -215,10 +215,8 @@ static void * const qemu_ld_helpers[MO_SSIZE + 1] __attribute__((unused)) = {
[MO_SW] = helper_ldsw_mmu,
[MO_UL] = helper_ldul_mmu,
[MO_UQ] = helper_ldq_mmu,
-#if TCG_TARGET_REG_BITS == 64
[MO_SL] = helper_ldsl_mmu,
[MO_128] = helper_ld16_mmu,
-#endif
};
static void * const qemu_st_helpers[MO_SIZE + 1] __attribute__((unused)) = {
@@ -226,9 +224,7 @@ static void * const qemu_st_helpers[MO_SIZE + 1] __attribute__((unused)) = {
[MO_16] = helper_stw_mmu,
[MO_32] = helper_stl_mmu,
[MO_64] = helper_stq_mmu,
-#if TCG_TARGET_REG_BITS == 64
[MO_128] = helper_st16_mmu,
-#endif
};
typedef struct {
@@ -504,7 +500,6 @@ static void tcg_out_movext(TCGContext *s, TCGType dst_type, TCGReg dst,
}
break;
case MO_UQ:
- tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
if (dst_type == TCG_TYPE_I32) {
tcg_out_extrl_i64_i32(s, dst, src);
} else {
@@ -1113,7 +1108,6 @@ QEMU_BUILD_BUG_ON((int)(offsetof(CPUNegativeOffsetState, tlb.f[0]) -
< MIN_TLB_MASK_TABLE_OFS);
#endif
-#if TCG_TARGET_REG_BITS == 64
/*
* We require these functions for slow-path function calls.
* Adapt them generically for opcode output.
@@ -1148,7 +1142,6 @@ static const TCGOutOpUnary outop_extrl_i64_i32 = {
.base.static_constraint = C_O1_I1(r, r),
.out_rr = TCG_TARGET_HAS_extr_i64_i32 ? tgen_extrl_i64_i32 : NULL,
};
-#endif
static const TCGOutOp outop_goto_ptr = {
.static_constraint = C_O0_I1(r),
@@ -1360,11 +1353,7 @@ void tcg_pool_reset(TCGContext *s)
* the helpers, with the end result that it's easier to build manually.
*/
-#if TCG_TARGET_REG_BITS == 32
-# define dh_typecode_ttl dh_typecode_i32
-#else
-# define dh_typecode_ttl dh_typecode_i64
-#endif
+#define dh_typecode_ttl dh_typecode_i64
static TCGHelperInfo info_helper_ld32_mmu = {
.flags = TCG_CALL_NO_WG,
@@ -1615,17 +1604,12 @@ static void init_call_layout(TCGHelperInfo *info)
break;
case dh_typecode_i32:
case dh_typecode_s32:
+ case dh_typecode_i64:
+ case dh_typecode_s64:
case dh_typecode_ptr:
info->nr_out = 1;
info->out_kind = TCG_CALL_RET_NORMAL;
break;
- case dh_typecode_i64:
- case dh_typecode_s64:
- info->nr_out = 64 / TCG_TARGET_REG_BITS;
- info->out_kind = TCG_CALL_RET_NORMAL;
- /* Query the last register now to trigger any assert early. */
- tcg_target_call_oarg_reg(info->out_kind, info->nr_out - 1);
- break;
case dh_typecode_i128:
info->nr_out = 128 / TCG_TARGET_REG_BITS;
info->out_kind = TCG_TARGET_CALL_RET_I128;
@@ -1705,11 +1689,7 @@ static void init_call_layout(TCGHelperInfo *info)
layout_arg_even(&cum);
/* fall through */
case TCG_CALL_ARG_NORMAL:
- if (TCG_TARGET_REG_BITS == 32) {
- layout_arg_normal_n(&cum, info, 2);
- } else {
- layout_arg_1(&cum, info, TCG_CALL_ARG_NORMAL);
- }
+ layout_arg_1(&cum, info, TCG_CALL_ARG_NORMAL);
break;
default:
qemu_build_not_reached();
@@ -2002,11 +1982,8 @@ static TCGTemp *tcg_global_alloc(TCGContext *s)
static TCGTemp *tcg_global_reg_new_internal(TCGContext *s, TCGType type,
TCGReg reg, const char *name)
{
- TCGTemp *ts;
+ TCGTemp *ts = tcg_global_alloc(s);
- tcg_debug_assert(TCG_TARGET_REG_BITS == 64 || type == TCG_TYPE_I32);
-
- ts = tcg_global_alloc(s);
ts->base_type = type;
ts->type = type;
ts->kind = TEMP_FIXED;
@@ -2040,48 +2017,20 @@ static TCGTemp *tcg_global_mem_new_internal(TCGv_ptr base, intptr_t offset,
/* We do not support double-indirect registers. */
tcg_debug_assert(!base_ts->indirect_reg);
base_ts->indirect_base = 1;
- s->nb_indirects += (TCG_TARGET_REG_BITS == 32 && type == TCG_TYPE_I64
- ? 2 : 1);
+ s->nb_indirects += 1;
indirect_reg = 1;
break;
default:
g_assert_not_reached();
}
- if (TCG_TARGET_REG_BITS == 32 && type == TCG_TYPE_I64) {
- TCGTemp *ts2 = tcg_global_alloc(s);
- char buf[64];
-
- ts->base_type = TCG_TYPE_I64;
- ts->type = TCG_TYPE_I32;
- ts->indirect_reg = indirect_reg;
- ts->mem_allocated = 1;
- ts->mem_base = base_ts;
- ts->mem_offset = offset;
- pstrcpy(buf, sizeof(buf), name);
- pstrcat(buf, sizeof(buf), "_0");
- ts->name = strdup(buf);
-
- tcg_debug_assert(ts2 == ts + 1);
- ts2->base_type = TCG_TYPE_I64;
- ts2->type = TCG_TYPE_I32;
- ts2->indirect_reg = indirect_reg;
- ts2->mem_allocated = 1;
- ts2->mem_base = base_ts;
- ts2->mem_offset = offset + 4;
- ts2->temp_subindex = 1;
- pstrcpy(buf, sizeof(buf), name);
- pstrcat(buf, sizeof(buf), "_1");
- ts2->name = strdup(buf);
- } else {
- ts->base_type = type;
- ts->type = type;
- ts->indirect_reg = indirect_reg;
- ts->mem_allocated = 1;
- ts->mem_base = base_ts;
- ts->mem_offset = offset;
- ts->name = name;
- }
+ ts->base_type = type;
+ ts->type = type;
+ ts->indirect_reg = indirect_reg;
+ ts->mem_allocated = 1;
+ ts->mem_base = base_ts;
+ ts->mem_offset = offset;
+ ts->name = name;
return ts;
}
@@ -2128,14 +2077,12 @@ TCGTemp *tcg_temp_new_internal(TCGType type, TCGTempKind kind)
switch (type) {
case TCG_TYPE_I32:
+ case TCG_TYPE_I64:
case TCG_TYPE_V64:
case TCG_TYPE_V128:
case TCG_TYPE_V256:
n = 1;
break;
- case TCG_TYPE_I64:
- n = 64 / TCG_TARGET_REG_BITS;
- break;
case TCG_TYPE_I128:
n = 128 / TCG_TARGET_REG_BITS;
break;
@@ -2300,43 +2247,13 @@ TCGTemp *tcg_constant_internal(TCGType type, int64_t val)
ts = g_hash_table_lookup(h, &val);
if (ts == NULL) {
- int64_t *val_ptr;
-
ts = tcg_temp_alloc(s);
-
- if (TCG_TARGET_REG_BITS == 32 && type == TCG_TYPE_I64) {
- TCGTemp *ts2 = tcg_temp_alloc(s);
-
- tcg_debug_assert(ts2 == ts + 1);
-
- ts->base_type = TCG_TYPE_I64;
- ts->type = TCG_TYPE_I32;
- ts->kind = TEMP_CONST;
- ts->temp_allocated = 1;
-
- ts2->base_type = TCG_TYPE_I64;
- ts2->type = TCG_TYPE_I32;
- ts2->kind = TEMP_CONST;
- ts2->temp_allocated = 1;
- ts2->temp_subindex = 1;
-
- /*
- * Retain the full value of the 64-bit constant in the low
- * part, so that the hash table works. Actual uses will
- * truncate the value to the low part.
- */
- ts[HOST_BIG_ENDIAN].val = val;
- ts[!HOST_BIG_ENDIAN].val = val >> 32;
- val_ptr = &ts[HOST_BIG_ENDIAN].val;
- } else {
- ts->base_type = type;
- ts->type = type;
- ts->kind = TEMP_CONST;
- ts->temp_allocated = 1;
- ts->val = val;
- val_ptr = &ts->val;
- }
- g_hash_table_insert(h, val_ptr, ts);
+ ts->base_type = type;
+ ts->type = type;
+ ts->kind = TEMP_CONST;
+ ts->temp_allocated = 1;
+ ts->val = val;
+ g_hash_table_insert(h, &ts->val, ts);
}
return ts;
@@ -2405,10 +2322,8 @@ bool tcg_op_supported(TCGOpcode op, TCGType type, unsigned flags)
switch (type) {
case TCG_TYPE_I32:
- has_type = true;
- break;
case TCG_TYPE_I64:
- has_type = TCG_TARGET_REG_BITS == 64;
+ has_type = true;
break;
case TCG_TYPE_V64:
has_type = TCG_TARGET_HAS_v64;
@@ -2443,10 +2358,6 @@ bool tcg_op_supported(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_qemu_ld2:
case INDEX_op_qemu_st2:
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_debug_assert(type == TCG_TYPE_I64);
- return true;
- }
tcg_debug_assert(type == TCG_TYPE_I128);
goto do_lookup;
@@ -2479,7 +2390,7 @@ bool tcg_op_supported(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_extu_i32_i64:
case INDEX_op_extrl_i64_i32:
case INDEX_op_extrh_i64_i32:
- return TCG_TARGET_REG_BITS == 64;
+ return true;
case INDEX_op_mov_vec:
case INDEX_op_dup_vec:
@@ -2792,11 +2703,9 @@ static char *tcg_get_arg_str_ptr(TCGContext *s, char *buf, int buf_size,
case TCG_TYPE_I32:
snprintf(buf, buf_size, "$0x%x", (int32_t)ts->val);
break;
-#if TCG_TARGET_REG_BITS > 32
case TCG_TYPE_I64:
snprintf(buf, buf_size, "$0x%" PRIx64, ts->val);
break;
-#endif
case TCG_TYPE_V64:
case TCG_TYPE_V128:
case TCG_TYPE_V256:
@@ -5654,8 +5563,6 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
case INDEX_op_extu_i32_i64:
case INDEX_op_extrl_i64_i32:
case INDEX_op_extrh_i64_i32:
- assert(TCG_TARGET_REG_BITS == 64);
- /* fall through */
case INDEX_op_ctpop:
case INDEX_op_neg:
case INDEX_op_not:
@@ -6179,9 +6086,7 @@ static int tcg_out_helper_stk_ofs(TCGType type, unsigned slot)
* Each stack slot is TCG_TARGET_LONG_BITS. If the host does not
* require extension to uint64_t, adjust the address for uint32_t.
*/
- if (HOST_BIG_ENDIAN &&
- TCG_TARGET_REG_BITS == 64 &&
- type == TCG_TYPE_I32) {
+ if (HOST_BIG_ENDIAN && type == TCG_TYPE_I32) {
ofs += 4;
}
return ofs;
@@ -6390,13 +6295,8 @@ static unsigned tcg_out_helper_add_mov(TCGMovExtend *mov,
return 1;
}
- if (TCG_TARGET_REG_BITS == 32) {
- assert(dst_type == TCG_TYPE_I64);
- reg_mo = MO_32;
- } else {
- assert(dst_type == TCG_TYPE_I128);
- reg_mo = MO_64;
- }
+ assert(dst_type == TCG_TYPE_I128);
+ reg_mo = MO_64;
mov[0].dst = loc[HOST_BIG_ENDIAN].arg_slot;
mov[0].src = lo;
@@ -6442,26 +6342,10 @@ static void tcg_out_ld_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst,
next_arg = 1;
loc = &info->in[next_arg];
- if (TCG_TARGET_REG_BITS == 32 && s->addr_type == TCG_TYPE_I32) {
- /*
- * 32-bit host with 32-bit guest: zero-extend the guest address
- * to 64-bits for the helper by storing the low part, then
- * load a zero for the high part.
- */
- tcg_out_helper_add_mov(mov, loc + HOST_BIG_ENDIAN,
- TCG_TYPE_I32, TCG_TYPE_I32,
- ldst->addr_reg, -1);
- tcg_out_helper_load_slots(s, 1, mov, parm);
-
- tcg_out_helper_load_imm(s, loc[!HOST_BIG_ENDIAN].arg_slot,
- TCG_TYPE_I32, 0, parm);
- next_arg += 2;
- } else {
- nmov = tcg_out_helper_add_mov(mov, loc, TCG_TYPE_I64, s->addr_type,
- ldst->addr_reg, -1);
- tcg_out_helper_load_slots(s, nmov, mov, parm);
- next_arg += nmov;
- }
+ nmov = tcg_out_helper_add_mov(mov, loc, TCG_TYPE_I64, s->addr_type,
+ ldst->addr_reg, -1);
+ tcg_out_helper_load_slots(s, nmov, mov, parm);
+ next_arg += nmov;
switch (info->out_kind) {
case TCG_CALL_RET_NORMAL:
@@ -6503,13 +6387,8 @@ static void tcg_out_ld_helper_ret(TCGContext *s, const TCGLabelQemuLdst *ldst,
int ofs_slot0;
switch (ldst->type) {
- case TCG_TYPE_I64:
- if (TCG_TARGET_REG_BITS == 32) {
- break;
- }
- /* fall through */
-
case TCG_TYPE_I32:
+ case TCG_TYPE_I64:
mov[0].dst = ldst->datalo_reg;
mov[0].src = tcg_target_call_oarg_reg(TCG_CALL_RET_NORMAL, 0);
mov[0].dst_type = ldst->type;
@@ -6526,7 +6405,7 @@ static void tcg_out_ld_helper_ret(TCGContext *s, const TCGLabelQemuLdst *ldst,
* helper functions.
*/
if (load_sign || !(mop & MO_SIGN)) {
- if (TCG_TARGET_REG_BITS == 32 || ldst->type == TCG_TYPE_I32) {
+ if (ldst->type == TCG_TYPE_I32) {
mov[0].src_ext = MO_32;
} else {
mov[0].src_ext = MO_64;
@@ -6538,7 +6417,6 @@ static void tcg_out_ld_helper_ret(TCGContext *s, const TCGLabelQemuLdst *ldst,
return;
case TCG_TYPE_I128:
- tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
ofs_slot0 = TCG_TARGET_CALL_STACK_OFFSET;
switch (TCG_TARGET_CALL_RET_I128) {
case TCG_CALL_RET_NORMAL:
@@ -6568,14 +6446,14 @@ static void tcg_out_ld_helper_ret(TCGContext *s, const TCGLabelQemuLdst *ldst,
tcg_target_call_oarg_reg(TCG_CALL_RET_NORMAL, HOST_BIG_ENDIAN);
mov[0].dst_type = TCG_TYPE_REG;
mov[0].src_type = TCG_TYPE_REG;
- mov[0].src_ext = TCG_TARGET_REG_BITS == 32 ? MO_32 : MO_64;
+ mov[0].src_ext = MO_64;
mov[1].dst = ldst->datahi_reg;
mov[1].src =
tcg_target_call_oarg_reg(TCG_CALL_RET_NORMAL, !HOST_BIG_ENDIAN);
mov[1].dst_type = TCG_TYPE_REG;
mov[1].src_type = TCG_TYPE_REG;
- mov[1].src_ext = TCG_TARGET_REG_BITS == 32 ? MO_32 : MO_64;
+ mov[1].src_ext = MO_64;
tcg_out_movext2(s, mov, mov + 1, parm->ntmp ? parm->tmp[0] : -1);
}
@@ -6616,24 +6494,10 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst,
/* Handle addr argument. */
loc = &info->in[next_arg];
tcg_debug_assert(s->addr_type <= TCG_TYPE_REG);
- if (TCG_TARGET_REG_BITS == 32) {
- /*
- * 32-bit host (and thus 32-bit guest): zero-extend the guest address
- * to 64-bits for the helper by storing the low part. Later,
- * after we have processed the register inputs, we will load a
- * zero for the high part.
- */
- tcg_out_helper_add_mov(mov, loc + HOST_BIG_ENDIAN,
- TCG_TYPE_I32, TCG_TYPE_I32,
+ n = tcg_out_helper_add_mov(mov, loc, TCG_TYPE_I64, s->addr_type,
ldst->addr_reg, -1);
- next_arg += 2;
- nmov += 1;
- } else {
- n = tcg_out_helper_add_mov(mov, loc, TCG_TYPE_I64, s->addr_type,
- ldst->addr_reg, -1);
- next_arg += n;
- nmov += n;
- }
+ next_arg += n;
+ nmov += n;
/* Handle data argument. */
loc = &info->in[next_arg];
@@ -6649,7 +6513,6 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst,
break;
case TCG_CALL_ARG_BY_REF:
- tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
tcg_debug_assert(data_type == TCG_TYPE_I128);
tcg_out_st(s, TCG_TYPE_I64,
HOST_BIG_ENDIAN ? ldst->datahi_reg : ldst->datalo_reg,
@@ -6678,12 +6541,6 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst,
g_assert_not_reached();
}
- if (TCG_TARGET_REG_BITS == 32) {
- /* Zero extend the address by loading a zero for the high part. */
- loc = &info->in[1 + !HOST_BIG_ENDIAN];
- tcg_out_helper_load_imm(s, loc->arg_slot, TCG_TYPE_I32, 0, parm);
- }
-
tcg_out_helper_load_common_args(s, ldst, parm, info, next_arg);
}
@@ -6791,7 +6648,6 @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb, uint64_t pc_start)
switch (opc) {
case INDEX_op_extrl_i64_i32:
- assert(TCG_TARGET_REG_BITS == 64);
/*
* If TCG_TYPE_I32 is represented in some canonical form,
* e.g. zero or sign-extended, then emit as a unary op.
--
2.43.0
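After this simplification of init_call_layout, i32 and i64 returns always occupy one host register, and only i128 still needs 128 / TCG_TARGET_REG_BITS registers. A tiny sketch of that arithmetic under the now-guaranteed 64-bit host assumption (function name is illustrative):

```c
#define TCG_TARGET_REG_BITS 64  /* the only case left after this series */

/* Number of host registers needed to return a value of 'bits' width. */
static int nr_out_regs(int bits)
{
    return bits <= TCG_TARGET_REG_BITS ? 1 : bits / TCG_TARGET_REG_BITS;
}
```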
* [PULL 32/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-internal.h
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/tcg-internal.h | 21 ++-------------------
1 file changed, 2 insertions(+), 19 deletions(-)
diff --git a/tcg/tcg-internal.h b/tcg/tcg-internal.h
index d6a12afe06..2cbfb5d5ca 100644
--- a/tcg/tcg-internal.h
+++ b/tcg/tcg-internal.h
@@ -54,31 +54,14 @@ static inline unsigned tcg_call_flags(TCGOp *op)
return tcg_call_info(op)->flags;
}
-#if TCG_TARGET_REG_BITS == 32
-static inline TCGv_i32 TCGV_LOW(TCGv_i64 t)
-{
- return temp_tcgv_i32(tcgv_i64_temp(t) + HOST_BIG_ENDIAN);
-}
-static inline TCGv_i32 TCGV_HIGH(TCGv_i64 t)
-{
- return temp_tcgv_i32(tcgv_i64_temp(t) + !HOST_BIG_ENDIAN);
-}
-#else
-TCGv_i32 TCGV_LOW(TCGv_i64) QEMU_ERROR("32-bit code path is reachable");
-TCGv_i32 TCGV_HIGH(TCGv_i64) QEMU_ERROR("32-bit code path is reachable");
-#endif
-
static inline TCGv_i64 TCGV128_LOW(TCGv_i128 t)
{
- /* For 32-bit, offset by 2, which may then have TCGV_{LOW,HIGH} applied. */
- int o = HOST_BIG_ENDIAN ? 64 / TCG_TARGET_REG_BITS : 0;
- return temp_tcgv_i64(tcgv_i128_temp(t) + o);
+ return temp_tcgv_i64(tcgv_i128_temp(t) + HOST_BIG_ENDIAN);
}
static inline TCGv_i64 TCGV128_HIGH(TCGv_i128 t)
{
- int o = HOST_BIG_ENDIAN ? 0 : 64 / TCG_TARGET_REG_BITS;
- return temp_tcgv_i64(tcgv_i128_temp(t) + o);
+ return temp_tcgv_i64(tcgv_i128_temp(t) + !HOST_BIG_ENDIAN);
}
bool tcg_target_has_memory_bswap(MemOp memop);
--
2.43.0
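With 64-bit hosts guaranteed, TCGV128_LOW/HIGH reduce to indexing a pair of adjacent 64-bit temps by host endianness. A standalone sketch of that indexing, hardcoding a little-endian host for illustration (names are invented, not QEMU API):

```c
#include <stdint.h>

/* Assume a little-endian host for this sketch; QEMU's HOST_BIG_ENDIAN
 * is 1 on big-endian hosts, flipping which part holds the low half. */
#define HOST_BIG_ENDIAN 0

/* A 128-bit quantity stored as two consecutive 64-bit parts. */
static uint64_t *i128_low(uint64_t parts[2])
{
    /* On big-endian hosts the low half is the second part. */
    return &parts[HOST_BIG_ENDIAN];
}

static uint64_t *i128_high(uint64_t parts[2])
{
    return &parts[!HOST_BIG_ENDIAN];
}
```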
* [PULL 33/54] tcg: Drop TCG_TARGET_REG_BITS test in tcg-has.h
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/tcg-has.h | 5 -----
1 file changed, 5 deletions(-)
diff --git a/tcg/tcg-has.h b/tcg/tcg-has.h
index 2fc0e50d20..27771dc7f0 100644
--- a/tcg/tcg-has.h
+++ b/tcg/tcg-has.h
@@ -9,11 +9,6 @@
#include "tcg-target-has.h"
-#if TCG_TARGET_REG_BITS == 32
-/* Turn some undef macros into false macros. */
-#define TCG_TARGET_HAS_extr_i64_i32 0
-#endif
-
#if !defined(TCG_TARGET_HAS_v64) \
&& !defined(TCG_TARGET_HAS_v128) \
&& !defined(TCG_TARGET_HAS_v256)
--
2.43.0
* [PULL 34/54] include/tcg: Drop TCG_TARGET_REG_BITS tests
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/tcg/tcg-op.h | 9 +++------
include/tcg/tcg-opc.h | 5 +----
include/tcg/tcg.h | 27 ++-------------------------
3 files changed, 6 insertions(+), 35 deletions(-)
diff --git a/include/tcg/tcg-op.h b/include/tcg/tcg-op.h
index 232733cb71..ee379994e7 100644
--- a/include/tcg/tcg-op.h
+++ b/include/tcg/tcg-op.h
@@ -31,8 +31,7 @@
#if TARGET_INSN_START_EXTRA_WORDS == 0
static inline void tcg_gen_insn_start(target_ulong pc)
{
- TCGOp *op = tcg_emit_op(INDEX_op_insn_start,
- INSN_START_WORDS * 64 / TCG_TARGET_REG_BITS);
+ TCGOp *op = tcg_emit_op(INDEX_op_insn_start, INSN_START_WORDS);
tcg_set_insn_start_param(op, 0, pc);
tcg_set_insn_start_param(op, 1, 0);
tcg_set_insn_start_param(op, 2, 0);
@@ -40,8 +39,7 @@ static inline void tcg_gen_insn_start(target_ulong pc)
#elif TARGET_INSN_START_EXTRA_WORDS == 1
static inline void tcg_gen_insn_start(target_ulong pc, target_ulong a1)
{
- TCGOp *op = tcg_emit_op(INDEX_op_insn_start,
- INSN_START_WORDS * 64 / TCG_TARGET_REG_BITS);
+ TCGOp *op = tcg_emit_op(INDEX_op_insn_start, INSN_START_WORDS);
tcg_set_insn_start_param(op, 0, pc);
tcg_set_insn_start_param(op, 1, a1);
tcg_set_insn_start_param(op, 2, 0);
@@ -50,8 +48,7 @@ static inline void tcg_gen_insn_start(target_ulong pc, target_ulong a1)
static inline void tcg_gen_insn_start(target_ulong pc, target_ulong a1,
target_ulong a2)
{
- TCGOp *op = tcg_emit_op(INDEX_op_insn_start,
- INSN_START_WORDS * 64 / TCG_TARGET_REG_BITS);
+ TCGOp *op = tcg_emit_op(INDEX_op_insn_start, INSN_START_WORDS);
tcg_set_insn_start_param(op, 0, pc);
tcg_set_insn_start_param(op, 1, a1);
tcg_set_insn_start_param(op, 2, a2);
diff --git a/include/tcg/tcg-opc.h b/include/tcg/tcg-opc.h
index 28806057c5..61f1c28858 100644
--- a/include/tcg/tcg-opc.h
+++ b/include/tcg/tcg-opc.h
@@ -109,9 +109,7 @@ DEF(extu_i32_i64, 1, 1, 0, 0)
DEF(extrl_i64_i32, 1, 1, 0, 0)
DEF(extrh_i64_i32, 1, 1, 0, 0)
-#define DATA64_ARGS (TCG_TARGET_REG_BITS == 64 ? 1 : 2)
-
-DEF(insn_start, 0, 0, DATA64_ARGS * INSN_START_WORDS, TCG_OPF_NOT_PRESENT)
+DEF(insn_start, 0, 0, INSN_START_WORDS, TCG_OPF_NOT_PRESENT)
DEF(exit_tb, 0, 0, 1, TCG_OPF_BB_EXIT | TCG_OPF_BB_END | TCG_OPF_NOT_PRESENT)
DEF(goto_tb, 0, 0, 1, TCG_OPF_BB_EXIT | TCG_OPF_BB_END | TCG_OPF_NOT_PRESENT)
@@ -184,5 +182,4 @@ DEF(last_generic, 0, 0, 0, TCG_OPF_NOT_PRESENT)
#include "tcg-target-opc.h.inc"
-#undef DATA64_ARGS
#undef DEF
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index 067150c542..60942ce05c 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -43,19 +43,10 @@
#define CPU_TEMP_BUF_NLONGS 128
#define TCG_STATIC_FRAME_SIZE (CPU_TEMP_BUF_NLONGS * sizeof(long))
-#if TCG_TARGET_REG_BITS == 32
-typedef int32_t tcg_target_long;
-typedef uint32_t tcg_target_ulong;
-#define TCG_PRIlx PRIx32
-#define TCG_PRIld PRId32
-#elif TCG_TARGET_REG_BITS == 64
typedef int64_t tcg_target_long;
typedef uint64_t tcg_target_ulong;
#define TCG_PRIlx PRIx64
#define TCG_PRIld PRId64
-#else
-#error unsupported
-#endif
#if TCG_TARGET_NB_REGS <= 32
typedef uint32_t TCGRegSet;
@@ -147,11 +138,7 @@ typedef enum TCGType {
#define TCG_TYPE_COUNT (TCG_TYPE_V256 + 1)
/* An alias for the size of the host register. */
-#if TCG_TARGET_REG_BITS == 32
- TCG_TYPE_REG = TCG_TYPE_I32,
-#else
TCG_TYPE_REG = TCG_TYPE_I64,
-#endif
/* An alias for the size of the native pointer. */
#if UINTPTR_MAX == UINT32_MAX
@@ -605,23 +592,13 @@ static inline void tcg_set_insn_param(TCGOp *op, unsigned arg, TCGArg v)
static inline uint64_t tcg_get_insn_start_param(TCGOp *op, unsigned arg)
{
tcg_debug_assert(arg < INSN_START_WORDS);
- if (TCG_TARGET_REG_BITS == 64) {
- return tcg_get_insn_param(op, arg);
- } else {
- return deposit64(tcg_get_insn_param(op, arg * 2), 32, 32,
- tcg_get_insn_param(op, arg * 2 + 1));
- }
+ return tcg_get_insn_param(op, arg);
}
static inline void tcg_set_insn_start_param(TCGOp *op, unsigned arg, uint64_t v)
{
tcg_debug_assert(arg < INSN_START_WORDS);
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_set_insn_param(op, arg, v);
- } else {
- tcg_set_insn_param(op, arg * 2, v);
- tcg_set_insn_param(op, arg * 2 + 1, v >> 32);
- }
+ tcg_set_insn_param(op, arg, v);
}
/* The last op that was emitted. */
--
2.43.0
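The removed 32-bit branch of tcg_{get,set}_insn_start_param split each 64-bit insn_start parameter across two 32-bit slots and reassembled it with deposit64. A minimal sketch of that round trip, with deposit64 reimplemented locally so the example is self-contained (real QEMU uses the version in qemu/bitops.h):

```c
#include <stdint.h>

/* Simplified deposit64: insert 'val' into 'base' at bits [pos, pos+len). */
static uint64_t deposit64(uint64_t base, int pos, int len, uint64_t val)
{
    uint64_t mask = (len == 64 ? ~0ULL : (1ULL << len) - 1) << pos;
    return (base & ~mask) | ((val << pos) & mask);
}

/* The old 32-bit host path: store v as two halves, then recombine. */
static uint64_t roundtrip(uint64_t v)
{
    uint32_t lo = (uint32_t)v;          /* tcg_set_insn_param(op, 2*arg, v) */
    uint32_t hi = (uint32_t)(v >> 32);  /* tcg_set_insn_param(op, 2*arg+1, v>>32) */
    return deposit64(lo, 32, 32, hi);
}
```

On a 64-bit host the parameter fits in one slot, so all of this collapses to a plain get/set, which is what the patch leaves behind.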
* [PULL 35/54] target/i386/tcg: Drop TCG_TARGET_REG_BITS test
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Thomas Huth, Pierrick Bouvier
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/i386/tcg/emit.c.inc | 37 +++++++++----------------------------
1 file changed, 9 insertions(+), 28 deletions(-)
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index 41bf047b8d..639a1eb638 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -2094,34 +2094,15 @@ static void gen_IMUL3(DisasContext *s, X86DecodedInsn *decode)
case MO_32:
#ifdef TARGET_X86_64
- if (TCG_TARGET_REG_BITS == 64) {
- /*
- * This produces fewer TCG ops, and better code if flags are needed,
- * but it requires a 64-bit multiply even if they are not. Use it
- * only if the target has 64-bits registers.
- *
- * s->T0 is already sign-extended.
- */
- tcg_gen_ext32s_tl(s->T1, s->T1);
- tcg_gen_mul_tl(s->T0, s->T0, s->T1);
- /* Compare the full result to the extension of the truncated result. */
- tcg_gen_ext32s_tl(s->T1, s->T0);
- cc_src_rhs = s->T0;
- } else {
- /* Variant that only needs a 32-bit widening multiply. */
- TCGv_i32 hi = tcg_temp_new_i32();
- TCGv_i32 lo = tcg_temp_new_i32();
- tcg_gen_trunc_tl_i32(lo, s->T0);
- tcg_gen_trunc_tl_i32(hi, s->T1);
- tcg_gen_muls2_i32(lo, hi, lo, hi);
- tcg_gen_extu_i32_tl(s->T0, lo);
-
- cc_src_rhs = tcg_temp_new();
- tcg_gen_extu_i32_tl(cc_src_rhs, hi);
- /* Compare the high part to the sign bit of the truncated result */
- tcg_gen_sari_i32(lo, lo, 31);
- tcg_gen_extu_i32_tl(s->T1, lo);
- }
+ /*
+ * This produces fewer TCG ops, and better code if flags are needed.
+ * s->T0 is already sign-extended.
+ */
+ tcg_gen_ext32s_tl(s->T1, s->T1);
+ tcg_gen_mul_tl(s->T0, s->T0, s->T1);
+ /* Compare the full result to the extension of the truncated result. */
+ tcg_gen_ext32s_tl(s->T1, s->T0);
+ cc_src_rhs = s->T0;
break;
case MO_64:
--
2.43.0
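The retained IMUL3 variant detects signed 32-bit multiply overflow by comparing the full 64-bit product with the sign extension of its truncated low 32 bits, which is what the cc_src_rhs comparison above encodes. A host-side sketch of that check (function name is illustrative):

```c
#include <stdint.h>

/* Multiply two signed 32-bit values; report whether IMUL would set OF/CF. */
static int32_t imul32(int32_t a, int32_t b, int *overflow)
{
    int64_t full = (int64_t)a * (int64_t)b;  /* tcg_gen_mul_tl */
    int32_t trunc = (int32_t)full;
    /* Overflow iff the full product differs from the sign extension
     * of the truncated result -- the tcg_gen_ext32s_tl comparison. */
    *overflow = (full != (int64_t)trunc);
    return trunc;
}
```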
* [PULL 36/54] target/riscv: Drop TCG_TARGET_REG_BITS test
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/riscv/insn_trans/trans_rvv.c.inc | 54 ++++++-------------------
1 file changed, 13 insertions(+), 41 deletions(-)
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index caefd38216..4df9a40b44 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -1181,60 +1181,32 @@ static bool ldst_whole_trans(uint32_t vd, uint32_t rs1, uint32_t nf,
* Update vstart with the number of processed elements.
* Use the helper function if either:
* - vstart is not 0.
- * - the target has 32 bit registers and we are loading/storing 64 bit long
- * elements. This is to ensure that we process every element with a single
- * memory instruction.
*/
- bool use_helper_fn = !(s->vstart_eq_zero) ||
- (TCG_TARGET_REG_BITS == 32 && log2_esz == 3);
+ bool use_helper_fn = !s->vstart_eq_zero;
if (!use_helper_fn) {
- TCGv addr = tcg_temp_new();
uint32_t size = s->cfg_ptr->vlenb * nf;
TCGv_i64 t8 = tcg_temp_new_i64();
- TCGv_i32 t4 = tcg_temp_new_i32();
MemOp atomicity = MO_ATOM_NONE;
if (log2_esz == 0) {
atomicity = MO_ATOM_NONE;
} else {
atomicity = MO_ATOM_IFALIGN_PAIR;
}
- if (TCG_TARGET_REG_BITS == 64) {
- for (int i = 0; i < size; i += 8) {
- addr = get_address(s, rs1, i);
- if (is_load) {
- tcg_gen_qemu_ld_i64(t8, addr, s->mem_idx,
- MO_LE | MO_64 | atomicity);
- tcg_gen_st_i64(t8, tcg_env, vreg_ofs(s, vd) + i);
- } else {
- tcg_gen_ld_i64(t8, tcg_env, vreg_ofs(s, vd) + i);
- tcg_gen_qemu_st_i64(t8, addr, s->mem_idx,
- MO_LE | MO_64 | atomicity);
- }
- if (i == size - 8) {
- tcg_gen_movi_tl(cpu_vstart, 0);
- } else {
- tcg_gen_addi_tl(cpu_vstart, cpu_vstart, 8 >> log2_esz);
- }
+ for (int i = 0; i < size; i += 8) {
+ TCGv addr = get_address(s, rs1, i);
+ if (is_load) {
+ tcg_gen_qemu_ld_i64(t8, addr, s->mem_idx, MO_LEUQ | atomicity);
+ tcg_gen_st_i64(t8, tcg_env, vreg_ofs(s, vd) + i);
+ } else {
+ tcg_gen_ld_i64(t8, tcg_env, vreg_ofs(s, vd) + i);
+ tcg_gen_qemu_st_i64(t8, addr, s->mem_idx, MO_LEUQ | atomicity);
}
- } else {
- for (int i = 0; i < size; i += 4) {
- addr = get_address(s, rs1, i);
- if (is_load) {
- tcg_gen_qemu_ld_i32(t4, addr, s->mem_idx,
- MO_LE | MO_32 | atomicity);
- tcg_gen_st_i32(t4, tcg_env, vreg_ofs(s, vd) + i);
- } else {
- tcg_gen_ld_i32(t4, tcg_env, vreg_ofs(s, vd) + i);
- tcg_gen_qemu_st_i32(t4, addr, s->mem_idx,
- MO_LE | MO_32 | atomicity);
- }
- if (i == size - 4) {
- tcg_gen_movi_tl(cpu_vstart, 0);
- } else {
- tcg_gen_addi_tl(cpu_vstart, cpu_vstart, 4 >> log2_esz);
- }
+ if (i == size - 8) {
+ tcg_gen_movi_tl(cpu_vstart, 0);
+ } else {
+ tcg_gen_addi_tl(cpu_vstart, cpu_vstart, 8 >> log2_esz);
}
}
} else {
--
2.43.0
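With 64-bit hosts guaranteed, the inline whole-register path always moves vlenb * nf bytes in 8-byte chunks, advancing vstart by 8 >> log2_esz elements per chunk and resetting it to zero on the final one. A host-side sketch of that copy/bookkeeping loop (names are illustrative; the real code emits TCG loads and stores rather than memcpy):

```c
#include <stdint.h>
#include <string.h>

/* Copy 'size' bytes (a multiple of 8) in 8-byte steps, updating *vstart
 * the way the generated code does. */
static void whole_reg_load(uint8_t *dst, const uint8_t *src,
                           uint32_t size, int log2_esz, uint64_t *vstart)
{
    for (uint32_t i = 0; i < size; i += 8) {
        memcpy(dst + i, src + i, 8);   /* tcg_gen_qemu_ld_i64 + tcg_gen_st_i64 */
        if (i == size - 8) {
            *vstart = 0;               /* tcg_gen_movi_tl(cpu_vstart, 0) */
        } else {
            *vstart += 8 >> log2_esz;  /* elements processed per chunk */
        }
    }
}
```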
* [PULL 37/54] accel/tcg/runtime: Remove 64-bit shift helpers
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
These were only required for some 32-bit hosts.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/tcg-runtime.h | 4 ----
accel/tcg/tcg-runtime.c | 15 ---------------
2 files changed, 19 deletions(-)
diff --git a/accel/tcg/tcg-runtime.h b/accel/tcg/tcg-runtime.h
index 8436599b9f..698e9baa29 100644
--- a/accel/tcg/tcg-runtime.h
+++ b/accel/tcg/tcg-runtime.h
@@ -8,10 +8,6 @@ DEF_HELPER_FLAGS_2(rem_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(divu_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(remu_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
-DEF_HELPER_FLAGS_2(shl_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
-DEF_HELPER_FLAGS_2(shr_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
-DEF_HELPER_FLAGS_2(sar_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
-
DEF_HELPER_FLAGS_2(mulsh_i64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_2(muluh_i64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
diff --git a/accel/tcg/tcg-runtime.c b/accel/tcg/tcg-runtime.c
index fa7ed9739c..f483c9c2ba 100644
--- a/accel/tcg/tcg-runtime.c
+++ b/accel/tcg/tcg-runtime.c
@@ -55,21 +55,6 @@ uint32_t HELPER(remu_i32)(uint32_t arg1, uint32_t arg2)
/* 64-bit helpers */
-uint64_t HELPER(shl_i64)(uint64_t arg1, uint64_t arg2)
-{
- return arg1 << arg2;
-}
-
-uint64_t HELPER(shr_i64)(uint64_t arg1, uint64_t arg2)
-{
- return arg1 >> arg2;
-}
-
-int64_t HELPER(sar_i64)(int64_t arg1, int64_t arg2)
-{
- return arg1 >> arg2;
-}
-
int64_t HELPER(div_i64)(int64_t arg1, int64_t arg2)
{
return arg1 / arg2;
--
2.43.0
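For reference, the deleted helpers implemented the three 64-bit shifts, which every remaining host can do in a single instruction. Their semantics were exactly the C operators below; note that a signed left operand makes `>>` the arithmetic shift on the hosts QEMU supports:

```c
#include <stdint.h>

static uint64_t shl64(uint64_t a, uint64_t b) { return a << b; }
static uint64_t shr64(uint64_t a, uint64_t b) { return a >> b; }  /* logical */
static int64_t  sar64(int64_t a, int64_t b)   { return a >> b; }  /* arithmetic */
```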
* [PULL 38/54] accel/tcg/runtime: Remove helper_nonatomic_cmpxchgo
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Pierrick Bouvier
This was only required for some 32-bit hosts.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/tcg-runtime.h | 3 ---
accel/tcg/atomic_common.c.inc | 20 --------------------
2 files changed, 23 deletions(-)
diff --git a/accel/tcg/tcg-runtime.h b/accel/tcg/tcg-runtime.h
index 698e9baa29..dc89155c0f 100644
--- a/accel/tcg/tcg-runtime.h
+++ b/accel/tcg/tcg-runtime.h
@@ -73,9 +73,6 @@ DEF_HELPER_FLAGS_4(atomic_fetch_oro_le, TCG_CALL_NO_WG,
i128, env, i64, i128, i32)
#endif
-DEF_HELPER_FLAGS_5(nonatomic_cmpxchgo, TCG_CALL_NO_WG,
- i128, env, i64, i128, i128, i32)
-
#ifdef CONFIG_ATOMIC64
#define GEN_ATOMIC_HELPERS(NAME) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), b), \
diff --git a/accel/tcg/atomic_common.c.inc b/accel/tcg/atomic_common.c.inc
index bca93a0ac4..1ff80d19fe 100644
--- a/accel/tcg/atomic_common.c.inc
+++ b/accel/tcg/atomic_common.c.inc
@@ -59,26 +59,6 @@ CMPXCHG_HELPER(cmpxchgo_le, Int128)
#undef CMPXCHG_HELPER
-Int128 HELPER(nonatomic_cmpxchgo)(CPUArchState *env, uint64_t addr,
- Int128 cmpv, Int128 newv, uint32_t oi)
-{
-#if TCG_TARGET_REG_BITS == 32
- uintptr_t ra = GETPC();
- Int128 oldv;
-
- oldv = cpu_ld16_mmu(env, addr, oi, ra);
- if (int128_eq(oldv, cmpv)) {
- cpu_st16_mmu(env, addr, newv, oi, ra);
- } else {
- /* Even with comparison failure, still need a write cycle. */
- probe_write(env, addr, 16, get_mmuidx(oi), ra);
- }
- return oldv;
-#else
- g_assert_not_reached();
-#endif
-}
-
#define ATOMIC_HELPER(OP, TYPE) \
TYPE HELPER(glue(atomic_,OP))(CPUArchState *env, uint64_t addr, \
TYPE val, uint32_t oi) \
--
2.43.0
* [PULL 39/54] tcg: Unconditionally define atomic64 helpers
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (37 preceding siblings ...)
2026-01-18 22:03 ` [PULL 38/54] accel/tcg/runtime: Remove helper_nonatomic_cmpxchgo Richard Henderson
@ 2026-01-18 22:03 ` Richard Henderson
2026-01-18 22:04 ` [PULL 40/54] accel/tcg: Drop CONFIG_ATOMIC64 checks from ldst_atomicity.c.inc Richard Henderson
` (15 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:03 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
CONFIG_ATOMIC64 is a configuration knob that is only ever unset on
32-bit hosts; now that those are gone, the atomic64 helpers can be
defined unconditionally.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/tcg-runtime.h | 16 ----------------
include/accel/tcg/cpu-ldst-common.h | 9 ---------
accel/tcg/cputlb.c | 2 --
accel/tcg/user-exec.c | 2 --
tcg/tcg-op-ldst.c | 17 ++++++-----------
accel/tcg/atomic_common.c.inc | 12 ------------
6 files changed, 6 insertions(+), 52 deletions(-)
diff --git a/accel/tcg/tcg-runtime.h b/accel/tcg/tcg-runtime.h
index dc89155c0f..0b832176b3 100644
--- a/accel/tcg/tcg-runtime.h
+++ b/accel/tcg/tcg-runtime.h
@@ -48,12 +48,10 @@ DEF_HELPER_FLAGS_5(atomic_cmpxchgl_be, TCG_CALL_NO_WG,
i32, env, i64, i32, i32, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgl_le, TCG_CALL_NO_WG,
i32, env, i64, i32, i32, i32)
-#ifdef CONFIG_ATOMIC64
DEF_HELPER_FLAGS_5(atomic_cmpxchgq_be, TCG_CALL_NO_WG,
i64, env, i64, i64, i64, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgq_le, TCG_CALL_NO_WG,
i64, env, i64, i64, i64, i32)
-#endif
#if HAVE_CMPXCHG128
DEF_HELPER_FLAGS_5(atomic_cmpxchgo_be, TCG_CALL_NO_WG,
i128, env, i64, i128, i128, i32)
@@ -73,7 +71,6 @@ DEF_HELPER_FLAGS_4(atomic_fetch_oro_le, TCG_CALL_NO_WG,
i128, env, i64, i128, i32)
#endif
-#ifdef CONFIG_ATOMIC64
#define GEN_ATOMIC_HELPERS(NAME) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), b), \
TCG_CALL_NO_WG, i32, env, i64, i32, i32) \
@@ -89,19 +86,6 @@ DEF_HELPER_FLAGS_4(atomic_fetch_oro_le, TCG_CALL_NO_WG,
TCG_CALL_NO_WG, i64, env, i64, i64, i32) \
DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), q_be), \
TCG_CALL_NO_WG, i64, env, i64, i64, i32)
-#else
-#define GEN_ATOMIC_HELPERS(NAME) \
- DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), b), \
- TCG_CALL_NO_WG, i32, env, i64, i32, i32) \
- DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_le), \
- TCG_CALL_NO_WG, i32, env, i64, i32, i32) \
- DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), w_be), \
- TCG_CALL_NO_WG, i32, env, i64, i32, i32) \
- DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_le), \
- TCG_CALL_NO_WG, i32, env, i64, i32, i32) \
- DEF_HELPER_FLAGS_4(glue(glue(atomic_, NAME), l_be), \
- TCG_CALL_NO_WG, i32, env, i64, i32, i32)
-#endif /* CONFIG_ATOMIC64 */
GEN_ATOMIC_HELPERS(fetch_add)
GEN_ATOMIC_HELPERS(fetch_and)
diff --git a/include/accel/tcg/cpu-ldst-common.h b/include/accel/tcg/cpu-ldst-common.h
index 17a3250ded..f12be8cfb7 100644
--- a/include/accel/tcg/cpu-ldst-common.h
+++ b/include/accel/tcg/cpu-ldst-common.h
@@ -60,7 +60,6 @@ TYPE cpu_atomic_ ## NAME ## SUFFIX ## _mmu \
(CPUArchState *env, vaddr addr, TYPE val, \
MemOpIdx oi, uintptr_t retaddr);
-#ifdef CONFIG_ATOMIC64
#define GEN_ATOMIC_HELPER_ALL(NAME) \
GEN_ATOMIC_HELPER(NAME, uint32_t, b) \
GEN_ATOMIC_HELPER(NAME, uint32_t, w_le) \
@@ -69,14 +68,6 @@ TYPE cpu_atomic_ ## NAME ## SUFFIX ## _mmu \
GEN_ATOMIC_HELPER(NAME, uint32_t, l_be) \
GEN_ATOMIC_HELPER(NAME, uint64_t, q_le) \
GEN_ATOMIC_HELPER(NAME, uint64_t, q_be)
-#else
-#define GEN_ATOMIC_HELPER_ALL(NAME) \
- GEN_ATOMIC_HELPER(NAME, uint32_t, b) \
- GEN_ATOMIC_HELPER(NAME, uint32_t, w_le) \
- GEN_ATOMIC_HELPER(NAME, uint32_t, w_be) \
- GEN_ATOMIC_HELPER(NAME, uint32_t, l_le) \
- GEN_ATOMIC_HELPER(NAME, uint32_t, l_be)
-#endif
GEN_ATOMIC_HELPER_ALL(fetch_add)
GEN_ATOMIC_HELPER_ALL(fetch_sub)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index c30073326a..a6774083b0 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -2886,10 +2886,8 @@ static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
#define DATA_SIZE 4
#include "atomic_template.h"
-#ifdef CONFIG_ATOMIC64
#define DATA_SIZE 8
#include "atomic_template.h"
-#endif
#if defined(CONFIG_ATOMIC128) || HAVE_CMPXCHG128
#define DATA_SIZE 16
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 1800dffa63..ddbdc0432d 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -1258,10 +1258,8 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
#define DATA_SIZE 4
#include "atomic_template.h"
-#ifdef CONFIG_ATOMIC64
#define DATA_SIZE 8
#include "atomic_template.h"
-#endif
#if defined(CONFIG_ATOMIC128) || HAVE_CMPXCHG128
#define DATA_SIZE 16
diff --git a/tcg/tcg-op-ldst.c b/tcg/tcg-op-ldst.c
index 55bfbf3a20..354d9968f9 100644
--- a/tcg/tcg-op-ldst.c
+++ b/tcg/tcg-op-ldst.c
@@ -825,11 +825,6 @@ typedef void (*gen_atomic_op_i64)(TCGv_i64, TCGv_env, TCGv_i64,
typedef void (*gen_atomic_op_i128)(TCGv_i128, TCGv_env, TCGv_i64,
TCGv_i128, TCGv_i32);
-#ifdef CONFIG_ATOMIC64
-# define WITH_ATOMIC64(X) X,
-#else
-# define WITH_ATOMIC64(X)
-#endif
#if HAVE_CMPXCHG128
# define WITH_ATOMIC128(X) X,
#else
@@ -842,8 +837,8 @@ static void * const table_cmpxchg[(MO_SIZE | MO_BSWAP) + 1] = {
[MO_16 | MO_BE] = gen_helper_atomic_cmpxchgw_be,
[MO_32 | MO_LE] = gen_helper_atomic_cmpxchgl_le,
[MO_32 | MO_BE] = gen_helper_atomic_cmpxchgl_be,
- WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_cmpxchgq_le)
- WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_cmpxchgq_be)
+ [MO_64 | MO_LE] = gen_helper_atomic_cmpxchgq_le,
+ [MO_64 | MO_BE] = gen_helper_atomic_cmpxchgq_be,
WITH_ATOMIC128([MO_128 | MO_LE] = gen_helper_atomic_cmpxchgo_le)
WITH_ATOMIC128([MO_128 | MO_BE] = gen_helper_atomic_cmpxchgo_be)
};
@@ -1235,8 +1230,8 @@ static void * const table_##NAME[(MO_SIZE | MO_BSWAP) + 1] = { \
[MO_16 | MO_BE] = gen_helper_atomic_##NAME##w_be, \
[MO_32 | MO_LE] = gen_helper_atomic_##NAME##l_le, \
[MO_32 | MO_BE] = gen_helper_atomic_##NAME##l_be, \
- WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le) \
- WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be) \
+ [MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le, \
+ [MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be, \
WITH_ATOMIC128([MO_128 | MO_LE] = gen_helper_atomic_##NAME##o_le) \
WITH_ATOMIC128([MO_128 | MO_BE] = gen_helper_atomic_##NAME##o_be) \
}; \
@@ -1287,8 +1282,8 @@ static void * const table_##NAME[(MO_SIZE | MO_BSWAP) + 1] = { \
[MO_16 | MO_BE] = gen_helper_atomic_##NAME##w_be, \
[MO_32 | MO_LE] = gen_helper_atomic_##NAME##l_le, \
[MO_32 | MO_BE] = gen_helper_atomic_##NAME##l_be, \
- WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le) \
- WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be) \
+ [MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le, \
+ [MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be, \
}; \
void tcg_gen_atomic_##NAME##_i32_chk(TCGv_i32 ret, TCGTemp *addr, \
TCGv_i32 val, TCGArg idx, \
diff --git a/accel/tcg/atomic_common.c.inc b/accel/tcg/atomic_common.c.inc
index 1ff80d19fe..7d779dd51d 100644
--- a/accel/tcg/atomic_common.c.inc
+++ b/accel/tcg/atomic_common.c.inc
@@ -46,11 +46,8 @@ CMPXCHG_HELPER(cmpxchgw_be, uint32_t)
CMPXCHG_HELPER(cmpxchgw_le, uint32_t)
CMPXCHG_HELPER(cmpxchgl_be, uint32_t)
CMPXCHG_HELPER(cmpxchgl_le, uint32_t)
-
-#ifdef CONFIG_ATOMIC64
CMPXCHG_HELPER(cmpxchgq_be, uint64_t)
CMPXCHG_HELPER(cmpxchgq_le, uint64_t)
-#endif
#if HAVE_CMPXCHG128
CMPXCHG_HELPER(cmpxchgo_be, Int128)
@@ -64,7 +61,6 @@ CMPXCHG_HELPER(cmpxchgo_le, Int128)
TYPE val, uint32_t oi) \
{ return glue(glue(cpu_atomic_,OP),_mmu)(env, addr, val, oi, GETPC()); }
-#ifdef CONFIG_ATOMIC64
#define GEN_ATOMIC_HELPERS(OP) \
ATOMIC_HELPER(glue(OP,b), uint32_t) \
ATOMIC_HELPER(glue(OP,w_be), uint32_t) \
@@ -73,14 +69,6 @@ CMPXCHG_HELPER(cmpxchgo_le, Int128)
ATOMIC_HELPER(glue(OP,l_le), uint32_t) \
ATOMIC_HELPER(glue(OP,q_be), uint64_t) \
ATOMIC_HELPER(glue(OP,q_le), uint64_t)
-#else
-#define GEN_ATOMIC_HELPERS(OP) \
- ATOMIC_HELPER(glue(OP,b), uint32_t) \
- ATOMIC_HELPER(glue(OP,w_be), uint32_t) \
- ATOMIC_HELPER(glue(OP,w_le), uint32_t) \
- ATOMIC_HELPER(glue(OP,l_be), uint32_t) \
- ATOMIC_HELPER(glue(OP,l_le), uint32_t)
-#endif
GEN_ATOMIC_HELPERS(fetch_add)
GEN_ATOMIC_HELPERS(fetch_and)
--
2.43.0
^ permalink raw reply related [flat|nested] 57+ messages in thread
* [PULL 40/54] accel/tcg: Drop CONFIG_ATOMIC64 checks from ldst_atomicity.c.inc
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (38 preceding siblings ...)
2026-01-18 22:03 ` [PULL 39/54] tcg: Unconditionally define atomic64 helpers Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 41/54] accel/tcg: Drop CONFIG_ATOMIC64 test from translator.c Richard Henderson
` (14 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
CONFIG_ATOMIC64 is a configuration knob that is only ever unset on
32-bit hosts, so 8-byte host atomics are now always available.
This allows removal of functions like load_atomic8_or_exit
and simplification of load_atom_extract_al8_or_exit into
load_atom_extract_al8.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 35 +-------
accel/tcg/ldst_atomicity.c.inc | 149 +++++----------------------------
2 files changed, 24 insertions(+), 160 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index a6774083b0..6900a12682 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -2080,25 +2080,6 @@ static uint64_t do_ld_parts_beN(MMULookupPageData *p, uint64_t ret_be)
return ret_be;
}
-/**
- * do_ld_parts_be4
- * @p: translation parameters
- * @ret_be: accumulated data
- *
- * As do_ld_bytes_beN, but with one atomic load.
- * Four aligned bytes are guaranteed to cover the load.
- */
-static uint64_t do_ld_whole_be4(MMULookupPageData *p, uint64_t ret_be)
-{
- int o = p->addr & 3;
- uint32_t x = load_atomic4(p->haddr - o);
-
- x = cpu_to_be32(x);
- x <<= o * 8;
- x >>= (4 - p->size) * 8;
- return (ret_be << (p->size * 8)) | x;
-}
-
/**
* do_ld_parts_be8
* @p: translation parameters
@@ -2111,7 +2092,7 @@ static uint64_t do_ld_whole_be8(CPUState *cpu, uintptr_t ra,
MMULookupPageData *p, uint64_t ret_be)
{
int o = p->addr & 7;
- uint64_t x = load_atomic8_or_exit(cpu, ra, p->haddr - o);
+ uint64_t x = load_atomic8(p->haddr - o);
x = cpu_to_be64(x);
x <<= o * 8;
@@ -2176,11 +2157,7 @@ static uint64_t do_ld_beN(CPUState *cpu, MMULookupPageData *p,
if (atom == MO_ATOM_IFALIGN_PAIR
? p->size == half_size
: p->size >= half_size) {
- if (!HAVE_al8_fast && p->size < 4) {
- return do_ld_whole_be4(p, ret_be);
- } else {
- return do_ld_whole_be8(cpu, ra, p, ret_be);
- }
+ return do_ld_whole_be8(cpu, ra, p, ret_be);
}
/* fall through */
@@ -2586,13 +2563,7 @@ static uint64_t do_st_leN(CPUState *cpu, MMULookupPageData *p,
if (atom == MO_ATOM_IFALIGN_PAIR
? p->size == half_size
: p->size >= half_size) {
- if (!HAVE_al8_fast && p->size <= 4) {
- return store_whole_le4(p->haddr, p->size, val_le);
- } else if (HAVE_al8) {
- return store_whole_le8(p->haddr, p->size, val_le);
- } else {
- cpu_loop_exit_atomic(cpu, ra);
- }
+ return store_whole_le8(p->haddr, p->size, val_le);
}
/* fall through */
diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc
index c735add261..f5b8289009 100644
--- a/accel/tcg/ldst_atomicity.c.inc
+++ b/accel/tcg/ldst_atomicity.c.inc
@@ -12,13 +12,6 @@
#include "host/load-extract-al16-al8.h.inc"
#include "host/store-insert-al16.h.inc"
-#ifdef CONFIG_ATOMIC64
-# define HAVE_al8 true
-#else
-# define HAVE_al8 false
-#endif
-#define HAVE_al8_fast (ATOMIC_REG_SIZE >= 8)
-
/**
* required_atomicity:
*
@@ -132,44 +125,7 @@ static inline uint32_t load_atomic4(void *pv)
static inline uint64_t load_atomic8(void *pv)
{
uint64_t *p = __builtin_assume_aligned(pv, 8);
-
- qemu_build_assert(HAVE_al8);
- return qatomic_read__nocheck(p);
-}
-
-/**
- * load_atomic8_or_exit:
- * @cpu: generic cpu state
- * @ra: host unwind address
- * @pv: host address
- *
- * Atomically load 8 aligned bytes from @pv.
- * If this is not possible, longjmp out to restart serially.
- */
-static uint64_t load_atomic8_or_exit(CPUState *cpu, uintptr_t ra, void *pv)
-{
- if (HAVE_al8) {
- return load_atomic8(pv);
- }
-
-#ifdef CONFIG_USER_ONLY
- /*
- * If the page is not writable, then assume the value is immutable
- * and requires no locking. This ignores the case of MAP_SHARED with
- * another process, because the fallback start_exclusive solution
- * provides no protection across processes.
- */
- WITH_MMAP_LOCK_GUARD() {
- if (!page_check_range(h2g(pv), 8, PAGE_WRITE_ORG)) {
- uint64_t *p = __builtin_assume_aligned(pv, 8);
- return *p;
- }
- }
-#endif
-
- /* Ultimate fallback: re-execute in serial context. */
- trace_load_atom8_or_exit_fallback(ra);
- cpu_loop_exit_atomic(cpu, ra);
+ return qatomic_read(p);
}
/**
@@ -264,9 +220,7 @@ static uint64_t load_atom_extract_al8x2(void *pv)
}
/**
- * load_atom_extract_al8_or_exit:
- * @cpu: generic cpu state
- * @ra: host unwind address
+ * load_atom_extract_al8
* @pv: host address
* @s: object size in bytes, @s <= 4.
*
@@ -275,15 +229,14 @@ static uint64_t load_atom_extract_al8x2(void *pv)
* 8-byte load and extract.
* The value is returned in the low bits of a uint32_t.
*/
-static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
- void *pv, int s)
+static uint32_t load_atom_extract_al8(void *pv, int s)
{
uintptr_t pi = (uintptr_t)pv;
int o = pi & 7;
int shr = (HOST_BIG_ENDIAN ? 8 - s - o : o) * 8;
pv = (void *)(pi & ~7);
- return load_atomic8_or_exit(cpu, ra, pv) >> shr;
+ return load_atomic8(pv) >> shr;
}
/**
@@ -297,7 +250,7 @@ static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra,
* and p % 16 + s > 8. I.e. does not cross a 16-byte
* boundary, but *does* cross an 8-byte boundary.
* This is the slow version, so we must have eliminated
- * any faster load_atom_extract_al8_or_exit case.
+ * any faster load_atom_extract_al8 case.
*
* If this is not possible, longjmp out to restart serially.
*/
@@ -374,21 +327,6 @@ static inline uint64_t load_atom_8_by_4(void *pv)
}
}
-/**
- * load_atom_8_by_8_or_4:
- * @pv: host address
- *
- * Load 8 bytes from aligned @pv, with at least 4-byte atomicity.
- */
-static inline uint64_t load_atom_8_by_8_or_4(void *pv)
-{
- if (HAVE_al8_fast) {
- return load_atomic8(pv);
- } else {
- return load_atom_8_by_4(pv);
- }
-}
-
/**
* load_atom_2:
* @p: host address
@@ -418,12 +356,8 @@ static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra,
return lduw_he_p(pv);
case MO_16:
/* The only case remaining is MO_ATOM_WITHIN16. */
- if (!HAVE_al8_fast && (pi & 3) == 1) {
- /* Big or little endian, we want the middle two bytes. */
- return load_atomic4(pv - 1) >> 8;
- }
if ((pi & 15) != 7) {
- return load_atom_extract_al8_or_exit(cpu, ra, pv, 2);
+ return load_atom_extract_al8(pv, 2);
}
return load_atom_extract_al16_or_exit(cpu, ra, pv, 2);
default:
@@ -468,7 +402,7 @@ static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra,
return load_atom_extract_al4x2(pv);
case MO_32:
if (!(pi & 4)) {
- return load_atom_extract_al8_or_exit(cpu, ra, pv, 4);
+ return load_atom_extract_al8(pv, 4);
}
return load_atom_extract_al16_or_exit(cpu, ra, pv, 4);
default:
@@ -493,7 +427,7 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
* If the host does not support 8-byte atomics, wait until we have
* examined the atomicity parameters below.
*/
- if (HAVE_al8 && likely((pi & 7) == 0)) {
+ if (likely((pi & 7) == 0)) {
return load_atomic8(pv);
}
if (HAVE_ATOMIC128_RO) {
@@ -502,30 +436,9 @@ static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra,
atmax = required_atomicity(cpu, pi, memop);
if (atmax == MO_64) {
- if (!HAVE_al8 && (pi & 7) == 0) {
- load_atomic8_or_exit(cpu, ra, pv);
- }
return load_atom_extract_al16_or_exit(cpu, ra, pv, 8);
}
- if (HAVE_al8_fast) {
- return load_atom_extract_al8x2(pv);
- }
- switch (atmax) {
- case MO_8:
- return ldq_he_p(pv);
- case MO_16:
- return load_atom_8_by_2(pv);
- case MO_32:
- return load_atom_8_by_4(pv);
- case -MO_32:
- if (HAVE_al8) {
- return load_atom_extract_al8x2(pv);
- }
- trace_load_atom8_fallback(memop, ra);
- cpu_loop_exit_atomic(cpu, ra);
- default:
- g_assert_not_reached();
- }
+ return load_atom_extract_al8x2(pv);
}
/**
@@ -565,18 +478,10 @@ static Int128 load_atom_16(CPUState *cpu, uintptr_t ra,
b = load_atom_8_by_4(pv + 8);
break;
case MO_64:
- if (!HAVE_al8) {
- trace_load_atom16_fallback(memop, ra);
- cpu_loop_exit_atomic(cpu, ra);
- }
a = load_atomic8(pv);
b = load_atomic8(pv + 8);
break;
case -MO_64:
- if (!HAVE_al8) {
- trace_load_atom16_fallback(memop, ra);
- cpu_loop_exit_atomic(cpu, ra);
- }
a = load_atom_extract_al8x2(pv);
b = load_atom_extract_al8x2(pv + 8);
break;
@@ -624,9 +529,7 @@ static inline void store_atomic4(void *pv, uint32_t val)
static inline void store_atomic8(void *pv, uint64_t val)
{
uint64_t *p = __builtin_assume_aligned(pv, 8);
-
- qemu_build_assert(HAVE_al8);
- qatomic_set__nocheck(p, val);
+ qatomic_set(p, val);
}
/**
@@ -688,9 +591,8 @@ static void store_atom_insert_al8(uint64_t *p, uint64_t val, uint64_t msk)
{
uint64_t old, new;
- qemu_build_assert(HAVE_al8);
p = __builtin_assume_aligned(p, 8);
- old = qatomic_read__nocheck(p);
+ old = qatomic_read(p);
do {
new = (old & ~msk) | val;
} while (!__atomic_compare_exchange_n(p, &old, new, true,
@@ -802,7 +704,6 @@ static uint64_t store_whole_le8(void *pv, int size, uint64_t val_le)
uint64_t m = MAKE_64BIT_MASK(0, sz);
uint64_t v;
- qemu_build_assert(HAVE_al8);
if (HOST_BIG_ENDIAN) {
v = bswap64(val_le) >> sh;
m = bswap64(m) >> sh;
@@ -887,10 +788,8 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
store_atom_insert_al4(pv - 1, (uint32_t)val << 8, MAKE_64BIT_MASK(8, 16));
return;
} else if ((pi & 7) == 3) {
- if (HAVE_al8) {
- store_atom_insert_al8(pv - 3, (uint64_t)val << 24, MAKE_64BIT_MASK(24, 16));
- return;
- }
+ store_atom_insert_al8(pv - 3, (uint64_t)val << 24, MAKE_64BIT_MASK(24, 16));
+ return;
} else if ((pi & 15) == 7) {
if (HAVE_CMPXCHG128) {
Int128 v = int128_lshift(int128_make64(val), 56);
@@ -957,10 +856,8 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
return;
case MO_32:
if ((pi & 7) < 4) {
- if (HAVE_al8) {
- store_whole_le8(pv, 4, cpu_to_le32(val));
- return;
- }
+ store_whole_le8(pv, 4, cpu_to_le32(val));
+ return;
} else {
if (HAVE_CMPXCHG128) {
store_whole_le16(pv, 4, int128_make64(cpu_to_le32(val)));
@@ -988,7 +885,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
uintptr_t pi = (uintptr_t)pv;
int atmax;
- if (HAVE_al8 && likely((pi & 7) == 0)) {
+ if (likely((pi & 7) == 0)) {
store_atomic8(pv, val);
return;
}
@@ -1005,7 +902,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
store_atom_8_by_4(pv, val);
return;
case -MO_32:
- if (HAVE_al8) {
+ {
uint64_t val_le = cpu_to_le64(val);
int s2 = pi & 7;
int s1 = 8 - s2;
@@ -1024,9 +921,8 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
default:
g_assert_not_reached();
}
- return;
}
- break;
+ return;
case MO_64:
if (HAVE_CMPXCHG128) {
store_whole_le16(pv, 8, int128_make64(cpu_to_le64(val)));
@@ -1077,12 +973,9 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
store_atom_8_by_4(pv + 8, b);
return;
case MO_64:
- if (HAVE_al8) {
- store_atomic8(pv, a);
- store_atomic8(pv + 8, b);
- return;
- }
- break;
+ store_atomic8(pv, a);
+ store_atomic8(pv + 8, b);
+ return;
case -MO_64:
if (HAVE_CMPXCHG128) {
uint64_t val_le;
--
2.43.0
^ permalink raw reply related [flat|nested] 57+ messages in thread
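The load_atom_extract_al8 rework above boils down to one aligned 8-byte load plus shift arithmetic: a small value that does not cross an 8-byte boundary can be fished out of the enclosing aligned quadword. A standalone sketch of that arithmetic, with a plain memcpy standing in for the real atomic load and BIG_ENDIAN_HOST for QEMU's HOST_BIG_ENDIAN (both substitutions are for illustration only):

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for QEMU's HOST_BIG_ENDIAN. */
#define BIG_ENDIAN_HOST (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)

/*
 * Extract an s-byte (s <= 4) value at pv, which must not cross an
 * 8-byte boundary, using a single aligned 8-byte load and a shift.
 * The value lands in the low bits of the result; higher bits may
 * contain neighbouring bytes, as in the real helper.
 */
static uint32_t extract_al8(const void *pv, int s)
{
    uintptr_t pi = (uintptr_t)pv;
    int o = pi & 7;                                   /* offset in the quadword */
    int shr = (BIG_ENDIAN_HOST ? 8 - s - o : o) * 8;  /* bits below the value */
    uint64_t whole;

    memcpy(&whole, (const void *)(pi & ~(uintptr_t)7), 8);
    return whole >> shr;
}
```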
* [PULL 41/54] accel/tcg: Drop CONFIG_ATOMIC64 test from translator.c
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (39 preceding siblings ...)
2026-01-18 22:04 ` [PULL 40/54] accel/tcg: Drop CONFIG_ATOMIC64 checks from ldst_atomicity.c.inc Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 42/54] linux-user/arm: Drop CONFIG_ATOMIC64 test Richard Henderson
` (13 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/translator.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/accel/tcg/translator.c b/accel/tcg/translator.c
index 034f2f359e..f3eddcbb2e 100644
--- a/accel/tcg/translator.c
+++ b/accel/tcg/translator.c
@@ -352,15 +352,13 @@ static bool translator_ld(CPUArchState *env, DisasContextBase *db,
return true;
}
break;
-#ifdef CONFIG_ATOMIC64
case 8:
if (QEMU_IS_ALIGNED(pc, 8)) {
- uint64_t t = qatomic_read__nocheck((uint64_t *)host);
+ uint64_t t = qatomic_read((uint64_t *)host);
stq_he_p(dest, t);
return true;
}
break;
-#endif
}
/* Unaligned or partial read from the second page is not atomic. */
memcpy(dest, host, len);
--
2.43.0
^ permalink raw reply related [flat|nested] 57+ messages in thread
* [PULL 42/54] linux-user/arm: Drop CONFIG_ATOMIC64 test
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (40 preceding siblings ...)
2026-01-18 22:04 ` [PULL 41/54] accel/tcg: Drop CONFIG_ATOMIC64 test from translator.c Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 43/54] linux-user/hppa: " Richard Henderson
` (12 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/arm/cpu_loop.c | 19 +------------------
1 file changed, 1 insertion(+), 18 deletions(-)
diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
index cd89b7d6f5..40aefc4c1d 100644
--- a/linux-user/arm/cpu_loop.c
+++ b/linux-user/arm/cpu_loop.c
@@ -146,25 +146,8 @@ static void arm_kernel_cmpxchg64_helper(CPUARMState *env)
/* Swap if host != guest endianness, for the host cmpxchg below */
oldval = tswap64(oldval);
newval = tswap64(newval);
-
-#ifdef CONFIG_ATOMIC64
- val = qatomic_cmpxchg__nocheck(host_addr, oldval, newval);
+ val = qatomic_cmpxchg(host_addr, oldval, newval);
cpsr = (val == oldval) * CPSR_C;
-#else
- /*
- * This only works between threads, not between processes, but since
- * the host has no 64-bit cmpxchg, it is the best that we can do.
- */
- start_exclusive();
- val = *host_addr;
- if (val == oldval) {
- *host_addr = newval;
- cpsr = CPSR_C;
- } else {
- cpsr = 0;
- }
- end_exclusive();
-#endif
mmap_unlock();
cpsr_write(env, cpsr, CPSR_C, CPSRWriteByInstr);
--
2.43.0
^ permalink raw reply related [flat|nested] 57+ messages in thread
* [PULL 43/54] linux-user/hppa: Drop CONFIG_ATOMIC64 test
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (41 preceding siblings ...)
2026-01-18 22:04 ` [PULL 42/54] linux-user/arm: Drop CONFIG_ATOMIC64 test Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 44/54] target/arm: Drop CONFIG_ATOMIC64 tests Richard Henderson
` (11 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/hppa/cpu_loop.c | 14 +-------------
1 file changed, 1 insertion(+), 13 deletions(-)
diff --git a/linux-user/hppa/cpu_loop.c b/linux-user/hppa/cpu_loop.c
index 356cb48acc..e5c0f52d94 100644
--- a/linux-user/hppa/cpu_loop.c
+++ b/linux-user/hppa/cpu_loop.c
@@ -83,20 +83,8 @@ static abi_ulong hppa_lws(CPUHPPAState *env)
uint64_t o64, n64, r64;
o64 = *(uint64_t *)g2h(cs, old);
n64 = *(uint64_t *)g2h(cs, new);
-#ifdef CONFIG_ATOMIC64
- r64 = qatomic_cmpxchg__nocheck((aligned_uint64_t *)g2h(cs, addr),
- o64, n64);
+ r64 = qatomic_cmpxchg((aligned_uint64_t *)g2h(cs, addr), o64, n64);
ret = r64 != o64;
-#else
- start_exclusive();
- r64 = *(uint64_t *)g2h(cs, addr);
- ret = 1;
- if (r64 == o64) {
- *(uint64_t *)g2h(cs, addr) = n64;
- ret = 0;
- }
- end_exclusive();
-#endif
}
break;
default:
--
2.43.0
^ permalink raw reply related [flat|nested] 57+ messages in thread
* [PULL 44/54] target/arm: Drop CONFIG_ATOMIC64 tests
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (42 preceding siblings ...)
2026-01-18 22:04 ` [PULL 43/54] linux-user/hppa: " Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 45/54] target/hppa: Drop CONFIG_ATOMIC64 test Richard Henderson
` (10 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/ptw.c | 18 +++++-------------
1 file changed, 5 insertions(+), 13 deletions(-)
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index a986dc66f6..8b8dc09e72 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -757,20 +757,12 @@ static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw,
if (likely(host)) {
/* Page tables are in RAM, and we have the host address. */
-#ifdef CONFIG_ATOMIC64
- data = qatomic_read__nocheck((uint64_t *)host);
+ data = qatomic_read((uint64_t *)host);
if (ptw->out_be) {
data = be64_to_cpu(data);
} else {
data = le64_to_cpu(data);
}
-#else
- if (ptw->out_be) {
- data = ldq_be_p(host);
- } else {
- data = ldq_le_p(host);
- }
-#endif
} else {
/* Page tables are in MMIO. */
MemTxAttrs attrs = {
@@ -798,7 +790,7 @@ static uint64_t arm_casq_ptw(CPUARMState *env, uint64_t old_val,
uint64_t new_val, S1Translate *ptw,
ARMMMUFaultInfo *fi)
{
-#if defined(CONFIG_ATOMIC64) && defined(CONFIG_TCG)
+#ifdef CONFIG_TCG
uint64_t cur_val;
void *host = ptw->out_host;
@@ -903,17 +895,17 @@ static uint64_t arm_casq_ptw(CPUARMState *env, uint64_t old_val,
if (ptw->out_be) {
old_val = cpu_to_be64(old_val);
new_val = cpu_to_be64(new_val);
- cur_val = qatomic_cmpxchg__nocheck((uint64_t *)host, old_val, new_val);
+ cur_val = qatomic_cmpxchg((uint64_t *)host, old_val, new_val);
cur_val = be64_to_cpu(cur_val);
} else {
old_val = cpu_to_le64(old_val);
new_val = cpu_to_le64(new_val);
- cur_val = qatomic_cmpxchg__nocheck((uint64_t *)host, old_val, new_val);
+ cur_val = qatomic_cmpxchg((uint64_t *)host, old_val, new_val);
cur_val = le64_to_cpu(cur_val);
}
return cur_val;
#else
- /* AArch32 does not have FEAT_HADFS; non-TCG guests only use debug-mode. */
+ /* Non-TCG guests only use debug-mode. */
g_assert_not_reached();
#endif
}
--
2.43.0
^ permalink raw reply related [flat|nested] 57+ messages in thread
* [PULL 45/54] target/hppa: Drop CONFIG_ATOMIC64 test
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (43 preceding siblings ...)
2026-01-18 22:04 ` [PULL 44/54] target/arm: Drop CONFIG_ATOMIC64 tests Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 46/54] target/m68k: Drop CONFIG_ATOMIC64 tests Richard Henderson
` (9 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/hppa/op_helper.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/target/hppa/op_helper.c b/target/hppa/op_helper.c
index 65faf03cd0..f961046e4c 100644
--- a/target/hppa/op_helper.c
+++ b/target/hppa/op_helper.c
@@ -74,7 +74,6 @@ static void atomic_store_mask64(CPUHPPAState *env, target_ulong addr,
uint64_t val, uint64_t mask,
int size, uintptr_t ra)
{
-#ifdef CONFIG_ATOMIC64
int mmu_idx = cpu_mmu_index(env_cpu(env), 0);
uint64_t old, new, cmp, *haddr;
void *vaddr;
@@ -88,15 +87,12 @@ static void atomic_store_mask64(CPUHPPAState *env, target_ulong addr,
old = *haddr;
while (1) {
new = be32_to_cpu((cpu_to_be32(old) & ~mask) | (val & mask));
- cmp = qatomic_cmpxchg__nocheck(haddr, old, new);
+ cmp = qatomic_cmpxchg(haddr, old, new);
if (cmp == old) {
return;
}
old = cmp;
}
-#else
- cpu_loop_exit_atomic(env_cpu(env), ra);
-#endif
}
static void do_stby_b(CPUHPPAState *env, target_ulong addr, target_ulong val,
--
2.43.0
^ permalink raw reply related [flat|nested] 57+ messages in thread
* [PULL 46/54] target/m68k: Drop CONFIG_ATOMIC64 tests
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (44 preceding siblings ...)
2026-01-18 22:04 ` [PULL 45/54] target/hppa: Drop CONFIG_ATOMIC64 test Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 47/54] target/s390x: " Richard Henderson
` (8 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/m68k/op_helper.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/target/m68k/op_helper.c b/target/m68k/op_helper.c
index a27e2bcfbc..f7df83c850 100644
--- a/target/m68k/op_helper.c
+++ b/target/m68k/op_helper.c
@@ -816,14 +816,11 @@ static void do_cas2l(CPUM68KState *env, uint32_t regs, uint32_t a1, uint32_t a2,
uint32_t u2 = env->dregs[Du2];
uint32_t l1, l2;
uintptr_t ra = GETPC();
-#if defined(CONFIG_ATOMIC64)
int mmu_idx = cpu_mmu_index(env_cpu(env), 0);
MemOpIdx oi = make_memop_idx(MO_BEUQ, mmu_idx);
-#endif
if (parallel) {
/* We're executing in a parallel context -- must be atomic. */
-#ifdef CONFIG_ATOMIC64
uint64_t c, u, l;
if ((a1 & 7) == 0 && a2 == a1 + 4) {
c = deposit64(c2, 32, 32, c1);
@@ -837,9 +834,7 @@ static void do_cas2l(CPUM68KState *env, uint32_t regs, uint32_t a1, uint32_t a2,
l = cpu_atomic_cmpxchgq_be_mmu(env, a2, c, u, oi, ra);
l2 = l >> 32;
l1 = l;
- } else
-#endif
- {
+ } else {
/* Tell the main loop we need to serialize this insn. */
cpu_loop_exit_atomic(env_cpu(env), ra);
}
--
2.43.0
^ permalink raw reply related [flat|nested] 57+ messages in thread
* [PULL 47/54] target/s390x: Drop CONFIG_ATOMIC64 tests
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (45 preceding siblings ...)
2026-01-18 22:04 ` [PULL 46/54] target/m68k: Drop CONFIG_ATOMIC64 tests Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 48/54] target/s390x: Simplify atomicity check in do_csst Richard Henderson
` (7 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Thomas Huth, Pierrick Bouvier
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/s390x/tcg/mem_helper.c | 7 -------
1 file changed, 7 deletions(-)
diff --git a/target/s390x/tcg/mem_helper.c b/target/s390x/tcg/mem_helper.c
index 2972b7ddb9..0b8b6d3bbb 100644
--- a/target/s390x/tcg/mem_helper.c
+++ b/target/s390x/tcg/mem_helper.c
@@ -1815,9 +1815,7 @@ static uint32_t do_csst(CPUS390XState *env, uint32_t r3, uint64_t a1,
*/
if (parallel) {
uint32_t max = 2;
-#ifdef CONFIG_ATOMIC64
max = 3;
-#endif
if ((HAVE_CMPXCHG128 ? 0 : fc + 2 > max) ||
(HAVE_ATOMIC128_RW ? 0 : sc > max)) {
cpu_loop_exit_atomic(env_cpu(env), ra);
@@ -1856,12 +1854,7 @@ static uint32_t do_csst(CPUS390XState *env, uint32_t r3, uint64_t a1,
uint64_t ov;
if (parallel) {
-#ifdef CONFIG_ATOMIC64
ov = cpu_atomic_cmpxchgq_be_mmu(env, a1, cv, nv, oi8, ra);
-#else
- /* Note that we asserted !parallel above. */
- g_assert_not_reached();
-#endif
} else {
ov = cpu_ldq_mmu(env, a1, oi8, ra);
cpu_stq_mmu(env, a1, (ov == cv ? nv : ov), oi8, ra);
--
2.43.0
* [PULL 48/54] target/s390x: Simplify atomicity check in do_csst
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (46 preceding siblings ...)
2026-01-18 22:04 ` [PULL 47/54] target/s390x: " Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 49/54] migration: Drop use of Stat64 Richard Henderson
` (6 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
We should have used MO_{32,64} from the start, rather than
raw integer constants. However, now that the CONFIG_ATOMIC64
test has been removed, we can remove the 'max' variable and
simplify the two blocks.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/s390x/tcg/mem_helper.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/target/s390x/tcg/mem_helper.c b/target/s390x/tcg/mem_helper.c
index 0b8b6d3bbb..2a79a789f6 100644
--- a/target/s390x/tcg/mem_helper.c
+++ b/target/s390x/tcg/mem_helper.c
@@ -1813,13 +1813,10 @@ static uint32_t do_csst(CPUS390XState *env, uint32_t r3, uint64_t a1,
* restart early if we can't support either operation that is supposed
* to be atomic.
*/
- if (parallel) {
- uint32_t max = 2;
- max = 3;
- if ((HAVE_CMPXCHG128 ? 0 : fc + 2 > max) ||
- (HAVE_ATOMIC128_RW ? 0 : sc > max)) {
- cpu_loop_exit_atomic(env_cpu(env), ra);
- }
+ if (parallel &&
+ ((!HAVE_CMPXCHG128 && fc + 2 > MO_64) ||
+ (!HAVE_ATOMIC128_RW && sc > MO_64))) {
+ cpu_loop_exit_atomic(env_cpu(env), ra);
}
/*
--
2.43.0
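[Editor's note] The rewrite above works because QEMU's MemOp size codes make the old raw constants self-documenting: MO_32 is 2 and MO_64 is 3, and the CSST function code fc maps to a MemOp size as fc + 2. The sketch below restates the simplified check as plain C; the MemOp values are taken from QEMU's include/exec/memop.h, and the fc encoding is inferred from the patch context, so treat both as assumptions rather than authoritative.

```c
#include <assert.h>
#include <stdbool.h>

/* Size codes as in QEMU's MemOp enumeration (include/exec/memop.h). */
enum { MO_8, MO_16, MO_32, MO_64, MO_128 };

/*
 * For s390x CSST, function code fc = 0/1/2 selects a 32/64/128-bit
 * compare-and-swap, so fc + 2 is the MemOp size of the CAS operand.
 * With 64-bit atomics guaranteed, only the 128-bit cases can force a
 * serialized restart.
 */
static bool csst_needs_serial_restart(int fc, int sc,
                                      bool have_cmpxchg128,
                                      bool have_atomic128_rw)
{
    return (!have_cmpxchg128 && fc + 2 > MO_64) ||
           (!have_atomic128_rw && sc > MO_64);
}
```

A 64-bit CAS (fc = 1) never triggers the restart, matching the removal of the CONFIG_ATOMIC64 guard in the previous patch.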
* [PULL 49/54] migration: Drop use of Stat64
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (47 preceding siblings ...)
2026-01-18 22:04 ` [PULL 48/54] target/s390x: Simplify atomicity check in do_csst Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 50/54] block: " Richard Henderson
` (5 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Pierrick Bouvier
The Stat64 structure is an aid for 32-bit hosts and
is no longer required. Use plain 64-bit types.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
migration/migration-stats.h | 36 ++++++++++++++++-------------------
migration/cpu-throttle.c | 4 ++--
migration/migration-stats.c | 16 ++++++++--------
migration/migration.c | 24 +++++++++++------------
migration/multifd-nocomp.c | 2 +-
migration/multifd-zero-page.c | 4 ++--
migration/multifd.c | 12 ++++--------
migration/qemu-file.c | 6 +++---
migration/ram.c | 30 ++++++++++++++---------------
migration/rdma.c | 8 ++++----
10 files changed, 67 insertions(+), 75 deletions(-)
diff --git a/migration/migration-stats.h b/migration/migration-stats.h
index 05290ade76..c0f50144c9 100644
--- a/migration/migration-stats.h
+++ b/migration/migration-stats.h
@@ -13,8 +13,6 @@
#ifndef QEMU_MIGRATION_STATS_H
#define QEMU_MIGRATION_STATS_H
-#include "qemu/stats64.h"
-
/*
* Amount of time to allocate to each "chunk" of bandwidth-throttled
* data.
@@ -29,9 +27,7 @@
/*
* These are the ram migration statistic counters. It is loosely
- * based on MigrationStats. We change to Stat64 any counter that
- * needs to be updated using atomic ops (can be accessed by more than
- * one thread).
+ * based on MigrationStats.
*/
typedef struct {
/*
@@ -41,66 +37,66 @@ typedef struct {
* since last iteration, not counting what the guest has dirtied
* since we synchronized bitmaps.
*/
- Stat64 dirty_bytes_last_sync;
+ uint64_t dirty_bytes_last_sync;
/*
* Number of pages dirtied per second.
*/
- Stat64 dirty_pages_rate;
+ uint64_t dirty_pages_rate;
/*
* Number of times we have synchronized guest bitmaps.
*/
- Stat64 dirty_sync_count;
+ uint64_t dirty_sync_count;
/*
* Number of times zero copy failed to send any page using zero
* copy.
*/
- Stat64 dirty_sync_missed_zero_copy;
+ uint64_t dirty_sync_missed_zero_copy;
/*
* Number of bytes sent at migration completion stage while the
* guest is stopped.
*/
- Stat64 downtime_bytes;
+ uint64_t downtime_bytes;
/*
* Number of bytes sent through multifd channels.
*/
- Stat64 multifd_bytes;
+ uint64_t multifd_bytes;
/*
* Number of pages transferred that were not full of zeros.
*/
- Stat64 normal_pages;
+ uint64_t normal_pages;
/*
* Number of bytes sent during postcopy.
*/
- Stat64 postcopy_bytes;
+ uint64_t postcopy_bytes;
/*
* Number of postcopy page faults that we have handled during
* postcopy stage.
*/
- Stat64 postcopy_requests;
+ uint64_t postcopy_requests;
/*
* Number of bytes sent during precopy stage.
*/
- Stat64 precopy_bytes;
+ uint64_t precopy_bytes;
/*
* Number of bytes transferred with QEMUFile.
*/
- Stat64 qemu_file_transferred;
+ uint64_t qemu_file_transferred;
/*
* Amount of transferred data at the start of current cycle.
*/
- Stat64 rate_limit_start;
+ uint64_t rate_limit_start;
/*
* Maximum amount of data we can send in a cycle.
*/
- Stat64 rate_limit_max;
+ uint64_t rate_limit_max;
/*
* Number of bytes sent through RDMA.
*/
- Stat64 rdma_bytes;
+ uint64_t rdma_bytes;
/*
* Number of pages transferred that were full of zeros.
*/
- Stat64 zero_pages;
+ uint64_t zero_pages;
} MigrationAtomicStats;
extern MigrationAtomicStats mig_stats;
diff --git a/migration/cpu-throttle.c b/migration/cpu-throttle.c
index 0642e6bdea..3b4d4aea52 100644
--- a/migration/cpu-throttle.c
+++ b/migration/cpu-throttle.c
@@ -134,7 +134,7 @@ int cpu_throttle_get_percentage(void)
void cpu_throttle_dirty_sync_timer_tick(void *opaque)
{
- uint64_t sync_cnt = stat64_get(&mig_stats.dirty_sync_count);
+ uint64_t sync_cnt = qatomic_read(&mig_stats.dirty_sync_count);
/*
* The first iteration copies all memory anyhow and has no
@@ -153,7 +153,7 @@ void cpu_throttle_dirty_sync_timer_tick(void *opaque)
}
end:
- throttle_dirty_sync_count_prev = stat64_get(&mig_stats.dirty_sync_count);
+ throttle_dirty_sync_count_prev = qatomic_read(&mig_stats.dirty_sync_count);
timer_mod(throttle_dirty_sync_timer,
qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL_RT) +
diff --git a/migration/migration-stats.c b/migration/migration-stats.c
index f690b98a03..3f17b6ac5c 100644
--- a/migration/migration-stats.c
+++ b/migration/migration-stats.c
@@ -11,7 +11,7 @@
*/
#include "qemu/osdep.h"
-#include "qemu/stats64.h"
+#include "qemu/atomic.h"
#include "qemu-file.h"
#include "trace.h"
#include "migration-stats.h"
@@ -29,7 +29,7 @@ bool migration_rate_exceeded(QEMUFile *f)
return false;
}
- uint64_t rate_limit_start = stat64_get(&mig_stats.rate_limit_start);
+ uint64_t rate_limit_start = qatomic_read(&mig_stats.rate_limit_start);
uint64_t rate_limit_current = migration_transferred_bytes();
uint64_t rate_limit_used = rate_limit_current - rate_limit_start;
@@ -41,7 +41,7 @@ bool migration_rate_exceeded(QEMUFile *f)
uint64_t migration_rate_get(void)
{
- return stat64_get(&mig_stats.rate_limit_max);
+ return qatomic_read(&mig_stats.rate_limit_max);
}
#define XFER_LIMIT_RATIO (1000 / BUFFER_DELAY)
@@ -51,19 +51,19 @@ void migration_rate_set(uint64_t limit)
/*
* 'limit' is per second. But we check it each BUFFER_DELAY milliseconds.
*/
- stat64_set(&mig_stats.rate_limit_max, limit / XFER_LIMIT_RATIO);
+ qatomic_set(&mig_stats.rate_limit_max, limit / XFER_LIMIT_RATIO);
}
void migration_rate_reset(void)
{
- stat64_set(&mig_stats.rate_limit_start, migration_transferred_bytes());
+ qatomic_set(&mig_stats.rate_limit_start, migration_transferred_bytes());
}
uint64_t migration_transferred_bytes(void)
{
- uint64_t multifd = stat64_get(&mig_stats.multifd_bytes);
- uint64_t rdma = stat64_get(&mig_stats.rdma_bytes);
- uint64_t qemu_file = stat64_get(&mig_stats.qemu_file_transferred);
+ uint64_t multifd = qatomic_read(&mig_stats.multifd_bytes);
+ uint64_t rdma = qatomic_read(&mig_stats.rdma_bytes);
+ uint64_t qemu_file = qatomic_read(&mig_stats.qemu_file_transferred);
trace_migration_transferred_bytes(qemu_file, multifd, rdma);
return qemu_file + multifd + rdma;
diff --git a/migration/migration.c b/migration/migration.c
index 1c34d8d432..1bcde301f7 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1266,22 +1266,22 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
info->ram = g_malloc0(sizeof(*info->ram));
info->ram->transferred = migration_transferred_bytes();
info->ram->total = ram_bytes_total();
- info->ram->duplicate = stat64_get(&mig_stats.zero_pages);
- info->ram->normal = stat64_get(&mig_stats.normal_pages);
+ info->ram->duplicate = qatomic_read(&mig_stats.zero_pages);
+ info->ram->normal = qatomic_read(&mig_stats.normal_pages);
info->ram->normal_bytes = info->ram->normal * page_size;
info->ram->mbps = s->mbps;
info->ram->dirty_sync_count =
- stat64_get(&mig_stats.dirty_sync_count);
+ qatomic_read(&mig_stats.dirty_sync_count);
info->ram->dirty_sync_missed_zero_copy =
- stat64_get(&mig_stats.dirty_sync_missed_zero_copy);
+ qatomic_read(&mig_stats.dirty_sync_missed_zero_copy);
info->ram->postcopy_requests =
- stat64_get(&mig_stats.postcopy_requests);
+ qatomic_read(&mig_stats.postcopy_requests);
info->ram->page_size = page_size;
- info->ram->multifd_bytes = stat64_get(&mig_stats.multifd_bytes);
+ info->ram->multifd_bytes = qatomic_read(&mig_stats.multifd_bytes);
info->ram->pages_per_second = s->pages_per_second;
- info->ram->precopy_bytes = stat64_get(&mig_stats.precopy_bytes);
- info->ram->downtime_bytes = stat64_get(&mig_stats.downtime_bytes);
- info->ram->postcopy_bytes = stat64_get(&mig_stats.postcopy_bytes);
+ info->ram->precopy_bytes = qatomic_read(&mig_stats.precopy_bytes);
+ info->ram->downtime_bytes = qatomic_read(&mig_stats.downtime_bytes);
+ info->ram->postcopy_bytes = qatomic_read(&mig_stats.postcopy_bytes);
if (migrate_xbzrle()) {
info->xbzrle_cache = g_malloc0(sizeof(*info->xbzrle_cache));
@@ -1302,7 +1302,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
if (s->state != MIGRATION_STATUS_COMPLETED) {
info->ram->remaining = ram_bytes_remaining();
info->ram->dirty_pages_rate =
- stat64_get(&mig_stats.dirty_pages_rate);
+ qatomic_read(&mig_stats.dirty_pages_rate);
}
if (migrate_dirty_limit() && dirtylimit_in_service()) {
@@ -3420,10 +3420,10 @@ static void migration_update_counters(MigrationState *s,
* if we haven't sent anything, we don't want to
* recalculate. 10000 is a small enough number for our purposes
*/
- if (stat64_get(&mig_stats.dirty_pages_rate) &&
+ if (qatomic_read(&mig_stats.dirty_pages_rate) &&
transferred > 10000) {
s->expected_downtime =
- stat64_get(&mig_stats.dirty_bytes_last_sync) / expected_bw_per_ms;
+ qatomic_read(&mig_stats.dirty_bytes_last_sync) / expected_bw_per_ms;
}
migration_rate_reset();
diff --git a/migration/multifd-nocomp.c b/migration/multifd-nocomp.c
index b48eae3d86..9be79b3b8e 100644
--- a/migration/multifd-nocomp.c
+++ b/migration/multifd-nocomp.c
@@ -141,7 +141,7 @@ static int multifd_nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
return -1;
}
- stat64_add(&mig_stats.multifd_bytes, p->packet_len);
+ qatomic_add(&mig_stats.multifd_bytes, p->packet_len);
}
return 0;
diff --git a/migration/multifd-zero-page.c b/migration/multifd-zero-page.c
index 4cde868159..00c330416a 100644
--- a/migration/multifd-zero-page.c
+++ b/migration/multifd-zero-page.c
@@ -77,8 +77,8 @@ void multifd_send_zero_page_detect(MultiFDSendParams *p)
pages->normal_num = i;
out:
- stat64_add(&mig_stats.normal_pages, pages->normal_num);
- stat64_add(&mig_stats.zero_pages, pages->num - pages->normal_num);
+ qatomic_add(&mig_stats.normal_pages, pages->normal_num);
+ qatomic_add(&mig_stats.zero_pages, pages->num - pages->normal_num);
}
void multifd_recv_zero_page_process(MultiFDRecvParams *p)
diff --git a/migration/multifd.c b/migration/multifd.c
index bf6da85af8..c9d4a67a46 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -58,10 +58,6 @@ struct {
* operations on both 32bit / 64 bits hosts. It means on 32bit systems
* multifd will overflow the packet_num easier, but that should be
* fine.
- *
- * Another option is to use QEMU's Stat64 then it'll be 64 bits on all
- * hosts, however so far it does not support atomic fetch_add() yet.
- * Make it easy for now.
*/
uintptr_t packet_num;
/*
@@ -174,7 +170,7 @@ static int multifd_send_initial_packet(MultiFDSendParams *p, Error **errp)
if (ret != 0) {
return -1;
}
- stat64_add(&mig_stats.multifd_bytes, size);
+ qatomic_add(&mig_stats.multifd_bytes, size);
return 0;
}
@@ -607,7 +603,7 @@ static int multifd_zero_copy_flush(QIOChannel *c)
return -1;
}
if (ret == 1) {
- stat64_add(&mig_stats.dirty_sync_missed_zero_copy, 1);
+ qatomic_add(&mig_stats.dirty_sync_missed_zero_copy, 1);
}
return ret;
@@ -735,7 +731,7 @@ static void *multifd_send_thread(void *opaque)
break;
}
- stat64_add(&mig_stats.multifd_bytes, total_size);
+ qatomic_add(&mig_stats.multifd_bytes, total_size);
p->next_packet_size = 0;
multifd_send_data_clear(p->data);
@@ -766,7 +762,7 @@ static void *multifd_send_thread(void *opaque)
break;
}
/* p->next_packet_size will always be zero for a SYNC packet */
- stat64_add(&mig_stats.multifd_bytes, p->packet_len);
+ qatomic_add(&mig_stats.multifd_bytes, p->packet_len);
}
qatomic_set(&p->pending_sync, MULTIFD_SYNC_NONE);
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 4b5a409a80..8d82d94416 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -295,7 +295,7 @@ int qemu_fflush(QEMUFile *f)
qemu_file_set_error_obj(f, -EIO, local_error);
} else {
uint64_t size = iov_size(f->iov, f->iovcnt);
- stat64_add(&mig_stats.qemu_file_transferred, size);
+ qatomic_add(&mig_stats.qemu_file_transferred, size);
}
qemu_iovec_release_ram(f);
@@ -552,7 +552,7 @@ void qemu_put_buffer_at(QEMUFile *f, const uint8_t *buf, size_t buflen,
return;
}
- stat64_add(&mig_stats.qemu_file_transferred, buflen);
+ qatomic_add(&mig_stats.qemu_file_transferred, buflen);
}
@@ -785,7 +785,7 @@ int coroutine_mixed_fn qemu_get_byte(QEMUFile *f)
uint64_t qemu_file_transferred(QEMUFile *f)
{
- uint64_t ret = stat64_get(&mig_stats.qemu_file_transferred);
+ uint64_t ret = qatomic_read(&mig_stats.qemu_file_transferred);
int i;
g_assert(qemu_file_is_writable(f));
diff --git a/migration/ram.c b/migration/ram.c
index 04958c5603..fc7ece2c1a 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -479,11 +479,11 @@ uint64_t ram_bytes_remaining(void)
void ram_transferred_add(uint64_t bytes)
{
if (runstate_is_running()) {
- stat64_add(&mig_stats.precopy_bytes, bytes);
+ qatomic_add(&mig_stats.precopy_bytes, bytes);
} else if (migration_in_postcopy()) {
- stat64_add(&mig_stats.postcopy_bytes, bytes);
+ qatomic_add(&mig_stats.postcopy_bytes, bytes);
} else {
- stat64_add(&mig_stats.downtime_bytes, bytes);
+ qatomic_add(&mig_stats.downtime_bytes, bytes);
}
}
@@ -605,7 +605,7 @@ static void xbzrle_cache_zero_page(ram_addr_t current_addr)
/* We don't care if this fails to allocate a new cache page
* as long as it updated an old one */
cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page,
- stat64_get(&mig_stats.dirty_sync_count));
+ qatomic_read(&mig_stats.dirty_sync_count));
}
#define ENCODING_FLAG_XBZRLE 0x1
@@ -631,7 +631,7 @@ static int save_xbzrle_page(RAMState *rs, PageSearchStatus *pss,
int encoded_len = 0, bytes_xbzrle;
uint8_t *prev_cached_page;
QEMUFile *file = pss->pss_channel;
- uint64_t generation = stat64_get(&mig_stats.dirty_sync_count);
+ uint64_t generation = qatomic_read(&mig_stats.dirty_sync_count);
if (!cache_is_cached(XBZRLE.cache, current_addr, generation)) {
xbzrle_counters.cache_miss++;
@@ -1035,9 +1035,9 @@ uint64_t ram_pagesize_summary(void)
uint64_t ram_get_total_transferred_pages(void)
{
- return stat64_get(&mig_stats.normal_pages) +
- stat64_get(&mig_stats.zero_pages) +
- xbzrle_counters.pages;
+ return (qatomic_read(&mig_stats.normal_pages) +
+ qatomic_read(&mig_stats.zero_pages) +
+ xbzrle_counters.pages);
}
static void migration_update_rates(RAMState *rs, int64_t end_time)
@@ -1045,7 +1045,7 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
uint64_t page_count = rs->target_page_count - rs->target_page_count_prev;
/* calculate period counters */
- stat64_set(&mig_stats.dirty_pages_rate,
+ qatomic_set(&mig_stats.dirty_pages_rate,
rs->num_dirty_pages_period * 1000 /
(end_time - rs->time_last_bitmap_sync));
@@ -1136,7 +1136,7 @@ static void migration_bitmap_sync(RAMState *rs, bool last_stage)
RAMBlock *block;
int64_t end_time;
- stat64_add(&mig_stats.dirty_sync_count, 1);
+ qatomic_add(&mig_stats.dirty_sync_count, 1);
if (!rs->time_last_bitmap_sync) {
rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
@@ -1150,7 +1150,7 @@ static void migration_bitmap_sync(RAMState *rs, bool last_stage)
RAMBLOCK_FOREACH_NOT_IGNORED(block) {
ramblock_sync_dirty_bitmap(rs, block);
}
- stat64_set(&mig_stats.dirty_bytes_last_sync, ram_bytes_remaining());
+ qatomic_set(&mig_stats.dirty_bytes_last_sync, ram_bytes_remaining());
}
}
@@ -1173,7 +1173,7 @@ static void migration_bitmap_sync(RAMState *rs, bool last_stage)
rs->bytes_xfer_prev = migration_transferred_bytes();
}
if (migrate_events()) {
- uint64_t generation = stat64_get(&mig_stats.dirty_sync_count);
+ uint64_t generation = qatomic_read(&mig_stats.dirty_sync_count);
qapi_event_send_migration_pass(generation);
}
}
@@ -1232,7 +1232,7 @@ static int save_zero_page(RAMState *rs, PageSearchStatus *pss,
return 0;
}
- stat64_add(&mig_stats.zero_pages, 1);
+ qatomic_add(&mig_stats.zero_pages, 1);
if (migrate_mapped_ram()) {
/* zero pages are not transferred with mapped-ram */
@@ -1291,7 +1291,7 @@ static int save_normal_page(PageSearchStatus *pss, RAMBlock *block,
}
}
ram_transferred_add(TARGET_PAGE_SIZE);
- stat64_add(&mig_stats.normal_pages, 1);
+ qatomic_add(&mig_stats.normal_pages, 1);
return 1;
}
@@ -1943,7 +1943,7 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len,
RAMBlock *ramblock;
RAMState *rs = ram_state;
- stat64_add(&mig_stats.postcopy_requests, 1);
+ qatomic_add(&mig_stats.postcopy_requests, 1);
RCU_READ_LOCK_GUARD();
if (!rbname) {
diff --git a/migration/rdma.c b/migration/rdma.c
index 9e301cf917..cced173379 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -1936,8 +1936,8 @@ retry:
* would think that head.len would be the more similar
* thing to a correct value.
*/
- stat64_add(&mig_stats.zero_pages,
- sge.length / qemu_target_page_size());
+ qatomic_add(&mig_stats.zero_pages,
+ sge.length / qemu_target_page_size());
return 1;
}
@@ -2045,7 +2045,7 @@ retry:
}
set_bit(chunk, block->transit_bitmap);
- stat64_add(&mig_stats.normal_pages, sge.length / qemu_target_page_size());
+ qatomic_add(&mig_stats.normal_pages, sge.length / qemu_target_page_size());
/*
* We are adding to transferred the amount of data written, but no
* overhead at all. I will assume that RDMA is magicaly and don't
@@ -2055,7 +2055,7 @@ retry:
* sizeof(send_wr) + sge.length
* but this being RDMA, who knows.
*/
- stat64_add(&mig_stats.rdma_bytes, sge.length);
+ qatomic_add(&mig_stats.rdma_bytes, sge.length);
ram_transferred_add(sge.length);
rdma->total_writes++;
--
2.43.0
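[Editor's note] The mechanical substitution in this patch is stat64_add/stat64_get/stat64_set on a Stat64 becoming qatomic_add/qatomic_read/qatomic_set on a plain uint64_t, which is safe once every host has native 64-bit atomics. A standalone sketch of the resulting counter pattern, using C11 stdatomic in place of QEMU's qatomic macros (the struct and function names here are illustrative, not QEMU's):

```c
#include <stdatomic.h>
#include <stdint.h>

/* A migration-style statistics counter after the Stat64 removal:
 * one plain 64-bit word updated with relaxed atomic operations. */
typedef struct {
    _Atomic uint64_t multifd_bytes;
} MigStatsSketch;

/* Corresponds to qatomic_add(&mig_stats.multifd_bytes, n). */
static void stats_add(MigStatsSketch *s, uint64_t n)
{
    atomic_fetch_add_explicit(&s->multifd_bytes, n, memory_order_relaxed);
}

/* Corresponds to qatomic_read(&mig_stats.multifd_bytes). */
static uint64_t stats_get(MigStatsSketch *s)
{
    return atomic_load_explicit(&s->multifd_bytes, memory_order_relaxed);
}
```

Relaxed ordering suffices here because these are monotonic statistics counters read for reporting, not synchronization.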
* [PULL 50/54] block: Drop use of Stat64
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (48 preceding siblings ...)
2026-01-18 22:04 ` [PULL 49/54] migration: Drop use of Stat64 Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 51/54] util: Remove stats64 Richard Henderson
` (4 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
The Stat64 structure is an aid for 32-bit hosts and
is no longer required. Use plain 64-bit types.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/block/block_int-common.h | 3 +--
block/io.c | 10 +++++++++-
block/qapi.c | 2 +-
3 files changed, 11 insertions(+), 4 deletions(-)
diff --git a/include/block/block_int-common.h b/include/block/block_int-common.h
index 6d0898e53d..9324af903d 100644
--- a/include/block/block_int-common.h
+++ b/include/block/block_int-common.h
@@ -30,7 +30,6 @@
#include "qemu/aiocb.h"
#include "qemu/iov.h"
#include "qemu/rcu.h"
-#include "qemu/stats64.h"
#define BLOCK_FLAG_LAZY_REFCOUNTS 8
@@ -1246,7 +1245,7 @@ struct BlockDriverState {
QLIST_HEAD(, BdrvDirtyBitmap) dirty_bitmaps;
/* Offset after the highest byte written to */
- Stat64 wr_highest_offset;
+ uint64_t wr_highest_offset;
/*
* If true, copy read backing sectors into image. Can be >1 if more
diff --git a/block/io.c b/block/io.c
index cace297f22..e8fb4ede4d 100644
--- a/block/io.c
+++ b/block/io.c
@@ -39,6 +39,7 @@
#include "qemu/main-loop.h"
#include "system/replay.h"
#include "qemu/units.h"
+#include "qemu/atomic.h"
/* Maximum bounce buffer for copy-on-read and write zeroes, in bytes */
#define MAX_BOUNCE_BUFFER (32768 << BDRV_SECTOR_BITS)
@@ -2044,7 +2045,14 @@ bdrv_co_write_req_finish(BdrvChild *child, int64_t offset, int64_t bytes,
if (req->bytes) {
switch (req->type) {
case BDRV_TRACKED_WRITE:
- stat64_max(&bs->wr_highest_offset, offset + bytes);
+ {
+ uint64_t new = offset + bytes;
+ uint64_t old = qatomic_read(&bs->wr_highest_offset);
+
+ while (old < new) {
+ old = qatomic_cmpxchg(&bs->wr_highest_offset, old, new);
+ }
+ }
/* fall through, to set dirty bits */
case BDRV_TRACKED_DISCARD:
bdrv_set_dirty(bs, offset, bytes);
diff --git a/block/qapi.c b/block/qapi.c
index 9f5771e019..27e0ac6a32 100644
--- a/block/qapi.c
+++ b/block/qapi.c
@@ -651,7 +651,7 @@ bdrv_query_bds_stats(BlockDriverState *bs, bool blk_level)
s->node_name = g_strdup(bdrv_get_node_name(bs));
}
- s->stats->wr_highest_offset = stat64_get(&bs->wr_highest_offset);
+ s->stats->wr_highest_offset = qatomic_read(&bs->wr_highest_offset);
s->driver_specific = bdrv_get_specific_stats(bs);
--
2.43.0
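[Editor's note] With stat64_max gone, the patch open-codes an atomic maximum as a compare-and-swap loop in bdrv_co_write_req_finish. The same pattern is sketched below in portable C11 (QEMU's qatomic_cmpxchg returns the previous value; the C11 form instead refreshes `old` through the `expected` out-parameter on failure, so the two loops are equivalent):

```c
#include <stdatomic.h>
#include <stdint.h>

/*
 * Lock-free maximum: retry until either our store of 'new' succeeds or
 * another thread has already published a value >= new. A weak CAS may
 * fail spuriously, which the loop also absorbs.
 */
static void atomic_max_u64(_Atomic uint64_t *p, uint64_t new)
{
    uint64_t old = atomic_load_explicit(p, memory_order_relaxed);

    while (old < new &&
           !atomic_compare_exchange_weak_explicit(p, &old, new,
                                                  memory_order_relaxed,
                                                  memory_order_relaxed)) {
        /* 'old' now holds the value observed by the failed CAS; retry. */
    }
}
```

This is exactly the wr_highest_offset update: writes racing to record a high-water mark can never move it backwards.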
* [PULL 51/54] util: Remove stats64
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (49 preceding siblings ...)
2026-01-18 22:04 ` [PULL 50/54] block: " Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 52/54] include/qemu/atomic: Drop qatomic_{read,set}_[iu]64 Richard Henderson
` (3 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Pierrick Bouvier
This API is no longer used.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/qemu/stats64.h | 199 -----------------------------------------
util/stats64.c | 148 ------------------------------
util/meson.build | 1 -
3 files changed, 348 deletions(-)
delete mode 100644 include/qemu/stats64.h
delete mode 100644 util/stats64.c
diff --git a/include/qemu/stats64.h b/include/qemu/stats64.h
deleted file mode 100644
index 99b5cb724a..0000000000
--- a/include/qemu/stats64.h
+++ /dev/null
@@ -1,199 +0,0 @@
-/*
- * Atomic operations on 64-bit quantities.
- *
- * Copyright (C) 2017 Red Hat, Inc.
- *
- * Author: Paolo Bonzini <pbonzini@redhat.com>
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
- * See the COPYING file in the top-level directory.
- */
-
-#ifndef QEMU_STATS64_H
-#define QEMU_STATS64_H
-
-#include "qemu/atomic.h"
-
-/* This provides atomic operations on 64-bit type, using a reader-writer
- * spinlock on architectures that do not have 64-bit accesses. Even on
- * those architectures, it tries hard not to take the lock.
- */
-
-typedef struct Stat64 {
-#ifdef CONFIG_ATOMIC64
- aligned_uint64_t value;
-#else
- uint32_t low, high;
- uint32_t lock;
-#endif
-} Stat64;
-
-#ifdef CONFIG_ATOMIC64
-static inline void stat64_init(Stat64 *s, uint64_t value)
-{
- /* This is not guaranteed to be atomic! */
- *s = (Stat64) { value };
-}
-
-static inline uint64_t stat64_get(const Stat64 *s)
-{
- return qatomic_read__nocheck(&s->value);
-}
-
-static inline void stat64_set(Stat64 *s, uint64_t value)
-{
- qatomic_set__nocheck(&s->value, value);
-}
-
-static inline void stat64_add(Stat64 *s, uint64_t value)
-{
- qatomic_add(&s->value, value);
-}
-
-static inline void stat64_min(Stat64 *s, uint64_t value)
-{
- uint64_t orig = qatomic_read__nocheck(&s->value);
- while (orig > value) {
- orig = qatomic_cmpxchg__nocheck(&s->value, orig, value);
- }
-}
-
-static inline void stat64_max(Stat64 *s, uint64_t value)
-{
- uint64_t orig = qatomic_read__nocheck(&s->value);
- while (orig < value) {
- orig = qatomic_cmpxchg__nocheck(&s->value, orig, value);
- }
-}
-#else
-uint64_t stat64_get(const Stat64 *s);
-void stat64_set(Stat64 *s, uint64_t value);
-bool stat64_min_slow(Stat64 *s, uint64_t value);
-bool stat64_max_slow(Stat64 *s, uint64_t value);
-bool stat64_add32_carry(Stat64 *s, uint32_t low, uint32_t high);
-
-static inline void stat64_init(Stat64 *s, uint64_t value)
-{
- /* This is not guaranteed to be atomic! */
- *s = (Stat64) { .low = value, .high = value >> 32, .lock = 0 };
-}
-
-static inline void stat64_add(Stat64 *s, uint64_t value)
-{
- uint32_t low, high;
- high = value >> 32;
- low = (uint32_t) value;
- if (!low) {
- if (high) {
- qatomic_add(&s->high, high);
- }
- return;
- }
-
- for (;;) {
- uint32_t orig = s->low;
- uint32_t result = orig + low;
- uint32_t old;
-
- if (result < low || high) {
- /* If the high part is affected, take the lock. */
- if (stat64_add32_carry(s, low, high)) {
- return;
- }
- continue;
- }
-
- /* No carry, try with a 32-bit cmpxchg. The result is independent of
- * the high 32 bits, so it can race just fine with stat64_add32_carry
- * and even stat64_get!
- */
- old = qatomic_cmpxchg(&s->low, orig, result);
- if (orig == old) {
- return;
- }
- }
-}
-
-static inline void stat64_min(Stat64 *s, uint64_t value)
-{
- uint32_t low, high;
- uint32_t orig_low, orig_high;
-
- high = value >> 32;
- low = (uint32_t) value;
- do {
- orig_high = qatomic_read(&s->high);
- if (orig_high < high) {
- return;
- }
-
- if (orig_high == high) {
- /* High 32 bits are equal. Read low after high, otherwise we
- * can get a false positive (e.g. 0x1235,0x0000 changes to
- * 0x1234,0x8000 and we read it as 0x1234,0x0000). Pairs with
- * the write barrier in stat64_min_slow.
- */
- smp_rmb();
- orig_low = qatomic_read(&s->low);
- if (orig_low <= low) {
- return;
- }
-
- /* See if we were lucky and a writer raced against us. The
- * barrier is theoretically unnecessary, but if we remove it
- * we may miss being lucky.
- */
- smp_rmb();
- orig_high = qatomic_read(&s->high);
- if (orig_high < high) {
- return;
- }
- }
-
- /* If the value changes in any way, we have to take the lock. */
- } while (!stat64_min_slow(s, value));
-}
-
-static inline void stat64_max(Stat64 *s, uint64_t value)
-{
- uint32_t low, high;
- uint32_t orig_low, orig_high;
-
- high = value >> 32;
- low = (uint32_t) value;
- do {
- orig_high = qatomic_read(&s->high);
- if (orig_high > high) {
- return;
- }
-
- if (orig_high == high) {
- /* High 32 bits are equal. Read low after high, otherwise we
- * can get a false positive (e.g. 0x1234,0x8000 changes to
- * 0x1235,0x0000 and we read it as 0x1235,0x8000). Pairs with
- * the write barrier in stat64_max_slow.
- */
- smp_rmb();
- orig_low = qatomic_read(&s->low);
- if (orig_low >= low) {
- return;
- }
-
- /* See if we were lucky and a writer raced against us. The
- * barrier is theoretically unnecessary, but if we remove it
- * we may miss being lucky.
- */
- smp_rmb();
- orig_high = qatomic_read(&s->high);
- if (orig_high > high) {
- return;
- }
- }
-
- /* If the value changes in any way, we have to take the lock. */
- } while (!stat64_max_slow(s, value));
-}
-
-#endif
-
-#endif
diff --git a/util/stats64.c b/util/stats64.c
deleted file mode 100644
index 09736014ec..0000000000
--- a/util/stats64.c
+++ /dev/null
@@ -1,148 +0,0 @@
-/*
- * Atomic operations on 64-bit quantities.
- *
- * Copyright (C) 2017 Red Hat, Inc.
- *
- * Author: Paolo Bonzini <pbonzini@redhat.com>
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
- * See the COPYING file in the top-level directory.
- */
-
-#include "qemu/osdep.h"
-#include "qemu/atomic.h"
-#include "qemu/stats64.h"
-#include "qemu/processor.h"
-
-#ifndef CONFIG_ATOMIC64
-static inline void stat64_rdlock(Stat64 *s)
-{
- /* Keep out incoming writers to avoid them starving us. */
- qatomic_add(&s->lock, 2);
-
- /* If there is a concurrent writer, wait for it. */
- while (qatomic_read(&s->lock) & 1) {
- cpu_relax();
- }
-}
-
-static inline void stat64_rdunlock(Stat64 *s)
-{
- qatomic_sub(&s->lock, 2);
-}
-
-static inline bool stat64_wrtrylock(Stat64 *s)
-{
- return qatomic_cmpxchg(&s->lock, 0, 1) == 0;
-}
-
-static inline void stat64_wrunlock(Stat64 *s)
-{
- qatomic_dec(&s->lock);
-}
-
-uint64_t stat64_get(const Stat64 *s)
-{
- uint32_t high, low;
-
- stat64_rdlock((Stat64 *)s);
-
- /* 64-bit writes always take the lock, so we can read in
- * any order.
- */
- high = qatomic_read(&s->high);
- low = qatomic_read(&s->low);
- stat64_rdunlock((Stat64 *)s);
-
- return ((uint64_t)high << 32) | low;
-}
-
-void stat64_set(Stat64 *s, uint64_t val)
-{
- while (!stat64_wrtrylock(s)) {
- cpu_relax();
- }
-
- qatomic_set(&s->high, val >> 32);
- qatomic_set(&s->low, val);
- stat64_wrunlock(s);
-}
-
-bool stat64_add32_carry(Stat64 *s, uint32_t low, uint32_t high)
-{
- uint32_t old;
-
- if (!stat64_wrtrylock(s)) {
- cpu_relax();
- return false;
- }
-
- /* 64-bit reads always take the lock, so they don't care about the
- * order of our update. By updating s->low first, we can check
- * whether we have to carry into s->high.
- */
- old = qatomic_fetch_add(&s->low, low);
- high += (old + low) < old;
- qatomic_add(&s->high, high);
- stat64_wrunlock(s);
- return true;
-}
-
-bool stat64_min_slow(Stat64 *s, uint64_t value)
-{
- uint32_t high, low;
- uint64_t orig;
-
- if (!stat64_wrtrylock(s)) {
- cpu_relax();
- return false;
- }
-
- high = qatomic_read(&s->high);
- low = qatomic_read(&s->low);
-
- orig = ((uint64_t)high << 32) | low;
- if (value < orig) {
- /* We have to set low before high, just like stat64_min reads
- * high before low. The value may become higher temporarily, but
- * stat64_get does not notice (it takes the lock) and the only ill
- * effect on stat64_min is that the slow path may be triggered
- * unnecessarily.
- */
- qatomic_set(&s->low, (uint32_t)value);
- smp_wmb();
- qatomic_set(&s->high, value >> 32);
- }
- stat64_wrunlock(s);
- return true;
-}
-
-bool stat64_max_slow(Stat64 *s, uint64_t value)
-{
- uint32_t high, low;
- uint64_t orig;
-
- if (!stat64_wrtrylock(s)) {
- cpu_relax();
- return false;
- }
-
- high = qatomic_read(&s->high);
- low = qatomic_read(&s->low);
-
- orig = ((uint64_t)high << 32) | low;
- if (value > orig) {
- /* We have to set low before high, just like stat64_max reads
- * high before low. The value may become lower temporarily, but
- * stat64_get does not notice (it takes the lock) and the only ill
- * effect on stat64_max is that the slow path may be triggered
- * unnecessarily.
- */
- qatomic_set(&s->low, (uint32_t)value);
- smp_wmb();
- qatomic_set(&s->high, value >> 32);
- }
- stat64_wrunlock(s);
- return true;
-}
-#endif
diff --git a/util/meson.build b/util/meson.build
index 35029380a3..d7d6b213f6 100644
--- a/util/meson.build
+++ b/util/meson.build
@@ -59,7 +59,6 @@ util_ss.add(files('qht.c'))
util_ss.add(files('qsp.c'))
util_ss.add(files('range.c'))
util_ss.add(files('reserved-region.c'))
-util_ss.add(files('stats64.c'))
util_ss.add(files('systemd.c'))
util_ss.add(files('transactions.c'))
util_ss.add(files('guest-random.c'))
--
2.43.0
^ permalink raw reply related [flat|nested] 57+ messages in thread
* [PULL 52/54] include/qemu/atomic: Drop qatomic_{read,set}_[iu]64
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (50 preceding siblings ...)
2026-01-18 22:04 ` [PULL 51/54] util: Remove stats64 Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 53/54] meson: Remove CONFIG_ATOMIC64 Richard Henderson
` (2 subsequent siblings)
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
Replace all uses with the normal qatomic_{read,set}.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/qemu/atomic.h | 22 ----------
accel/qtest/qtest.c | 4 +-
accel/tcg/icount-common.c | 25 ++++++-----
system/dirtylimit.c | 2 +-
tests/unit/test-rcu-list.c | 17 ++++----
util/atomic64.c | 85 --------------------------------------
util/cacheflush.c | 2 -
util/qsp.c | 8 ++--
util/meson.build | 3 --
9 files changed, 27 insertions(+), 141 deletions(-)
delete mode 100644 util/atomic64.c
diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
index c39dc99f2f..27d98014d4 100644
--- a/include/qemu/atomic.h
+++ b/include/qemu/atomic.h
@@ -247,26 +247,4 @@
typedef int64_t aligned_int64_t __attribute__((aligned(8)));
typedef uint64_t aligned_uint64_t __attribute__((aligned(8)));
-#ifdef CONFIG_ATOMIC64
-/* Use __nocheck because sizeof(void *) might be < sizeof(u64) */
-#define qatomic_read_i64(P) \
- _Generic(*(P), int64_t: qatomic_read__nocheck(P))
-#define qatomic_read_u64(P) \
- _Generic(*(P), uint64_t: qatomic_read__nocheck(P))
-#define qatomic_set_i64(P, V) \
- _Generic(*(P), int64_t: qatomic_set__nocheck(P, V))
-#define qatomic_set_u64(P, V) \
- _Generic(*(P), uint64_t: qatomic_set__nocheck(P, V))
-
-static inline void qatomic64_init(void)
-{
-}
-#else /* !CONFIG_ATOMIC64 */
-int64_t qatomic_read_i64(const int64_t *ptr);
-uint64_t qatomic_read_u64(const uint64_t *ptr);
-void qatomic_set_i64(int64_t *ptr, int64_t val);
-void qatomic_set_u64(uint64_t *ptr, uint64_t val);
-void qatomic64_init(void);
-#endif /* !CONFIG_ATOMIC64 */
-
#endif /* QEMU_ATOMIC_H */
diff --git a/accel/qtest/qtest.c b/accel/qtest/qtest.c
index 1d4337d698..bb1491d93b 100644
--- a/accel/qtest/qtest.c
+++ b/accel/qtest/qtest.c
@@ -31,12 +31,12 @@ static int64_t qtest_clock_counter;
static int64_t qtest_get_virtual_clock(void)
{
- return qatomic_read_i64(&qtest_clock_counter);
+ return qatomic_read(&qtest_clock_counter);
}
static void qtest_set_virtual_clock(int64_t count)
{
- qatomic_set_i64(&qtest_clock_counter, count);
+ qatomic_set(&qtest_clock_counter, count);
}
static int qtest_init_accel(AccelState *as, MachineState *ms)
diff --git a/accel/tcg/icount-common.c b/accel/tcg/icount-common.c
index d6471174a3..b1b6c005fe 100644
--- a/accel/tcg/icount-common.c
+++ b/accel/tcg/icount-common.c
@@ -86,8 +86,8 @@ static void icount_update_locked(CPUState *cpu)
int64_t executed = icount_get_executed(cpu);
cpu->icount_budget -= executed;
- qatomic_set_i64(&timers_state.qemu_icount,
- timers_state.qemu_icount + executed);
+ qatomic_set(&timers_state.qemu_icount,
+ timers_state.qemu_icount + executed);
}
/*
@@ -116,15 +116,14 @@ static int64_t icount_get_raw_locked(void)
/* Take into account what has run */
icount_update_locked(cpu);
}
- /* The read is protected by the seqlock, but needs atomic64 to avoid UB */
- return qatomic_read_i64(&timers_state.qemu_icount);
+ /* The read is protected by the seqlock, but needs atomic to avoid UB */
+ return qatomic_read(&timers_state.qemu_icount);
}
static int64_t icount_get_locked(void)
{
int64_t icount = icount_get_raw_locked();
- return qatomic_read_i64(&timers_state.qemu_icount_bias) +
- icount_to_ns(icount);
+ return qatomic_read(&timers_state.qemu_icount_bias) + icount_to_ns(icount);
}
int64_t icount_get_raw(void)
@@ -201,9 +200,9 @@ static void icount_adjust(void)
timers_state.icount_time_shift + 1);
}
timers_state.last_delta = delta;
- qatomic_set_i64(&timers_state.qemu_icount_bias,
- cur_icount - (timers_state.qemu_icount
- << timers_state.icount_time_shift));
+ qatomic_set(&timers_state.qemu_icount_bias,
+ cur_icount - (timers_state.qemu_icount
+ << timers_state.icount_time_shift));
seqlock_write_unlock(&timers_state.vm_clock_seqlock,
&timers_state.vm_clock_lock);
}
@@ -269,8 +268,8 @@ static void icount_warp_rt(void)
}
warp_delta = MIN(warp_delta, delta);
}
- qatomic_set_i64(&timers_state.qemu_icount_bias,
- timers_state.qemu_icount_bias + warp_delta);
+ qatomic_set(&timers_state.qemu_icount_bias,
+ timers_state.qemu_icount_bias + warp_delta);
}
timers_state.vm_clock_warp_start = -1;
seqlock_write_unlock(&timers_state.vm_clock_seqlock,
@@ -361,8 +360,8 @@ void icount_start_warp_timer(void)
*/
seqlock_write_lock(&timers_state.vm_clock_seqlock,
&timers_state.vm_clock_lock);
- qatomic_set_i64(&timers_state.qemu_icount_bias,
- timers_state.qemu_icount_bias + deadline);
+ qatomic_set(&timers_state.qemu_icount_bias,
+ timers_state.qemu_icount_bias + deadline);
seqlock_write_unlock(&timers_state.vm_clock_seqlock,
&timers_state.vm_clock_lock);
qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
diff --git a/system/dirtylimit.c b/system/dirtylimit.c
index a0c327533c..50fa67f3d6 100644
--- a/system/dirtylimit.c
+++ b/system/dirtylimit.c
@@ -123,7 +123,7 @@ static void *vcpu_dirty_rate_stat_thread(void *opaque)
int64_t vcpu_dirty_rate_get(int cpu_index)
{
DirtyRateVcpu *rates = vcpu_dirty_rate_stat->stat.rates;
- return qatomic_read_i64(&rates[cpu_index].dirty_rate);
+ return qatomic_read(&rates[cpu_index].dirty_rate);
}
void vcpu_dirty_rate_stat_start(void)
diff --git a/tests/unit/test-rcu-list.c b/tests/unit/test-rcu-list.c
index 8f0adb8b00..8dde3e61a8 100644
--- a/tests/unit/test-rcu-list.c
+++ b/tests/unit/test-rcu-list.c
@@ -105,7 +105,7 @@ static void reclaim_list_el(struct rcu_head *prcu)
struct list_element *el = container_of(prcu, struct list_element, rcu);
g_free(el);
/* Accessed only from call_rcu thread. */
- qatomic_set_i64(&n_reclaims, n_reclaims + 1);
+ qatomic_set(&n_reclaims, n_reclaims + 1);
}
#if TEST_LIST_TYPE == 1
@@ -247,7 +247,7 @@ static void *rcu_q_updater(void *arg)
qemu_mutex_lock(&counts_mutex);
n_nodes += n_nodes_local;
n_updates += n_updates_local;
- qatomic_set_i64(&n_nodes_removed, n_nodes_removed + n_removed_local);
+ qatomic_set(&n_nodes_removed, n_nodes_removed + n_removed_local);
qemu_mutex_unlock(&counts_mutex);
return NULL;
}
@@ -301,23 +301,22 @@ static void rcu_qtest(const char *test, int duration, int nreaders)
n_removed_local++;
}
qemu_mutex_lock(&counts_mutex);
- qatomic_set_i64(&n_nodes_removed, n_nodes_removed + n_removed_local);
+ qatomic_set(&n_nodes_removed, n_nodes_removed + n_removed_local);
qemu_mutex_unlock(&counts_mutex);
synchronize_rcu();
- while (qatomic_read_i64(&n_nodes_removed) >
- qatomic_read_i64(&n_reclaims)) {
+ while (qatomic_read(&n_nodes_removed) > qatomic_read(&n_reclaims)) {
g_usleep(100);
synchronize_rcu();
}
if (g_test_in_charge) {
- g_assert_cmpint(qatomic_read_i64(&n_nodes_removed), ==,
- qatomic_read_i64(&n_reclaims));
+ g_assert_cmpint(qatomic_read(&n_nodes_removed), ==,
+ qatomic_read(&n_reclaims));
} else {
printf("%s: %d readers; 1 updater; nodes read: " \
"%lld, nodes removed: %"PRIi64"; nodes reclaimed: %"PRIi64"\n",
test, nthreadsrunning - 1, n_reads,
- qatomic_read_i64(&n_nodes_removed),
- qatomic_read_i64(&n_reclaims));
+ qatomic_read(&n_nodes_removed),
+ qatomic_read(&n_reclaims));
exit(0);
}
}
diff --git a/util/atomic64.c b/util/atomic64.c
deleted file mode 100644
index c20d071d8e..0000000000
--- a/util/atomic64.c
+++ /dev/null
@@ -1,85 +0,0 @@
-/*
- * Copyright (C) 2018, Emilio G. Cota <cota@braap.org>
- *
- * License: GNU GPL, version 2 or later.
- * See the COPYING file in the top-level directory.
- */
-#include "qemu/osdep.h"
-#include "qemu/atomic.h"
-#include "qemu/thread.h"
-#include "qemu/cacheinfo.h"
-#include "qemu/memalign.h"
-
-#ifdef CONFIG_ATOMIC64
-#error This file must only be compiled if !CONFIG_ATOMIC64
-#endif
-
-/*
- * When !CONFIG_ATOMIC64, we serialize both reads and writes with spinlocks.
- * We use an array of spinlocks, with padding computed at run-time based on
- * the host's dcache line size.
- * We point to the array with a void * to simplify the padding's computation.
- * Each spinlock is located every lock_size bytes.
- */
-static void *lock_array;
-static size_t lock_size;
-
-/*
- * Systems without CONFIG_ATOMIC64 are unlikely to have many cores, so we use a
- * small array of locks.
- */
-#define NR_LOCKS 16
-
-static QemuSpin *addr_to_lock(const void *addr)
-{
- uintptr_t a = (uintptr_t)addr;
- uintptr_t idx;
-
- idx = a >> qemu_dcache_linesize_log;
- idx ^= (idx >> 8) ^ (idx >> 16);
- idx &= NR_LOCKS - 1;
- return lock_array + idx * lock_size;
-}
-
-#define GEN_READ(name, type) \
- type name(const type *ptr) \
- { \
- QemuSpin *lock = addr_to_lock(ptr); \
- type ret; \
- \
- qemu_spin_lock(lock); \
- ret = *ptr; \
- qemu_spin_unlock(lock); \
- return ret; \
- }
-
-GEN_READ(qatomic_read_i64, int64_t)
-GEN_READ(qatomic_read_u64, uint64_t)
-#undef GEN_READ
-
-#define GEN_SET(name, type) \
- void name(type *ptr, type val) \
- { \
- QemuSpin *lock = addr_to_lock(ptr); \
- \
- qemu_spin_lock(lock); \
- *ptr = val; \
- qemu_spin_unlock(lock); \
- }
-
-GEN_SET(qatomic_set_i64, int64_t)
-GEN_SET(qatomic_set_u64, uint64_t)
-#undef GEN_SET
-
-void qatomic64_init(void)
-{
- int i;
-
- lock_size = ROUND_UP(sizeof(QemuSpin), qemu_dcache_linesize);
- lock_array = qemu_memalign(qemu_dcache_linesize, lock_size * NR_LOCKS);
- for (i = 0; i < NR_LOCKS; i++) {
- QemuSpin *lock = lock_array + i * lock_size;
-
- qemu_spin_init(lock);
- }
-}
diff --git a/util/cacheflush.c b/util/cacheflush.c
index 99221a409f..c043c5f881 100644
--- a/util/cacheflush.c
+++ b/util/cacheflush.c
@@ -216,8 +216,6 @@ static void __attribute__((constructor)) init_cache_info(void)
qemu_icache_linesize_log = ctz32(isize);
qemu_dcache_linesize = dsize;
qemu_dcache_linesize_log = ctz32(dsize);
-
- qatomic64_init();
}
diff --git a/util/qsp.c b/util/qsp.c
index 6b783e2e7f..382e4397e2 100644
--- a/util/qsp.c
+++ b/util/qsp.c
@@ -346,9 +346,9 @@ static QSPEntry *qsp_entry_get(const void *obj, const char *file, int line,
*/
static inline void do_qsp_entry_record(QSPEntry *e, int64_t delta, bool acq)
{
- qatomic_set_u64(&e->ns, e->ns + delta);
+ qatomic_set(&e->ns, e->ns + delta);
if (acq) {
- qatomic_set_u64(&e->n_acqs, e->n_acqs + 1);
+ qatomic_set(&e->n_acqs, e->n_acqs + 1);
}
}
@@ -538,8 +538,8 @@ static void qsp_aggregate(void *p, uint32_t h, void *up)
* The entry is in the global hash table; read from it atomically (as in
* "read once").
*/
- agg->ns += qatomic_read_u64(&e->ns);
- agg->n_acqs += qatomic_read_u64(&e->n_acqs);
+ agg->ns += qatomic_read(&e->ns);
+ agg->n_acqs += qatomic_read(&e->n_acqs);
}
static void qsp_iter_diff(void *p, uint32_t hash, void *htp)
diff --git a/util/meson.build b/util/meson.build
index d7d6b213f6..59e835a8d3 100644
--- a/util/meson.build
+++ b/util/meson.build
@@ -1,8 +1,5 @@
util_ss.add(files('osdep.c', 'cutils.c', 'unicode.c', 'qemu-timer-common.c'))
util_ss.add(files('thread-context.c'), numa)
-if not config_host_data.get('CONFIG_ATOMIC64')
- util_ss.add(files('atomic64.c'))
-endif
if host_os != 'windows'
util_ss.add(files('aio-posix.c'))
util_ss.add(files('fdmon-poll.c'))
--
2.43.0
* [PULL 53/54] meson: Remove CONFIG_ATOMIC64
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (51 preceding siblings ...)
2026-01-18 22:04 ` [PULL 52/54] include/qemu/atomic: Drop qatomic_{read,set}_[iu]64 Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 22:04 ` [PULL 54/54] include/qemu/atomic: Drop aligned_{u}int64_t Richard Henderson
2026-01-18 23:59 ` [PULL 00/54] Remove 32-bit host support Richard Henderson
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier
This config is no longer used.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
meson.build | 16 ----------------
1 file changed, 16 deletions(-)
diff --git a/meson.build b/meson.build
index 0189d8fd44..3108f01e88 100644
--- a/meson.build
+++ b/meson.build
@@ -2939,22 +2939,6 @@ config_host_data.set('HAVE_BROKEN_SIZE_MAX', not cc.compiles('''
return printf("%zu", SIZE_MAX);
}''', args: ['-Werror']))
-# See if 64-bit atomic operations are supported.
-# Note that without __atomic builtins, we can only
-# assume atomic loads/stores max at pointer size.
-config_host_data.set('CONFIG_ATOMIC64', cc.links('''
- #include <stdint.h>
- int main(void)
- {
- uint64_t x = 0, y = 0;
- y = __atomic_load_n(&x, __ATOMIC_RELAXED);
- __atomic_store_n(&x, y, __ATOMIC_RELAXED);
- __atomic_compare_exchange_n(&x, &y, x, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED);
- __atomic_exchange_n(&x, y, __ATOMIC_RELAXED);
- __atomic_fetch_add(&x, y, __ATOMIC_RELAXED);
- return 0;
- }''', args: qemu_isa_flags))
-
# has_int128_type is set to false on Emscripten to avoid errors by libffi
# during runtime.
has_int128_type = host_os != 'emscripten' and cc.compiles('''
--
2.43.0
* [PULL 54/54] include/qemu/atomic: Drop aligned_{u}int64_t
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (52 preceding siblings ...)
2026-01-18 22:04 ` [PULL 53/54] meson: Remove CONFIG_ATOMIC64 Richard Henderson
@ 2026-01-18 22:04 ` Richard Henderson
2026-01-18 23:59 ` [PULL 00/54] Remove 32-bit host support Richard Henderson
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 22:04 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Pierrick Bouvier
As we no longer support i386 as a host architecture,
this abstraction is no longer required.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/atomic_template.h | 4 ++--
include/qemu/atomic.h | 13 -------------
include/system/cpu-timers-internal.h | 2 +-
linux-user/hppa/cpu_loop.c | 2 +-
util/qsp.c | 4 ++--
5 files changed, 6 insertions(+), 19 deletions(-)
diff --git a/accel/tcg/atomic_template.h b/accel/tcg/atomic_template.h
index ae5203b439..f7924078f7 100644
--- a/accel/tcg/atomic_template.h
+++ b/accel/tcg/atomic_template.h
@@ -27,8 +27,8 @@
# define SHIFT 4
#elif DATA_SIZE == 8
# define SUFFIX q
-# define DATA_TYPE aligned_uint64_t
-# define SDATA_TYPE aligned_int64_t
+# define DATA_TYPE uint64_t
+# define SDATA_TYPE int64_t
# define BSWAP bswap64
# define SHIFT 3
#elif DATA_SIZE == 4
diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
index 27d98014d4..dc9290084b 100644
--- a/include/qemu/atomic.h
+++ b/include/qemu/atomic.h
@@ -234,17 +234,4 @@
_oldn; \
})
-/*
- * Abstractions to access atomically (i.e. "once") i64/u64 variables.
- *
- * The i386 abi is odd in that by default members are only aligned to
- * 4 bytes, which means that 8-byte types can wind up mis-aligned.
- * Clang will then warn about this, and emit a call into libatomic.
- *
- * Use of these types in structures when they will be used with atomic
- * operations can avoid this.
- */
-typedef int64_t aligned_int64_t __attribute__((aligned(8)));
-typedef uint64_t aligned_uint64_t __attribute__((aligned(8)));
-
#endif /* QEMU_ATOMIC_H */
diff --git a/include/system/cpu-timers-internal.h b/include/system/cpu-timers-internal.h
index 94bb7394c5..8c262ce139 100644
--- a/include/system/cpu-timers-internal.h
+++ b/include/system/cpu-timers-internal.h
@@ -47,7 +47,7 @@ typedef struct TimersState {
int64_t last_delta;
/* Compensate for varying guest execution speed. */
- aligned_int64_t qemu_icount_bias;
+ int64_t qemu_icount_bias;
int64_t vm_clock_warp_start;
int64_t cpu_clock_offset;
diff --git a/linux-user/hppa/cpu_loop.c b/linux-user/hppa/cpu_loop.c
index e5c0f52d94..972e85c487 100644
--- a/linux-user/hppa/cpu_loop.c
+++ b/linux-user/hppa/cpu_loop.c
@@ -83,7 +83,7 @@ static abi_ulong hppa_lws(CPUHPPAState *env)
uint64_t o64, n64, r64;
o64 = *(uint64_t *)g2h(cs, old);
n64 = *(uint64_t *)g2h(cs, new);
- r64 = qatomic_cmpxchg((aligned_uint64_t *)g2h(cs, addr), o64, n64);
+ r64 = qatomic_cmpxchg((uint64_t *)g2h(cs, addr), o64, n64);
ret = r64 != o64;
}
break;
diff --git a/util/qsp.c b/util/qsp.c
index 382e4397e2..55477ae025 100644
--- a/util/qsp.c
+++ b/util/qsp.c
@@ -83,8 +83,8 @@ typedef struct QSPCallSite QSPCallSite;
struct QSPEntry {
void *thread_ptr;
const QSPCallSite *callsite;
- aligned_uint64_t n_acqs;
- aligned_uint64_t ns;
+ uint64_t n_acqs;
+ uint64_t ns;
unsigned int n_objs; /* count of coalesced objs; only used for reporting */
};
typedef struct QSPEntry QSPEntry;
--
2.43.0
* Re: [PULL 00/54] Remove 32-bit host support
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
` (53 preceding siblings ...)
2026-01-18 22:04 ` [PULL 54/54] include/qemu/atomic: Drop aligned_{u}int64_t Richard Henderson
@ 2026-01-18 23:59 ` Richard Henderson
54 siblings, 0 replies; 57+ messages in thread
From: Richard Henderson @ 2026-01-18 23:59 UTC (permalink / raw)
To: qemu-devel
On 1/19/26 09:03, Richard Henderson wrote:
> The following changes since commit 42a5675aa9dd718f395ca3279098051dfdbbc6e1:
>
> Merge tag 'accel-20260116' of https://github.com/philmd/qemu into staging (2026-01-16 22:26:36 +1100)
>
> are available in the Git repository at:
>
> https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20260119
>
> for you to fetch changes up to 239b9d0488b270f5781fd7cd7139262c165d0351:
>
> include/qemu/atomic: Drop aligned_{u}int64_t (2026-01-17 10:46:51 +1100)
>
> ----------------------------------------------------------------
> Remove support for 32-bit hosts.
Applied, thanks. Please update https://wiki.qemu.org/ChangeLog/11.0 as appropriate.
r~
* Re: [PULL 07/54] bsd-user: Fix __i386__ test for TARGET_HAS_STAT_TIME_T_EXT
2026-01-18 22:03 ` [PULL 07/54] bsd-user: Fix __i386__ test for TARGET_HAS_STAT_TIME_T_EXT Richard Henderson
@ 2026-01-19 8:01 ` Pierrick Bouvier
0 siblings, 0 replies; 57+ messages in thread
From: Pierrick Bouvier @ 2026-01-19 8:01 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
Cc: Kyle Evans, Warner Losh, Philippe Mathieu-Daudé,
Michael Tokarev
On 1/18/26 2:03 PM, Richard Henderson wrote:
> The target test is TARGET_I386, not __i386__.
>
> Cc: Kyle Evans <kevans@freebsd.org>
> Reviewed-by: Warner Losh <imp@bsdimp.com>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> bsd-user/syscall_defs.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/bsd-user/syscall_defs.h b/bsd-user/syscall_defs.h
> index 52f84d5dd1..c49be32bdc 100644
> --- a/bsd-user/syscall_defs.h
> +++ b/bsd-user/syscall_defs.h
> @@ -247,7 +247,7 @@ struct target_freebsd11_stat {
> unsigned int:(8 / 2) * (16 - (int)sizeof(struct target_freebsd_timespec));
> } __packed;
>
> -#if defined(__i386__)
> +#if defined(TARGET_I386)
> #define TARGET_HAS_STAT_TIME_T_EXT 1
> #endif
>
This commit introduces a regression; the following patch fixes it:
https://lore.kernel.org/qemu-devel/20260119075738.712207-1-pierrick.bouvier@linaro.org/T/#u
@Michael, the current patch is a good candidate for stable, but it must
be picked up together with the fix, or stable will inherit the regression.
Regards,
Pierrick
Thread overview: 57+ messages
2026-01-18 22:03 [PULL 00/54] Remove 32-bit host support Richard Henderson
2026-01-18 22:03 ` [PULL 01/54] gitlab-ci: Drop build-wasm32-32bit Richard Henderson
2026-01-18 22:03 ` [PULL 02/54] tests/docker/dockerfiles: Drop wasm32 from emsdk-wasm-cross.docker Richard Henderson
2026-01-18 22:03 ` [PULL 03/54] gitlab: Remove 32-bit host testing Richard Henderson
2026-01-18 22:03 ` [PULL 04/54] meson: Reject 32-bit hosts Richard Henderson
2026-01-18 22:03 ` [PULL 05/54] meson: Drop cpu == wasm32 tests Richard Henderson
2026-01-18 22:03 ` [PULL 06/54] *: Remove arm host support Richard Henderson
2026-01-18 22:03 ` [PULL 07/54] bsd-user: Fix __i386__ test for TARGET_HAS_STAT_TIME_T_EXT Richard Henderson
2026-01-19 8:01 ` Pierrick Bouvier
2026-01-18 22:03 ` [PULL 08/54] *: Remove __i386__ tests Richard Henderson
2026-01-18 22:03 ` [PULL 09/54] *: Remove i386 host support Richard Henderson
2026-01-18 22:03 ` [PULL 10/54] host/include/x86_64/bufferiszero: Remove no SSE2 fallback Richard Henderson
2026-01-18 22:03 ` [PULL 11/54] meson: Remove cpu == x86 tests Richard Henderson
2026-01-18 22:03 ` [PULL 12/54] *: Remove ppc host support Richard Henderson
2026-01-18 22:03 ` [PULL 13/54] tcg/i386: Remove TCG_TARGET_REG_BITS tests Richard Henderson
2026-01-18 22:03 ` [PULL 14/54] tcg/x86_64: Rename from i386 Richard Henderson
2026-01-18 22:03 ` [PULL 15/54] tcg/ppc64: Rename from ppc Richard Henderson
2026-01-18 22:03 ` [PULL 16/54] meson: Drop host_arch rename for mips64 Richard Henderson
2026-01-18 22:03 ` [PULL 17/54] meson: Drop host_arch rename for riscv64 Richard Henderson
2026-01-18 22:03 ` [PULL 18/54] meson: Remove cpu == riscv32 tests Richard Henderson
2026-01-18 22:03 ` [PULL 19/54] tcg: Make TCG_TARGET_REG_BITS common Richard Henderson
2026-01-18 22:03 ` [PULL 20/54] tcg: Replace TCG_TARGET_REG_BITS / 8 Richard Henderson
2026-01-18 22:03 ` [PULL 21/54] *: Drop TCG_TARGET_REG_BITS test for prefer_i64 Richard Henderson
2026-01-18 22:03 ` [PULL 22/54] tcg: Remove INDEX_op_brcond2_i32 Richard Henderson
2026-01-18 22:03 ` [PULL 23/54] tcg: Remove INDEX_op_setcond2_i32 Richard Henderson
2026-01-18 22:03 ` [PULL 24/54] tcg: Remove INDEX_op_dup2_vec Richard Henderson
2026-01-18 22:03 ` [PULL 25/54] tcg/tci: Drop TCG_TARGET_REG_BITS tests Richard Henderson
2026-01-18 22:03 ` [PULL 26/54] tcg/tci: Remove glue TCG_TARGET_REG_BITS renames Richard Henderson
2026-01-18 22:03 ` [PULL 27/54] tcg: Drop TCG_TARGET_REG_BITS test in region.c Richard Henderson
2026-01-18 22:03 ` [PULL 28/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op.c Richard Henderson
2026-01-18 22:03 ` [PULL 29/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op-gvec.c Richard Henderson
2026-01-18 22:03 ` [PULL 30/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-op-ldst.c Richard Henderson
2026-01-18 22:03 ` [PULL 31/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg.c Richard Henderson
2026-01-18 22:03 ` [PULL 32/54] tcg: Drop TCG_TARGET_REG_BITS tests in tcg-internal.h Richard Henderson
2026-01-18 22:03 ` [PULL 33/54] tcg: Drop TCG_TARGET_REG_BITS test in tcg-has.h Richard Henderson
2026-01-18 22:03 ` [PULL 34/54] include/tcg: Drop TCG_TARGET_REG_BITS tests Richard Henderson
2026-01-18 22:03 ` [PULL 35/54] target/i386/tcg: Drop TCG_TARGET_REG_BITS test Richard Henderson
2026-01-18 22:03 ` [PULL 36/54] target/riscv: " Richard Henderson
2026-01-18 22:03 ` [PULL 37/54] accel/tcg/runtime: Remove 64-bit shift helpers Richard Henderson
2026-01-18 22:03 ` [PULL 38/54] accel/tcg/runtime: Remove helper_nonatomic_cmpxchgo Richard Henderson
2026-01-18 22:03 ` [PULL 39/54] tcg: Unconditionally define atomic64 helpers Richard Henderson
2026-01-18 22:04 ` [PULL 40/54] accel/tcg: Drop CONFIG_ATOMIC64 checks from ldst_atomicicy.c.inc Richard Henderson
2026-01-18 22:04 ` [PULL 41/54] accel/tcg: Drop CONFIG_ATOMIC64 test from translator.c Richard Henderson
2026-01-18 22:04 ` [PULL 42/54] linux-user/arm: Drop CONFIG_ATOMIC64 test Richard Henderson
2026-01-18 22:04 ` [PULL 43/54] linux-user/hppa: " Richard Henderson
2026-01-18 22:04 ` [PULL 44/54] target/arm: Drop CONFIG_ATOMIC64 tests Richard Henderson
2026-01-18 22:04 ` [PULL 45/54] target/hppa: Drop CONFIG_ATOMIC64 test Richard Henderson
2026-01-18 22:04 ` [PULL 46/54] target/m68k: Drop CONFIG_ATOMIC64 tests Richard Henderson
2026-01-18 22:04 ` [PULL 47/54] target/s390x: " Richard Henderson
2026-01-18 22:04 ` [PULL 48/54] target/s390x: Simplify atomicity check in do_csst Richard Henderson
2026-01-18 22:04 ` [PULL 49/54] migration: Drop use of Stat64 Richard Henderson
2026-01-18 22:04 ` [PULL 50/54] block: " Richard Henderson
2026-01-18 22:04 ` [PULL 51/54] util: Remove stats64 Richard Henderson
2026-01-18 22:04 ` [PULL 52/54] include/qemu/atomic: Drop qatomic_{read,set}_[iu]64 Richard Henderson
2026-01-18 22:04 ` [PULL 53/54] meson: Remove CONFIG_ATOMIC64 Richard Henderson
2026-01-18 22:04 ` [PULL 54/54] include/qemu/atomic: Drop aligned_{u}int64_t Richard Henderson
2026-01-18 23:59 ` [PULL 00/54] Remove 32-bit host support Richard Henderson