* [PULL 00/16] Accelerators and TCG patches for 2026-03-10
@ 2026-03-10 10:44 Philippe Mathieu-Daudé
2026-03-10 10:44 ` [PULL 01/16] tcg: Drop extract+shl expansions in tcg_gen_deposit_z_* Philippe Mathieu-Daudé
` (17 more replies)
0 siblings, 18 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2026-03-10 10:44 UTC (permalink / raw)
To: qemu-devel
The following changes since commit 31ee190665dd50054c39cef5ad740680aabda382:
Merge tag 'hw-misc-20260309' of https://github.com/philmd/qemu into staging (2026-03-09 17:19:26 +0000)
are available in the Git repository at:
https://github.com/philmd/qemu.git tags/accel-tcg-20260310
for you to fetch changes up to c574ff9245c7e9d10433efd60df2319ff4d69084:
accel/qtest: Build once as common object (2026-03-10 11:36:21 +0100)
----------------------------------------------------------------
Accelerators and TCG patches queue
- Improve TCG extract and deposit
- Build accelerator stub files once
----------------------------------------------------------------
Paolo Bonzini (2):
tcg: Add tcg_op_imm_match
tcg: target-dependent lowering of extract to shr/and
Philippe Mathieu-Daudé (9):
target/hppa: Expand tcg_global_mem_new() -> tcg_global_mem_new_i64()
accel/kvm: Include missing 'exec/cpu-common.h' header
accel/kvm: Make kvm_irqchip*notifier() declaration non target-specific
accel/stubs: Build stubs once
accel/mshv: Forward-declare mshv_root_hvcall structure
accel/mshv: Build without target-specific knowledge
accel/hvf: Build without target-specific knowledge
accel/xen: Build without target-specific knowledge
accel/qtest: Build once as common object
Richard Henderson (5):
tcg: Drop extract+shl expansions in tcg_gen_deposit_z_*
tcg/optimize: Lower unsupported deposit during optimize
tcg/optimize: Lower unsupported extract2 during optimize
tcg: Expand missing rotri with extract2
tcg/optimize: possibly expand deposit into zero with shifts
meson.build | 3 -
include/system/kvm.h | 8 +-
include/system/mshv_int.h | 5 +-
tcg/tcg-internal.h | 5 +
accel/kvm/kvm-accel-ops.c | 1 +
accel/mshv/mshv-all.c | 2 +-
target/hppa/translate.c | 8 +-
tcg/optimize.c | 266 ++++++++++++++++++++++++++++++++++----
tcg/tcg-op.c | 215 +++++++-----------------------
tcg/tcg.c | 21 ++-
accel/hvf/meson.build | 5 +-
accel/mshv/meson.build | 5 +-
accel/qtest/meson.build | 5 +-
accel/stubs/meson.build | 21 ++-
accel/xen/meson.build | 2 +-
15 files changed, 338 insertions(+), 234 deletions(-)
--
2.53.0
* [PULL 01/16] tcg: Drop extract+shl expansions in tcg_gen_deposit_z_*
From: Philippe Mathieu-Daudé @ 2026-03-10 10:44 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
The extract+shl expansion is already handled in tcg_gen_andi_*
by preferring a supported extract operation.
The shl+extract expansion is simply removed for now; it was
only present for slightly smaller code generation on x86.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20260303010833.1115741-2-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
tcg/tcg-op.c | 30 ------------------------------
1 file changed, 30 deletions(-)
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 8d67acc4fce..b95b07efb57 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -937,21 +937,6 @@ void tcg_gen_deposit_z_i32(TCGv_i32 ret, TCGv_i32 arg,
TCGv_i32 zero = tcg_constant_i32(0);
tcg_gen_op5ii_i32(INDEX_op_deposit, ret, zero, arg, ofs, len);
} else {
- /*
- * To help two-operand hosts we prefer to zero-extend first,
- * which allows ARG to stay live.
- */
- if (TCG_TARGET_extract_valid(TCG_TYPE_I32, 0, len)) {
- tcg_gen_extract_i32(ret, arg, 0, len);
- tcg_gen_shli_i32(ret, ret, ofs);
- return;
- }
- /* Otherwise prefer zero-extension over AND for code size. */
- if (TCG_TARGET_extract_valid(TCG_TYPE_I32, 0, ofs + len)) {
- tcg_gen_shli_i32(ret, arg, ofs);
- tcg_gen_extract_i32(ret, ret, 0, ofs + len);
- return;
- }
tcg_gen_andi_i32(ret, arg, (1u << len) - 1);
tcg_gen_shli_i32(ret, ret, ofs);
}
@@ -2210,21 +2195,6 @@ void tcg_gen_deposit_z_i64(TCGv_i64 ret, TCGv_i64 arg,
TCGv_i64 zero = tcg_constant_i64(0);
tcg_gen_op5ii_i64(INDEX_op_deposit, ret, zero, arg, ofs, len);
} else {
- /*
- * To help two-operand hosts we prefer to zero-extend first,
- * which allows ARG to stay live.
- */
- if (TCG_TARGET_extract_valid(TCG_TYPE_I64, 0, len)) {
- tcg_gen_extract_i64(ret, arg, 0, len);
- tcg_gen_shli_i64(ret, ret, ofs);
- return;
- }
- /* Otherwise prefer zero-extension over AND for code size. */
- if (TCG_TARGET_extract_valid(TCG_TYPE_I64, 0, ofs + len)) {
- tcg_gen_shli_i64(ret, arg, ofs);
- tcg_gen_extract_i64(ret, ret, 0, ofs + len);
- return;
- }
tcg_gen_andi_i64(ret, arg, (1ull << len) - 1);
tcg_gen_shli_i64(ret, ret, ofs);
}
--
2.53.0
* [PULL 02/16] tcg/optimize: Lower unsupported deposit during optimize
From: Philippe Mathieu-Daudé @ 2026-03-10 10:44 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
The expansions that we chose in tcg-op.c may be less than optimal.
Delay lowering until optimize, so that we have propagated constants
and have computed known zero/one masks.
Reviewed-by: Jim MacArthur <jim.macarthur@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20260303010833.1115741-3-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
tcg/optimize.c | 179 +++++++++++++++++++++++++++++++++++++++++++------
tcg/tcg-op.c | 83 ++---------------------
2 files changed, 163 insertions(+), 99 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 801a0a2c686..42637c12fa1 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1652,12 +1652,17 @@ static bool fold_ctpop(OptContext *ctx, TCGOp *op)
static bool fold_deposit(OptContext *ctx, TCGOp *op)
{
- TempOptInfo *t1 = arg_info(op->args[1]);
- TempOptInfo *t2 = arg_info(op->args[2]);
+ TCGArg ret = op->args[0];
+ TCGArg arg1 = op->args[1];
+ TCGArg arg2 = op->args[2];
int ofs = op->args[3];
int len = op->args[4];
- int width = 8 * tcg_type_size(ctx->type);
- uint64_t z_mask, o_mask, s_mask;
+ TempOptInfo *t1 = arg_info(arg1);
+ TempOptInfo *t2 = arg_info(arg2);
+ int width;
+ uint64_t z_mask, o_mask, s_mask, type_mask, len_mask;
+ TCGOp *op2;
+ bool valid;
if (ti_is_const(t1) && ti_is_const(t2)) {
return tcg_opt_gen_movi(ctx, op, op->args[0],
@@ -1665,35 +1670,167 @@ static bool fold_deposit(OptContext *ctx, TCGOp *op)
ti_const_val(t2)));
}
- /* Inserting a value into zero at offset 0. */
- if (ti_is_const_val(t1, 0) && ofs == 0) {
- uint64_t mask = MAKE_64BIT_MASK(0, len);
+ width = 8 * tcg_type_size(ctx->type);
+ type_mask = MAKE_64BIT_MASK(0, width);
+ len_mask = MAKE_64BIT_MASK(0, len);
+ /* Inserting all-zero into a value. */
+ if ((t2->z_mask & len_mask) == 0) {
op->opc = INDEX_op_and;
- op->args[1] = op->args[2];
- op->args[2] = arg_new_constant(ctx, mask);
+ op->args[2] = arg_new_constant(ctx, ~(len_mask << ofs));
return fold_and(ctx, op);
}
- /* Inserting zero into a value. */
- if (ti_is_const_val(t2, 0)) {
- uint64_t mask = deposit64(-1, ofs, len, 0);
-
- op->opc = INDEX_op_and;
- op->args[2] = arg_new_constant(ctx, mask);
- return fold_and(ctx, op);
+ /* Inserting all-one into a value. */
+ if ((t2->o_mask & len_mask) == len_mask) {
+ op->opc = INDEX_op_or;
+ op->args[2] = arg_new_constant(ctx, len_mask << ofs);
+ return fold_or(ctx, op);
}
- /* The s_mask from the top portion of the deposit is still valid. */
- if (ofs + len == width) {
- s_mask = t2->s_mask << ofs;
- } else {
- s_mask = t1->s_mask & ~MAKE_64BIT_MASK(0, ofs + len);
+ valid = TCG_TARGET_deposit_valid(ctx->type, ofs, len);
+
+ /* Lower invalid deposit of constant as AND + OR. */
+ if (!valid && ti_is_const(t2)) {
+ uint64_t ins_val = (ti_const_val(t2) & len_mask) << ofs;
+
+ op2 = opt_insert_before(ctx, op, INDEX_op_and, 3);
+ op2->args[0] = ret;
+ op2->args[1] = arg1;
+ op2->args[2] = arg_new_constant(ctx, ~(len_mask << ofs));
+ fold_and(ctx, op2);
+
+ op->opc = INDEX_op_or;
+ op->args[1] = ret;
+ op->args[2] = arg_new_constant(ctx, ins_val);
+ return fold_or(ctx, op);
}
+ /*
+ * Compute result masks before calling other fold_* subroutines
+ * which could modify the masks of our inputs.
+ */
z_mask = deposit64(t1->z_mask, ofs, len, t2->z_mask);
o_mask = deposit64(t1->o_mask, ofs, len, t2->o_mask);
+ if (ofs + len < width) {
+ s_mask = t1->s_mask & ~MAKE_64BIT_MASK(0, ofs + len);
+ } else {
+ s_mask = t2->s_mask << ofs;
+ }
+ /* Inserting a value into zero. */
+ if (ti_is_const_val(t1, 0)) {
+ uint64_t need_mask;
+
+ /* Always lower deposit into zero at 0 as AND. */
+ if (ofs == 0) {
+ op->opc = INDEX_op_and;
+ op->args[1] = arg2;
+ op->args[2] = arg_new_constant(ctx, len_mask);
+ return fold_and(ctx, op);
+ }
+
+ /*
+ * If the portion of the value outside len that remains after
+ * shifting is zero, we can elide the mask and just shift.
+ */
+ need_mask = t2->z_mask & ~len_mask;
+ need_mask = (need_mask << ofs) & type_mask;
+ if (!need_mask) {
+ op->opc = INDEX_op_shl;
+ op->args[1] = arg2;
+ op->args[2] = arg_new_constant(ctx, ofs);
+ goto done;
+ }
+
+ /* Lower invalid deposit into zero as AND + SHL. */
+ if (!valid) {
+ op2 = opt_insert_before(ctx, op, INDEX_op_and, 3);
+ op2->args[0] = ret;
+ op2->args[1] = arg2;
+ op2->args[2] = arg_new_constant(ctx, len_mask);
+ fold_and(ctx, op2);
+
+ op->opc = INDEX_op_shl;
+ op->args[1] = ret;
+ op->args[2] = arg_new_constant(ctx, ofs);
+ goto done;
+ }
+ }
+
+ /* After special cases, lower invalid deposit. */
+ if (!valid) {
+ TCGArg tmp;
+
+ if (tcg_op_supported(INDEX_op_extract2, ctx->type, 0)) {
+ if (ofs == 0 && tcg_op_supported(INDEX_op_rotl, ctx->type, 0)) {
+ /*
+ * ret = arg2:arg1 >> len
+ * ret = rotl(ret, len)
+ */
+ op2 = opt_insert_before(ctx, op, INDEX_op_extract2, 4);
+ op2->args[0] = ret;
+ op2->args[1] = arg1;
+ op2->args[2] = arg2;
+ op2->args[3] = len;
+
+ op->opc = INDEX_op_rotl;
+ op->args[1] = ret;
+ op->args[2] = arg_new_constant(ctx, len);
+ goto done;
+ }
+ if (ofs + len == width) {
+ /*
+ * tmp = arg1 << len
+ * ret = arg2:tmp >> len
+ */
+ tmp = ret == arg2 ? arg_new_temp(ctx) : ret;
+
+ op2 = opt_insert_before(ctx, op, INDEX_op_shl, 4);
+ op2->args[0] = tmp;
+ op2->args[1] = arg1;
+ op2->args[2] = arg_new_constant(ctx, len);
+
+ op->opc = INDEX_op_extract2;
+ op->args[0] = ret;
+ op->args[1] = tmp;
+ op->args[2] = arg2;
+ op->args[3] = len;
+ goto done;
+ }
+ }
+
+ /*
+ * tmp = arg2 & mask
+ * ret = arg1 & ~(mask << ofs)
+ * tmp = tmp << ofs
+ * ret = ret | tmp
+ */
+ tmp = arg_new_temp(ctx);
+
+ op2 = opt_insert_before(ctx, op, INDEX_op_and, 3);
+ op2->args[0] = tmp;
+ op2->args[1] = arg2;
+ op2->args[2] = arg_new_constant(ctx, len_mask);
+ fold_and(ctx, op2);
+
+ op2 = opt_insert_before(ctx, op, INDEX_op_shl, 3);
+ op2->args[0] = tmp;
+ op2->args[1] = tmp;
+ op2->args[2] = arg_new_constant(ctx, ofs);
+
+ op2 = opt_insert_before(ctx, op, INDEX_op_and, 3);
+ op2->args[0] = ret;
+ op2->args[1] = arg1;
+ op2->args[2] = arg_new_constant(ctx, ~(len_mask << ofs));
+ fold_and(ctx, op2);
+
+ op->opc = INDEX_op_or;
+ op->args[1] = ret;
+ op->args[2] = tmp;
+ }
+
+ done:
return fold_masks_zos(ctx, op, z_mask, o_mask, s_mask);
}
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index b95b07efb57..96f72ba381c 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -876,9 +876,6 @@ void tcg_gen_rotri_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
void tcg_gen_deposit_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2,
unsigned int ofs, unsigned int len)
{
- uint32_t mask;
- TCGv_i32 t1;
-
tcg_debug_assert(ofs < 32);
tcg_debug_assert(len > 0);
tcg_debug_assert(len <= 32);
@@ -886,39 +883,9 @@ void tcg_gen_deposit_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2,
if (len == 32) {
tcg_gen_mov_i32(ret, arg2);
- return;
- }
- if (TCG_TARGET_deposit_valid(TCG_TYPE_I32, ofs, len)) {
- tcg_gen_op5ii_i32(INDEX_op_deposit, ret, arg1, arg2, ofs, len);
- return;
- }
-
- t1 = tcg_temp_ebb_new_i32();
-
- if (tcg_op_supported(INDEX_op_extract2, TCG_TYPE_I32, 0)) {
- if (ofs + len == 32) {
- tcg_gen_shli_i32(t1, arg1, len);
- tcg_gen_extract2_i32(ret, t1, arg2, len);
- goto done;
- }
- if (ofs == 0) {
- tcg_gen_extract2_i32(ret, arg1, arg2, len);
- tcg_gen_rotli_i32(ret, ret, len);
- goto done;
- }
- }
-
- mask = (1u << len) - 1;
- if (ofs + len < 32) {
- tcg_gen_andi_i32(t1, arg2, mask);
- tcg_gen_shli_i32(t1, t1, ofs);
} else {
- tcg_gen_shli_i32(t1, arg2, ofs);
+ tcg_gen_op5ii_i32(INDEX_op_deposit, ret, arg1, arg2, ofs, len);
}
- tcg_gen_andi_i32(ret, arg1, ~(mask << ofs));
- tcg_gen_or_i32(ret, ret, t1);
- done:
- tcg_temp_free_i32(t1);
}
void tcg_gen_deposit_z_i32(TCGv_i32 ret, TCGv_i32 arg,
@@ -932,13 +899,10 @@ void tcg_gen_deposit_z_i32(TCGv_i32 ret, TCGv_i32 arg,
if (ofs + len == 32) {
tcg_gen_shli_i32(ret, arg, ofs);
} else if (ofs == 0) {
- tcg_gen_andi_i32(ret, arg, (1u << len) - 1);
- } else if (TCG_TARGET_deposit_valid(TCG_TYPE_I32, ofs, len)) {
+ tcg_gen_extract_i32(ret, arg, 0, len);
+ } else {
TCGv_i32 zero = tcg_constant_i32(0);
tcg_gen_op5ii_i32(INDEX_op_deposit, ret, zero, arg, ofs, len);
- } else {
- tcg_gen_andi_i32(ret, arg, (1u << len) - 1);
- tcg_gen_shli_i32(ret, ret, ofs);
}
}
@@ -2133,9 +2097,6 @@ void tcg_gen_rotri_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
void tcg_gen_deposit_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2,
unsigned int ofs, unsigned int len)
{
- uint64_t mask;
- TCGv_i64 t1;
-
tcg_debug_assert(ofs < 64);
tcg_debug_assert(len > 0);
tcg_debug_assert(len <= 64);
@@ -2143,40 +2104,9 @@ void tcg_gen_deposit_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2,
if (len == 64) {
tcg_gen_mov_i64(ret, arg2);
- return;
- }
-
- if (TCG_TARGET_deposit_valid(TCG_TYPE_I64, ofs, len)) {
- tcg_gen_op5ii_i64(INDEX_op_deposit, ret, arg1, arg2, ofs, len);
- return;
- }
-
- t1 = tcg_temp_ebb_new_i64();
-
- if (tcg_op_supported(INDEX_op_extract2, TCG_TYPE_I64, 0)) {
- if (ofs + len == 64) {
- tcg_gen_shli_i64(t1, arg1, len);
- tcg_gen_extract2_i64(ret, t1, arg2, len);
- goto done;
- }
- if (ofs == 0) {
- tcg_gen_extract2_i64(ret, arg1, arg2, len);
- tcg_gen_rotli_i64(ret, ret, len);
- goto done;
- }
- }
-
- mask = (1ull << len) - 1;
- if (ofs + len < 64) {
- tcg_gen_andi_i64(t1, arg2, mask);
- tcg_gen_shli_i64(t1, t1, ofs);
} else {
- tcg_gen_shli_i64(t1, arg2, ofs);
+ tcg_gen_op5ii_i64(INDEX_op_deposit, ret, arg1, arg2, ofs, len);
}
- tcg_gen_andi_i64(ret, arg1, ~(mask << ofs));
- tcg_gen_or_i64(ret, ret, t1);
- done:
- tcg_temp_free_i64(t1);
}
void tcg_gen_deposit_z_i64(TCGv_i64 ret, TCGv_i64 arg,
@@ -2191,12 +2121,9 @@ void tcg_gen_deposit_z_i64(TCGv_i64 ret, TCGv_i64 arg,
tcg_gen_shli_i64(ret, arg, ofs);
} else if (ofs == 0) {
tcg_gen_andi_i64(ret, arg, (1ull << len) - 1);
- } else if (TCG_TARGET_deposit_valid(TCG_TYPE_I64, ofs, len)) {
+ } else {
TCGv_i64 zero = tcg_constant_i64(0);
tcg_gen_op5ii_i64(INDEX_op_deposit, ret, zero, arg, ofs, len);
- } else {
- tcg_gen_andi_i64(ret, arg, (1ull << len) - 1);
- tcg_gen_shli_i64(ret, ret, ofs);
}
}
--
2.53.0
* [PULL 03/16] tcg/optimize: Lower unsupported extract2 during optimize
From: Philippe Mathieu-Daudé @ 2026-03-10 10:44 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
The expansions that we chose in tcg-op.c may be less than optimal.
Delay lowering until optimize, so that we have propagated constants
and have computed known zero/one masks.
Reviewed-by: Jim MacArthur <jim.macarthur@linaro.org>
Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20260303010833.1115741-4-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
tcg/optimize.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++----
tcg/tcg-op.c | 14 ++---------
2 files changed, 60 insertions(+), 17 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 42637c12fa1..59761c2c844 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1918,21 +1918,74 @@ static bool fold_extract2(OptContext *ctx, TCGOp *op)
uint64_t z2 = t2->z_mask;
uint64_t o1 = t1->o_mask;
uint64_t o2 = t2->o_mask;
+ uint64_t zr, or;
int shr = op->args[3];
+ int shl;
if (ctx->type == TCG_TYPE_I32) {
z1 = (uint32_t)z1 >> shr;
o1 = (uint32_t)o1 >> shr;
- z2 = (uint64_t)((int32_t)z2 << (32 - shr));
- o2 = (uint64_t)((int32_t)o2 << (32 - shr));
+ shl = 32 - shr;
+ z2 = (uint64_t)((int32_t)z2 << shl);
+ o2 = (uint64_t)((int32_t)o2 << shl);
} else {
z1 >>= shr;
o1 >>= shr;
- z2 <<= 64 - shr;
- o2 <<= 64 - shr;
+ shl = 64 - shr;
+ z2 <<= shl;
+ o2 <<= shl;
+ }
+ zr = z1 | z2;
+ or = o1 | o2;
+
+ if (zr == or) {
+ return tcg_opt_gen_movi(ctx, op, op->args[0], zr);
}
- return fold_masks_zo(ctx, op, z1 | z2, o1 | o2);
+ if (z2 == 0) {
+ /* High part zeros folds to simple right shift. */
+ op->opc = INDEX_op_shr;
+ op->args[2] = arg_new_constant(ctx, shr);
+ } else if (z1 == 0) {
+ /* Low part zeros folds to simple left shift. */
+ op->opc = INDEX_op_shl;
+ op->args[1] = op->args[2];
+ op->args[2] = arg_new_constant(ctx, shl);
+ } else if (!tcg_op_supported(INDEX_op_extract2, ctx->type, 0)) {
+ TCGArg tmp = arg_new_temp(ctx);
+ TCGOp *op2 = opt_insert_before(ctx, op, INDEX_op_shr, 3);
+
+ op2->args[0] = tmp;
+ op2->args[1] = op->args[1];
+ op2->args[2] = arg_new_constant(ctx, shr);
+
+ if (TCG_TARGET_deposit_valid(ctx->type, shl, shr)) {
+ /*
+ * Deposit has more arguments than extract2,
+ * so we need to create a new TCGOp.
+ */
+ op2 = opt_insert_before(ctx, op, INDEX_op_deposit, 5);
+ op2->args[0] = op->args[0];
+ op2->args[1] = tmp;
+ op2->args[2] = op->args[2];
+ op2->args[3] = shl;
+ op2->args[4] = shr;
+
+ tcg_op_remove(ctx->tcg, op);
+ op = op2;
+ } else {
+ op2 = opt_insert_before(ctx, op, INDEX_op_shl, 3);
+ op2->args[0] = op->args[0];
+ op2->args[1] = op->args[2];
+ op2->args[2] = arg_new_constant(ctx, shl);
+
+ op->opc = INDEX_op_or;
+ op->args[1] = op->args[0];
+ op->args[2] = tmp;
+ }
+ }
+
+ return fold_masks_zo(ctx, op, zr, or);
}
static bool fold_exts(OptContext *ctx, TCGOp *op)
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 96f72ba381c..fc2254f54a7 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -1000,13 +1000,8 @@ void tcg_gen_extract2_i32(TCGv_i32 ret, TCGv_i32 al, TCGv_i32 ah,
tcg_gen_mov_i32(ret, ah);
} else if (al == ah) {
tcg_gen_rotri_i32(ret, al, ofs);
- } else if (tcg_op_supported(INDEX_op_extract2, TCG_TYPE_I32, 0)) {
- tcg_gen_op4i_i32(INDEX_op_extract2, ret, al, ah, ofs);
} else {
- TCGv_i32 t0 = tcg_temp_ebb_new_i32();
- tcg_gen_shri_i32(t0, al, ofs);
- tcg_gen_deposit_i32(ret, t0, ah, 32 - ofs, ofs);
- tcg_temp_free_i32(t0);
+ tcg_gen_op4i_i32(INDEX_op_extract2, ret, al, ah, ofs);
}
}
@@ -2221,13 +2216,8 @@ void tcg_gen_extract2_i64(TCGv_i64 ret, TCGv_i64 al, TCGv_i64 ah,
tcg_gen_mov_i64(ret, ah);
} else if (al == ah) {
tcg_gen_rotri_i64(ret, al, ofs);
- } else if (tcg_op_supported(INDEX_op_extract2, TCG_TYPE_I64, 0)) {
- tcg_gen_op4i_i64(INDEX_op_extract2, ret, al, ah, ofs);
} else {
- TCGv_i64 t0 = tcg_temp_ebb_new_i64();
- tcg_gen_shri_i64(t0, al, ofs);
- tcg_gen_deposit_i64(ret, t0, ah, 64 - ofs, ofs);
- tcg_temp_free_i64(t0);
+ tcg_gen_op4i_i64(INDEX_op_extract2, ret, al, ah, ofs);
}
}
--
2.53.0
* [PULL 04/16] tcg: Expand missing rotri with extract2
From: Philippe Mathieu-Daudé @ 2026-03-10 10:44 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Use extract2 to implement rotri. To make this easier,
redefine rotli in terms of rotri, rather than the reverse.
Reviewed-by: Jim MacArthur <jim.macarthur@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20260303010833.1115741-5-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
tcg/tcg-op.c | 52 ++++++++++++++++++++++++----------------------------
1 file changed, 24 insertions(+), 28 deletions(-)
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index fc2254f54a7..13a56d89fa1 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -826,23 +826,12 @@ void tcg_gen_rotl_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
void tcg_gen_rotli_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
{
tcg_debug_assert(arg2 >= 0 && arg2 < 32);
- /* some cases can be optimized here */
if (arg2 == 0) {
tcg_gen_mov_i32(ret, arg1);
} else if (tcg_op_supported(INDEX_op_rotl, TCG_TYPE_I32, 0)) {
- TCGv_i32 t0 = tcg_constant_i32(arg2);
- tcg_gen_op3_i32(INDEX_op_rotl, ret, arg1, t0);
- } else if (tcg_op_supported(INDEX_op_rotr, TCG_TYPE_I32, 0)) {
- TCGv_i32 t0 = tcg_constant_i32(32 - arg2);
- tcg_gen_op3_i32(INDEX_op_rotr, ret, arg1, t0);
+ tcg_gen_op3_i32(INDEX_op_rotl, ret, arg1, tcg_constant_i32(arg2));
} else {
- TCGv_i32 t0 = tcg_temp_ebb_new_i32();
- TCGv_i32 t1 = tcg_temp_ebb_new_i32();
- tcg_gen_shli_i32(t0, arg1, arg2);
- tcg_gen_shri_i32(t1, arg1, 32 - arg2);
- tcg_gen_or_i32(ret, t0, t1);
- tcg_temp_free_i32(t0);
- tcg_temp_free_i32(t1);
+ tcg_gen_rotri_i32(ret, arg1, -arg2 & 31);
}
}
@@ -870,7 +859,16 @@ void tcg_gen_rotr_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
void tcg_gen_rotri_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
{
tcg_debug_assert(arg2 >= 0 && arg2 < 32);
- tcg_gen_rotli_i32(ret, arg1, -arg2 & 31);
+ if (arg2 == 0) {
+ tcg_gen_mov_i32(ret, arg1);
+ } else if (tcg_op_supported(INDEX_op_rotr, TCG_TYPE_I32, 0)) {
+ tcg_gen_op3_i32(INDEX_op_rotr, ret, arg1, tcg_constant_i32(arg2));
+ } else if (tcg_op_supported(INDEX_op_rotl, TCG_TYPE_I32, 0)) {
+ tcg_gen_op3_i32(INDEX_op_rotl, ret, arg1, tcg_constant_i32(32 - arg2));
+ } else {
+ /* Do not recurse with the rotri simplification. */
+ tcg_gen_op4i_i32(INDEX_op_extract2, ret, arg1, arg1, arg2);
+ }
}
void tcg_gen_deposit_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2,
@@ -2042,23 +2040,12 @@ void tcg_gen_rotl_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
void tcg_gen_rotli_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
{
tcg_debug_assert(arg2 >= 0 && arg2 < 64);
- /* some cases can be optimized here */
if (arg2 == 0) {
tcg_gen_mov_i64(ret, arg1);
} else if (tcg_op_supported(INDEX_op_rotl, TCG_TYPE_I64, 0)) {
- TCGv_i64 t0 = tcg_constant_i64(arg2);
- tcg_gen_op3_i64(INDEX_op_rotl, ret, arg1, t0);
- } else if (tcg_op_supported(INDEX_op_rotr, TCG_TYPE_I64, 0)) {
- TCGv_i64 t0 = tcg_constant_i64(64 - arg2);
- tcg_gen_op3_i64(INDEX_op_rotr, ret, arg1, t0);
+ tcg_gen_op3_i64(INDEX_op_rotl, ret, arg1, tcg_constant_i64(arg2));
} else {
- TCGv_i64 t0 = tcg_temp_ebb_new_i64();
- TCGv_i64 t1 = tcg_temp_ebb_new_i64();
- tcg_gen_shli_i64(t0, arg1, arg2);
- tcg_gen_shri_i64(t1, arg1, 64 - arg2);
- tcg_gen_or_i64(ret, t0, t1);
- tcg_temp_free_i64(t0);
- tcg_temp_free_i64(t1);
+ tcg_gen_rotri_i64(ret, arg1, -arg2 & 63);
}
}
@@ -2086,7 +2073,16 @@ void tcg_gen_rotr_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
void tcg_gen_rotri_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
{
tcg_debug_assert(arg2 >= 0 && arg2 < 64);
- tcg_gen_rotli_i64(ret, arg1, -arg2 & 63);
+ if (arg2 == 0) {
+ tcg_gen_mov_i64(ret, arg1);
+ } else if (tcg_op_supported(INDEX_op_rotr, TCG_TYPE_I64, 0)) {
+ tcg_gen_op3_i64(INDEX_op_rotr, ret, arg1, tcg_constant_i64(arg2));
+ } else if (tcg_op_supported(INDEX_op_rotl, TCG_TYPE_I64, 0)) {
+ tcg_gen_op3_i64(INDEX_op_rotl, ret, arg1, tcg_constant_i64(64 - arg2));
+ } else {
+ /* Do not recurse with the rotri simplification. */
+ tcg_gen_op4i_i64(INDEX_op_extract2, ret, arg1, arg1, arg2);
+ }
}
void tcg_gen_deposit_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2,
--
2.53.0
* [PULL 05/16] tcg: Add tcg_op_imm_match
From: Philippe Mathieu-Daudé @ 2026-03-10 10:45 UTC (permalink / raw)
To: qemu-devel
From: Paolo Bonzini <pbonzini@redhat.com>
Create a function to test whether the second operand of a
binary operation allows a given immediate.
Reviewed-by: Jim MacArthur <jim.macarthur@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[rth: Split out from a larger patch; keep the declaration internal.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20260303010833.1115741-6-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
tcg/tcg-internal.h | 5 +++++
tcg/tcg.c | 21 +++++++++++++++++----
2 files changed, 22 insertions(+), 4 deletions(-)
diff --git a/tcg/tcg-internal.h b/tcg/tcg-internal.h
index 26156846120..c0997ab2243 100644
--- a/tcg/tcg-internal.h
+++ b/tcg/tcg-internal.h
@@ -100,4 +100,9 @@ TCGOp *tcg_op_insert_before(TCGContext *s, TCGOp *op,
TCGOp *tcg_op_insert_after(TCGContext *s, TCGOp *op,
TCGOpcode, TCGType, unsigned nargs);
+/*
+ * For a binary opcode OP, return true if the second input operand allows IMM.
+ */
+bool tcg_op_imm_match(TCGOpcode op, TCGType type, tcg_target_ulong imm);
+
#endif /* TCG_INTERNAL_H */
diff --git a/tcg/tcg.c b/tcg/tcg.c
index 3111e1f4265..2ca44766f64 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -3387,11 +3387,9 @@ static void process_constraint_sets(void)
}
}
-static const TCGArgConstraint *opcode_args_ct(const TCGOp *op)
+static const TCGArgConstraint *op_args_ct(TCGOpcode opc, TCGType type,
+ unsigned flags)
{
- TCGOpcode opc = op->opc;
- TCGType type = TCGOP_TYPE(op);
- unsigned flags = TCGOP_FLAGS(op);
const TCGOpDef *def = &tcg_op_defs[opc];
const TCGOutOp *outop = all_outop[opc];
TCGConstraintSetIndex con_set;
@@ -3418,6 +3416,21 @@ static const TCGArgConstraint *opcode_args_ct(const TCGOp *op)
return all_cts[con_set];
}
+static const TCGArgConstraint *opcode_args_ct(const TCGOp *op)
+{
+ return op_args_ct(op->opc, TCGOP_TYPE(op), TCGOP_FLAGS(op));
+}
+
+bool tcg_op_imm_match(TCGOpcode opc, TCGType type, tcg_target_ulong imm)
+{
+ const TCGArgConstraint *args_ct = op_args_ct(opc, type, 0);
+ const TCGOpDef *def = &tcg_op_defs[opc];
+
+ tcg_debug_assert(def->nb_oargs == 1);
+ tcg_debug_assert(def->nb_iargs == 2);
+ return tcg_target_const_match(imm, args_ct[2].ct, type, 0, 0);
+}
+
static void remove_label_use(TCGOp *op, int idx)
{
TCGLabel *label = arg_label(op->args[idx]);
--
2.53.0
* [PULL 06/16] tcg: target-dependent lowering of extract to shr/and
From: Philippe Mathieu-Daudé @ 2026-03-10 10:45 UTC (permalink / raw)
To: qemu-devel
From: Paolo Bonzini <pbonzini@redhat.com>
Instead of assuming only small immediates are available for AND,
consult the backend in order to decide between SHL/SHR and SHR/AND.
Reviewed-by: Jim MacArthur <jim.macarthur@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[rth: Split from a larger patch]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20260303010833.1115741-7-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
tcg/tcg-op.c | 36 ++++++++++++++++--------------------
1 file changed, 16 insertions(+), 20 deletions(-)
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 13a56d89fa1..d8ae57d6047 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -907,6 +907,8 @@ void tcg_gen_deposit_z_i32(TCGv_i32 ret, TCGv_i32 arg,
void tcg_gen_extract_i32(TCGv_i32 ret, TCGv_i32 arg,
unsigned int ofs, unsigned int len)
{
+ uint32_t mask;
+
tcg_debug_assert(ofs < 32);
tcg_debug_assert(len > 0);
tcg_debug_assert(len <= 32);
@@ -922,8 +924,10 @@ void tcg_gen_extract_i32(TCGv_i32 ret, TCGv_i32 arg,
tcg_gen_op4ii_i32(INDEX_op_extract, ret, arg, ofs, len);
return;
}
+
+ mask = (1u << len) - 1;
if (ofs == 0) {
- tcg_gen_andi_i32(ret, arg, (1u << len) - 1);
+ tcg_gen_andi_i32(ret, arg, mask);
return;
}
@@ -934,18 +938,12 @@ void tcg_gen_extract_i32(TCGv_i32 ret, TCGv_i32 arg,
return;
}
- /* ??? Ideally we'd know what values are available for immediate AND.
- Assume that 8 bits are available, plus the special case of 16,
- so that we get ext8u, ext16u. */
- switch (len) {
- case 1 ... 8: case 16:
+ if (tcg_op_imm_match(INDEX_op_and, TCG_TYPE_I32, mask)) {
tcg_gen_shri_i32(ret, arg, ofs);
- tcg_gen_andi_i32(ret, ret, (1u << len) - 1);
- break;
- default:
+ tcg_gen_andi_i32(ret, ret, mask);
+ } else {
tcg_gen_shli_i32(ret, arg, 32 - len - ofs);
tcg_gen_shri_i32(ret, ret, 32 - len);
- break;
}
}
@@ -2121,6 +2119,8 @@ void tcg_gen_deposit_z_i64(TCGv_i64 ret, TCGv_i64 arg,
void tcg_gen_extract_i64(TCGv_i64 ret, TCGv_i64 arg,
unsigned int ofs, unsigned int len)
{
+ uint64_t mask;
+
tcg_debug_assert(ofs < 64);
tcg_debug_assert(len > 0);
tcg_debug_assert(len <= 64);
@@ -2136,8 +2136,10 @@ void tcg_gen_extract_i64(TCGv_i64 ret, TCGv_i64 arg,
tcg_gen_op4ii_i64(INDEX_op_extract, ret, arg, ofs, len);
return;
}
+
+ mask = (1ull << len) - 1;
if (ofs == 0) {
- tcg_gen_andi_i64(ret, arg, (1ull << len) - 1);
+ tcg_gen_andi_i64(ret, arg, mask);
return;
}
@@ -2148,18 +2150,12 @@ void tcg_gen_extract_i64(TCGv_i64 ret, TCGv_i64 arg,
return;
}
- /* ??? Ideally we'd know what values are available for immediate AND.
- Assume that 8 bits are available, plus the special cases of 16 and 32,
- so that we get ext8u, ext16u, and ext32u. */
- switch (len) {
- case 1 ... 8: case 16: case 32:
+ if (tcg_op_imm_match(INDEX_op_and, TCG_TYPE_I64, mask)) {
tcg_gen_shri_i64(ret, arg, ofs);
- tcg_gen_andi_i64(ret, ret, (1ull << len) - 1);
- break;
- default:
+ tcg_gen_andi_i64(ret, ret, mask);
+ } else {
tcg_gen_shli_i64(ret, arg, 64 - len - ofs);
tcg_gen_shri_i64(ret, ret, 64 - len);
- break;
}
}
--
2.53.0
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PULL 07/16] tcg/optimize: possibly expand deposit into zero with shifts
2026-03-10 10:44 [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
` (5 preceding siblings ...)
2026-03-10 10:45 ` [PULL 06/16] tcg: target-dependent lowering of extract to shr/and Philippe Mathieu-Daudé
@ 2026-03-10 10:45 ` Philippe Mathieu-Daudé
2026-03-10 10:45 ` [PULL 08/16] target/hppa: Expand tcg_global_mem_new() -> tcg_global_mem_new_i64() Philippe Mathieu-Daudé
` (10 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2026-03-10 10:45 UTC (permalink / raw)
To: qemu-devel
From: Richard Henderson <richard.henderson@linaro.org>
Use tcg_op_imm_match to choose between expanding with AND+SHL vs SHL+SHR.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20260303010833.1115741-8-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
tcg/optimize.c | 36 ++++++++++++++++++++++++++++++------
1 file changed, 30 insertions(+), 6 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 59761c2c844..b1abec69a5d 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1743,14 +1743,38 @@ static bool fold_deposit(OptContext *ctx, TCGOp *op)
goto done;
}
- /* Lower invalid deposit into zero as AND + SHL. */
+ /* Lower invalid deposit into zero as AND + SHL or SHL + SHR. */
if (!valid) {
- op2 = opt_insert_before(ctx, op, INDEX_op_and, 3);
- op2->args[0] = ret;
- op2->args[1] = arg2;
- op2->args[2] = arg_new_constant(ctx, len_mask);
- fold_and(ctx, op2);
+ if (TCG_TARGET_extract_valid(ctx->type, 0, len)) {
+ /* EXTRACT (at 0) + SHL */
+ op2 = opt_insert_before(ctx, op, INDEX_op_extract, 4);
+ op2->args[0] = ret;
+ op2->args[1] = arg2;
+ op2->args[2] = 0;
+ op2->args[3] = len;
+ } else if (tcg_op_imm_match(INDEX_op_and, ctx->type, len_mask)) {
+ /* AND + SHL */
+ op2 = opt_insert_before(ctx, op, INDEX_op_and, 3);
+ op2->args[0] = ret;
+ op2->args[1] = arg2;
+ op2->args[2] = arg_new_constant(ctx, len_mask);
+ } else {
+ /* SHL + SHR */
+ int shl = width - len;
+ int shr = width - len - ofs;
+ op2 = opt_insert_before(ctx, op, INDEX_op_shl, 3);
+ op2->args[0] = ret;
+ op2->args[1] = arg2;
+ op2->args[2] = arg_new_constant(ctx, shl);
+
+ op->opc = INDEX_op_shr;
+ op->args[1] = ret;
+ op->args[2] = arg_new_constant(ctx, shr);
+ goto done;
+ }
+
+ /* Finish the (EXTRACT|AND) + SHL cases. */
op->opc = INDEX_op_shl;
op->args[1] = ret;
op->args[2] = arg_new_constant(ctx, ofs);
--
2.53.0
* [PULL 08/16] target/hppa: Expand tcg_global_mem_new() -> tcg_global_mem_new_i64()
2026-03-10 10:44 [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
` (6 preceding siblings ...)
2026-03-10 10:45 ` [PULL 07/16] tcg/optimize: possibly expand deposit into zero with shifts Philippe Mathieu-Daudé
@ 2026-03-10 10:45 ` Philippe Mathieu-Daudé
2026-03-10 10:45 ` [PULL 09/16] accel/kvm: Include missing 'exec/cpu-common.h' header Philippe Mathieu-Daudé
` (9 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2026-03-10 10:45 UTC (permalink / raw)
To: qemu-devel
The HPPA target is a 64-bit one, so tcg_global_mem_new()
expands to tcg_global_mem_new_i64(). Use the latter, which
is more explicit.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20260205212914.10382-1-philmd@linaro.org>
---
target/hppa/translate.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 0f8a66f7732..70c20c00377 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -307,9 +307,9 @@ void hppa_translate_init(void)
cpu_gr[0] = NULL;
for (i = 1; i < 32; i++) {
- cpu_gr[i] = tcg_global_mem_new(tcg_env,
- offsetof(CPUHPPAState, gr[i]),
- gr_names[i]);
+ cpu_gr[i] = tcg_global_mem_new_i64(tcg_env,
+ offsetof(CPUHPPAState, gr[i]),
+ gr_names[i]);
}
for (i = 0; i < 4; i++) {
cpu_sr[i] = tcg_global_mem_new_i64(tcg_env,
@@ -322,7 +322,7 @@ void hppa_translate_init(void)
for (i = 0; i < ARRAY_SIZE(vars); ++i) {
const GlobalVar *v = &vars[i];
- *v->var = tcg_global_mem_new(tcg_env, v->ofs, v->name);
+ *v->var = tcg_global_mem_new_i64(tcg_env, v->ofs, v->name);
}
cpu_psw_xb = tcg_global_mem_new_i32(tcg_env,
--
2.53.0
* [PULL 09/16] accel/kvm: Include missing 'exec/cpu-common.h' header
2026-03-10 10:44 [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
` (7 preceding siblings ...)
2026-03-10 10:45 ` [PULL 08/16] target/hppa: Expand tcg_global_mem_new() -> tcg_global_mem_new_i64() Philippe Mathieu-Daudé
@ 2026-03-10 10:45 ` Philippe Mathieu-Daudé
2026-03-10 10:45 ` [PULL 10/16] accel/kvm: Make kvm_irqchip*notifier() declaration non target-specific Philippe Mathieu-Daudé
` (8 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2026-03-10 10:45 UTC (permalink / raw)
To: qemu-devel
kvm-accel-ops.c uses EXCP_DEBUG, which is defined in
"exec/cpu-common.h". Include it explicitly, otherwise we get
the following error when modifying unrelated headers:
../accel/kvm/kvm-accel-ops.c: In function ‘kvm_vcpu_thread_fn’:
../accel/kvm/kvm-accel-ops.c:54:22: error: ‘EXCP_DEBUG’ undeclared (first use in this function)
54 | if (r == EXCP_DEBUG) {
| ^~~~~~~~~~
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20260225051303.91614-2-philmd@linaro.org>
---
accel/kvm/kvm-accel-ops.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/accel/kvm/kvm-accel-ops.c b/accel/kvm/kvm-accel-ops.c
index 8ed6945c2f7..6d9140e549f 100644
--- a/accel/kvm/kvm-accel-ops.c
+++ b/accel/kvm/kvm-accel-ops.c
@@ -16,6 +16,7 @@
#include "qemu/osdep.h"
#include "qemu/error-report.h"
#include "qemu/main-loop.h"
+#include "exec/cpu-common.h"
#include "accel/accel-cpu-ops.h"
#include "system/kvm.h"
#include "system/kvm_int.h"
--
2.53.0
* [PULL 10/16] accel/kvm: Make kvm_irqchip*notifier() declaration non target-specific
2026-03-10 10:44 [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
` (8 preceding siblings ...)
2026-03-10 10:45 ` [PULL 09/16] accel/kvm: Include missing 'exec/cpu-common.h' header Philippe Mathieu-Daudé
@ 2026-03-10 10:45 ` Philippe Mathieu-Daudé
2026-03-10 10:45 ` [PULL 11/16] accel/stubs: Build stubs once Philippe Mathieu-Daudé
` (7 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2026-03-10 10:45 UTC (permalink / raw)
To: qemu-devel
Commit 3607715a308 ("kvm: Introduce KVM irqchip change notifier")
restricted the kvm_irqchip*notifier() declarations to target-specific
files, guarding them under the NEED_CPU_H (later renamed as
COMPILING_PER_TARGET) #ifdef check.
This however prevents building the kvm-stub.c file once:
../accel/stubs/kvm-stub.c:70:6: error: no previous prototype for function 'kvm_irqchip_add_change_notifier' [-Werror,-Wmissing-prototypes]
70 | void kvm_irqchip_add_change_notifier(Notifier *n)
| ^
../accel/stubs/kvm-stub.c:74:6: error: no previous prototype for function 'kvm_irqchip_remove_change_notifier' [-Werror,-Wmissing-prototypes]
74 | void kvm_irqchip_remove_change_notifier(Notifier *n)
| ^
../accel/stubs/kvm-stub.c:78:6: error: no previous prototype for function 'kvm_irqchip_change_notify' [-Werror,-Wmissing-prototypes]
78 | void kvm_irqchip_change_notify(void)
| ^
Since nothing in these prototype declarations is target-specific,
move them out of the guarded section so they are generically
available, allowing kvm-stub.c to be built once for all targets
in the next commit.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-Id: <20260309174941.67624-2-philmd@linaro.org>
---
include/system/kvm.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/system/kvm.h b/include/system/kvm.h
index 4b0e1b4ab14..5fa33eddda3 100644
--- a/include/system/kvm.h
+++ b/include/system/kvm.h
@@ -219,6 +219,10 @@ int kvm_vm_ioctl(KVMState *s, unsigned long type, ...);
void kvm_flush_coalesced_mmio_buffer(void);
+void kvm_irqchip_add_change_notifier(Notifier *n);
+void kvm_irqchip_remove_change_notifier(Notifier *n);
+void kvm_irqchip_change_notify(void);
+
#ifdef COMPILING_PER_TARGET
#include "cpu.h"
@@ -393,10 +397,6 @@ int kvm_irqchip_send_msi(KVMState *s, MSIMessage msg);
void kvm_irqchip_add_irq_route(KVMState *s, int gsi, int irqchip, int pin);
-void kvm_irqchip_add_change_notifier(Notifier *n);
-void kvm_irqchip_remove_change_notifier(Notifier *n);
-void kvm_irqchip_change_notify(void);
-
struct kvm_guest_debug;
struct kvm_debug_exit_arch;
--
2.53.0
* [PULL 11/16] accel/stubs: Build stubs once
2026-03-10 10:44 [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
` (9 preceding siblings ...)
2026-03-10 10:45 ` [PULL 10/16] accel/kvm: Make kvm_irqchip*notifier() declaration non target-specific Philippe Mathieu-Daudé
@ 2026-03-10 10:45 ` Philippe Mathieu-Daudé
2026-03-10 10:45 ` [PULL 12/16] accel/mshv: Forward-declare mshv_root_hvcall structure Philippe Mathieu-Daudé
` (6 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2026-03-10 10:45 UTC (permalink / raw)
To: qemu-devel
Move stubs to the global stub_ss[] source set. These files
are now built once for all binaries, instead of once per
system binary.
Inspired-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20260225044225.64059-1-philmd@linaro.org>
---
accel/stubs/meson.build | 21 ++++++++++-----------
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/accel/stubs/meson.build b/accel/stubs/meson.build
index 5de4a279ff9..ccad583e647 100644
--- a/accel/stubs/meson.build
+++ b/accel/stubs/meson.build
@@ -1,11 +1,10 @@
-system_stubs_ss = ss.source_set()
-system_stubs_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
-system_stubs_ss.add(when: 'CONFIG_KVM', if_false: files('kvm-stub.c'))
-system_stubs_ss.add(when: 'CONFIG_TCG', if_false: files('tcg-stub.c'))
-system_stubs_ss.add(when: 'CONFIG_HVF', if_false: files('hvf-stub.c'))
-system_stubs_ss.add(when: 'CONFIG_NITRO', if_false: files('nitro-stub.c'))
-system_stubs_ss.add(when: 'CONFIG_NVMM', if_false: files('nvmm-stub.c'))
-system_stubs_ss.add(when: 'CONFIG_WHPX', if_false: files('whpx-stub.c'))
-system_stubs_ss.add(when: 'CONFIG_MSHV', if_false: files('mshv-stub.c'))
-
-specific_ss.add_all(when: ['CONFIG_SYSTEM_ONLY'], if_true: system_stubs_ss)
+stub_ss.add(files(
+ 'hvf-stub.c',
+ 'kvm-stub.c',
+ 'nitro-stub.c',
+ 'mshv-stub.c',
+ 'nvmm-stub.c',
+ 'tcg-stub.c',
+ 'whpx-stub.c',
+ 'xen-stub.c',
+))
--
2.53.0
* [PULL 12/16] accel/mshv: Forward-declare mshv_root_hvcall structure
2026-03-10 10:44 [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
` (10 preceding siblings ...)
2026-03-10 10:45 ` [PULL 11/16] accel/stubs: Build stubs once Philippe Mathieu-Daudé
@ 2026-03-10 10:45 ` Philippe Mathieu-Daudé
2026-03-10 10:45 ` [PULL 13/16] accel/mshv: Build without target-specific knowledge Philippe Mathieu-Daudé
` (5 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2026-03-10 10:45 UTC (permalink / raw)
To: qemu-devel
Forward-declare the target-specific mshv_root_hvcall structure
in order to keep 'system/mshv_int.h' target-agnostic.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20260225051303.91614-3-philmd@linaro.org>
---
include/system/mshv_int.h | 5 ++---
accel/mshv/mshv-all.c | 2 +-
2 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/include/system/mshv_int.h b/include/system/mshv_int.h
index ad4d001c3cd..35386c422fa 100644
--- a/include/system/mshv_int.h
+++ b/include/system/mshv_int.h
@@ -96,9 +96,8 @@ void mshv_arch_amend_proc_features(
union hv_partition_synthetic_processor_features *features);
int mshv_arch_post_init_vm(int vm_fd);
-#if defined COMPILING_PER_TARGET && defined CONFIG_MSHV_IS_POSSIBLE
-int mshv_hvcall(int fd, const struct mshv_root_hvcall *args);
-#endif
+typedef struct mshv_root_hvcall mshv_root_hvcall;
+int mshv_hvcall(int fd, const mshv_root_hvcall *args);
/* memory */
typedef struct MshvMemoryRegion {
diff --git a/accel/mshv/mshv-all.c b/accel/mshv/mshv-all.c
index ddc4c18cba4..d4cc7f53715 100644
--- a/accel/mshv/mshv-all.c
+++ b/accel/mshv/mshv-all.c
@@ -381,7 +381,7 @@ static void register_mshv_memory_listener(MshvState *s, MshvMemoryListener *mml,
}
}
-int mshv_hvcall(int fd, const struct mshv_root_hvcall *args)
+int mshv_hvcall(int fd, const mshv_root_hvcall *args)
{
int ret = 0;
--
2.53.0
* [PULL 13/16] accel/mshv: Build without target-specific knowledge
2026-03-10 10:44 [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
` (11 preceding siblings ...)
2026-03-10 10:45 ` [PULL 12/16] accel/mshv: Forward-declare mshv_root_hvcall structure Philippe Mathieu-Daudé
@ 2026-03-10 10:45 ` Philippe Mathieu-Daudé
2026-03-10 10:45 ` [PULL 14/16] accel/hvf: " Philippe Mathieu-Daudé
` (4 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2026-03-10 10:45 UTC (permalink / raw)
To: qemu-devel
Code in accel/ aims to be target-agnostic. Enforce that
by moving the MSHV file units to system_ss[], which is
target-agnostic.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20260225051303.91614-4-philmd@linaro.org>
---
accel/mshv/meson.build | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/accel/mshv/meson.build b/accel/mshv/meson.build
index d3a2b325811..c1b1787c5e6 100644
--- a/accel/mshv/meson.build
+++ b/accel/mshv/meson.build
@@ -1,9 +1,6 @@
-mshv_ss = ss.source_set()
-mshv_ss.add(if_true: files(
+system_ss.add(when: 'CONFIG_MSHV', if_true: files(
'irq.c',
'mem.c',
'msr.c',
'mshv-all.c'
))
-
-specific_ss.add_all(when: 'CONFIG_MSHV', if_true: mshv_ss)
--
2.53.0
* [PULL 14/16] accel/hvf: Build without target-specific knowledge
2026-03-10 10:44 [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
` (12 preceding siblings ...)
2026-03-10 10:45 ` [PULL 13/16] accel/mshv: Build without target-specific knowledge Philippe Mathieu-Daudé
@ 2026-03-10 10:45 ` Philippe Mathieu-Daudé
2026-03-10 10:45 ` [PULL 15/16] accel/xen: " Philippe Mathieu-Daudé
` (3 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2026-03-10 10:45 UTC (permalink / raw)
To: qemu-devel
Code in accel/ aims to be target-agnostic. Enforce that
by moving the HVF file units to system_ss[], which is
target-agnostic.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20260225051303.91614-5-philmd@linaro.org>
---
accel/hvf/meson.build | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/accel/hvf/meson.build b/accel/hvf/meson.build
index fc52cb78433..6e2dcc4a5f0 100644
--- a/accel/hvf/meson.build
+++ b/accel/hvf/meson.build
@@ -1,7 +1,4 @@
-hvf_ss = ss.source_set()
-hvf_ss.add(files(
+system_ss.add(when: 'CONFIG_HVF', if_true: files(
'hvf-all.c',
'hvf-accel-ops.c',
))
-
-specific_ss.add_all(when: 'CONFIG_HVF', if_true: hvf_ss)
--
2.53.0
* [PULL 15/16] accel/xen: Build without target-specific knowledge
2026-03-10 10:44 [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
` (13 preceding siblings ...)
2026-03-10 10:45 ` [PULL 14/16] accel/hvf: " Philippe Mathieu-Daudé
@ 2026-03-10 10:45 ` Philippe Mathieu-Daudé
2026-03-10 10:45 ` [PULL 16/16] accel/qtest: Build once as common object Philippe Mathieu-Daudé
` (2 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2026-03-10 10:45 UTC (permalink / raw)
To: qemu-devel
Code in accel/ aims to be target-agnostic. Enforce that
by moving the Xen file units to system_ss[], which is
target-agnostic.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Anthony PERARD <anthony.perard@vates.tech>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20260225051303.91614-6-philmd@linaro.org>
---
accel/xen/meson.build | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/accel/xen/meson.build b/accel/xen/meson.build
index 002bdb03c62..455ad5d6be4 100644
--- a/accel/xen/meson.build
+++ b/accel/xen/meson.build
@@ -1 +1 @@
-specific_ss.add(when: 'CONFIG_XEN', if_true: files('xen-all.c'))
+system_ss.add(when: 'CONFIG_XEN', if_true: files('xen-all.c'))
--
2.53.0
* [PULL 16/16] accel/qtest: Build once as common object
2026-03-10 10:44 [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
` (14 preceding siblings ...)
2026-03-10 10:45 ` [PULL 15/16] accel/xen: " Philippe Mathieu-Daudé
@ 2026-03-10 10:45 ` Philippe Mathieu-Daudé
2026-03-10 14:49 ` [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
2026-03-10 15:19 ` Peter Maydell
17 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2026-03-10 10:45 UTC (permalink / raw)
To: qemu-devel
No code within qtest.c uses target-specific knowledge:
build it once as a target-agnostic common unit.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Message-Id: <20260225053408.18426-1-philmd@linaro.org>
---
meson.build | 3 ---
accel/qtest/meson.build | 5 ++++-
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/meson.build b/meson.build
index f45885f05a1..1867560da63 100644
--- a/meson.build
+++ b/meson.build
@@ -3897,9 +3897,6 @@ subdir('linux-user')
subdir('tests/qtest/libqos')
subdir('tests/qtest/fuzz')
-# accel modules
-target_modules += { 'accel' : { 'qtest': qtest_module_ss }}
-
##############################################
# Internal static_libraries and dependencies #
##############################################
diff --git a/accel/qtest/meson.build b/accel/qtest/meson.build
index 2018de8a05d..e1b089e02c7 100644
--- a/accel/qtest/meson.build
+++ b/accel/qtest/meson.build
@@ -1 +1,4 @@
-qtest_module_ss.add(when: ['CONFIG_SYSTEM_ONLY'], if_true: files('qtest.c'))
+qtest_module_ss = ss.source_set()
+qtest_module_ss.add(files('qtest.c'))
+
+modules += {'accel': {'qtest': qtest_module_ss}}
--
2.53.0
* Re: [PULL 00/16] Accelerators and TCG patches for 2026-03-10
2026-03-10 10:44 [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
` (15 preceding siblings ...)
2026-03-10 10:45 ` [PULL 16/16] accel/qtest: Build once as common object Philippe Mathieu-Daudé
@ 2026-03-10 14:49 ` Philippe Mathieu-Daudé
2026-03-10 15:19 ` Peter Maydell
17 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2026-03-10 14:49 UTC (permalink / raw)
To: qemu-devel, Alex Bennée
Cc: Pierrick Bouvier, Anton Johansson, Richard Henderson, Thomas Huth,
Daniel P. Berrangé
On 10/3/26 11:44, Philippe Mathieu-Daudé wrote:
> The following changes since commit 31ee190665dd50054c39cef5ad740680aabda382:
>
> Merge tag 'hw-misc-20260309' of https://github.com/philmd/qemu into staging (2026-03-09 17:19:26 +0000)
>
> are available in the Git repository at:
>
> https://github.com/philmd/qemu.git tags/accel-tcg-20260310
>
> for you to fetch changes up to c574ff9245c7e9d10433efd60df2319ff4d69084:
>
> accel/qtest: Build once as common object (2026-03-10 11:36:21 +0100)
>
> ----------------------------------------------------------------
> Accelerators and TCG patches queue
>
> - Improve TCG extract and deposit
> - Build accelerator stub files once
> ----------------------------------------------------------------
v10.2
https://gitlab.com/qemu-project/qemu/-/jobs/12474666739
Total source files: 3817
Total build units: 6367
before this PR:
https://gitlab.com/qemu-project/qemu/-/jobs/13427222233
Total source files: 3866
Total build units: 5927
after this PR:
https://gitlab.com/qemu-project/qemu/-/jobs/13429265420
Total source files: 3869
Total build units: 5719
v10.2 -> v11.0 delta so far: 6367 - 5719 = 648
* Re: [PULL 00/16] Accelerators and TCG patches for 2026-03-10
2026-03-10 10:44 [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
` (16 preceding siblings ...)
2026-03-10 14:49 ` [PULL 00/16] Accelerators and TCG patches for 2026-03-10 Philippe Mathieu-Daudé
@ 2026-03-10 15:19 ` Peter Maydell
17 siblings, 0 replies; 19+ messages in thread
From: Peter Maydell @ 2026-03-10 15:19 UTC (permalink / raw)
To: Philippe Mathieu-Daudé; +Cc: qemu-devel
On Tue, 10 Mar 2026 at 10:46, Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
>
> The following changes since commit 31ee190665dd50054c39cef5ad740680aabda382:
>
> Merge tag 'hw-misc-20260309' of https://github.com/philmd/qemu into staging (2026-03-09 17:19:26 +0000)
>
> are available in the Git repository at:
>
> https://github.com/philmd/qemu.git tags/accel-tcg-20260310
>
> for you to fetch changes up to c574ff9245c7e9d10433efd60df2319ff4d69084:
>
> accel/qtest: Build once as common object (2026-03-10 11:36:21 +0100)
>
> ----------------------------------------------------------------
> Accelerators and TCG patches queue
>
> - Improve TCG extract and deposit
> - Build accelerator stub files once
Applied, thanks.
Please update the changelog at https://wiki.qemu.org/ChangeLog/11.0
for any user-visible changes.
-- PMM