From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: "Luis Pires" <luis.pires@eldorado.org.br>,
"Philippe Mathieu-Daudé" <f4bug@amsat.org>
Subject: [PULL 19/56] tcg/optimize: Split out fold_mb, fold_qemu_{ld,st}
Date: Wed, 27 Oct 2021 19:40:54 -0700
Message-ID: <20211028024131.1492790-20-richard.henderson@linaro.org>
In-Reply-To: <20211028024131.1492790-1-richard.henderson@linaro.org>

This puts the separate mb optimization into the same framework
as the others. While fold_qemu_{ld,st} are currently identical,
that won't last as more code gets moved.
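For illustration only (not part of the patch): the merge performed by
fold_mb reduces to a bitwise OR of the two barriers' flag arguments,
so the result is never weaker than either input. Below is a minimal
standalone C sketch of that rule; the flag values mirror the TCG_MO_*
encoding in tcg/tcg.h, but are treated purely as illustrative
constants here.

    #include <stdio.h>

    /*
     * Illustrative constants mirroring the TCG_MO_* encoding in
     * tcg/tcg.h; only the OR-merge rule matters for this sketch.
     */
    enum {
        TCG_MO_LD_LD = 0x01,  /* order loads against later loads */
        TCG_MO_ST_LD = 0x02,  /* order stores against later loads */
        TCG_MO_LD_ST = 0x04,  /* order loads against later stores */
        TCG_MO_ST_ST = 0x08,  /* order stores against later stores */
        TCG_MO_ALL   = 0x0f,  /* a full barrier */
    };

    int main(void)
    {
        /* Two adjacent weaker barriers: acquire-like, then release-like. */
        unsigned acq = TCG_MO_LD_LD | TCG_MO_LD_ST;
        unsigned rel = TCG_MO_LD_ST | TCG_MO_ST_ST;

        /*
         * fold_mb keeps the first barrier and folds the second into it
         * (ctx->prev_mb->args[0] |= op->args[0]).  The union may be
         * stronger than strictly required, but it is always correct.
         */
        unsigned merged = acq | rel;

        printf("acq=%#x rel=%#x merged=%#x (TCG_MO_ALL=%#x)\n",
               acq, rel, merged, TCG_MO_ALL);
        return 0;
    }
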
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/optimize.c | 89 +++++++++++++++++++++++++++++---------------------
1 file changed, 51 insertions(+), 38 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 699476e2f1..159a5a9ee5 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -692,6 +692,44 @@ static bool fold_call(OptContext *ctx, TCGOp *op)
return true;
}
+static bool fold_mb(OptContext *ctx, TCGOp *op)
+{
+ /* Eliminate duplicate and redundant fence instructions. */
+ if (ctx->prev_mb) {
+ /*
+ * Merge two barriers of the same type into one,
+ * or a weaker barrier into a stronger one,
+ * or two weaker barriers into a stronger one.
+ * mb X; mb Y => mb X|Y
+ * mb; strl => mb; st
+ * ldaq; mb => ld; mb
+ * ldaq; strl => ld; mb; st
+ * Other combinations are also merged into a strong
+ * barrier. This is stricter than specified but for
+ * the purposes of TCG is better than not optimizing.
+ */
+ ctx->prev_mb->args[0] |= op->args[0];
+ tcg_op_remove(ctx->tcg, op);
+ } else {
+ ctx->prev_mb = op;
+ }
+ return true;
+}
+
+static bool fold_qemu_ld(OptContext *ctx, TCGOp *op)
+{
+ /* Opcodes that touch guest memory stop the mb optimization. */
+ ctx->prev_mb = NULL;
+ return false;
+}
+
+static bool fold_qemu_st(OptContext *ctx, TCGOp *op)
+{
+ /* Opcodes that touch guest memory stop the mb optimization. */
+ ctx->prev_mb = NULL;
+ return false;
+}
+
/* Propagate constants and copies, fold constant expressions. */
void tcg_optimize(TCGContext *s)
{
@@ -1599,6 +1637,19 @@ void tcg_optimize(TCGContext *s)
}
break;
+ case INDEX_op_mb:
+ done = fold_mb(&ctx, op);
+ break;
+ case INDEX_op_qemu_ld_i32:
+ case INDEX_op_qemu_ld_i64:
+ done = fold_qemu_ld(&ctx, op);
+ break;
+ case INDEX_op_qemu_st_i32:
+ case INDEX_op_qemu_st8_i32:
+ case INDEX_op_qemu_st_i64:
+ done = fold_qemu_st(&ctx, op);
+ break;
+
default:
break;
}
@@ -1606,43 +1657,5 @@ void tcg_optimize(TCGContext *s)
if (!done) {
finish_folding(&ctx, op);
}
-
- /* Eliminate duplicate and redundant fence instructions. */
- if (ctx.prev_mb) {
- switch (opc) {
- case INDEX_op_mb:
- /* Merge two barriers of the same type into one,
- * or a weaker barrier into a stronger one,
- * or two weaker barriers into a stronger one.
- * mb X; mb Y => mb X|Y
- * mb; strl => mb; st
- * ldaq; mb => ld; mb
- * ldaq; strl => ld; mb; st
- * Other combinations are also merged into a strong
- * barrier. This is stricter than specified but for
- * the purposes of TCG is better than not optimizing.
- */
- ctx.prev_mb->args[0] |= op->args[0];
- tcg_op_remove(s, op);
- break;
-
- default:
- /* Opcodes that end the block stop the optimization. */
- if ((def->flags & TCG_OPF_BB_END) == 0) {
- break;
- }
- /* fallthru */
- case INDEX_op_qemu_ld_i32:
- case INDEX_op_qemu_ld_i64:
- case INDEX_op_qemu_st_i32:
- case INDEX_op_qemu_st8_i32:
- case INDEX_op_qemu_st_i64:
- /* Opcodes that touch guest memory stop the optimization. */
- ctx.prev_mb = NULL;
- break;
- }
- } else if (opc == INDEX_op_mb) {
- ctx.prev_mb = op;
- }
}
}
--
2.25.1