From: Richard Henderson <richard.henderson@linaro.org>
To: Jiajie Chen <c@jia.je>, qemu-devel@nongnu.org
Cc: gaosong@loongson.cn, git@xen0n.name
Subject: Re: [PATCH v3 05/16] tcg/loongarch64: Lower add/sub_vec to vadd/vsub
Date: Sat, 2 Sep 2023 17:54:29 -0700 [thread overview]
Message-ID: <03f6765c-1a19-e525-e75f-c7d31b73f79b@linaro.org> (raw)
In-Reply-To: <20230902050415.1832700-6-c@jia.je>
On 9/1/23 22:02, Jiajie Chen wrote:
> Lower the following ops:
>
> - add_vec
> - sub_vec
>
> Signed-off-by: Jiajie Chen <c@jia.je>
> ---
> tcg/loongarch64/tcg-target-con-set.h | 1 +
> tcg/loongarch64/tcg-target-con-str.h | 1 +
> tcg/loongarch64/tcg-target.c.inc | 60 ++++++++++++++++++++++++++++
> 3 files changed, 62 insertions(+)
>
> diff --git a/tcg/loongarch64/tcg-target-con-set.h b/tcg/loongarch64/tcg-target-con-set.h
> index 8c8ea5d919..2d5dce75c3 100644
> --- a/tcg/loongarch64/tcg-target-con-set.h
> +++ b/tcg/loongarch64/tcg-target-con-set.h
> @@ -32,4 +32,5 @@ C_O1_I2(r, rZ, ri)
> C_O1_I2(r, rZ, rJ)
> C_O1_I2(r, rZ, rZ)
> C_O1_I2(w, w, wM)
> +C_O1_I2(w, w, wA)
> C_O1_I4(r, rZ, rJ, rZ, rZ)
> diff --git a/tcg/loongarch64/tcg-target-con-str.h b/tcg/loongarch64/tcg-target-con-str.h
> index a8a1c44014..2ba9c135ac 100644
> --- a/tcg/loongarch64/tcg-target-con-str.h
> +++ b/tcg/loongarch64/tcg-target-con-str.h
> @@ -27,3 +27,4 @@ CONST('Z', TCG_CT_CONST_ZERO)
> CONST('C', TCG_CT_CONST_C12)
> CONST('W', TCG_CT_CONST_WSZ)
> CONST('M', TCG_CT_CONST_VCMP)
> +CONST('A', TCG_CT_CONST_VADD)
> diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
> index 129dd92910..0edcf5be35 100644
> --- a/tcg/loongarch64/tcg-target.c.inc
> +++ b/tcg/loongarch64/tcg-target.c.inc
> @@ -177,6 +177,7 @@ static TCGReg tcg_target_call_oarg_reg(TCGCallReturnKind kind, int slot)
> #define TCG_CT_CONST_C12 0x1000
> #define TCG_CT_CONST_WSZ 0x2000
> #define TCG_CT_CONST_VCMP 0x4000
> +#define TCG_CT_CONST_VADD 0x8000
>
> #define ALL_GENERAL_REGS MAKE_64BIT_MASK(0, 32)
> #define ALL_VECTOR_REGS MAKE_64BIT_MASK(32, 32)
> @@ -214,6 +215,9 @@ static bool tcg_target_const_match(int64_t val, TCGType type, int ct, int vece)
> if ((ct & TCG_CT_CONST_VCMP) && -0x10 <= vec_val && vec_val <= 0x1f) {
> return true;
> }
> + if ((ct & TCG_CT_CONST_VADD) && -0x1f <= vec_val && vec_val <= 0x1f) {
> + return true;
> + }
> return false;
> }
>
> @@ -1646,6 +1650,18 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
> [TCG_COND_LTU] = {OPC_VSLTI_BU, OPC_VSLTI_HU, OPC_VSLTI_WU, OPC_VSLTI_DU},
> };
> LoongArchInsn insn;
> + static const LoongArchInsn add_vec_insn[4] = {
> + OPC_VADD_B, OPC_VADD_H, OPC_VADD_W, OPC_VADD_D
> + };
> + static const LoongArchInsn add_vec_imm_insn[4] = {
> + OPC_VADDI_BU, OPC_VADDI_HU, OPC_VADDI_WU, OPC_VADDI_DU
> + };
> + static const LoongArchInsn sub_vec_insn[4] = {
> + OPC_VSUB_B, OPC_VSUB_H, OPC_VSUB_W, OPC_VSUB_D
> + };
> + static const LoongArchInsn sub_vec_imm_insn[4] = {
> + OPC_VSUBI_BU, OPC_VSUBI_HU, OPC_VSUBI_WU, OPC_VSUBI_DU
> + };
>
> a0 = args[0];
> a1 = args[1];
> @@ -1712,6 +1728,44 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
> }
> tcg_out32(s, encode_vdvjvk_insn(insn, a0, a1, a2));
> break;
> + case INDEX_op_add_vec:
> + if (const_args[2]) {
> + int64_t value = sextract64(a2, 0, 8 << vece);
> + /* Try vaddi/vsubi */
> + if (0 <= value && value <= 0x1f) {
> + tcg_out32(s, encode_vdvjuk5_insn(add_vec_imm_insn[vece], a0, \
> + a1, value));
> + break;
> + } else if (-0x1f <= value && value < 0) {
> + tcg_out32(s, encode_vdvjuk5_insn(sub_vec_imm_insn[vece], a0, \
> + a1, -value));
> + break;
> + }
> +
> + /* constraint TCG_CT_CONST_VADD ensures unreachable */
> + g_assert_not_reached();
> + }
> + tcg_out32(s, encode_vdvjvk_insn(add_vec_insn[vece], a0, a1, a2));
> + break;
> + case INDEX_op_sub_vec:
> + if (const_args[2]) {
> + int64_t value = sextract64(a2, 0, 8 << vece);
> + /* Try vaddi/vsubi */
> + if (0 <= value && value <= 0x1f) {
> + tcg_out32(s, encode_vdvjuk5_insn(sub_vec_imm_insn[vece], a0, \
> + a1, value));
> + break;
> + } else if (-0x1f <= value && value < 0) {
> + tcg_out32(s, encode_vdvjuk5_insn(add_vec_imm_insn[vece], a0, \
> + a1, -value));
> + break;
> + }
> +
> + /* constraint TCG_CT_CONST_VADD ensures unreachable */
> + g_assert_not_reached();
> + }
> + tcg_out32(s, encode_vdvjvk_insn(sub_vec_insn[vece], a0, a1, a2));
It would be nice to share code here. Perhaps:
case INDEX_op_sub_vec:
if (!const_args[2]) {
tcg_out32(s, encode_vdvjvk_insn(sub_vec_insn[vece], a0, a1, a2));
break;
}
a2 = -a2;
goto do_addi_vec;
case INDEX_op_add_vec:
if (!const_args[2]) {
tcg_out32(s, encode_vdvjvk_insn(add_vec_insn[vece], a0, a1, a2));
break;
}
do_addi_vec:
...
or a helper function.
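As a rough, self-contained illustration of what that shared immediate path would decide (the function name pick_addsub_imm and the stand-in sextract64 are hypothetical; the real helper would emit add_vec_imm_insn[vece] or sub_vec_imm_insn[vece] via encode_vdvjuk5_insn rather than return flags):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Minimal stand-in for TCG's sextract64(): sign-extend the low
   `len` bits of `value` starting at bit `start`. */
static int64_t sextract64(uint64_t value, int start, int len)
{
    return (int64_t)(value << (64 - len - start)) >> (64 - len);
}

/*
 * Given the duplicated constant operand of add/sub_vec with element
 * size 8 << vece bits, decide which unsigned-5-bit immediate insn
 * applies.  Returns true with *use_sub and *uimm5 set when a
 * vaddi/vsubi encoding exists; false means fall back to the
 * register-register form.
 */
static bool pick_addsub_imm(int64_t a2, unsigned vece, bool is_sub,
                            bool *use_sub, uint8_t *uimm5)
{
    int64_t value = sextract64(a2, 0, 8 << vece);

    if (is_sub) {
        value = -value;          /* sub with imm C == add with imm -C */
    }
    if (0 <= value && value <= 0x1f) {
        *use_sub = false;        /* vaddi.{b,h,w,d}u */
        *uimm5 = value;
        return true;
    }
    if (-0x1f <= value && value < 0) {
        *use_sub = true;         /* vsubi.{b,h,w,d}u */
        *uimm5 = -value;
        return true;
    }
    return false;
}
```

Note the false branch should be unreachable in the backend proper, since the wA constraint (TCG_CT_CONST_VADD) already restricts constants to [-0x1f, 0x1f].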
Otherwise,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
r~