* [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support
@ 2023-07-11 16:58 Max Chou
2023-07-11 16:59 ` [PATCH v8 01/15] target/riscv: Refactor some of the generic vector functionality Max Chou
` (15 more replies)
0 siblings, 16 replies; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:58 UTC (permalink / raw)
To: qemu-devel, qemu-riscv; +Cc: dbarboza, alistair23, Max Chou
This patchset implements the Zvbb, Zvbc, Zvkned, Zvknh, Zvksh, Zvkg, and
Zvksed extensions of the draft RISC-V vector cryptography specification,
as per version v20230620 (commit 168e7b4) [1].
This is an update to the patchset submitted to qemu-devel on
Wed, 12 Jul 2023 00:31:21 +0800
v2:
Squashed commits into one commit per extension with separate
commits for each refactoring.
Unified trans_rvzvk*.c.inc files into one trans_rvvk.c.inc.
Style fixes in insn32.decode and other files.
Added macros for EGS values in translation functions.
Updated from v20230303 to v20230407 of the spec:
Zvkb has been split into Zvbb and Zvbc.
vbrev, vclz, vctz, vcpop and vwsll have been added to Zvbb.
v3:
New patch 03/19 removes redundant "cpu_vl == 0" checks from
trans_rvv.c.inc.
The introduction of new TCG ops has been factored out of patch 11/19
and into patch 09/19.
These ops are now added to non-RISC-V-specific files.
v4:
New patch 08/17 fixes the tcg_gen_gvec_andcs temporary variable
issue.
Patch 09/17 fixes imm mode for vror.vi.
Rebased to riscv-to-apply.next branch (de395bb):
Replaced vstart constraint checking with a TCG op.
Verified against the code examples provided by the vector crypto spec
repository (riscv-crypto).
v5:
Imported aes-round.h for Zvkned extension.
Rebased to 20230620110758.787479-1-richard.henderson@linaro.org
Exposed the cpu properties from v4 patch 17/17 in the individual
patches that add each extension.
Removed v4 patch 08/17 that is queued to tcg-next.
v6:
Resent the same content because v5 was split up by a broken
git-send-email run
v7:
Fixed endianness issues
Replaced the TCG ops for vstart & vl EGS checking with a helper function
Moved the SEW selection of vsha2c[hl].vv into the TCG translation
v8:
Rebased to the riscv-to-apply.next branch
Fixed a win32 cross-build issue
Removed a redundant swap in AES key expansion (vaeskf1 & vaeskf2)
As v20230620 is a frozen version, we are not expecting any significant
changes to the specification or this patch series.
Please note that the Zvkt data-independent execution latency extension
(and all extensions including it) has not been implemented, and we
would recommend not using these patches in an environment where timing
attacks are an issue.
Work performed by Dickon, Lawrence, Nazar, Kiran, and William from
Codethink, sponsored by SiFive, as well as by Max Chou and Frank Chang
from SiFive.
[1] https://github.com/riscv/riscv-crypto/releases
Thanks to those who have already reviewed:
Daniel Henrique Barboza dbarboza@ventanamicro.com
[PATCH v4 09/17] target/riscv: Add Zvbb ISA extension support
[PATCH v4 10/17] target/riscv: Add Zvkned ISA extension support
[PATCH v4 11/17] target/riscv: Add Zvknh ISA extension support
[PATCH v4 12/17] target/riscv: Add Zvksh ISA extension support
[PATCH v4 13/17] target/riscv: Add Zvkg ISA extension support
Weiwei Li liweiwei@iscas.ac.cn
[PATCH v3 01/19] target/riscv: Refactor some of the generic vector functionality
[PATCH v3 02/19] target/riscv: Refactor vector-vector translation macro
[PATCH v3 03/19] target/riscv: Remove redundant "cpu_vl == 0" checks
[PATCH v3 05/19] target/riscv: Move vector translation checks
[PATCH v3 06/19] target/riscv: Refactor translation of vector-widening instruction
[PATCH v3 07/19] target/riscv: Refactor some of the generic vector functionality
[PATCH v3 19/19] target/riscv: Expose Zvk* and Zvb[b, c] cpu properties
Richard Henderson richard.henderson@linaro.org
[PATCH v2 02/17] target/riscv: Refactor vector-vector translation macro
[PATCH v2 04/17] target/riscv: Move vector translation checks
[PATCH v2 05/17] target/riscv: Refactor translation of vector-widening instruction
[PATCH v2 07/17] qemu/bitops.h: Limit rotate amounts
[PATCH v2 08/17] qemu/host-utils.h: Add clz and ctz functions for lower-bit integers
[PATCH v2 14/17] crypto: Create sm4_subword
Alistair Francis alistair.francis@wdc.com
[PATCH v2 02/17] target/riscv: Refactor vector-vector translation macro
Philipp Tomsich philipp.tomsich@vrull.eu
Various v1 reviews
Christoph Müllner christoph.muellner@vrull.eu
Various v1 reviews
Dickon Hood (2):
target/riscv: Refactor translation of vector-widening instruction
target/riscv: Add Zvbb ISA extension support
Kiran Ostrolenk (4):
target/riscv: Refactor some of the generic vector functionality
target/riscv: Refactor vector-vector translation macro
target/riscv: Refactor some of the generic vector functionality
target/riscv: Add Zvknh ISA extension support
Lawrence Hunter (2):
target/riscv: Add Zvbc ISA extension support
target/riscv: Add Zvksh ISA extension support
Max Chou (3):
crypto: Create sm4_subword
crypto: Add SM4 constant parameter CK
target/riscv: Add Zvksed ISA extension support
Nazar Kazakov (4):
target/riscv: Remove redundant "cpu_vl == 0" checks
target/riscv: Move vector translation checks
target/riscv: Add Zvkned ISA extension support
target/riscv: Add Zvkg ISA extension support
crypto/sm4.c | 10 +
include/crypto/sm4.h | 9 +
target/arm/tcg/crypto_helper.c | 10 +-
target/riscv/cpu.c | 37 +
target/riscv/cpu_cfg.h | 8 +
target/riscv/helper.h | 98 +++
target/riscv/insn32.decode | 58 ++
target/riscv/insn_trans/trans_rvv.c.inc | 171 ++--
target/riscv/insn_trans/trans_rvvk.c.inc | 606 ++++++++++++++
target/riscv/meson.build | 4 +-
target/riscv/translate.c | 1 +
target/riscv/vcrypto_helper.c | 970 +++++++++++++++++++++++
target/riscv/vector_helper.c | 243 +-----
target/riscv/vector_internals.c | 81 ++
target/riscv/vector_internals.h | 228 ++++++
15 files changed, 2179 insertions(+), 355 deletions(-)
create mode 100644 target/riscv/insn_trans/trans_rvvk.c.inc
create mode 100644 target/riscv/vcrypto_helper.c
create mode 100644 target/riscv/vector_internals.c
create mode 100644 target/riscv/vector_internals.h
--
2.34.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v8 01/15] target/riscv: Refactor some of the generic vector functionality
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-07-19 0:54 ` Alistair Francis
2023-07-11 16:59 ` [PATCH v8 02/15] target/riscv: Refactor vector-vector translation macro Max Chou
` (14 subsequent siblings)
15 siblings, 1 reply; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Kiran Ostrolenk, Weiwei Li, Max Chou,
Palmer Dabbelt, Alistair Francis, Bin Meng, Liu Zhiwei
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Take some functions/macros out of `vector_helper` and put them in a new
module called `vector_internals`. This ensures they can be used by both
the vector and vector-crypto helpers (the latter implemented in
subsequent commits).
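For illustration only (not part of this patch): once these macros are
exported from vector_internals.h, a new helper file only has to supply
the per-element operation. The DO_XOR_DEMO macro and vxor_demo_vv_d
helper below are made-up names, and the matching DEF_HELPER_6
declaration in helper.h is assumed.
    /* hypothetical helper built entirely from the exported macros */
    #include "qemu/osdep.h"
    #include "exec/helper-proto.h"
    #include "vector_internals.h"
    #define DO_XOR_DEMO(N, M) ((N) ^ (M))
    /* expands to do_vxor_demo_vv_d(), one 64-bit element pair per call */
    RVVCALL(OPIVV2, vxor_demo_vv_d, OP_UUU_D, H8, H8, H8, DO_XOR_DEMO)
    /* expands to HELPER(vxor_demo_vv_d)(), looping over vl with the
     * usual mask, tail and vstart handling */
    GEN_VEXT_VV(vxor_demo_vv_d, 8)
The Zvbc patch later in this series builds its vclmul helpers from the
same pattern.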
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
---
target/riscv/meson.build | 1 +
target/riscv/vector_helper.c | 201 +-------------------------------
target/riscv/vector_internals.c | 81 +++++++++++++
target/riscv/vector_internals.h | 182 +++++++++++++++++++++++++++++
4 files changed, 265 insertions(+), 200 deletions(-)
create mode 100644 target/riscv/vector_internals.c
create mode 100644 target/riscv/vector_internals.h
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
index 7f56c5f88d4..c3801ee5e04 100644
--- a/target/riscv/meson.build
+++ b/target/riscv/meson.build
@@ -16,6 +16,7 @@ riscv_ss.add(files(
'gdbstub.c',
'op_helper.c',
'vector_helper.c',
+ 'vector_internals.c',
'bitmanip_helper.c',
'translate.c',
'm128_helper.c',
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 71bb9b4457b..6434fd2f7e8 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -26,6 +26,7 @@
#include "fpu/softfloat.h"
#include "tcg/tcg-gvec-desc.h"
#include "internals.h"
+#include "vector_internals.h"
#include <math.h>
target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
@@ -72,68 +73,6 @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
return vl;
}
-/*
- * Note that vector data is stored in host-endian 64-bit chunks,
- * so addressing units smaller than that needs a host-endian fixup.
- */
-#if HOST_BIG_ENDIAN
-#define H1(x) ((x) ^ 7)
-#define H1_2(x) ((x) ^ 6)
-#define H1_4(x) ((x) ^ 4)
-#define H2(x) ((x) ^ 3)
-#define H4(x) ((x) ^ 1)
-#define H8(x) ((x))
-#else
-#define H1(x) (x)
-#define H1_2(x) (x)
-#define H1_4(x) (x)
-#define H2(x) (x)
-#define H4(x) (x)
-#define H8(x) (x)
-#endif
-
-static inline uint32_t vext_nf(uint32_t desc)
-{
- return FIELD_EX32(simd_data(desc), VDATA, NF);
-}
-
-static inline uint32_t vext_vm(uint32_t desc)
-{
- return FIELD_EX32(simd_data(desc), VDATA, VM);
-}
-
-/*
- * Encode LMUL to lmul as following:
- * LMUL vlmul lmul
- * 1 000 0
- * 2 001 1
- * 4 010 2
- * 8 011 3
- * - 100 -
- * 1/8 101 -3
- * 1/4 110 -2
- * 1/2 111 -1
- */
-static inline int32_t vext_lmul(uint32_t desc)
-{
- return sextract32(FIELD_EX32(simd_data(desc), VDATA, LMUL), 0, 3);
-}
-
-static inline uint32_t vext_vta(uint32_t desc)
-{
- return FIELD_EX32(simd_data(desc), VDATA, VTA);
-}
-
-static inline uint32_t vext_vma(uint32_t desc)
-{
- return FIELD_EX32(simd_data(desc), VDATA, VMA);
-}
-
-static inline uint32_t vext_vta_all_1s(uint32_t desc)
-{
- return FIELD_EX32(simd_data(desc), VDATA, VTA_ALL_1S);
-}
-
/*
* Get the maximum number of elements can be operated.
*
@@ -152,21 +91,6 @@ static inline uint32_t vext_max_elems(uint32_t desc, uint32_t log2_esz)
return scale < 0 ? vlenb >> -scale : vlenb << scale;
}
-/*
- * Get number of total elements, including prestart, body and tail elements.
- * Note that when LMUL < 1, the tail includes the elements past VLMAX that
- * are held in the same vector register.
- */
-static inline uint32_t vext_get_total_elems(CPURISCVState *env, uint32_t desc,
- uint32_t esz)
-{
- uint32_t vlenb = simd_maxsz(desc);
- uint32_t sew = 1 << FIELD_EX64(env->vtype, VTYPE, VSEW);
- int8_t emul = ctzl(esz) - ctzl(sew) + vext_lmul(desc) < 0 ? 0 :
- ctzl(esz) - ctzl(sew) + vext_lmul(desc);
- return (vlenb << emul) / esz;
-}
-
static inline target_ulong adjust_addr(CPURISCVState *env, target_ulong addr)
{
return (addr & ~env->cur_pmmask) | env->cur_pmbase;
@@ -199,20 +123,6 @@ static void probe_pages(CPURISCVState *env, target_ulong addr,
}
}
-/* set agnostic elements to 1s */
-static void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
- uint32_t tot)
-{
- if (is_agnostic == 0) {
- /* policy undisturbed */
- return;
- }
- if (tot - cnt == 0) {
- return;
- }
- memset(base + cnt, -1, tot - cnt);
-}
-
static inline void vext_set_elem_mask(void *v0, int index,
uint8_t value)
{
@@ -222,18 +132,6 @@ static inline void vext_set_elem_mask(void *v0, int index,
((uint64_t *)v0)[idx] = deposit64(old, pos, 1, value);
}
-/*
- * Earlier designs (pre-0.9) had a varying number of bits
- * per mask value (MLEN). In the 0.9 design, MLEN=1.
- * (Section 4.5)
- */
-static inline int vext_elem_mask(void *v0, int index)
-{
- int idx = index / 64;
- int pos = index % 64;
- return (((uint64_t *)v0)[idx] >> pos) & 1;
-}
-
/* elements operations for load and store */
typedef void vext_ldst_elem_fn(CPURISCVState *env, target_ulong addr,
uint32_t idx, void *vd, uintptr_t retaddr);
@@ -728,18 +626,11 @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
* Vector Integer Arithmetic Instructions
*/
-/* expand macro args before macro */
-#define RVVCALL(macro, ...) macro(__VA_ARGS__)
-
/* (TD, T1, T2, TX1, TX2) */
#define OP_SSS_B int8_t, int8_t, int8_t, int8_t, int8_t
#define OP_SSS_H int16_t, int16_t, int16_t, int16_t, int16_t
#define OP_SSS_W int32_t, int32_t, int32_t, int32_t, int32_t
#define OP_SSS_D int64_t, int64_t, int64_t, int64_t, int64_t
-#define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
-#define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
-#define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
-#define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t
#define OP_SUS_B int8_t, uint8_t, int8_t, uint8_t, int8_t
#define OP_SUS_H int16_t, uint16_t, int16_t, uint16_t, int16_t
#define OP_SUS_W int32_t, uint32_t, int32_t, uint32_t, int32_t
@@ -763,16 +654,6 @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
#define NOP_UUU_H uint16_t, uint16_t, uint32_t, uint16_t, uint32_t
#define NOP_UUU_W uint32_t, uint32_t, uint64_t, uint32_t, uint64_t
-/* operation of two vector elements */
-typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);
-
-#define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \
-static void do_##NAME(void *vd, void *vs1, void *vs2, int i) \
-{ \
- TX1 s1 = *((T1 *)vs1 + HS1(i)); \
- TX2 s2 = *((T2 *)vs2 + HS2(i)); \
- *((TD *)vd + HD(i)) = OP(s2, s1); \
-}
#define DO_SUB(N, M) (N - M)
#define DO_RSUB(N, M) (M - N)
@@ -785,40 +666,6 @@ RVVCALL(OPIVV2, vsub_vv_h, OP_SSS_H, H2, H2, H2, DO_SUB)
RVVCALL(OPIVV2, vsub_vv_w, OP_SSS_W, H4, H4, H4, DO_SUB)
RVVCALL(OPIVV2, vsub_vv_d, OP_SSS_D, H8, H8, H8, DO_SUB)
-static void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
- CPURISCVState *env, uint32_t desc,
- opivv2_fn *fn, uint32_t esz)
-{
- uint32_t vm = vext_vm(desc);
- uint32_t vl = env->vl;
- uint32_t total_elems = vext_get_total_elems(env, desc, esz);
- uint32_t vta = vext_vta(desc);
- uint32_t vma = vext_vma(desc);
- uint32_t i;
-
- for (i = env->vstart; i < vl; i++) {
- if (!vm && !vext_elem_mask(v0, i)) {
- /* set masked-off elements to 1s */
- vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
- continue;
- }
- fn(vd, vs1, vs2, i);
- }
- env->vstart = 0;
- /* set tail elements to 1s */
- vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
-}
-
-/* generate the helpers for OPIVV */
-#define GEN_VEXT_VV(NAME, ESZ) \
-void HELPER(NAME)(void *vd, void *v0, void *vs1, \
- void *vs2, CPURISCVState *env, \
- uint32_t desc) \
-{ \
- do_vext_vv(vd, v0, vs1, vs2, env, desc, \
- do_##NAME, ESZ); \
-}
-
GEN_VEXT_VV(vadd_vv_b, 1)
GEN_VEXT_VV(vadd_vv_h, 2)
GEN_VEXT_VV(vadd_vv_w, 4)
@@ -828,18 +675,6 @@ GEN_VEXT_VV(vsub_vv_h, 2)
GEN_VEXT_VV(vsub_vv_w, 4)
GEN_VEXT_VV(vsub_vv_d, 8)
-typedef void opivx2_fn(void *vd, target_long s1, void *vs2, int i);
-
-/*
- * (T1)s1 gives the real operator type.
- * (TX1)(T1)s1 expands the operator type of widen or narrow operations.
- */
-#define OPIVX2(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \
-static void do_##NAME(void *vd, target_long s1, void *vs2, int i) \
-{ \
- TX2 s2 = *((T2 *)vs2 + HS2(i)); \
- *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)s1); \
-}
RVVCALL(OPIVX2, vadd_vx_b, OP_SSS_B, H1, H1, DO_ADD)
RVVCALL(OPIVX2, vadd_vx_h, OP_SSS_H, H2, H2, DO_ADD)
@@ -854,40 +689,6 @@ RVVCALL(OPIVX2, vrsub_vx_h, OP_SSS_H, H2, H2, DO_RSUB)
RVVCALL(OPIVX2, vrsub_vx_w, OP_SSS_W, H4, H4, DO_RSUB)
RVVCALL(OPIVX2, vrsub_vx_d, OP_SSS_D, H8, H8, DO_RSUB)
-static void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
- CPURISCVState *env, uint32_t desc,
- opivx2_fn fn, uint32_t esz)
-{
- uint32_t vm = vext_vm(desc);
- uint32_t vl = env->vl;
- uint32_t total_elems = vext_get_total_elems(env, desc, esz);
- uint32_t vta = vext_vta(desc);
- uint32_t vma = vext_vma(desc);
- uint32_t i;
-
- for (i = env->vstart; i < vl; i++) {
- if (!vm && !vext_elem_mask(v0, i)) {
- /* set masked-off elements to 1s */
- vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
- continue;
- }
- fn(vd, s1, vs2, i);
- }
- env->vstart = 0;
- /* set tail elements to 1s */
- vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
-}
-
-/* generate the helpers for OPIVX */
-#define GEN_VEXT_VX(NAME, ESZ) \
-void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
- void *vs2, CPURISCVState *env, \
- uint32_t desc) \
-{ \
- do_vext_vx(vd, v0, s1, vs2, env, desc, \
- do_##NAME, ESZ); \
-}
-
GEN_VEXT_VX(vadd_vx_b, 1)
GEN_VEXT_VX(vadd_vx_h, 2)
GEN_VEXT_VX(vadd_vx_w, 4)
diff --git a/target/riscv/vector_internals.c b/target/riscv/vector_internals.c
new file mode 100644
index 00000000000..9cf5c17cdea
--- /dev/null
+++ b/target/riscv/vector_internals.c
@@ -0,0 +1,81 @@
+/*
+ * RISC-V Vector Extension Internals
+ *
+ * Copyright (c) 2020 T-Head Semiconductor Co., Ltd. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "vector_internals.h"
+
+/* set agnostic elements to 1s */
+void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
+ uint32_t tot)
+{
+ if (is_agnostic == 0) {
+ /* policy undisturbed */
+ return;
+ }
+ if (tot - cnt == 0) {
+ return ;
+ }
+ memset(base + cnt, -1, tot - cnt);
+}
+
+void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
+ CPURISCVState *env, uint32_t desc,
+ opivv2_fn *fn, uint32_t esz)
+{
+ uint32_t vm = vext_vm(desc);
+ uint32_t vl = env->vl;
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+ uint32_t vta = vext_vta(desc);
+ uint32_t vma = vext_vma(desc);
+ uint32_t i;
+
+ for (i = env->vstart; i < vl; i++) {
+ if (!vm && !vext_elem_mask(v0, i)) {
+ /* set masked-off elements to 1s */
+ vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
+ continue;
+ }
+ fn(vd, vs1, vs2, i);
+ }
+ env->vstart = 0;
+ /* set tail elements to 1s */
+ vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
+}
+
+void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
+ CPURISCVState *env, uint32_t desc,
+ opivx2_fn fn, uint32_t esz)
+{
+ uint32_t vm = vext_vm(desc);
+ uint32_t vl = env->vl;
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+ uint32_t vta = vext_vta(desc);
+ uint32_t vma = vext_vma(desc);
+ uint32_t i;
+
+ for (i = env->vstart; i < vl; i++) {
+ if (!vm && !vext_elem_mask(v0, i)) {
+ /* set masked-off elements to 1s */
+ vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
+ continue;
+ }
+ fn(vd, s1, vs2, i);
+ }
+ env->vstart = 0;
+ /* set tail elements to 1s */
+ vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
+}
diff --git a/target/riscv/vector_internals.h b/target/riscv/vector_internals.h
new file mode 100644
index 00000000000..749d138bebe
--- /dev/null
+++ b/target/riscv/vector_internals.h
@@ -0,0 +1,182 @@
+/*
+ * RISC-V Vector Extension Internals
+ *
+ * Copyright (c) 2020 T-Head Semiconductor Co., Ltd. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef TARGET_RISCV_VECTOR_INTERNALS_H
+#define TARGET_RISCV_VECTOR_INTERNALS_H
+
+#include "qemu/osdep.h"
+#include "qemu/bitops.h"
+#include "cpu.h"
+#include "tcg/tcg-gvec-desc.h"
+#include "internals.h"
+
+static inline uint32_t vext_nf(uint32_t desc)
+{
+ return FIELD_EX32(simd_data(desc), VDATA, NF);
+}
+
+/*
+ * Note that vector data is stored in host-endian 64-bit chunks,
+ * so addressing units smaller than that needs a host-endian fixup.
+ */
+#if HOST_BIG_ENDIAN
+#define H1(x) ((x) ^ 7)
+#define H1_2(x) ((x) ^ 6)
+#define H1_4(x) ((x) ^ 4)
+#define H2(x) ((x) ^ 3)
+#define H4(x) ((x) ^ 1)
+#define H8(x) ((x))
+#else
+#define H1(x) (x)
+#define H1_2(x) (x)
+#define H1_4(x) (x)
+#define H2(x) (x)
+#define H4(x) (x)
+#define H8(x) (x)
+#endif
+
+/*
+ * Encode LMUL to lmul as following:
+ * LMUL vlmul lmul
+ * 1 000 0
+ * 2 001 1
+ * 4 010 2
+ * 8 011 3
+ * - 100 -
+ * 1/8 101 -3
+ * 1/4 110 -2
+ * 1/2 111 -1
+ */
+static inline int32_t vext_lmul(uint32_t desc)
+{
+ return sextract32(FIELD_EX32(simd_data(desc), VDATA, LMUL), 0, 3);
+}
+
+static inline uint32_t vext_vm(uint32_t desc)
+{
+ return FIELD_EX32(simd_data(desc), VDATA, VM);
+}
+
+static inline uint32_t vext_vma(uint32_t desc)
+{
+ return FIELD_EX32(simd_data(desc), VDATA, VMA);
+}
+
+static inline uint32_t vext_vta(uint32_t desc)
+{
+ return FIELD_EX32(simd_data(desc), VDATA, VTA);
+}
+
+static inline uint32_t vext_vta_all_1s(uint32_t desc)
+{
+ return FIELD_EX32(simd_data(desc), VDATA, VTA_ALL_1S);
+}
+
+/*
+ * Earlier designs (pre-0.9) had a varying number of bits
+ * per mask value (MLEN). In the 0.9 design, MLEN=1.
+ * (Section 4.5)
+ */
+static inline int vext_elem_mask(void *v0, int index)
+{
+ int idx = index / 64;
+ int pos = index % 64;
+ return (((uint64_t *)v0)[idx] >> pos) & 1;
+}
+
+/*
+ * Get number of total elements, including prestart, body and tail elements.
+ * Note that when LMUL < 1, the tail includes the elements past VLMAX that
+ * are held in the same vector register.
+ */
+static inline uint32_t vext_get_total_elems(CPURISCVState *env, uint32_t desc,
+ uint32_t esz)
+{
+ uint32_t vlenb = simd_maxsz(desc);
+ uint32_t sew = 1 << FIELD_EX64(env->vtype, VTYPE, VSEW);
+ int8_t emul = ctzl(esz) - ctzl(sew) + vext_lmul(desc) < 0 ? 0 :
+ ctzl(esz) - ctzl(sew) + vext_lmul(desc);
+ return (vlenb << emul) / esz;
+}
+
+/* set agnostic elements to 1s */
+void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
+ uint32_t tot);
+
+/* expand macro args before macro */
+#define RVVCALL(macro, ...) macro(__VA_ARGS__)
+
+/* (TD, T1, T2, TX1, TX2) */
+#define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
+#define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
+#define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
+#define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t
+
+/* operation of two vector elements */
+typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);
+
+#define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \
+static void do_##NAME(void *vd, void *vs1, void *vs2, int i) \
+{ \
+ TX1 s1 = *((T1 *)vs1 + HS1(i)); \
+ TX2 s2 = *((T2 *)vs2 + HS2(i)); \
+ *((TD *)vd + HD(i)) = OP(s2, s1); \
+}
+
+void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
+ CPURISCVState *env, uint32_t desc,
+ opivv2_fn *fn, uint32_t esz);
+
+/* generate the helpers for OPIVV */
+#define GEN_VEXT_VV(NAME, ESZ) \
+void HELPER(NAME)(void *vd, void *v0, void *vs1, \
+ void *vs2, CPURISCVState *env, \
+ uint32_t desc) \
+{ \
+ do_vext_vv(vd, v0, vs1, vs2, env, desc, \
+ do_##NAME, ESZ); \
+}
+
+typedef void opivx2_fn(void *vd, target_long s1, void *vs2, int i);
+
+/*
+ * (T1)s1 gives the real operator type.
+ * (TX1)(T1)s1 expands the operator type of widen or narrow operations.
+ */
+#define OPIVX2(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \
+static void do_##NAME(void *vd, target_long s1, void *vs2, int i) \
+{ \
+ TX2 s2 = *((T2 *)vs2 + HS2(i)); \
+ *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)s1); \
+}
+
+void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
+ CPURISCVState *env, uint32_t desc,
+ opivx2_fn fn, uint32_t esz);
+
+/* generate the helpers for OPIVX */
+#define GEN_VEXT_VX(NAME, ESZ) \
+void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
+ void *vs2, CPURISCVState *env, \
+ uint32_t desc) \
+{ \
+ do_vext_vx(vd, v0, s1, vs2, env, desc, \
+ do_##NAME, ESZ); \
+}
+
+#endif /* TARGET_RISCV_VECTOR_INTERNALS_H */
--
2.34.1
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v8 02/15] target/riscv: Refactor vector-vector translation macro
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
2023-07-11 16:59 ` [PATCH v8 01/15] target/riscv: Refactor some of the generic vector functionality Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-07-11 16:59 ` [PATCH v8 03/15] target/riscv: Remove redundant "cpu_vl == 0" checks Max Chou
` (13 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Kiran Ostrolenk, Richard Henderson,
Alistair Francis, Weiwei Li, Max Chou, Palmer Dabbelt, Bin Meng,
Liu Zhiwei, Frank Chang
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Refactor the non-SEW-specific code out of `GEN_OPIVV_TRANS` into a new
function `opivv_trans` (similar to `opivi_trans`). `opivv_trans` will be
reused in subsequent vector-crypto commits.
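For a concrete picture of the intended reuse, this is roughly what the
GEN_VV_MASKED_TRANS(vclmul_vv, vclmul_vv_check) macro added by the Zvbc
patch later in this series expands to (shown here for illustration; see
that patch for the real definition):
    static bool trans_vclmul_vv(DisasContext *s, arg_rmrr *a)
    {
        /* run the Zvbc-specific check, then reuse the shared helper */
        if (vclmul_vv_check(s, a)) {
            return opivv_trans(a->rd, a->rs1, a->rs2, a->vm,
                               gen_helper_vclmul_vv, s);
        }
        return false;
    }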
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
---
target/riscv/insn_trans/trans_rvv.c.inc | 62 +++++++++++++------------
1 file changed, 32 insertions(+), 30 deletions(-)
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index c2f7527f53f..4a8e62a8bef 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -1643,38 +1643,40 @@ GEN_OPIWX_WIDEN_TRANS(vwadd_wx)
GEN_OPIWX_WIDEN_TRANS(vwsubu_wx)
GEN_OPIWX_WIDEN_TRANS(vwsub_wx)
+static bool opivv_trans(uint32_t vd, uint32_t vs1, uint32_t vs2, uint32_t vm,
+ gen_helper_gvec_4_ptr *fn, DisasContext *s)
+{
+ uint32_t data = 0;
+ TCGLabel *over = gen_new_label();
+ tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
+
+ data = FIELD_DP32(data, VDATA, VM, vm);
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+ data = FIELD_DP32(data, VDATA, VTA, s->vta);
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);
+ data = FIELD_DP32(data, VDATA, VMA, s->vma);
+ tcg_gen_gvec_4_ptr(vreg_ofs(s, vd), vreg_ofs(s, 0), vreg_ofs(s, vs1),
+ vreg_ofs(s, vs2), cpu_env, s->cfg_ptr->vlen / 8,
+ s->cfg_ptr->vlen / 8, data, fn);
+ mark_vs_dirty(s);
+ gen_set_label(over);
+ return true;
+}
+
/* Vector Integer Add-with-Carry / Subtract-with-Borrow Instructions */
/* OPIVV without GVEC IR */
-#define GEN_OPIVV_TRANS(NAME, CHECK) \
-static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
-{ \
- if (CHECK(s, a)) { \
- uint32_t data = 0; \
- static gen_helper_gvec_4_ptr * const fns[4] = { \
- gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
- gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
- }; \
- TCGLabel *over = gen_new_label(); \
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
- tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
- \
- data = FIELD_DP32(data, VDATA, VM, a->vm); \
- data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
- data = FIELD_DP32(data, VDATA, VTA, s->vta); \
- data = \
- FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);\
- data = FIELD_DP32(data, VDATA, VMA, s->vma); \
- tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
- vreg_ofs(s, a->rs1), \
- vreg_ofs(s, a->rs2), cpu_env, \
- s->cfg_ptr->vlen / 8, \
- s->cfg_ptr->vlen / 8, data, \
- fns[s->sew]); \
- mark_vs_dirty(s); \
- gen_set_label(over); \
- return true; \
- } \
- return false; \
+#define GEN_OPIVV_TRANS(NAME, CHECK) \
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+{ \
+ if (CHECK(s, a)) { \
+ static gen_helper_gvec_4_ptr * const fns[4] = { \
+ gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
+ gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
+ }; \
+ return opivv_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s);\
+ } \
+ return false; \
}
/*
--
2.34.1
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v8 03/15] target/riscv: Remove redundant "cpu_vl == 0" checks
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
2023-07-11 16:59 ` [PATCH v8 01/15] target/riscv: Refactor some of the generic vector functionality Max Chou
2023-07-11 16:59 ` [PATCH v8 02/15] target/riscv: Refactor vector-vector translation macro Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-08-02 14:22 ` Alistair Francis
2023-07-11 16:59 ` [PATCH v8 04/15] target/riscv: Add Zvbc ISA extension support Max Chou
` (12 subsequent siblings)
15 siblings, 1 reply; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Nazar Kazakov, Weiwei Li, Max Chou,
Palmer Dabbelt, Alistair Francis, Bin Meng, Liu Zhiwei,
Richard Henderson, Junqiang Wang
From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Remove the redundant "vl == 0" check: the unsigned "vstart >= vl" check
that follows already covers it, since any vstart satisfies
vstart >= vl when vl == 0.
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
---
target/riscv/insn_trans/trans_rvv.c.inc | 31 +------------------------
1 file changed, 1 insertion(+), 30 deletions(-)
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index 4a8e62a8bef..7e194aae34a 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -617,7 +617,6 @@ static bool ldst_us_trans(uint32_t vd, uint32_t rs1, uint32_t data,
TCGv_i32 desc;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
dest = tcg_temp_new_ptr();
@@ -786,7 +785,6 @@ static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2,
TCGv_i32 desc;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
dest = tcg_temp_new_ptr();
@@ -893,7 +891,6 @@ static bool ldst_index_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
TCGv_i32 desc;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
dest = tcg_temp_new_ptr();
@@ -1034,7 +1031,6 @@ static bool ldff_trans(uint32_t vd, uint32_t rs1, uint32_t data,
TCGv_i32 desc;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
dest = tcg_temp_new_ptr();
@@ -1191,7 +1187,6 @@ do_opivv_gvec(DisasContext *s, arg_rmrr *a, GVecGen3Fn *gvec_fn,
return false;
}
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
@@ -1241,7 +1236,6 @@ static bool opivx_trans(uint32_t vd, uint32_t rs1, uint32_t vs2, uint32_t vm,
uint32_t data = 0;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
dest = tcg_temp_new_ptr();
@@ -1405,7 +1399,6 @@ static bool opivi_trans(uint32_t vd, uint32_t imm, uint32_t vs2, uint32_t vm,
uint32_t data = 0;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
dest = tcg_temp_new_ptr();
@@ -1492,7 +1485,6 @@ static bool do_opivv_widen(DisasContext *s, arg_rmrr *a,
if (checkfn(s, a)) {
uint32_t data = 0;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
data = FIELD_DP32(data, VDATA, VM, a->vm);
@@ -1575,7 +1567,6 @@ static bool do_opiwv_widen(DisasContext *s, arg_rmrr *a,
if (opiwv_widen_check(s, a)) {
uint32_t data = 0;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
data = FIELD_DP32(data, VDATA, VM, a->vm);
@@ -1648,7 +1639,6 @@ static bool opivv_trans(uint32_t vd, uint32_t vs1, uint32_t vs2, uint32_t vm,
{
uint32_t data = 0;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
data = FIELD_DP32(data, VDATA, VM, vm);
@@ -1842,7 +1832,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
gen_helper_##NAME##_w, \
}; \
TCGLabel *over = gen_new_label(); \
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
\
data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -2054,7 +2043,6 @@ static bool trans_vmv_v_v(DisasContext *s, arg_vmv_v_v *a)
gen_helper_vmv_v_v_w, gen_helper_vmv_v_v_d,
};
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
tcg_gen_gvec_2_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
@@ -2078,7 +2066,6 @@ static bool trans_vmv_v_x(DisasContext *s, arg_vmv_v_x *a)
vext_check_ss(s, a->rd, 0, 1)) {
TCGv s1;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
s1 = get_gpr(s, a->rs1, EXT_SIGN);
@@ -2140,7 +2127,6 @@ static bool trans_vmv_v_i(DisasContext *s, arg_vmv_v_i *a)
gen_helper_vmv_v_x_w, gen_helper_vmv_v_x_d,
};
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
s1 = tcg_constant_i64(simm);
@@ -2288,7 +2274,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
}; \
TCGLabel *over = gen_new_label(); \
gen_set_rm(s, RISCV_FRM_DYN); \
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
\
data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -2323,7 +2308,6 @@ static bool opfvf_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
TCGv_i64 t1;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
dest = tcg_temp_new_ptr();
@@ -2408,7 +2392,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
}; \
TCGLabel *over = gen_new_label(); \
gen_set_rm(s, RISCV_FRM_DYN); \
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);\
\
data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -2483,7 +2466,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
}; \
TCGLabel *over = gen_new_label(); \
gen_set_rm(s, RISCV_FRM_DYN); \
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
\
data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -2601,7 +2583,6 @@ static bool do_opfv(DisasContext *s, arg_rmr *a,
uint32_t data = 0;
TCGLabel *over = gen_new_label();
gen_set_rm_chkfrm(s, rm);
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
data = FIELD_DP32(data, VDATA, VM, a->vm);
@@ -2713,7 +2694,6 @@ static bool trans_vfmv_v_f(DisasContext *s, arg_vfmv_v_f *a)
gen_helper_vmv_v_x_d,
};
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
t1 = tcg_temp_new_i64();
@@ -2792,7 +2772,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
}; \
TCGLabel *over = gen_new_label(); \
gen_set_rm_chkfrm(s, FRM); \
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
\
data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -2844,7 +2823,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
}; \
TCGLabel *over = gen_new_label(); \
gen_set_rm(s, RISCV_FRM_DYN); \
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
\
data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -2912,7 +2890,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
}; \
TCGLabel *over = gen_new_label(); \
gen_set_rm_chkfrm(s, FRM); \
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
\
data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -2962,7 +2939,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
}; \
TCGLabel *over = gen_new_label(); \
gen_set_rm_chkfrm(s, FRM); \
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
\
data = FIELD_DP32(data, VDATA, VM, a->vm); \
@@ -3053,7 +3029,6 @@ static bool trans_##NAME(DisasContext *s, arg_r *a) \
uint32_t data = 0; \
gen_helper_gvec_4_ptr *fn = gen_helper_##NAME; \
TCGLabel *over = gen_new_label(); \
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
\
data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
@@ -3224,7 +3199,6 @@ static bool trans_vid_v(DisasContext *s, arg_vid_v *a)
require_vm(a->vm, a->rd)) {
uint32_t data = 0;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
data = FIELD_DP32(data, VDATA, VM, a->vm);
@@ -3411,7 +3385,6 @@ static bool trans_vmv_s_x(DisasContext *s, arg_vmv_s_x *a)
TCGv s1;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
t1 = tcg_temp_new_i64();
@@ -3468,8 +3441,7 @@ static bool trans_vfmv_s_f(DisasContext *s, arg_vfmv_s_f *a)
TCGv_i64 t1;
TCGLabel *over = gen_new_label();
- /* if vl == 0 or vstart >= vl, skip vector register write back */
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
+ /* if vstart >= vl, skip vector register write back */
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
/* NaN-box f[rs1] */
@@ -3720,7 +3692,6 @@ static bool int_ext_op(DisasContext *s, arg_rmr *a, uint8_t seq)
uint32_t data = 0;
gen_helper_gvec_3_ptr *fn;
TCGLabel *over = gen_new_label();
- tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
static gen_helper_gvec_3_ptr * const fns[6][4] = {
--
2.34.1
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v8 04/15] target/riscv: Add Zvbc ISA extension support
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
` (2 preceding siblings ...)
2023-07-11 16:59 ` [PATCH v8 03/15] target/riscv: Remove redundant "cpu_vl == 0" checks Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-08-02 14:29 ` Alistair Francis
2023-07-11 16:59 ` [PATCH v8 05/15] target/riscv: Move vector translation checks Max Chou
` (11 subsequent siblings)
15 siblings, 1 reply; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Lawrence Hunter, Nazar Kazakov, Max Chou,
Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li, Liu Zhiwei,
Kiran Ostrolenk
From: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
This commit adds support for the Zvbc vector-crypto extension, which
consists of the following instructions:
* vclmulh.[vx,vv]
* vclmul.[vx,vv]
Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.
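As a quick standalone sanity check of the carry-less multiply semantics
(illustration only, not part of the patch; clmul64_ref simply mirrors
the clmul64 helper added in vcrypto_helper.c):
    #include <stdint.h>
    #include <assert.h>
    /* bit-by-bit carry-less multiply, low 64 bits of the product */
    static uint64_t clmul64_ref(uint64_t y, uint64_t x)
    {
        uint64_t result = 0;
        for (int j = 63; j >= 0; j--) {
            if ((y >> j) & 1) {
                result ^= x << j;
            }
        }
        return result;
    }
    int main(void)
    {
        /* 0b011 clmul 0b101: partial products 0b101 ^ 0b1010 = 0b1111 */
        assert(clmul64_ref(0x3, 0x5) == 0xF);
        /* no carries: 0b011 clmul 0b011 = 0b101, not 9 as in integer mul */
        assert(clmul64_ref(0x3, 0x3) == 0x5);
        return 0;
    }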
Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Co-authored-by: Max Chou <max.chou@sifive.com>
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
[max.chou@sifive.com: Exposed x-zvbc property]
---
target/riscv/cpu.c | 9 ++++
target/riscv/cpu_cfg.h | 1 +
target/riscv/helper.h | 6 +++
target/riscv/insn32.decode | 6 +++
target/riscv/insn_trans/trans_rvvk.c.inc | 62 ++++++++++++++++++++++++
target/riscv/meson.build | 3 +-
target/riscv/translate.c | 1 +
target/riscv/vcrypto_helper.c | 59 ++++++++++++++++++++++
8 files changed, 146 insertions(+), 1 deletion(-)
create mode 100644 target/riscv/insn_trans/trans_rvvk.c.inc
create mode 100644 target/riscv/vcrypto_helper.c
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 9339c0241d6..d1dc78d483f 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -119,6 +119,7 @@ static const struct isa_ext_data isa_edata_arr[] = {
ISA_EXT_DATA_ENTRY(zksed, PRIV_VERSION_1_12_0, ext_zksed),
ISA_EXT_DATA_ENTRY(zksh, PRIV_VERSION_1_12_0, ext_zksh),
ISA_EXT_DATA_ENTRY(zkt, PRIV_VERSION_1_12_0, ext_zkt),
+ ISA_EXT_DATA_ENTRY(zvbc, PRIV_VERSION_1_12_0, ext_zvbc),
ISA_EXT_DATA_ENTRY(zve32f, PRIV_VERSION_1_10_0, ext_zve32f),
ISA_EXT_DATA_ENTRY(zve64f, PRIV_VERSION_1_10_0, ext_zve64f),
ISA_EXT_DATA_ENTRY(zve64d, PRIV_VERSION_1_10_0, ext_zve64d),
@@ -1268,6 +1269,11 @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
return;
}
+ if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
+ error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
+ return;
+ }
+
if (cpu->cfg.ext_zk) {
cpu->cfg.ext_zkn = true;
cpu->cfg.ext_zkr = true;
@@ -1845,6 +1851,9 @@ static Property riscv_cpu_extensions[] = {
DEFINE_PROP_BOOL("x-zvfbfmin", RISCVCPU, cfg.ext_zvfbfmin, false),
DEFINE_PROP_BOOL("x-zvfbfwma", RISCVCPU, cfg.ext_zvfbfwma, false),
+ /* Vector cryptography extensions */
+ DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
+
DEFINE_PROP_END_OF_LIST(),
};
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index 2bd9510ba3e..d25b36a5128 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -85,6 +85,7 @@ struct RISCVCPUConfig {
bool ext_zve32f;
bool ext_zve64f;
bool ext_zve64d;
+ bool ext_zvbc;
bool ext_zmmul;
bool ext_zvfbfmin;
bool ext_zvfbfwma;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index c95adaf08ac..6776777c4eb 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -1182,3 +1182,9 @@ DEF_HELPER_5(vfwcvtbf16_f_f_v, void, ptr, ptr, ptr, env, i32)
DEF_HELPER_6(vfwmaccbf16_vv, void, ptr, ptr, ptr, ptr, env, i32)
DEF_HELPER_6(vfwmaccbf16_vf, void, ptr, ptr, i64, ptr, env, i32)
+
+/* Vector crypto functions */
+DEF_HELPER_6(vclmul_vv, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vclmul_vx, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vclmulh_vv, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vclmulh_vx, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index e341fa92139..dd50d5a48c7 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -946,3 +946,9 @@ vfwcvtbf16_f_f_v 010010 . ..... 01101 001 ..... 1010111 @r2_vm
# *** Zvfbfwma Standard Extension ***
vfwmaccbf16_vv 111011 . ..... ..... 001 ..... 1010111 @r_vm
vfwmaccbf16_vf 111011 . ..... ..... 101 ..... 1010111 @r_vm
+
+# *** Zvbc vector crypto extension ***
+vclmul_vv 001100 . ..... ..... 010 ..... 1010111 @r_vm
+vclmul_vx 001100 . ..... ..... 110 ..... 1010111 @r_vm
+vclmulh_vv 001101 . ..... ..... 010 ..... 1010111 @r_vm
+vclmulh_vx 001101 . ..... ..... 110 ..... 1010111 @r_vm
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
new file mode 100644
index 00000000000..552b08a2fd1
--- /dev/null
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -0,0 +1,62 @@
+/*
+ * RISC-V translation routines for the vector crypto extension.
+ *
+ * Copyright (C) 2023 SiFive, Inc.
+ * Written by Codethink Ltd and SiFive.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+/*
+ * Zvbc
+ */
+
+#define GEN_VV_MASKED_TRANS(NAME, CHECK) \
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+ { \
+ if (CHECK(s, a)) { \
+ return opivv_trans(a->rd, a->rs1, a->rs2, a->vm, \
+ gen_helper_##NAME, s); \
+ } \
+ return false; \
+ }
+
+static bool vclmul_vv_check(DisasContext *s, arg_rmrr *a)
+{
+ return opivv_check(s, a) &&
+ s->cfg_ptr->ext_zvbc == true &&
+ s->sew == MO_64;
+}
+
+GEN_VV_MASKED_TRANS(vclmul_vv, vclmul_vv_check)
+GEN_VV_MASKED_TRANS(vclmulh_vv, vclmul_vv_check)
+
+#define GEN_VX_MASKED_TRANS(NAME, CHECK) \
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+ { \
+ if (CHECK(s, a)) { \
+ return opivx_trans(a->rd, a->rs1, a->rs2, a->vm, \
+ gen_helper_##NAME, s); \
+ } \
+ return false; \
+ }
+
+static bool vclmul_vx_check(DisasContext *s, arg_rmrr *a)
+{
+ return opivx_check(s, a) &&
+ s->cfg_ptr->ext_zvbc == true &&
+ s->sew == MO_64;
+}
+
+GEN_VX_MASKED_TRANS(vclmul_vx, vclmul_vx_check)
+GEN_VX_MASKED_TRANS(vclmulh_vx, vclmul_vx_check)
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
index c3801ee5e04..660078bda1f 100644
--- a/target/riscv/meson.build
+++ b/target/riscv/meson.build
@@ -21,7 +21,8 @@ riscv_ss.add(files(
'translate.c',
'm128_helper.c',
'crypto_helper.c',
- 'zce_helper.c'
+ 'zce_helper.c',
+ 'vcrypto_helper.c'
))
riscv_ss.add(when: 'CONFIG_KVM', if_true: files('kvm.c'), if_false: files('kvm-stub.c'))
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index 697df1be9e2..7dbf173adb5 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -1094,6 +1094,7 @@ static uint32_t opcode_at(DisasContextBase *dcbase, target_ulong pc)
#include "insn_trans/trans_rvzfa.c.inc"
#include "insn_trans/trans_rvzfh.c.inc"
#include "insn_trans/trans_rvk.c.inc"
+#include "insn_trans/trans_rvvk.c.inc"
#include "insn_trans/trans_privileged.c.inc"
#include "insn_trans/trans_svinval.c.inc"
#include "insn_trans/trans_rvbf16.c.inc"
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
new file mode 100644
index 00000000000..8b7c63d4997
--- /dev/null
+++ b/target/riscv/vcrypto_helper.c
@@ -0,0 +1,59 @@
+/*
+ * RISC-V Vector Crypto Extension Helpers for QEMU.
+ *
+ * Copyright (C) 2023 SiFive, Inc.
+ * Written by Codethink Ltd and SiFive.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/host-utils.h"
+#include "qemu/bitops.h"
+#include "cpu.h"
+#include "exec/memop.h"
+#include "exec/exec-all.h"
+#include "exec/helper-proto.h"
+#include "internals.h"
+#include "vector_internals.h"
+
+static uint64_t clmul64(uint64_t y, uint64_t x)
+{
+ uint64_t result = 0;
+ for (int j = 63; j >= 0; j--) {
+ if ((y >> j) & 1) {
+ result ^= (x << j);
+ }
+ }
+ return result;
+}
+
+static uint64_t clmulh64(uint64_t y, uint64_t x)
+{
+ uint64_t result = 0;
+ for (int j = 63; j >= 1; j--) {
+ if ((y >> j) & 1) {
+ result ^= (x >> (64 - j));
+ }
+ }
+ return result;
+}
+
+RVVCALL(OPIVV2, vclmul_vv, OP_UUU_D, H8, H8, H8, clmul64)
+GEN_VEXT_VV(vclmul_vv, 8)
+RVVCALL(OPIVX2, vclmul_vx, OP_UUU_D, H8, H8, clmul64)
+GEN_VEXT_VX(vclmul_vx, 8)
+RVVCALL(OPIVV2, vclmulh_vv, OP_UUU_D, H8, H8, H8, clmulh64)
+GEN_VEXT_VV(vclmulh_vv, 8)
+RVVCALL(OPIVX2, vclmulh_vx, OP_UUU_D, H8, H8, clmulh64)
+GEN_VEXT_VX(vclmulh_vx, 8)
--
2.34.1
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v8 05/15] target/riscv: Move vector translation checks
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
` (3 preceding siblings ...)
2023-07-11 16:59 ` [PATCH v8 04/15] target/riscv: Add Zvbc ISA extension support Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-07-11 16:59 ` [PATCH v8 06/15] target/riscv: Refactor translation of vector-widening instruction Max Chou
` (10 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Nazar Kazakov, Richard Henderson, Weiwei Li,
Max Chou, Palmer Dabbelt, Alistair Francis, Bin Meng, Liu Zhiwei,
Frank Chang
From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Move the checks out of the `do_opiv{v,x,i}_gvec{,_shift}` functions
and into the corresponding macros. This enables the functions to be
reused in subsequent commits without check duplication.
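To illustrate the reuse this enables (a sketch only: the vandn names
and the check body are placeholders for what the later Zvbb patch adds,
and tcg_gen_gvec_andcs is the gvec op referenced in the cover letter,
assumed to have the usual GVecGen2sFn signature):
    /* hypothetical Zvbb-style translation reusing do_opivx_gvec() with
     * its own extension check instead of the generic opivx_check() */
    static bool trans_vandn_vx(DisasContext *s, arg_rmrr *a)
    {
        static gen_helper_opivx * const fns[4] = {
            gen_helper_vandn_vx_b, gen_helper_vandn_vx_h,
            gen_helper_vandn_vx_w, gen_helper_vandn_vx_d,
        };
        if (!vandn_vx_check(s, a)) {    /* placeholder Zvbb check */
            return false;
        }
        return do_opivx_gvec(s, a, tcg_gen_gvec_andcs, fns[s->sew]);
    }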
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
---
target/riscv/insn_trans/trans_rvv.c.inc | 28 +++++++++++--------------
1 file changed, 12 insertions(+), 16 deletions(-)
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index 7e194aae34a..5dfd524c7d2 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -1183,9 +1183,6 @@ do_opivv_gvec(DisasContext *s, arg_rmrr *a, GVecGen3Fn *gvec_fn,
gen_helper_gvec_4_ptr *fn)
{
TCGLabel *over = gen_new_label();
- if (!opivv_check(s, a)) {
- return false;
- }
tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
@@ -1218,6 +1215,9 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
}; \
+ if (!opivv_check(s, a)) { \
+ return false; \
+ } \
return do_opivv_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
}
@@ -1276,10 +1276,6 @@ static inline bool
do_opivx_gvec(DisasContext *s, arg_rmrr *a, GVecGen2sFn *gvec_fn,
gen_helper_opivx *fn)
{
- if (!opivx_check(s, a)) {
- return false;
- }
-
if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
TCGv_i64 src1 = tcg_temp_new_i64();
@@ -1301,6 +1297,9 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
}; \
+ if (!opivx_check(s, a)) { \
+ return false; \
+ } \
return do_opivx_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
}
@@ -1432,10 +1431,6 @@ static inline bool
do_opivi_gvec(DisasContext *s, arg_rmrr *a, GVecGen2iFn *gvec_fn,
gen_helper_opivx *fn, imm_mode_t imm_mode)
{
- if (!opivx_check(s, a)) {
- return false;
- }
-
if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
gvec_fn(s->sew, vreg_ofs(s, a->rd), vreg_ofs(s, a->rs2),
extract_imm(s, a->rs1, imm_mode), MAXSZ(s), MAXSZ(s));
@@ -1453,6 +1448,9 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
gen_helper_##OPIVX##_b, gen_helper_##OPIVX##_h, \
gen_helper_##OPIVX##_w, gen_helper_##OPIVX##_d, \
}; \
+ if (!opivx_check(s, a)) { \
+ return false; \
+ } \
return do_opivi_gvec(s, a, tcg_gen_gvec_##SUF, \
fns[s->sew], IMM_MODE); \
}
@@ -1775,10 +1773,6 @@ static inline bool
do_opivx_gvec_shift(DisasContext *s, arg_rmrr *a, GVecGen2sFn32 *gvec_fn,
gen_helper_opivx *fn)
{
- if (!opivx_check(s, a)) {
- return false;
- }
-
if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
TCGv_i32 src1 = tcg_temp_new_i32();
@@ -1800,7 +1794,9 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
}; \
- \
+ if (!opivx_check(s, a)) { \
+ return false; \
+ } \
return do_opivx_gvec_shift(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
}
--
2.34.1
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v8 06/15] target/riscv: Refactor translation of vector-widening instruction
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
` (4 preceding siblings ...)
2023-07-11 16:59 ` [PATCH v8 05/15] target/riscv: Move vector translation checks Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-07-11 16:59 ` [PATCH v8 07/15] target/riscv: Refactor some of the generic vector functionality Max Chou
` (9 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Dickon Hood, Richard Henderson, Weiwei Li,
Max Chou, Palmer Dabbelt, Alistair Francis, Bin Meng, Liu Zhiwei,
Frank Chang
From: Dickon Hood <dickon.hood@codethink.co.uk>
Zvbb (implemented in a later commit) has a widening instruction, which
requires an extra check on the enabled extensions. Refactor
GEN_OPIVX_WIDEN_TRANS() to take a check function so the macro does not
have to be duplicated for that case.
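For example (a sketch; vwsll.vx is the Zvbb widening shift named in the
cover letter, and the exact check body below is a guess), the later Zvbb
patch can now pass its own extension check instead of copying the macro:
    /* hypothetical Zvbb check: generic widening rules plus the
     * extension gate, reusing the refactored macro unchanged */
    static bool vwsll_vx_check(DisasContext *s, arg_rmrr *a)
    {
        return s->cfg_ptr->ext_zvbb && opivx_widen_check(s, a);
    }
    GEN_OPIVX_WIDEN_TRANS(vwsll_vx, vwsll_vx_check)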
Signed-off-by: Dickon Hood <dickon.hood@codethink.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
---
target/riscv/insn_trans/trans_rvv.c.inc | 52 +++++++++++--------------
1 file changed, 23 insertions(+), 29 deletions(-)
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index 5dfd524c7d2..a5562505531 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -1526,30 +1526,24 @@ static bool opivx_widen_check(DisasContext *s, arg_rmrr *a)
vext_check_ds(s, a->rd, a->rs2, a->vm);
}
-static bool do_opivx_widen(DisasContext *s, arg_rmrr *a,
- gen_helper_opivx *fn)
-{
- if (opivx_widen_check(s, a)) {
- return opivx_trans(a->rd, a->rs1, a->rs2, a->vm, fn, s);
- }
- return false;
-}
-
-#define GEN_OPIVX_WIDEN_TRANS(NAME) \
-static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
-{ \
- static gen_helper_opivx * const fns[3] = { \
- gen_helper_##NAME##_b, \
- gen_helper_##NAME##_h, \
- gen_helper_##NAME##_w \
- }; \
- return do_opivx_widen(s, a, fns[s->sew]); \
+#define GEN_OPIVX_WIDEN_TRANS(NAME, CHECK) \
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+{ \
+ if (CHECK(s, a)) { \
+ static gen_helper_opivx * const fns[3] = { \
+ gen_helper_##NAME##_b, \
+ gen_helper_##NAME##_h, \
+ gen_helper_##NAME##_w \
+ }; \
+ return opivx_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s); \
+ } \
+ return false; \
}
-GEN_OPIVX_WIDEN_TRANS(vwaddu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwadd_vx)
-GEN_OPIVX_WIDEN_TRANS(vwsubu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwsub_vx)
+GEN_OPIVX_WIDEN_TRANS(vwaddu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwadd_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwsubu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwsub_vx, opivx_widen_check)
/* WIDEN OPIVV with WIDEN */
static bool opiwv_widen_check(DisasContext *s, arg_rmrr *a)
@@ -1997,9 +1991,9 @@ GEN_OPIVX_TRANS(vrem_vx, opivx_check)
GEN_OPIVV_WIDEN_TRANS(vwmul_vv, opivv_widen_check)
GEN_OPIVV_WIDEN_TRANS(vwmulu_vv, opivv_widen_check)
GEN_OPIVV_WIDEN_TRANS(vwmulsu_vv, opivv_widen_check)
-GEN_OPIVX_WIDEN_TRANS(vwmul_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmulu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmulsu_vx)
+GEN_OPIVX_WIDEN_TRANS(vwmul_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmulu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmulsu_vx, opivx_widen_check)
/* Vector Single-Width Integer Multiply-Add Instructions */
GEN_OPIVV_TRANS(vmacc_vv, opivv_check)
@@ -2015,10 +2009,10 @@ GEN_OPIVX_TRANS(vnmsub_vx, opivx_check)
GEN_OPIVV_WIDEN_TRANS(vwmaccu_vv, opivv_widen_check)
GEN_OPIVV_WIDEN_TRANS(vwmacc_vv, opivv_widen_check)
GEN_OPIVV_WIDEN_TRANS(vwmaccsu_vv, opivv_widen_check)
-GEN_OPIVX_WIDEN_TRANS(vwmaccu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmacc_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmaccsu_vx)
-GEN_OPIVX_WIDEN_TRANS(vwmaccus_vx)
+GEN_OPIVX_WIDEN_TRANS(vwmaccu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmacc_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmaccsu_vx, opivx_widen_check)
+GEN_OPIVX_WIDEN_TRANS(vwmaccus_vx, opivx_widen_check)
/* Vector Integer Merge and Move Instructions */
static bool trans_vmv_v_v(DisasContext *s, arg_vmv_v_v *a)
--
2.34.1
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v8 07/15] target/riscv: Refactor some of the generic vector functionality
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
` (5 preceding siblings ...)
2023-07-11 16:59 ` [PATCH v8 06/15] target/riscv: Refactor translation of vector-widening instruction Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-07-11 16:59 ` [PATCH v8 08/15] target/riscv: Add Zvbb ISA extension support Max Chou
` (8 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Kiran Ostrolenk, Weiwei Li, Max Chou,
Palmer Dabbelt, Alistair Francis, Bin Meng, Liu Zhiwei
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Move some macros out of `vector_helper` and into `vector_internals`.
This ensures they can be used by both the vector and vector-crypto helpers
(the latter are implemented in subsequent commits).
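As a quick illustration (not part of this patch), below is a minimal,
self-contained sketch of the OPIVV1-style pattern that the shared macros
follow: the macro stamps out a per-element function, and a driver loop
applies it across a vector. The simplified macro signature, the DO_NEG op
and the driver loop are hypothetical; the real macros in vector_internals.h
additionally handle masking, vstart and the tail/mask agnostic policies.

/*
 * Illustrative sketch only -- not QEMU code.  A stripped-down OPIVV1:
 * the macro defines a per-element unary op; a plain loop stands in for
 * the generated vector helper.
 */
#include <stdint.h>
#include <stdio.h>

#define OPIVV1(NAME, TD, T2, OP)                        \
    static void do_##NAME(void *vd, void *vs2, int i)   \
    {                                                   \
        T2 s2 = ((T2 *)vs2)[i];                         \
        ((TD *)vd)[i] = OP(s2);                         \
    }

#define DO_NEG(x) (-(x))
OPIVV1(vneg_v_w, int32_t, int32_t, DO_NEG) /* expands to do_vneg_v_w() */

int main(void)
{
    int32_t src[4] = {1, -2, 3, -4}, dst[4];

    for (int i = 0; i < 4; i++) {
        do_vneg_v_w(dst, src, i);          /* apply the element-wise op */
    }
    printf("%d %d %d %d\n", dst[0], dst[1], dst[2], dst[3]);
    return 0;
}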
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Max Chou <max.chou@sifive.com>
---
target/riscv/vector_helper.c | 42 ------------------------------
target/riscv/vector_internals.h | 46 +++++++++++++++++++++++++++++++++
2 files changed, 46 insertions(+), 42 deletions(-)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 6434fd2f7e8..ae94576f3df 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -635,9 +635,6 @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
#define OP_SUS_H int16_t, uint16_t, int16_t, uint16_t, int16_t
#define OP_SUS_W int32_t, uint32_t, int32_t, uint32_t, int32_t
#define OP_SUS_D int64_t, uint64_t, int64_t, uint64_t, int64_t
-#define WOP_UUU_B uint16_t, uint8_t, uint8_t, uint16_t, uint16_t
-#define WOP_UUU_H uint32_t, uint16_t, uint16_t, uint32_t, uint32_t
-#define WOP_UUU_W uint64_t, uint32_t, uint32_t, uint64_t, uint64_t
#define WOP_SSS_B int16_t, int8_t, int8_t, int16_t, int16_t
#define WOP_SSS_H int32_t, int16_t, int16_t, int32_t, int32_t
#define WOP_SSS_W int64_t, int32_t, int32_t, int64_t, int64_t
@@ -3437,11 +3434,6 @@ GEN_VEXT_VF(vfwnmsac_vf_h, 4)
GEN_VEXT_VF(vfwnmsac_vf_w, 8)
/* Vector Floating-Point Square-Root Instruction */
-/* (TD, T2, TX2) */
-#define OP_UU_H uint16_t, uint16_t, uint16_t
-#define OP_UU_W uint32_t, uint32_t, uint32_t
-#define OP_UU_D uint64_t, uint64_t, uint64_t
-
#define OPFVV1(NAME, TD, T2, TX2, HD, HS2, OP) \
static void do_##NAME(void *vd, void *vs2, int i, \
CPURISCVState *env) \
@@ -4138,40 +4130,6 @@ GEN_VEXT_CMP_VF(vmfge_vf_w, uint32_t, H4, vmfge32)
GEN_VEXT_CMP_VF(vmfge_vf_d, uint64_t, H8, vmfge64)
/* Vector Floating-Point Classify Instruction */
-#define OPIVV1(NAME, TD, T2, TX2, HD, HS2, OP) \
-static void do_##NAME(void *vd, void *vs2, int i) \
-{ \
- TX2 s2 = *((T2 *)vs2 + HS2(i)); \
- *((TD *)vd + HD(i)) = OP(s2); \
-}
-
-#define GEN_VEXT_V(NAME, ESZ) \
-void HELPER(NAME)(void *vd, void *v0, void *vs2, \
- CPURISCVState *env, uint32_t desc) \
-{ \
- uint32_t vm = vext_vm(desc); \
- uint32_t vl = env->vl; \
- uint32_t total_elems = \
- vext_get_total_elems(env, desc, ESZ); \
- uint32_t vta = vext_vta(desc); \
- uint32_t vma = vext_vma(desc); \
- uint32_t i; \
- \
- for (i = env->vstart; i < vl; i++) { \
- if (!vm && !vext_elem_mask(v0, i)) { \
- /* set masked-off elements to 1s */ \
- vext_set_elems_1s(vd, vma, i * ESZ, \
- (i + 1) * ESZ); \
- continue; \
- } \
- do_##NAME(vd, vs2, i); \
- } \
- env->vstart = 0; \
- /* set tail elements to 1s */ \
- vext_set_elems_1s(vd, vta, vl * ESZ, \
- total_elems * ESZ); \
-}
-
target_ulong fclass_h(uint64_t frs1)
{
float16 f = frs1;
diff --git a/target/riscv/vector_internals.h b/target/riscv/vector_internals.h
index 749d138bebe..8133111e5f6 100644
--- a/target/riscv/vector_internals.h
+++ b/target/riscv/vector_internals.h
@@ -121,12 +121,52 @@ void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
/* expand macro args before macro */
#define RVVCALL(macro, ...) macro(__VA_ARGS__)
+/* (TD, T2, TX2) */
+#define OP_UU_B uint8_t, uint8_t, uint8_t
+#define OP_UU_H uint16_t, uint16_t, uint16_t
+#define OP_UU_W uint32_t, uint32_t, uint32_t
+#define OP_UU_D uint64_t, uint64_t, uint64_t
+
/* (TD, T1, T2, TX1, TX2) */
#define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
#define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
#define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
#define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t
+#define OPIVV1(NAME, TD, T2, TX2, HD, HS2, OP) \
+static void do_##NAME(void *vd, void *vs2, int i) \
+{ \
+ TX2 s2 = *((T2 *)vs2 + HS2(i)); \
+ *((TD *)vd + HD(i)) = OP(s2); \
+}
+
+#define GEN_VEXT_V(NAME, ESZ) \
+void HELPER(NAME)(void *vd, void *v0, void *vs2, \
+ CPURISCVState *env, uint32_t desc) \
+{ \
+ uint32_t vm = vext_vm(desc); \
+ uint32_t vl = env->vl; \
+ uint32_t total_elems = \
+ vext_get_total_elems(env, desc, ESZ); \
+ uint32_t vta = vext_vta(desc); \
+ uint32_t vma = vext_vma(desc); \
+ uint32_t i; \
+ \
+ for (i = env->vstart; i < vl; i++) { \
+ if (!vm && !vext_elem_mask(v0, i)) { \
+ /* set masked-off elements to 1s */ \
+ vext_set_elems_1s(vd, vma, i * ESZ, \
+ (i + 1) * ESZ); \
+ continue; \
+ } \
+ do_##NAME(vd, vs2, i); \
+ } \
+ env->vstart = 0; \
+ /* set tail elements to 1s */ \
+ vext_set_elems_1s(vd, vta, vl * ESZ, \
+ total_elems * ESZ); \
+}
+
/* operation of two vector elements */
typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);
@@ -179,4 +219,10 @@ void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
do_##NAME, ESZ); \
}
+/* Three of the widening shortening macros: */
+/* (TD, T1, T2, TX1, TX2) */
+#define WOP_UUU_B uint16_t, uint8_t, uint8_t, uint16_t, uint16_t
+#define WOP_UUU_H uint32_t, uint16_t, uint16_t, uint32_t, uint32_t
+#define WOP_UUU_W uint64_t, uint32_t, uint32_t, uint64_t, uint64_t
+
#endif /* TARGET_RISCV_VECTOR_INTERNALS_H */
--
2.34.1
* [PATCH v8 08/15] target/riscv: Add Zvbb ISA extension support
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
` (6 preceding siblings ...)
2023-07-11 16:59 ` [PATCH v8 07/15] target/riscv: Refactor some of the generic vector functionality Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-08-02 14:38 ` Alistair Francis
2023-07-11 16:59 ` [PATCH v8 09/15] target/riscv: Add Zvkned " Max Chou
` (7 subsequent siblings)
15 siblings, 1 reply; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Dickon Hood, Nazar Kazakov, William Salmon,
Kiran Ostrolenk, Max Chou, Palmer Dabbelt, Alistair Francis,
Bin Meng, Weiwei Li, Liu Zhiwei, Lawrence Hunter
From: Dickon Hood <dickon.hood@codethink.co.uk>
This commit adds support for the Zvbb vector-crypto extension, which
consists of the following instructions:
* vrol.[vv,vx]
* vror.[vv,vx,vi]
* vbrev8.v
* vrev8.v
* vandn.[vv,vx]
* vbrev.v
* vclz.v
* vctz.v
* vcpop.v
* vwsll.[vv,vx,vi]
Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.
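For illustration only (not part of the patch), the standalone sketch below
shows the element-level behaviour two of these helpers implement for
SEW=32: a rotate-right (vror) whose rotate amount is taken modulo SEW, and
a widening shift-left (vwsll) whose result is 2*SEW wide with the shift
amount taken modulo 2*SEW. The driver and sample values are arbitrary.

/*
 * Illustrative sketch only -- not QEMU code.
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t ror32_elem(uint32_t x, uint32_t n)
{
    n &= 31;                                    /* rotate amount mod SEW */
    return n ? (x >> n) | (x << (32 - n)) : x;
}

static uint64_t wsll_elem(uint32_t x, uint32_t n)
{
    return (uint64_t)x << (n & 63);             /* shift amount mod 2*SEW */
}

int main(void)
{
    printf("vror  element: 0x%08x\n", ror32_elem(0x80000001u, 4));
    printf("vwsll element: 0x%016llx\n",
           (unsigned long long)wsll_elem(0x80000001u, 8));
    return 0;
}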
Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Co-authored-by: William Salmon <will.salmon@codethink.co.uk>
Co-authored-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
[max.chou@sifive.com: Fix imm mode of vror.vi]
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Signed-off-by: William Salmon <will.salmon@codethink.co.uk>
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Signed-off-by: Dickon Hood <dickon.hood@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
[max.chou@sifive.com: Exposed x-zvbb property]
---
target/riscv/cpu.c | 12 ++
target/riscv/cpu_cfg.h | 1 +
target/riscv/helper.h | 62 +++++++++
target/riscv/insn32.decode | 20 +++
target/riscv/insn_trans/trans_rvvk.c.inc | 164 +++++++++++++++++++++++
target/riscv/vcrypto_helper.c | 138 +++++++++++++++++++
6 files changed, 397 insertions(+)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index d1dc78d483f..e39a6118bbc 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -119,6 +119,7 @@ static const struct isa_ext_data isa_edata_arr[] = {
ISA_EXT_DATA_ENTRY(zksed, PRIV_VERSION_1_12_0, ext_zksed),
ISA_EXT_DATA_ENTRY(zksh, PRIV_VERSION_1_12_0, ext_zksh),
ISA_EXT_DATA_ENTRY(zkt, PRIV_VERSION_1_12_0, ext_zkt),
+ ISA_EXT_DATA_ENTRY(zvbb, PRIV_VERSION_1_12_0, ext_zvbb),
ISA_EXT_DATA_ENTRY(zvbc, PRIV_VERSION_1_12_0, ext_zvbc),
ISA_EXT_DATA_ENTRY(zve32f, PRIV_VERSION_1_10_0, ext_zve32f),
ISA_EXT_DATA_ENTRY(zve64f, PRIV_VERSION_1_10_0, ext_zve64f),
@@ -1269,6 +1270,16 @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
return;
}
+ /*
+ * In principle Zve*x would also suffice here, were they supported
+ * in qemu
+ */
+ if (cpu->cfg.ext_zvbb && !cpu->cfg.ext_zve32f) {
+ error_setg(errp,
+ "Vector crypto extensions require V or Zve* extensions");
+ return;
+ }
+
if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
return;
@@ -1852,6 +1863,7 @@ static Property riscv_cpu_extensions[] = {
DEFINE_PROP_BOOL("x-zvfbfwma", RISCVCPU, cfg.ext_zvfbfwma, false),
/* Vector cryptography extensions */
+ DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
DEFINE_PROP_END_OF_LIST(),
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index d25b36a5128..0e31ebeed93 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -85,6 +85,7 @@ struct RISCVCPUConfig {
bool ext_zve32f;
bool ext_zve64f;
bool ext_zve64d;
+ bool ext_zvbb;
bool ext_zvbc;
bool ext_zmmul;
bool ext_zvfbfmin;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 6776777c4eb..3db25ed2a20 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -1188,3 +1188,65 @@ DEF_HELPER_6(vclmul_vv, void, ptr, ptr, ptr, ptr, env, i32)
DEF_HELPER_6(vclmul_vx, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vclmulh_vv, void, ptr, ptr, ptr, ptr, env, i32)
DEF_HELPER_6(vclmulh_vx, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vror_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vror_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vror_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vror_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+
+DEF_HELPER_6(vror_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vror_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vror_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vror_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vrol_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vrol_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vrol_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vrol_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+
+DEF_HELPER_6(vrol_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vrol_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vrol_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vrol_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_5(vrev8_v_b, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vrev8_v_h, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vrev8_v_w, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vrev8_v_d, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev8_v_b, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev8_v_h, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev8_v_w, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev8_v_d, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev_v_b, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev_v_h, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev_v_w, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vbrev_v_d, void, ptr, ptr, ptr, env, i32)
+
+DEF_HELPER_5(vclz_v_b, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vclz_v_h, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vclz_v_w, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vclz_v_d, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vctz_v_b, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vctz_v_h, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vctz_v_w, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vctz_v_d, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vcpop_v_b, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vcpop_v_h, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vcpop_v_w, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vcpop_v_d, void, ptr, ptr, ptr, env, i32)
+
+DEF_HELPER_6(vwsll_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsll_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsll_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsll_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsll_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsll_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vandn_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vandn_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vandn_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vandn_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vandn_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vandn_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vandn_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vandn_vx_d, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index dd50d5a48c7..b982a8325bb 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -37,6 +37,7 @@
%imm_u 12:s20 !function=ex_shift_12
%imm_bs 30:2 !function=ex_shift_3
%imm_rnum 20:4
+%imm_z6 26:1 15:5
# Argument sets:
&empty
@@ -82,6 +83,7 @@
@r_vm ...... vm:1 ..... ..... ... ..... ....... &rmrr %rs2 %rs1 %rd
@r_vm_1 ...... . ..... ..... ... ..... ....... &rmrr vm=1 %rs2 %rs1 %rd
@r_vm_0 ...... . ..... ..... ... ..... ....... &rmrr vm=0 %rs2 %rs1 %rd
+@r2_zimm6 ..... . vm:1 ..... ..... ... ..... ....... &rmrr %rs2 rs1=%imm_z6 %rd
@r2_zimm11 . zimm:11 ..... ... ..... ....... %rs1 %rd
@r2_zimm10 .. zimm:10 ..... ... ..... ....... %rs1 %rd
@r2_s ....... ..... ..... ... ..... ....... %rs2 %rs1
@@ -952,3 +954,21 @@ vclmul_vv 001100 . ..... ..... 010 ..... 1010111 @r_vm
vclmul_vx 001100 . ..... ..... 110 ..... 1010111 @r_vm
vclmulh_vv 001101 . ..... ..... 010 ..... 1010111 @r_vm
vclmulh_vx 001101 . ..... ..... 110 ..... 1010111 @r_vm
+
+# *** Zvbb vector crypto extension ***
+vrol_vv 010101 . ..... ..... 000 ..... 1010111 @r_vm
+vrol_vx 010101 . ..... ..... 100 ..... 1010111 @r_vm
+vror_vv 010100 . ..... ..... 000 ..... 1010111 @r_vm
+vror_vx 010100 . ..... ..... 100 ..... 1010111 @r_vm
+vror_vi 01010. . ..... ..... 011 ..... 1010111 @r2_zimm6
+vbrev8_v 010010 . ..... 01000 010 ..... 1010111 @r2_vm
+vrev8_v 010010 . ..... 01001 010 ..... 1010111 @r2_vm
+vandn_vv 000001 . ..... ..... 000 ..... 1010111 @r_vm
+vandn_vx 000001 . ..... ..... 100 ..... 1010111 @r_vm
+vbrev_v 010010 . ..... 01010 010 ..... 1010111 @r2_vm
+vclz_v 010010 . ..... 01100 010 ..... 1010111 @r2_vm
+vctz_v 010010 . ..... 01101 010 ..... 1010111 @r2_vm
+vcpop_v 010010 . ..... 01110 010 ..... 1010111 @r2_vm
+vwsll_vv 110101 . ..... ..... 000 ..... 1010111 @r_vm
+vwsll_vx 110101 . ..... ..... 100 ..... 1010111 @r_vm
+vwsll_vi 110101 . ..... ..... 011 ..... 1010111 @r_vm
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index 552b08a2fd1..0e4b3376134 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -60,3 +60,167 @@ static bool vclmul_vx_check(DisasContext *s, arg_rmrr *a)
GEN_VX_MASKED_TRANS(vclmul_vx, vclmul_vx_check)
GEN_VX_MASKED_TRANS(vclmulh_vx, vclmul_vx_check)
+
+/*
+ * Zvbb
+ */
+
+#define GEN_OPIVI_GVEC_TRANS_CHECK(NAME, IMM_MODE, OPIVX, SUF, CHECK) \
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+ { \
+ if (CHECK(s, a)) { \
+ static gen_helper_opivx *const fns[4] = { \
+ gen_helper_##OPIVX##_b, \
+ gen_helper_##OPIVX##_h, \
+ gen_helper_##OPIVX##_w, \
+ gen_helper_##OPIVX##_d, \
+ }; \
+ return do_opivi_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew], \
+ IMM_MODE); \
+ } \
+ return false; \
+ }
+
+#define GEN_OPIVV_GVEC_TRANS_CHECK(NAME, SUF, CHECK) \
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+ { \
+ if (CHECK(s, a)) { \
+ static gen_helper_gvec_4_ptr *const fns[4] = { \
+ gen_helper_##NAME##_b, \
+ gen_helper_##NAME##_h, \
+ gen_helper_##NAME##_w, \
+ gen_helper_##NAME##_d, \
+ }; \
+ return do_opivv_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
+ } \
+ return false; \
+ }
+
+#define GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(NAME, SUF, CHECK) \
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+ { \
+ if (CHECK(s, a)) { \
+ static gen_helper_opivx *const fns[4] = { \
+ gen_helper_##NAME##_b, \
+ gen_helper_##NAME##_h, \
+ gen_helper_##NAME##_w, \
+ gen_helper_##NAME##_d, \
+ }; \
+ return do_opivx_gvec_shift(s, a, tcg_gen_gvec_##SUF, \
+ fns[s->sew]); \
+ } \
+ return false; \
+ }
+
+static bool zvbb_vv_check(DisasContext *s, arg_rmrr *a)
+{
+ return opivv_check(s, a) && s->cfg_ptr->ext_zvbb == true;
+}
+
+static bool zvbb_vx_check(DisasContext *s, arg_rmrr *a)
+{
+ return opivx_check(s, a) && s->cfg_ptr->ext_zvbb == true;
+}
+
+/* vrol.v[vx] */
+GEN_OPIVV_GVEC_TRANS_CHECK(vrol_vv, rotlv, zvbb_vv_check)
+GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(vrol_vx, rotls, zvbb_vx_check)
+
+/* vror.v[vxi] */
+GEN_OPIVV_GVEC_TRANS_CHECK(vror_vv, rotrv, zvbb_vv_check)
+GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(vror_vx, rotrs, zvbb_vx_check)
+GEN_OPIVI_GVEC_TRANS_CHECK(vror_vi, IMM_TRUNC_SEW, vror_vx, rotri, zvbb_vx_check)
+
+#define GEN_OPIVX_GVEC_TRANS_CHECK(NAME, SUF, CHECK) \
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+ { \
+ if (CHECK(s, a)) { \
+ static gen_helper_opivx *const fns[4] = { \
+ gen_helper_##NAME##_b, \
+ gen_helper_##NAME##_h, \
+ gen_helper_##NAME##_w, \
+ gen_helper_##NAME##_d, \
+ }; \
+ return do_opivx_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
+ } \
+ return false; \
+ }
+
+/* vandn.v[vx] */
+GEN_OPIVV_GVEC_TRANS_CHECK(vandn_vv, andc, zvbb_vv_check)
+GEN_OPIVX_GVEC_TRANS_CHECK(vandn_vx, andcs, zvbb_vx_check)
+
+#define GEN_OPIV_TRANS(NAME, CHECK) \
+ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
+ { \
+ if (CHECK(s, a)) { \
+ uint32_t data = 0; \
+ static gen_helper_gvec_3_ptr *const fns[4] = { \
+ gen_helper_##NAME##_b, \
+ gen_helper_##NAME##_h, \
+ gen_helper_##NAME##_w, \
+ gen_helper_##NAME##_d, \
+ }; \
+ TCGLabel *over = gen_new_label(); \
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
+ \
+ data = FIELD_DP32(data, VDATA, VM, a->vm); \
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
+ data = FIELD_DP32(data, VDATA, VTA, s->vta); \
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
+ data = FIELD_DP32(data, VDATA, VMA, s->vma); \
+ tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
+ vreg_ofs(s, a->rs2), cpu_env, \
+ s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, \
+ data, fns[s->sew]); \
+ mark_vs_dirty(s); \
+ gen_set_label(over); \
+ return true; \
+ } \
+ return false; \
+ }
+
+static bool zvbb_opiv_check(DisasContext *s, arg_rmr *a)
+{
+ return s->cfg_ptr->ext_zvbb == true &&
+ require_rvv(s) &&
+ vext_check_isa_ill(s) &&
+ vext_check_ss(s, a->rd, a->rs2, a->vm);
+}
+
+GEN_OPIV_TRANS(vbrev8_v, zvbb_opiv_check)
+GEN_OPIV_TRANS(vrev8_v, zvbb_opiv_check)
+GEN_OPIV_TRANS(vbrev_v, zvbb_opiv_check)
+GEN_OPIV_TRANS(vclz_v, zvbb_opiv_check)
+GEN_OPIV_TRANS(vctz_v, zvbb_opiv_check)
+GEN_OPIV_TRANS(vcpop_v, zvbb_opiv_check)
+
+static bool vwsll_vv_check(DisasContext *s, arg_rmrr *a)
+{
+ return s->cfg_ptr->ext_zvbb && opivv_widen_check(s, a);
+}
+
+static bool vwsll_vx_check(DisasContext *s, arg_rmrr *a)
+{
+ return s->cfg_ptr->ext_zvbb && opivx_widen_check(s, a);
+}
+
+/* OPIVI without GVEC IR */
+#define GEN_OPIVI_WIDEN_TRANS(NAME, IMM_MODE, OPIVX, CHECK) \
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+ { \
+ if (CHECK(s, a)) { \
+ static gen_helper_opivx *const fns[3] = { \
+ gen_helper_##OPIVX##_b, \
+ gen_helper_##OPIVX##_h, \
+ gen_helper_##OPIVX##_w, \
+ }; \
+ return opivi_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s, \
+ IMM_MODE); \
+ } \
+ return false; \
+ }
+
+GEN_OPIVV_WIDEN_TRANS(vwsll_vv, vwsll_vv_check)
+GEN_OPIVX_WIDEN_TRANS(vwsll_vx, vwsll_vx_check)
+GEN_OPIVI_WIDEN_TRANS(vwsll_vi, IMM_ZX, vwsll_vx, vwsll_vx_check)
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
index 8b7c63d4997..11239b59d6f 100644
--- a/target/riscv/vcrypto_helper.c
+++ b/target/riscv/vcrypto_helper.c
@@ -20,6 +20,7 @@
#include "qemu/osdep.h"
#include "qemu/host-utils.h"
#include "qemu/bitops.h"
+#include "qemu/bswap.h"
#include "cpu.h"
#include "exec/memop.h"
#include "exec/exec-all.h"
@@ -57,3 +58,140 @@ RVVCALL(OPIVV2, vclmulh_vv, OP_UUU_D, H8, H8, H8, clmulh64)
GEN_VEXT_VV(vclmulh_vv, 8)
RVVCALL(OPIVX2, vclmulh_vx, OP_UUU_D, H8, H8, clmulh64)
GEN_VEXT_VX(vclmulh_vx, 8)
+
+RVVCALL(OPIVV2, vror_vv_b, OP_UUU_B, H1, H1, H1, ror8)
+RVVCALL(OPIVV2, vror_vv_h, OP_UUU_H, H2, H2, H2, ror16)
+RVVCALL(OPIVV2, vror_vv_w, OP_UUU_W, H4, H4, H4, ror32)
+RVVCALL(OPIVV2, vror_vv_d, OP_UUU_D, H8, H8, H8, ror64)
+GEN_VEXT_VV(vror_vv_b, 1)
+GEN_VEXT_VV(vror_vv_h, 2)
+GEN_VEXT_VV(vror_vv_w, 4)
+GEN_VEXT_VV(vror_vv_d, 8)
+
+RVVCALL(OPIVX2, vror_vx_b, OP_UUU_B, H1, H1, ror8)
+RVVCALL(OPIVX2, vror_vx_h, OP_UUU_H, H2, H2, ror16)
+RVVCALL(OPIVX2, vror_vx_w, OP_UUU_W, H4, H4, ror32)
+RVVCALL(OPIVX2, vror_vx_d, OP_UUU_D, H8, H8, ror64)
+GEN_VEXT_VX(vror_vx_b, 1)
+GEN_VEXT_VX(vror_vx_h, 2)
+GEN_VEXT_VX(vror_vx_w, 4)
+GEN_VEXT_VX(vror_vx_d, 8)
+
+RVVCALL(OPIVV2, vrol_vv_b, OP_UUU_B, H1, H1, H1, rol8)
+RVVCALL(OPIVV2, vrol_vv_h, OP_UUU_H, H2, H2, H2, rol16)
+RVVCALL(OPIVV2, vrol_vv_w, OP_UUU_W, H4, H4, H4, rol32)
+RVVCALL(OPIVV2, vrol_vv_d, OP_UUU_D, H8, H8, H8, rol64)
+GEN_VEXT_VV(vrol_vv_b, 1)
+GEN_VEXT_VV(vrol_vv_h, 2)
+GEN_VEXT_VV(vrol_vv_w, 4)
+GEN_VEXT_VV(vrol_vv_d, 8)
+
+RVVCALL(OPIVX2, vrol_vx_b, OP_UUU_B, H1, H1, rol8)
+RVVCALL(OPIVX2, vrol_vx_h, OP_UUU_H, H2, H2, rol16)
+RVVCALL(OPIVX2, vrol_vx_w, OP_UUU_W, H4, H4, rol32)
+RVVCALL(OPIVX2, vrol_vx_d, OP_UUU_D, H8, H8, rol64)
+GEN_VEXT_VX(vrol_vx_b, 1)
+GEN_VEXT_VX(vrol_vx_h, 2)
+GEN_VEXT_VX(vrol_vx_w, 4)
+GEN_VEXT_VX(vrol_vx_d, 8)
+
+static uint64_t brev8(uint64_t val)
+{
+ val = ((val & 0x5555555555555555ull) << 1) |
+ ((val & 0xAAAAAAAAAAAAAAAAull) >> 1);
+ val = ((val & 0x3333333333333333ull) << 2) |
+ ((val & 0xCCCCCCCCCCCCCCCCull) >> 2);
+ val = ((val & 0x0F0F0F0F0F0F0F0Full) << 4) |
+ ((val & 0xF0F0F0F0F0F0F0F0ull) >> 4);
+
+ return val;
+}
+
+RVVCALL(OPIVV1, vbrev8_v_b, OP_UU_B, H1, H1, brev8)
+RVVCALL(OPIVV1, vbrev8_v_h, OP_UU_H, H2, H2, brev8)
+RVVCALL(OPIVV1, vbrev8_v_w, OP_UU_W, H4, H4, brev8)
+RVVCALL(OPIVV1, vbrev8_v_d, OP_UU_D, H8, H8, brev8)
+GEN_VEXT_V(vbrev8_v_b, 1)
+GEN_VEXT_V(vbrev8_v_h, 2)
+GEN_VEXT_V(vbrev8_v_w, 4)
+GEN_VEXT_V(vbrev8_v_d, 8)
+
+#define DO_IDENTITY(a) (a)
+RVVCALL(OPIVV1, vrev8_v_b, OP_UU_B, H1, H1, DO_IDENTITY)
+RVVCALL(OPIVV1, vrev8_v_h, OP_UU_H, H2, H2, bswap16)
+RVVCALL(OPIVV1, vrev8_v_w, OP_UU_W, H4, H4, bswap32)
+RVVCALL(OPIVV1, vrev8_v_d, OP_UU_D, H8, H8, bswap64)
+GEN_VEXT_V(vrev8_v_b, 1)
+GEN_VEXT_V(vrev8_v_h, 2)
+GEN_VEXT_V(vrev8_v_w, 4)
+GEN_VEXT_V(vrev8_v_d, 8)
+
+#define DO_ANDN(a, b) ((a) & ~(b))
+RVVCALL(OPIVV2, vandn_vv_b, OP_UUU_B, H1, H1, H1, DO_ANDN)
+RVVCALL(OPIVV2, vandn_vv_h, OP_UUU_H, H2, H2, H2, DO_ANDN)
+RVVCALL(OPIVV2, vandn_vv_w, OP_UUU_W, H4, H4, H4, DO_ANDN)
+RVVCALL(OPIVV2, vandn_vv_d, OP_UUU_D, H8, H8, H8, DO_ANDN)
+GEN_VEXT_VV(vandn_vv_b, 1)
+GEN_VEXT_VV(vandn_vv_h, 2)
+GEN_VEXT_VV(vandn_vv_w, 4)
+GEN_VEXT_VV(vandn_vv_d, 8)
+
+RVVCALL(OPIVX2, vandn_vx_b, OP_UUU_B, H1, H1, DO_ANDN)
+RVVCALL(OPIVX2, vandn_vx_h, OP_UUU_H, H2, H2, DO_ANDN)
+RVVCALL(OPIVX2, vandn_vx_w, OP_UUU_W, H4, H4, DO_ANDN)
+RVVCALL(OPIVX2, vandn_vx_d, OP_UUU_D, H8, H8, DO_ANDN)
+GEN_VEXT_VX(vandn_vx_b, 1)
+GEN_VEXT_VX(vandn_vx_h, 2)
+GEN_VEXT_VX(vandn_vx_w, 4)
+GEN_VEXT_VX(vandn_vx_d, 8)
+
+RVVCALL(OPIVV1, vbrev_v_b, OP_UU_B, H1, H1, revbit8)
+RVVCALL(OPIVV1, vbrev_v_h, OP_UU_H, H2, H2, revbit16)
+RVVCALL(OPIVV1, vbrev_v_w, OP_UU_W, H4, H4, revbit32)
+RVVCALL(OPIVV1, vbrev_v_d, OP_UU_D, H8, H8, revbit64)
+GEN_VEXT_V(vbrev_v_b, 1)
+GEN_VEXT_V(vbrev_v_h, 2)
+GEN_VEXT_V(vbrev_v_w, 4)
+GEN_VEXT_V(vbrev_v_d, 8)
+
+RVVCALL(OPIVV1, vclz_v_b, OP_UU_B, H1, H1, clz8)
+RVVCALL(OPIVV1, vclz_v_h, OP_UU_H, H2, H2, clz16)
+RVVCALL(OPIVV1, vclz_v_w, OP_UU_W, H4, H4, clz32)
+RVVCALL(OPIVV1, vclz_v_d, OP_UU_D, H8, H8, clz64)
+GEN_VEXT_V(vclz_v_b, 1)
+GEN_VEXT_V(vclz_v_h, 2)
+GEN_VEXT_V(vclz_v_w, 4)
+GEN_VEXT_V(vclz_v_d, 8)
+
+RVVCALL(OPIVV1, vctz_v_b, OP_UU_B, H1, H1, ctz8)
+RVVCALL(OPIVV1, vctz_v_h, OP_UU_H, H2, H2, ctz16)
+RVVCALL(OPIVV1, vctz_v_w, OP_UU_W, H4, H4, ctz32)
+RVVCALL(OPIVV1, vctz_v_d, OP_UU_D, H8, H8, ctz64)
+GEN_VEXT_V(vctz_v_b, 1)
+GEN_VEXT_V(vctz_v_h, 2)
+GEN_VEXT_V(vctz_v_w, 4)
+GEN_VEXT_V(vctz_v_d, 8)
+
+RVVCALL(OPIVV1, vcpop_v_b, OP_UU_B, H1, H1, ctpop8)
+RVVCALL(OPIVV1, vcpop_v_h, OP_UU_H, H2, H2, ctpop16)
+RVVCALL(OPIVV1, vcpop_v_w, OP_UU_W, H4, H4, ctpop32)
+RVVCALL(OPIVV1, vcpop_v_d, OP_UU_D, H8, H8, ctpop64)
+GEN_VEXT_V(vcpop_v_b, 1)
+GEN_VEXT_V(vcpop_v_h, 2)
+GEN_VEXT_V(vcpop_v_w, 4)
+GEN_VEXT_V(vcpop_v_d, 8)
+
+#define DO_SLL(N, M) (N << (M & (sizeof(N) * 8 - 1)))
+RVVCALL(OPIVV2, vwsll_vv_b, WOP_UUU_B, H2, H1, H1, DO_SLL)
+RVVCALL(OPIVV2, vwsll_vv_h, WOP_UUU_H, H4, H2, H2, DO_SLL)
+RVVCALL(OPIVV2, vwsll_vv_w, WOP_UUU_W, H8, H4, H4, DO_SLL)
+GEN_VEXT_VV(vwsll_vv_b, 2)
+GEN_VEXT_VV(vwsll_vv_h, 4)
+GEN_VEXT_VV(vwsll_vv_w, 8)
+
+RVVCALL(OPIVX2, vwsll_vx_b, WOP_UUU_B, H2, H1, DO_SLL)
+RVVCALL(OPIVX2, vwsll_vx_h, WOP_UUU_H, H4, H2, DO_SLL)
+RVVCALL(OPIVX2, vwsll_vx_w, WOP_UUU_W, H8, H4, DO_SLL)
+GEN_VEXT_VX(vwsll_vx_b, 2)
+GEN_VEXT_VX(vwsll_vx_h, 4)
+GEN_VEXT_VX(vwsll_vx_w, 8)
--
2.34.1
* [PATCH v8 09/15] target/riscv: Add Zvkned ISA extension support
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
` (7 preceding siblings ...)
2023-07-11 16:59 ` [PATCH v8 08/15] target/riscv: Add Zvbb ISA extension support Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-07-11 16:59 ` [PATCH v8 10/15] target/riscv: Add Zvknh " Max Chou
` (6 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Nazar Kazakov, Lawrence Hunter,
William Salmon, Max Chou, Palmer Dabbelt, Alistair Francis,
Bin Meng, Weiwei Li, Liu Zhiwei, Kiran Ostrolenk
From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
This commit adds support for the Zvkned vector-crypto extension, which
consists of the following instructions:
* vaesef.[vv,vs]
* vaesdf.[vv,vs]
* vaesdm.[vv,vs]
* vaesz.vs
* vaesem.[vv,vs]
* vaeskf1.vi
* vaeskf2.vi
Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.
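For illustration only (not part of the patch), the sketch below mirrors the
element-group handling these AES helpers rely on: vl and vstart must both
be multiples of the element group size (EGS = 4 elements at SEW=32, i.e.
one 128-bit group), and each helper iteration consumes one group. The toy
driver is hypothetical; the real check lives in the new egs_check helper,
which raises an illegal-instruction exception instead of returning a flag.

/*
 * Illustrative sketch only -- not QEMU code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define EGS 4 /* element group size, in SEW=32 elements */

static bool egs_ok(uint32_t vl, uint32_t vstart)
{
    /* the real helper raises RISCV_EXCP_ILLEGAL_INST on failure */
    return (vl % EGS == 0) && (vstart % EGS == 0);
}

int main(void)
{
    uint32_t vl = 8, vstart = 0;

    if (!egs_ok(vl, vstart)) {
        printf("illegal instruction\n");
        return 1;
    }
    for (uint32_t i = vstart / EGS; i < vl / EGS; i++) {
        /* one AES round would be applied to group i (vd[4i..4i+3]) here */
        printf("processing element group %u\n", i);
    }
    return 0;
}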
Co-authored-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Co-authored-by: William Salmon <will.salmon@codethink.co.uk>
[max.chou@sifive.com: Replaced vstart checking by TCG op]
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Signed-off-by: William Salmon <will.salmon@codethink.co.uk>
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
[max.chou@sifive.com: Imported aes-round.h and exposed x-zvkned
property]
[max.chou@sifive.com: Fixed endian issues and replaced the vstart & vl
EGS checking with a helper function]
[max.chou@sifive.com: Replaced bswap32 calls in aes key expanding]
---
target/riscv/cpu.c | 4 +-
target/riscv/cpu_cfg.h | 1 +
target/riscv/helper.h | 14 ++
target/riscv/insn32.decode | 14 ++
target/riscv/insn_trans/trans_rvvk.c.inc | 147 +++++++++++++++++
target/riscv/vcrypto_helper.c | 202 +++++++++++++++++++++++
6 files changed, 381 insertions(+), 1 deletion(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index e39a6118bbc..a1a40875f95 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -128,6 +128,7 @@ static const struct isa_ext_data isa_edata_arr[] = {
ISA_EXT_DATA_ENTRY(zvfbfwma, PRIV_VERSION_1_12_0, ext_zvfbfwma),
ISA_EXT_DATA_ENTRY(zvfh, PRIV_VERSION_1_12_0, ext_zvfh),
ISA_EXT_DATA_ENTRY(zvfhmin, PRIV_VERSION_1_12_0, ext_zvfhmin),
+ ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
@@ -1274,7 +1275,7 @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
* In principle Zve*x would also suffice here, were they supported
* in qemu
*/
- if (cpu->cfg.ext_zvbb && !cpu->cfg.ext_zve32f) {
+ if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned) && !cpu->cfg.ext_zve32f) {
error_setg(errp,
"Vector crypto extensions require V or Zve* extensions");
return;
@@ -1865,6 +1866,7 @@ static Property riscv_cpu_extensions[] = {
/* Vector cryptography extensions */
DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
+ DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
DEFINE_PROP_END_OF_LIST(),
};
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index 0e31ebeed93..c7eafe27c0d 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -87,6 +87,7 @@ struct RISCVCPUConfig {
bool ext_zve64d;
bool ext_zvbb;
bool ext_zvbc;
+ bool ext_zvkned;
bool ext_zmmul;
bool ext_zvfbfmin;
bool ext_zvfbfwma;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 3db25ed2a20..02e5dbe6ee4 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -1250,3 +1250,17 @@ DEF_HELPER_6(vandn_vx_b, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vandn_vx_h, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vandn_vx_w, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vandn_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_2(egs_check, void, i32, env)
+
+DEF_HELPER_4(vaesef_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesef_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesdf_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesdf_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesem_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesem_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesdm_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesdm_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vaesz_vs, void, ptr, ptr, env, i32)
+DEF_HELPER_5(vaeskf1_vi, void, ptr, ptr, i32, env, i32)
+DEF_HELPER_5(vaeskf2_vi, void, ptr, ptr, i32, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index b982a8325bb..4f3c50f10f3 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -75,6 +75,7 @@
@r_rm ....... ..... ..... ... ..... ....... %rs2 %rs1 %rm %rd
@r2_rm ....... ..... ..... ... ..... ....... %rs1 %rm %rd
@r2 ....... ..... ..... ... ..... ....... &r2 %rs1 %rd
+@r2_vm_1 ...... . ..... ..... ... ..... ....... &rmr vm=1 %rs2 %rd
@r2_nfvm ... ... vm:1 ..... ..... ... ..... ....... &r2nfvm %nf %rs1 %rd
@r2_vm ...... vm:1 ..... ..... ... ..... ....... &rmr %rs2 %rd
@r1_vm ...... vm:1 ..... ..... ... ..... ....... %rd
@@ -972,3 +973,16 @@ vcpop_v 010010 . ..... 01110 010 ..... 1010111 @r2_vm
vwsll_vv 110101 . ..... ..... 000 ..... 1010111 @r_vm
vwsll_vx 110101 . ..... ..... 100 ..... 1010111 @r_vm
vwsll_vi 110101 . ..... ..... 011 ..... 1010111 @r_vm
+
+# *** Zvkned vector crypto extension ***
+vaesef_vv 101000 1 ..... 00011 010 ..... 1110111 @r2_vm_1
+vaesef_vs 101001 1 ..... 00011 010 ..... 1110111 @r2_vm_1
+vaesdf_vv 101000 1 ..... 00001 010 ..... 1110111 @r2_vm_1
+vaesdf_vs 101001 1 ..... 00001 010 ..... 1110111 @r2_vm_1
+vaesem_vv 101000 1 ..... 00010 010 ..... 1110111 @r2_vm_1
+vaesem_vs 101001 1 ..... 00010 010 ..... 1110111 @r2_vm_1
+vaesdm_vv 101000 1 ..... 00000 010 ..... 1110111 @r2_vm_1
+vaesdm_vs 101001 1 ..... 00000 010 ..... 1110111 @r2_vm_1
+vaesz_vs 101001 1 ..... 00111 010 ..... 1110111 @r2_vm_1
+vaeskf1_vi 100010 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vaeskf2_vi 101010 1 ..... ..... 010 ..... 1110111 @r_vm_1
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index 0e4b3376134..817353f4d30 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -224,3 +224,150 @@ static bool vwsll_vx_check(DisasContext *s, arg_rmrr *a)
GEN_OPIVV_WIDEN_TRANS(vwsll_vv, vwsll_vv_check)
GEN_OPIVX_WIDEN_TRANS(vwsll_vx, vwsll_vx_check)
GEN_OPIVI_WIDEN_TRANS(vwsll_vi, IMM_ZX, vwsll_vx, vwsll_vx_check)
+
+/*
+ * Zvkned
+ */
+
+#define ZVKNED_EGS 4
+
+#define GEN_V_UNMASKED_TRANS(NAME, CHECK, EGS) \
+ static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
+ { \
+ if (CHECK(s, a)) { \
+ TCGv_ptr rd_v, rs2_v; \
+ TCGv_i32 desc, egs; \
+ uint32_t data = 0; \
+ TCGLabel *over = gen_new_label(); \
+ \
+ if (!s->vstart_eq_zero || !s->vl_eq_vlmax) { \
+ /* save opcode for unwinding in case we throw an exception */ \
+ decode_save_opc(s); \
+ egs = tcg_constant_i32(EGS); \
+ gen_helper_egs_check(egs, cpu_env); \
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
+ } \
+ \
+ data = FIELD_DP32(data, VDATA, VM, a->vm); \
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
+ data = FIELD_DP32(data, VDATA, VTA, s->vta); \
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
+ data = FIELD_DP32(data, VDATA, VMA, s->vma); \
+ rd_v = tcg_temp_new_ptr(); \
+ rs2_v = tcg_temp_new_ptr(); \
+ desc = tcg_constant_i32( \
+ simd_desc(s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, data)); \
+ tcg_gen_addi_ptr(rd_v, cpu_env, vreg_ofs(s, a->rd)); \
+ tcg_gen_addi_ptr(rs2_v, cpu_env, vreg_ofs(s, a->rs2)); \
+ gen_helper_##NAME(rd_v, rs2_v, cpu_env, desc); \
+ mark_vs_dirty(s); \
+ gen_set_label(over); \
+ return true; \
+ } \
+ return false; \
+ }
+
+static bool vaes_check_vv(DisasContext *s, arg_rmr *a)
+{
+ int egw_bytes = ZVKNED_EGS << s->sew;
+ return s->cfg_ptr->ext_zvkned == true &&
+ require_rvv(s) &&
+ vext_check_isa_ill(s) &&
+ MAXSZ(s) >= egw_bytes &&
+ require_align(a->rd, s->lmul) &&
+ require_align(a->rs2, s->lmul) &&
+ s->sew == MO_32;
+}
+
+static bool vaes_check_overlap(DisasContext *s, int vd, int vs2)
+{
+ int8_t op_size = s->lmul <= 0 ? 1 : 1 << s->lmul;
+ return !is_overlapped(vd, op_size, vs2, 1);
+}
+
+static bool vaes_check_vs(DisasContext *s, arg_rmr *a)
+{
+ int egw_bytes = ZVKNED_EGS << s->sew;
+ return vaes_check_overlap(s, a->rd, a->rs2) &&
+ MAXSZ(s) >= egw_bytes &&
+ s->cfg_ptr->ext_zvkned == true &&
+ require_rvv(s) &&
+ vext_check_isa_ill(s) &&
+ require_align(a->rd, s->lmul) &&
+ s->sew == MO_32;
+}
+
+GEN_V_UNMASKED_TRANS(vaesef_vv, vaes_check_vv, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesef_vs, vaes_check_vs, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesdf_vv, vaes_check_vv, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesdf_vs, vaes_check_vs, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesdm_vv, vaes_check_vv, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesdm_vs, vaes_check_vs, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesz_vs, vaes_check_vs, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesem_vv, vaes_check_vv, ZVKNED_EGS)
+GEN_V_UNMASKED_TRANS(vaesem_vs, vaes_check_vs, ZVKNED_EGS)
+
+#define GEN_VI_UNMASKED_TRANS(NAME, CHECK, EGS) \
+ static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
+ { \
+ if (CHECK(s, a)) { \
+ TCGv_ptr rd_v, rs2_v; \
+ TCGv_i32 uimm_v, desc, egs; \
+ uint32_t data = 0; \
+ TCGLabel *over = gen_new_label(); \
+ \
+ if (!s->vstart_eq_zero || !s->vl_eq_vlmax) { \
+ /* save opcode for unwinding in case we throw an exception */ \
+ decode_save_opc(s); \
+ egs = tcg_constant_i32(EGS); \
+ gen_helper_egs_check(egs, cpu_env); \
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
+ } \
+ \
+ data = FIELD_DP32(data, VDATA, VM, a->vm); \
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
+ data = FIELD_DP32(data, VDATA, VTA, s->vta); \
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
+ data = FIELD_DP32(data, VDATA, VMA, s->vma); \
+ \
+ rd_v = tcg_temp_new_ptr(); \
+ rs2_v = tcg_temp_new_ptr(); \
+ uimm_v = tcg_constant_i32(a->rs1); \
+ desc = tcg_constant_i32( \
+ simd_desc(s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, data)); \
+ tcg_gen_addi_ptr(rd_v, cpu_env, vreg_ofs(s, a->rd)); \
+ tcg_gen_addi_ptr(rs2_v, cpu_env, vreg_ofs(s, a->rs2)); \
+ gen_helper_##NAME(rd_v, rs2_v, uimm_v, cpu_env, desc); \
+ mark_vs_dirty(s); \
+ gen_set_label(over); \
+ return true; \
+ } \
+ return false; \
+ }
+
+static bool vaeskf1_check(DisasContext *s, arg_vaeskf1_vi *a)
+{
+ int egw_bytes = ZVKNED_EGS << s->sew;
+ return s->cfg_ptr->ext_zvkned == true &&
+ require_rvv(s) &&
+ vext_check_isa_ill(s) &&
+ MAXSZ(s) >= egw_bytes &&
+ s->sew == MO_32 &&
+ require_align(a->rd, s->lmul) &&
+ require_align(a->rs2, s->lmul);
+}
+
+static bool vaeskf2_check(DisasContext *s, arg_vaeskf2_vi *a)
+{
+ int egw_bytes = ZVKNED_EGS << s->sew;
+ return s->cfg_ptr->ext_zvkned == true &&
+ require_rvv(s) &&
+ vext_check_isa_ill(s) &&
+ MAXSZ(s) >= egw_bytes &&
+ s->sew == MO_32 &&
+ require_align(a->rd, s->lmul) &&
+ require_align(a->rs2, s->lmul);
+}
+
+GEN_VI_UNMASKED_TRANS(vaeskf1_vi, vaeskf1_check, ZVKNED_EGS)
+GEN_VI_UNMASKED_TRANS(vaeskf2_vi, vaeskf2_check, ZVKNED_EGS)
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
index 11239b59d6f..cca78184e9b 100644
--- a/target/riscv/vcrypto_helper.c
+++ b/target/riscv/vcrypto_helper.c
@@ -22,6 +22,8 @@
#include "qemu/bitops.h"
#include "qemu/bswap.h"
#include "cpu.h"
+#include "crypto/aes.h"
+#include "crypto/aes-round.h"
#include "exec/memop.h"
#include "exec/exec-all.h"
#include "exec/helper-proto.h"
@@ -195,3 +197,203 @@ RVVCALL(OPIVX2, vwsll_vx_w, WOP_UUU_W, H8, H4, DO_SLL)
GEN_VEXT_VX(vwsll_vx_b, 2)
GEN_VEXT_VX(vwsll_vx_h, 4)
GEN_VEXT_VX(vwsll_vx_w, 8)
+
+void HELPER(egs_check)(uint32_t egs, CPURISCVState *env)
+{
+ uint32_t vl = env->vl;
+ uint32_t vstart = env->vstart;
+
+ if (vl % egs != 0 || vstart % egs != 0) {
+ riscv_raise_exception(env, RISCV_EXCP_ILLEGAL_INST, GETPC());
+ }
+}
+
+static inline void xor_round_key(AESState *round_state, AESState *round_key)
+{
+ round_state->v = round_state->v ^ round_key->v;
+}
+
+#define GEN_ZVKNED_HELPER_VV(NAME, ...) \
+ void HELPER(NAME)(void *vd, void *vs2, CPURISCVState *env, \
+ uint32_t desc) \
+ { \
+ uint32_t vl = env->vl; \
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4); \
+ uint32_t vta = vext_vta(desc); \
+ \
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) { \
+ AESState round_key; \
+ round_key.d[0] = *((uint64_t *)vs2 + H8(i * 2 + 0)); \
+ round_key.d[1] = *((uint64_t *)vs2 + H8(i * 2 + 1)); \
+ AESState round_state; \
+ round_state.d[0] = *((uint64_t *)vd + H8(i * 2 + 0)); \
+ round_state.d[1] = *((uint64_t *)vd + H8(i * 2 + 1)); \
+ __VA_ARGS__; \
+ *((uint64_t *)vd + H8(i * 2 + 0)) = round_state.d[0]; \
+ *((uint64_t *)vd + H8(i * 2 + 1)) = round_state.d[1]; \
+ } \
+ env->vstart = 0; \
+ /* set tail elements to 1s */ \
+ vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4); \
+ }
+
+#define GEN_ZVKNED_HELPER_VS(NAME, ...) \
+ void HELPER(NAME)(void *vd, void *vs2, CPURISCVState *env, \
+ uint32_t desc) \
+ { \
+ uint32_t vl = env->vl; \
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4); \
+ uint32_t vta = vext_vta(desc); \
+ \
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) { \
+ AESState round_key; \
+ round_key.d[0] = *((uint64_t *)vs2 + H8(0)); \
+ round_key.d[1] = *((uint64_t *)vs2 + H8(1)); \
+ AESState round_state; \
+ round_state.d[0] = *((uint64_t *)vd + H8(i * 2 + 0)); \
+ round_state.d[1] = *((uint64_t *)vd + H8(i * 2 + 1)); \
+ __VA_ARGS__; \
+ *((uint64_t *)vd + H8(i * 2 + 0)) = round_state.d[0]; \
+ *((uint64_t *)vd + H8(i * 2 + 1)) = round_state.d[1]; \
+ } \
+ env->vstart = 0; \
+ /* set tail elements to 1s */ \
+ vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4); \
+ }
+
+GEN_ZVKNED_HELPER_VV(vaesef_vv, aesenc_SB_SR_AK(&round_state,
+ &round_state,
+ &round_key,
+ false);)
+GEN_ZVKNED_HELPER_VS(vaesef_vs, aesenc_SB_SR_AK(&round_state,
+ &round_state,
+ &round_key,
+ false);)
+GEN_ZVKNED_HELPER_VV(vaesdf_vv, aesdec_ISB_ISR_AK(&round_state,
+ &round_state,
+ &round_key,
+ false);)
+GEN_ZVKNED_HELPER_VS(vaesdf_vs, aesdec_ISB_ISR_AK(&round_state,
+ &round_state,
+ &round_key,
+ false);)
+GEN_ZVKNED_HELPER_VV(vaesem_vv, aesenc_SB_SR_MC_AK(&round_state,
+ &round_state,
+ &round_key,
+ false);)
+GEN_ZVKNED_HELPER_VS(vaesem_vs, aesenc_SB_SR_MC_AK(&round_state,
+ &round_state,
+ &round_key,
+ false);)
+GEN_ZVKNED_HELPER_VV(vaesdm_vv, aesdec_ISB_ISR_AK_IMC(&round_state,
+ &round_state,
+ &round_key,
+ false);)
+GEN_ZVKNED_HELPER_VS(vaesdm_vs, aesdec_ISB_ISR_AK_IMC(&round_state,
+ &round_state,
+ &round_key,
+ false);)
+GEN_ZVKNED_HELPER_VS(vaesz_vs, xor_round_key(&round_state, &round_key);)
+
+void HELPER(vaeskf1_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
+ CPURISCVState *env, uint32_t desc)
+{
+ uint32_t *vd = vd_vptr;
+ uint32_t *vs2 = vs2_vptr;
+ uint32_t vl = env->vl;
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
+ uint32_t vta = vext_vta(desc);
+
+ uimm &= 0b1111;
+ if (uimm > 10 || uimm == 0) {
+ uimm ^= 0b1000;
+ }
+
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+ uint32_t rk[8], tmp;
+ static const uint32_t rcon[] = {
+ 0x00000001, 0x00000002, 0x00000004, 0x00000008, 0x00000010,
+ 0x00000020, 0x00000040, 0x00000080, 0x0000001B, 0x00000036,
+ };
+
+ rk[0] = vs2[i * 4 + H4(0)];
+ rk[1] = vs2[i * 4 + H4(1)];
+ rk[2] = vs2[i * 4 + H4(2)];
+ rk[3] = vs2[i * 4 + H4(3)];
+ tmp = ror32(rk[3], 8);
+
+ rk[4] = rk[0] ^ (((uint32_t)AES_sbox[(tmp >> 24) & 0xff] << 24) |
+ ((uint32_t)AES_sbox[(tmp >> 16) & 0xff] << 16) |
+ ((uint32_t)AES_sbox[(tmp >> 8) & 0xff] << 8) |
+ ((uint32_t)AES_sbox[(tmp >> 0) & 0xff] << 0))
+ ^ rcon[uimm - 1];
+ rk[5] = rk[1] ^ rk[4];
+ rk[6] = rk[2] ^ rk[5];
+ rk[7] = rk[3] ^ rk[6];
+
+ vd[i * 4 + H4(0)] = rk[4];
+ vd[i * 4 + H4(1)] = rk[5];
+ vd[i * 4 + H4(2)] = rk[6];
+ vd[i * 4 + H4(3)] = rk[7];
+ }
+ env->vstart = 0;
+ /* set tail elements to 1s */
+ vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);
+}
+
+void HELPER(vaeskf2_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
+ CPURISCVState *env, uint32_t desc)
+{
+ uint32_t *vd = vd_vptr;
+ uint32_t *vs2 = vs2_vptr;
+ uint32_t vl = env->vl;
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
+ uint32_t vta = vext_vta(desc);
+
+ uimm &= 0b1111;
+ if (uimm > 14 || uimm < 2) {
+ uimm ^= 0b1000;
+ }
+
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+ uint32_t rk[12], tmp;
+ static const uint32_t rcon[] = {
+ 0x00000001, 0x00000002, 0x00000004, 0x00000008, 0x00000010,
+ 0x00000020, 0x00000040, 0x00000080, 0x0000001B, 0x00000036,
+ };
+
+ rk[0] = vd[i * 4 + H4(0)];
+ rk[1] = vd[i * 4 + H4(1)];
+ rk[2] = vd[i * 4 + H4(2)];
+ rk[3] = vd[i * 4 + H4(3)];
+ rk[4] = vs2[i * 4 + H4(0)];
+ rk[5] = vs2[i * 4 + H4(1)];
+ rk[6] = vs2[i * 4 + H4(2)];
+ rk[7] = vs2[i * 4 + H4(3)];
+
+ if (uimm % 2 == 0) {
+ tmp = ror32(rk[7], 8);
+ rk[8] = rk[0] ^ (((uint32_t)AES_sbox[(tmp >> 24) & 0xff] << 24) |
+ ((uint32_t)AES_sbox[(tmp >> 16) & 0xff] << 16) |
+ ((uint32_t)AES_sbox[(tmp >> 8) & 0xff] << 8) |
+ ((uint32_t)AES_sbox[(tmp >> 0) & 0xff] << 0))
+ ^ rcon[(uimm - 1) / 2];
+ } else {
+ rk[8] = rk[0] ^ (((uint32_t)AES_sbox[(rk[7] >> 24) & 0xff] << 24) |
+ ((uint32_t)AES_sbox[(rk[7] >> 16) & 0xff] << 16) |
+ ((uint32_t)AES_sbox[(rk[7] >> 8) & 0xff] << 8) |
+ ((uint32_t)AES_sbox[(rk[7] >> 0) & 0xff] << 0));
+ }
+ rk[9] = rk[1] ^ rk[8];
+ rk[10] = rk[2] ^ rk[9];
+ rk[11] = rk[3] ^ rk[10];
+
+ vd[i * 4 + H4(0)] = rk[8];
+ vd[i * 4 + H4(1)] = rk[9];
+ vd[i * 4 + H4(2)] = rk[10];
+ vd[i * 4 + H4(3)] = rk[11];
+ }
+ env->vstart = 0;
+ /* set tail elements to 1s */
+ vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);
+}
--
2.34.1
* [PATCH v8 10/15] target/riscv: Add Zvknh ISA extension support
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
` (8 preceding siblings ...)
2023-07-11 16:59 ` [PATCH v8 09/15] target/riscv: Add Zvkned " Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-07-11 16:59 ` [PATCH v8 11/15] target/riscv: Add Zvksh " Max Chou
` (5 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Kiran Ostrolenk, Nazar Kazakov,
Lawrence Hunter, Max Chou, Palmer Dabbelt, Alistair Francis,
Bin Meng, Weiwei Li, Liu Zhiwei
From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
This commit adds support for the Zvknh vector-crypto extension, which
consists of the following instructions:
* vsha2ms.vv
* vsha2c[hl].vv
Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.
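For illustration only (not part of the patch), the sketch below shows the
FIPS 180-4 small-sigma functions that the vsha2ms.vv helper uses to form a
new SHA-256 message-schedule word; the sample inputs are arbitrary and only
demonstrate the arithmetic.

/*
 * Illustrative sketch only -- not QEMU code.
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t ror32(uint32_t x, unsigned n)
{
    return (x >> n) | (x << (32 - n));
}

static uint32_t sig0(uint32_t x)
{
    return ror32(x, 7) ^ ror32(x, 18) ^ (x >> 3);
}

static uint32_t sig1(uint32_t x)
{
    return ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10);
}

int main(void)
{
    /* W[t] = sig1(W[t-2]) + W[t-7] + sig0(W[t-15]) + W[t-16] */
    uint32_t w2 = 0x6a09e667, w7 = 0xbb67ae85;
    uint32_t w15 = 0x3c6ef372, w16 = 0xa54ff53a;

    printf("W[t] = 0x%08x\n", sig1(w2) + w7 + sig0(w15) + w16);
    return 0;
}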
Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Co-authored-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
[max.chou@sifive.com: Replaced vstart checking by TCG op]
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
[max.chou@sifive.com: Exposed x-zvknha & x-zvknhb properties]
[max.chou@sifive.com: Moved SEW selection to happen during
translation]
---
target/riscv/cpu.c | 13 +-
target/riscv/cpu_cfg.h | 2 +
target/riscv/helper.h | 6 +
target/riscv/insn32.decode | 5 +
target/riscv/insn_trans/trans_rvvk.c.inc | 129 ++++++++++++
target/riscv/vcrypto_helper.c | 238 +++++++++++++++++++++++
6 files changed, 390 insertions(+), 3 deletions(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index a1a40875f95..e66d9639e20 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -129,6 +129,8 @@ static const struct isa_ext_data isa_edata_arr[] = {
ISA_EXT_DATA_ENTRY(zvfh, PRIV_VERSION_1_12_0, ext_zvfh),
ISA_EXT_DATA_ENTRY(zvfhmin, PRIV_VERSION_1_12_0, ext_zvfhmin),
ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
+ ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
+ ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
@@ -1275,14 +1277,17 @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
* In principle Zve*x would also suffice here, were they supported
* in qemu
*/
- if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned) && !cpu->cfg.ext_zve32f) {
+ if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha) &&
+ !cpu->cfg.ext_zve32f) {
error_setg(errp,
"Vector crypto extensions require V or Zve* extensions");
return;
}
- if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
- error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
+ if ((cpu->cfg.ext_zvbc || cpu->cfg.ext_zvknhb) && !cpu->cfg.ext_zve64f) {
+ error_setg(
+ errp,
+ "Zvbc and Zvknhb extensions require V or Zve64{f,d} extensions");
return;
}
@@ -1867,6 +1872,8 @@ static Property riscv_cpu_extensions[] = {
DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
+ DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
+ DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
DEFINE_PROP_END_OF_LIST(),
};
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index c7eafe27c0d..800b8783c14 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -88,6 +88,8 @@ struct RISCVCPUConfig {
bool ext_zvbb;
bool ext_zvbc;
bool ext_zvkned;
+ bool ext_zvknha;
+ bool ext_zvknhb;
bool ext_zmmul;
bool ext_zvfbfmin;
bool ext_zvfbfwma;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 02e5dbe6ee4..34329b52fe1 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -1264,3 +1264,9 @@ DEF_HELPER_4(vaesdm_vs, void, ptr, ptr, env, i32)
DEF_HELPER_4(vaesz_vs, void, ptr, ptr, env, i32)
DEF_HELPER_5(vaeskf1_vi, void, ptr, ptr, i32, env, i32)
DEF_HELPER_5(vaeskf2_vi, void, ptr, ptr, i32, env, i32)
+
+DEF_HELPER_5(vsha2ms_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vsha2ch32_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vsha2ch64_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vsha2cl32_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vsha2cl64_vv, void, ptr, ptr, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 4f3c50f10f3..e2b83186dc7 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -986,3 +986,8 @@ vaesdm_vs 101001 1 ..... 00000 010 ..... 1110111 @r2_vm_1
vaesz_vs 101001 1 ..... 00111 010 ..... 1110111 @r2_vm_1
vaeskf1_vi 100010 1 ..... ..... 010 ..... 1110111 @r_vm_1
vaeskf2_vi 101010 1 ..... ..... 010 ..... 1110111 @r_vm_1
+
+# *** Zvknh vector crypto extension ***
+vsha2ms_vv 101101 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vsha2ch_vv 101110 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vsha2cl_vv 101111 1 ..... ..... 010 ..... 1110111 @r_vm_1
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index 817353f4d30..a35be11b95e 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -371,3 +371,132 @@ static bool vaeskf2_check(DisasContext *s, arg_vaeskf2_vi *a)
GEN_VI_UNMASKED_TRANS(vaeskf1_vi, vaeskf1_check, ZVKNED_EGS)
GEN_VI_UNMASKED_TRANS(vaeskf2_vi, vaeskf2_check, ZVKNED_EGS)
+
+/*
+ * Zvknh
+ */
+
+#define ZVKNH_EGS 4
+
+#define GEN_VV_UNMASKED_TRANS(NAME, CHECK, EGS) \
+ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
+ { \
+ if (CHECK(s, a)) { \
+ uint32_t data = 0; \
+ TCGLabel *over = gen_new_label(); \
+ TCGv_i32 egs; \
+ \
+ if (!s->vstart_eq_zero || !s->vl_eq_vlmax) { \
+ /* save opcode for unwinding in case we throw an exception */ \
+ decode_save_opc(s); \
+ egs = tcg_constant_i32(EGS); \
+ gen_helper_egs_check(egs, cpu_env); \
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
+ } \
+ \
+ data = FIELD_DP32(data, VDATA, VM, a->vm); \
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
+ data = FIELD_DP32(data, VDATA, VTA, s->vta); \
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
+ data = FIELD_DP32(data, VDATA, VMA, s->vma); \
+ \
+ tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1), \
+ vreg_ofs(s, a->rs2), cpu_env, \
+ s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, \
+ data, gen_helper_##NAME); \
+ \
+ mark_vs_dirty(s); \
+ gen_set_label(over); \
+ return true; \
+ } \
+ return false; \
+ }
+
+static bool vsha_check_sew(DisasContext *s)
+{
+ return (s->cfg_ptr->ext_zvknha == true && s->sew == MO_32) ||
+ (s->cfg_ptr->ext_zvknhb == true &&
+ (s->sew == MO_32 || s->sew == MO_64));
+}
+
+static bool vsha_check(DisasContext *s, arg_rmrr *a)
+{
+ int egw_bytes = ZVKNH_EGS << s->sew;
+ int mult = 1 << MAX(s->lmul, 0);
+ return opivv_check(s, a) &&
+ vsha_check_sew(s) &&
+ MAXSZ(s) >= egw_bytes &&
+ !is_overlapped(a->rd, mult, a->rs1, mult) &&
+ !is_overlapped(a->rd, mult, a->rs2, mult) &&
+ s->lmul >= 0;
+}
+
+GEN_VV_UNMASKED_TRANS(vsha2ms_vv, vsha_check, ZVKNH_EGS)
+
+static bool trans_vsha2cl_vv(DisasContext *s, arg_rmrr *a)
+{
+ if (vsha_check(s, a)) {
+ uint32_t data = 0;
+ TCGLabel *over = gen_new_label();
+ TCGv_i32 egs;
+
+ if (!s->vstart_eq_zero || !s->vl_eq_vlmax) {
+ /* save opcode for unwinding in case we throw an exception */
+ decode_save_opc(s);
+ egs = tcg_constant_i32(ZVKNH_EGS);
+ gen_helper_egs_check(egs, cpu_env);
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
+ }
+
+ data = FIELD_DP32(data, VDATA, VM, a->vm);
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+ data = FIELD_DP32(data, VDATA, VTA, s->vta);
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);
+ data = FIELD_DP32(data, VDATA, VMA, s->vma);
+
+ tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
+ vreg_ofs(s, a->rs2), cpu_env, s->cfg_ptr->vlen / 8,
+ s->cfg_ptr->vlen / 8, data,
+ s->sew == MO_32 ?
+ gen_helper_vsha2cl32_vv : gen_helper_vsha2cl64_vv);
+
+ mark_vs_dirty(s);
+ gen_set_label(over);
+ return true;
+ }
+ return false;
+}
+
+static bool trans_vsha2ch_vv(DisasContext *s, arg_rmrr *a)
+{
+ if (vsha_check(s, a)) {
+ uint32_t data = 0;
+ TCGLabel *over = gen_new_label();
+ TCGv_i32 egs;
+
+ if (!s->vstart_eq_zero || !s->vl_eq_vlmax) {
+ /* save opcode for unwinding in case we throw an exception */
+ decode_save_opc(s);
+ egs = tcg_constant_i32(ZVKNH_EGS);
+ gen_helper_egs_check(egs, cpu_env);
+ tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
+ }
+
+ data = FIELD_DP32(data, VDATA, VM, a->vm);
+ data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+ data = FIELD_DP32(data, VDATA, VTA, s->vta);
+ data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s);
+ data = FIELD_DP32(data, VDATA, VMA, s->vma);
+
+ tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
+ vreg_ofs(s, a->rs2), cpu_env, s->cfg_ptr->vlen / 8,
+ s->cfg_ptr->vlen / 8, data,
+ s->sew == MO_32 ?
+ gen_helper_vsha2ch32_vv : gen_helper_vsha2ch64_vv);
+
+ mark_vs_dirty(s);
+ gen_set_label(over);
+ return true;
+ }
+ return false;
+}
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
index cca78184e9b..2f2099b6fb5 100644
--- a/target/riscv/vcrypto_helper.c
+++ b/target/riscv/vcrypto_helper.c
@@ -397,3 +397,241 @@ void HELPER(vaeskf2_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
/* set tail elements to 1s */
vext_set_elems_1s(vd, vta, vl * 4, total_elems * 4);
}
+
+static inline uint32_t sig0_sha256(uint32_t x)
+{
+ return ror32(x, 7) ^ ror32(x, 18) ^ (x >> 3);
+}
+
+static inline uint32_t sig1_sha256(uint32_t x)
+{
+ return ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10);
+}
+
+static inline uint64_t sig0_sha512(uint64_t x)
+{
+ return ror64(x, 1) ^ ror64(x, 8) ^ (x >> 7);
+}
+
+static inline uint64_t sig1_sha512(uint64_t x)
+{
+ return ror64(x, 19) ^ ror64(x, 61) ^ (x >> 6);
+}
+
+static inline void vsha2ms_e32(uint32_t *vd, uint32_t *vs1, uint32_t *vs2)
+{
+ uint32_t res[4];
+ res[0] = sig1_sha256(vs1[H4(2)]) + vs2[H4(1)] + sig0_sha256(vd[H4(1)]) +
+ vd[H4(0)];
+ res[1] = sig1_sha256(vs1[H4(3)]) + vs2[H4(2)] + sig0_sha256(vd[H4(2)]) +
+ vd[H4(1)];
+ res[2] =
+ sig1_sha256(res[0]) + vs2[H4(3)] + sig0_sha256(vd[H4(3)]) + vd[H4(2)];
+ res[3] =
+ sig1_sha256(res[1]) + vs1[H4(0)] + sig0_sha256(vs2[H4(0)]) + vd[H4(3)];
+ vd[H4(3)] = res[3];
+ vd[H4(2)] = res[2];
+ vd[H4(1)] = res[1];
+ vd[H4(0)] = res[0];
+}
+
+static inline void vsha2ms_e64(uint64_t *vd, uint64_t *vs1, uint64_t *vs2)
+{
+ uint64_t res[4];
+ res[0] = sig1_sha512(vs1[2]) + vs2[1] + sig0_sha512(vd[1]) + vd[0];
+ res[1] = sig1_sha512(vs1[3]) + vs2[2] + sig0_sha512(vd[2]) + vd[1];
+ res[2] = sig1_sha512(res[0]) + vs2[3] + sig0_sha512(vd[3]) + vd[2];
+ res[3] = sig1_sha512(res[1]) + vs1[0] + sig0_sha512(vs2[0]) + vd[3];
+ vd[3] = res[3];
+ vd[2] = res[2];
+ vd[1] = res[1];
+ vd[0] = res[0];
+}
+
+void HELPER(vsha2ms_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+ uint32_t desc)
+{
+ uint32_t sew = FIELD_EX64(env->vtype, VTYPE, VSEW);
+ uint32_t esz = sew == MO_32 ? 4 : 8;
+ uint32_t total_elems;
+ uint32_t vta = vext_vta(desc);
+
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+ if (sew == MO_32) {
+ vsha2ms_e32(((uint32_t *)vd) + i * 4, ((uint32_t *)vs1) + i * 4,
+ ((uint32_t *)vs2) + i * 4);
+ } else {
+ /* If not 32 then SEW should be 64 */
+ vsha2ms_e64(((uint64_t *)vd) + i * 4, ((uint64_t *)vs1) + i * 4,
+ ((uint64_t *)vs2) + i * 4);
+ }
+ }
+ /* set tail elements to 1s */
+ total_elems = vext_get_total_elems(env, desc, esz);
+ vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+ env->vstart = 0;
+}
+
+static inline uint64_t sum0_64(uint64_t x)
+{
+ return ror64(x, 28) ^ ror64(x, 34) ^ ror64(x, 39);
+}
+
+static inline uint32_t sum0_32(uint32_t x)
+{
+ return ror32(x, 2) ^ ror32(x, 13) ^ ror32(x, 22);
+}
+
+static inline uint64_t sum1_64(uint64_t x)
+{
+ return ror64(x, 14) ^ ror64(x, 18) ^ ror64(x, 41);
+}
+
+static inline uint32_t sum1_32(uint32_t x)
+{
+ return ror32(x, 6) ^ ror32(x, 11) ^ ror32(x, 25);
+}
+
+#define ch(x, y, z) ((x & y) ^ ((~x) & z))
+
+#define maj(x, y, z) ((x & y) ^ (x & z) ^ (y & z))
+
+static void vsha2c_64(uint64_t *vs2, uint64_t *vd, uint64_t *vs1)
+{
+ uint64_t a = vs2[3], b = vs2[2], e = vs2[1], f = vs2[0];
+ uint64_t c = vd[3], d = vd[2], g = vd[1], h = vd[0];
+ uint64_t W0 = vs1[0], W1 = vs1[1];
+ uint64_t T1 = h + sum1_64(e) + ch(e, f, g) + W0;
+ uint64_t T2 = sum0_64(a) + maj(a, b, c);
+
+ h = g;
+ g = f;
+ f = e;
+ e = d + T1;
+ d = c;
+ c = b;
+ b = a;
+ a = T1 + T2;
+
+ T1 = h + sum1_64(e) + ch(e, f, g) + W1;
+ T2 = sum0_64(a) + maj(a, b, c);
+ h = g;
+ g = f;
+ f = e;
+ e = d + T1;
+ d = c;
+ c = b;
+ b = a;
+ a = T1 + T2;
+
+ vd[0] = f;
+ vd[1] = e;
+ vd[2] = b;
+ vd[3] = a;
+}
+
+static void vsha2c_32(uint32_t *vs2, uint32_t *vd, uint32_t *vs1)
+{
+ uint32_t a = vs2[H4(3)], b = vs2[H4(2)], e = vs2[H4(1)], f = vs2[H4(0)];
+ uint32_t c = vd[H4(3)], d = vd[H4(2)], g = vd[H4(1)], h = vd[H4(0)];
+ uint32_t W0 = vs1[H4(0)], W1 = vs1[H4(1)];
+ uint32_t T1 = h + sum1_32(e) + ch(e, f, g) + W0;
+ uint32_t T2 = sum0_32(a) + maj(a, b, c);
+
+ h = g;
+ g = f;
+ f = e;
+ e = d + T1;
+ d = c;
+ c = b;
+ b = a;
+ a = T1 + T2;
+
+ T1 = h + sum1_32(e) + ch(e, f, g) + W1;
+ T2 = sum0_32(a) + maj(a, b, c);
+ h = g;
+ g = f;
+ f = e;
+ e = d + T1;
+ d = c;
+ c = b;
+ b = a;
+ a = T1 + T2;
+
+ vd[H4(0)] = f;
+ vd[H4(1)] = e;
+ vd[H4(2)] = b;
+ vd[H4(3)] = a;
+}
+
+void HELPER(vsha2ch32_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+ uint32_t desc)
+{
+ const uint32_t esz = 4;
+ uint32_t total_elems;
+ uint32_t vta = vext_vta(desc);
+
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+ vsha2c_32(((uint32_t *)vs2) + 4 * i, ((uint32_t *)vd) + 4 * i,
+ ((uint32_t *)vs1) + 4 * i + 2);
+ }
+
+ /* set tail elements to 1s */
+ total_elems = vext_get_total_elems(env, desc, esz);
+ vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+ env->vstart = 0;
+}
+
+void HELPER(vsha2ch64_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+ uint32_t desc)
+{
+ const uint32_t esz = 8;
+ uint32_t total_elems;
+ uint32_t vta = vext_vta(desc);
+
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+ vsha2c_64(((uint64_t *)vs2) + 4 * i, ((uint64_t *)vd) + 4 * i,
+ ((uint64_t *)vs1) + 4 * i + 2);
+ }
+
+ /* set tail elements to 1s */
+ total_elems = vext_get_total_elems(env, desc, esz);
+ vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+ env->vstart = 0;
+}
+
+void HELPER(vsha2cl32_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+ uint32_t desc)
+{
+ const uint32_t esz = 4;
+ uint32_t total_elems;
+ uint32_t vta = vext_vta(desc);
+
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+ vsha2c_32(((uint32_t *)vs2) + 4 * i, ((uint32_t *)vd) + 4 * i,
+ (((uint32_t *)vs1) + 4 * i));
+ }
+
+ /* set tail elements to 1s */
+ total_elems = vext_get_total_elems(env, desc, esz);
+ vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+ env->vstart = 0;
+}
+
+void HELPER(vsha2cl64_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
+ uint32_t desc)
+{
+ uint32_t esz = 8;
+ uint32_t total_elems;
+ uint32_t vta = vext_vta(desc);
+
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+ vsha2c_64(((uint64_t *)vs2) + 4 * i, ((uint64_t *)vd) + 4 * i,
+ (((uint64_t *)vs1) + 4 * i));
+ }
+
+ /* set tail elements to 1s */
+ total_elems = vext_get_total_elems(env, desc, esz);
+ vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
+ env->vstart = 0;
+}
--
2.34.1
* [PATCH v8 11/15] target/riscv: Add Zvksh ISA extension support
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
` (9 preceding siblings ...)
2023-07-11 16:59 ` [PATCH v8 10/15] target/riscv: Add Zvknh " Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-07-11 16:59 ` [PATCH v8 12/15] target/riscv: Add Zvkg " Max Chou
` (4 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Lawrence Hunter, Kiran Ostrolenk, Max Chou,
Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li, Liu Zhiwei,
Nazar Kazakov
From: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
This commit adds support for the Zvksh vector-crypto extension, which
consists of the following instructions:
* vsm3me.vv
* vsm3c.vi
Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.
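For context, vsm3me.vv computes eight new SM3 message-schedule words per element group. A minimal scalar sketch of the recurrence the helper below implements, using rol32 from QEMU's bitops; the function names here are illustrative and are not part of the patch:

    /* P1 permutation and one message-expansion word, scalar form (sketch) */
    static uint32_t sm3_p1(uint32_t x)
    {
        return x ^ rol32(x, 15) ^ rol32(x, 23);
    }

    static uint32_t sm3_next_w(const uint32_t *w, int j) /* valid for j >= 16 */
    {
        return sm3_p1(w[j - 16] ^ w[j - 9] ^ rol32(w[j - 3], 15)) ^
               rol32(w[j - 13], 7) ^ w[j - 6];
    }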
Co-authored-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
[max.chou@sifive.com: Replaced vstart checking by TCG op]
Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
[max.chou@sifive.com: Exposed x-zvksh property]
---
target/riscv/cpu.c | 6 +-
target/riscv/cpu_cfg.h | 1 +
target/riscv/helper.h | 3 +
target/riscv/insn32.decode | 4 +
target/riscv/insn_trans/trans_rvvk.c.inc | 31 ++++++
target/riscv/vcrypto_helper.c | 134 +++++++++++++++++++++++
6 files changed, 177 insertions(+), 2 deletions(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index e66d9639e20..2546467f53b 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -131,6 +131,7 @@ static const struct isa_ext_data isa_edata_arr[] = {
ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
+ ISA_EXT_DATA_ENTRY(zvksh, PRIV_VERSION_1_12_0, ext_zvksh),
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
@@ -1277,8 +1278,8 @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
* In principle Zve*x would also suffice here, were they supported
* in qemu
*/
- if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha) &&
- !cpu->cfg.ext_zve32f) {
+ if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha ||
+ cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
error_setg(errp,
"Vector crypto extensions require V or Zve* extensions");
return;
@@ -1874,6 +1875,7 @@ static Property riscv_cpu_extensions[] = {
DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
+ DEFINE_PROP_BOOL("x-zvksh", RISCVCPU, cfg.ext_zvksh, false),
DEFINE_PROP_END_OF_LIST(),
};
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index 800b8783c14..ab2d9294db6 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -90,6 +90,7 @@ struct RISCVCPUConfig {
bool ext_zvkned;
bool ext_zvknha;
bool ext_zvknhb;
+ bool ext_zvksh;
bool ext_zmmul;
bool ext_zvfbfmin;
bool ext_zvfbfwma;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 34329b52fe1..6d21347c39b 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -1270,3 +1270,6 @@ DEF_HELPER_5(vsha2ch32_vv, void, ptr, ptr, ptr, env, i32)
DEF_HELPER_5(vsha2ch64_vv, void, ptr, ptr, ptr, env, i32)
DEF_HELPER_5(vsha2cl32_vv, void, ptr, ptr, ptr, env, i32)
DEF_HELPER_5(vsha2cl64_vv, void, ptr, ptr, ptr, env, i32)
+
+DEF_HELPER_5(vsm3me_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_5(vsm3c_vi, void, ptr, ptr, i32, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index e2b83186dc7..4050e843f7e 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -991,3 +991,7 @@ vaeskf2_vi 101010 1 ..... ..... 010 ..... 1110111 @r_vm_1
vsha2ms_vv 101101 1 ..... ..... 010 ..... 1110111 @r_vm_1
vsha2ch_vv 101110 1 ..... ..... 010 ..... 1110111 @r_vm_1
vsha2cl_vv 101111 1 ..... ..... 010 ..... 1110111 @r_vm_1
+
+# *** Zvksh vector crypto extension ***
+vsm3me_vv 100000 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vsm3c_vi 101011 1 ..... ..... 010 ..... 1110111 @r_vm_1
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index a35be11b95e..6469dd2f024 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -500,3 +500,34 @@ static bool trans_vsha2ch_vv(DisasContext *s, arg_rmrr *a)
}
return false;
}
+
+/*
+ * Zvksh
+ */
+
+#define ZVKSH_EGS 8
+
+static inline bool vsm3_check(DisasContext *s, arg_rmrr *a)
+{
+ int egw_bytes = ZVKSH_EGS << s->sew;
+ int mult = 1 << MAX(s->lmul, 0);
+ return s->cfg_ptr->ext_zvksh == true &&
+ require_rvv(s) &&
+ vext_check_isa_ill(s) &&
+ !is_overlapped(a->rd, mult, a->rs2, mult) &&
+ MAXSZ(s) >= egw_bytes &&
+ s->sew == MO_32;
+}
+
+static inline bool vsm3me_check(DisasContext *s, arg_rmrr *a)
+{
+ return vsm3_check(s, a) && vext_check_sss(s, a->rd, a->rs1, a->rs2, a->vm);
+}
+
+static inline bool vsm3c_check(DisasContext *s, arg_rmrr *a)
+{
+ return vsm3_check(s, a) && vext_check_ss(s, a->rd, a->rs2, a->vm);
+}
+
+GEN_VV_UNMASKED_TRANS(vsm3me_vv, vsm3me_check, ZVKSH_EGS)
+GEN_VI_UNMASKED_TRANS(vsm3c_vi, vsm3c_check, ZVKSH_EGS)
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
index 2f2099b6fb5..e8bbb698c17 100644
--- a/target/riscv/vcrypto_helper.c
+++ b/target/riscv/vcrypto_helper.c
@@ -635,3 +635,137 @@ void HELPER(vsha2cl64_vv)(void *vd, void *vs1, void *vs2, CPURISCVState *env,
vext_set_elems_1s(vd, vta, env->vl * esz, total_elems * esz);
env->vstart = 0;
}
+
+static inline uint32_t p1(uint32_t x)
+{
+ return x ^ rol32(x, 15) ^ rol32(x, 23);
+}
+
+static inline uint32_t zvksh_w(uint32_t m16, uint32_t m9, uint32_t m3,
+ uint32_t m13, uint32_t m6)
+{
+ return p1(m16 ^ m9 ^ rol32(m3, 15)) ^ rol32(m13, 7) ^ m6;
+}
+
+void HELPER(vsm3me_vv)(void *vd_vptr, void *vs1_vptr, void *vs2_vptr,
+ CPURISCVState *env, uint32_t desc)
+{
+ uint32_t esz = memop_size(FIELD_EX64(env->vtype, VTYPE, VSEW));
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+ uint32_t vta = vext_vta(desc);
+ uint32_t *vd = vd_vptr;
+ uint32_t *vs1 = vs1_vptr;
+ uint32_t *vs2 = vs2_vptr;
+
+ for (int i = env->vstart / 8; i < env->vl / 8; i++) {
+ uint32_t w[24];
+ for (int j = 0; j < 8; j++) {
+ w[j] = bswap32(vs1[H4((i * 8) + j)]);
+ w[j + 8] = bswap32(vs2[H4((i * 8) + j)]);
+ }
+ for (int j = 0; j < 8; j++) {
+ w[j + 16] =
+ zvksh_w(w[j], w[j + 7], w[j + 13], w[j + 3], w[j + 10]);
+ }
+ for (int j = 0; j < 8; j++) {
+ vd[(i * 8) + j] = bswap32(w[H4(j + 16)]);
+ }
+ }
+ vext_set_elems_1s(vd_vptr, vta, env->vl * esz, total_elems * esz);
+ env->vstart = 0;
+}
+
+static inline uint32_t ff1(uint32_t x, uint32_t y, uint32_t z)
+{
+ return x ^ y ^ z;
+}
+
+static inline uint32_t ff2(uint32_t x, uint32_t y, uint32_t z)
+{
+ return (x & y) | (x & z) | (y & z);
+}
+
+static inline uint32_t ff_j(uint32_t x, uint32_t y, uint32_t z, uint32_t j)
+{
+ return (j <= 15) ? ff1(x, y, z) : ff2(x, y, z);
+}
+
+static inline uint32_t gg1(uint32_t x, uint32_t y, uint32_t z)
+{
+ return x ^ y ^ z;
+}
+
+static inline uint32_t gg2(uint32_t x, uint32_t y, uint32_t z)
+{
+ return (x & y) | (~x & z);
+}
+
+static inline uint32_t gg_j(uint32_t x, uint32_t y, uint32_t z, uint32_t j)
+{
+ return (j <= 15) ? gg1(x, y, z) : gg2(x, y, z);
+}
+
+static inline uint32_t t_j(uint32_t j)
+{
+ return (j <= 15) ? 0x79cc4519 : 0x7a879d8a;
+}
+
+static inline uint32_t p_0(uint32_t x)
+{
+ return x ^ rol32(x, 9) ^ rol32(x, 17);
+}
+
+static void sm3c(uint32_t *vd, uint32_t *vs1, uint32_t *vs2, uint32_t uimm)
+{
+ uint32_t x0, x1;
+ uint32_t j;
+ uint32_t ss1, ss2, tt1, tt2;
+ x0 = vs2[0] ^ vs2[4];
+ x1 = vs2[1] ^ vs2[5];
+ j = 2 * uimm;
+ ss1 = rol32(rol32(vs1[0], 12) + vs1[4] + rol32(t_j(j), j % 32), 7);
+ ss2 = ss1 ^ rol32(vs1[0], 12);
+ tt1 = ff_j(vs1[0], vs1[1], vs1[2], j) + vs1[3] + ss2 + x0;
+ tt2 = gg_j(vs1[4], vs1[5], vs1[6], j) + vs1[7] + ss1 + vs2[0];
+ vs1[3] = vs1[2];
+ vd[3] = rol32(vs1[1], 9);
+ vs1[1] = vs1[0];
+ vd[1] = tt1;
+ vs1[7] = vs1[6];
+ vd[7] = rol32(vs1[5], 19);
+ vs1[5] = vs1[4];
+ vd[5] = p_0(tt2);
+ j = 2 * uimm + 1;
+ ss1 = rol32(rol32(vd[1], 12) + vd[5] + rol32(t_j(j), j % 32), 7);
+ ss2 = ss1 ^ rol32(vd[1], 12);
+ tt1 = ff_j(vd[1], vs1[1], vd[3], j) + vs1[3] + ss2 + x1;
+ tt2 = gg_j(vd[5], vs1[5], vd[7], j) + vs1[7] + ss1 + vs2[1];
+ vd[2] = rol32(vs1[1], 9);
+ vd[0] = tt1;
+ vd[6] = rol32(vs1[5], 19);
+ vd[4] = p_0(tt2);
+}
+
+void HELPER(vsm3c_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
+ CPURISCVState *env, uint32_t desc)
+{
+ uint32_t esz = memop_size(FIELD_EX64(env->vtype, VTYPE, VSEW));
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+ uint32_t vta = vext_vta(desc);
+ uint32_t *vd = vd_vptr;
+ uint32_t *vs2 = vs2_vptr;
+ uint32_t v1[8], v2[8], v3[8];
+
+ for (int i = env->vstart / 8; i < env->vl / 8; i++) {
+ for (int k = 0; k < 8; k++) {
+ v2[k] = bswap32(vd[H4(i * 8 + k)]);
+ v3[k] = bswap32(vs2[H4(i * 8 + k)]);
+ }
+ sm3c(v1, v2, v3, uimm);
+ for (int k = 0; k < 8; k++) {
+ vd[i * 8 + k] = bswap32(v1[H4(k)]);
+ }
+ }
+ vext_set_elems_1s(vd_vptr, vta, env->vl * esz, total_elems * esz);
+ env->vstart = 0;
+}
--
2.34.1
* [PATCH v8 12/15] target/riscv: Add Zvkg ISA extension support
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
` (10 preceding siblings ...)
2023-07-11 16:59 ` [PATCH v8 11/15] target/riscv: Add Zvksh " Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-07-11 16:59 ` [PATCH v8 13/15] crypto: Create sm4_subword Max Chou
` (3 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Nazar Kazakov, Lawrence Hunter, Max Chou,
Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li, Liu Zhiwei,
Kiran Ostrolenk
From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
This commit adds support for the Zvkg vector-crypto extension, which
consists of the following instructions:
* vgmul.vv
* vghsh.vv
Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.
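For context, vghsh.vv folds one 128-bit block into the GHASH accumulator, i.e. Y is replaced by (Y xor X) multiplied by H in GF(2^128). The helpers below byte-reverse the operands with brev8 so the carry-less multiply can shift left and reduce by 0x87, which encodes x^7 + x^2 + x + 1 from the GCM polynomial. A scalar sketch of that inner multiply follows; the function name is illustrative and, as in the helper's local copy, H is clobbered:

    /* Bit-serial multiply-and-reduce over byte-reflected operands (sketch) */
    static void gf128_mul_reflected(uint64_t Z[2], const uint64_t S[2],
                                    uint64_t H[2])
    {
        Z[0] = Z[1] = 0;
        for (int j = 0; j < 128; j++) {
            if ((S[j / 64] >> (j % 64)) & 1) {
                Z[0] ^= H[0];
                Z[1] ^= H[1];
            }
            bool reduce = (H[1] >> 63) & 1;
            H[1] = (H[1] << 1) | (H[0] >> 63);
            H[0] <<= 1;
            if (reduce) {
                H[0] ^= 0x87;
            }
        }
    }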
Co-authored-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
[max.chou@sifive.com: Replaced vstart checking by TCG op]
Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
[max.chou@sifive.com: Exposed x-zvkg property]
[max.chou@sifive.com: Replaced uint by int for cross win32 build]
---
target/riscv/cpu.c | 6 +-
target/riscv/cpu_cfg.h | 1 +
target/riscv/helper.h | 3 +
target/riscv/insn32.decode | 4 ++
target/riscv/insn_trans/trans_rvvk.c.inc | 30 ++++++++++
target/riscv/vcrypto_helper.c | 72 ++++++++++++++++++++++++
6 files changed, 114 insertions(+), 2 deletions(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 2546467f53b..075692582dd 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -128,6 +128,7 @@ static const struct isa_ext_data isa_edata_arr[] = {
ISA_EXT_DATA_ENTRY(zvfbfwma, PRIV_VERSION_1_12_0, ext_zvfbfwma),
ISA_EXT_DATA_ENTRY(zvfh, PRIV_VERSION_1_12_0, ext_zvfh),
ISA_EXT_DATA_ENTRY(zvfhmin, PRIV_VERSION_1_12_0, ext_zvfhmin),
+ ISA_EXT_DATA_ENTRY(zvkg, PRIV_VERSION_1_12_0, ext_zvkg),
ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
@@ -1278,8 +1279,8 @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
* In principle Zve*x would also suffice here, were they supported
* in qemu
*/
- if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkned || cpu->cfg.ext_zvknha ||
- cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
+ if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkg || cpu->cfg.ext_zvkned ||
+ cpu->cfg.ext_zvknha || cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
error_setg(errp,
"Vector crypto extensions require V or Zve* extensions");
return;
@@ -1872,6 +1873,7 @@ static Property riscv_cpu_extensions[] = {
/* Vector cryptography extensions */
DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
+ DEFINE_PROP_BOOL("x-zvkg", RISCVCPU, cfg.ext_zvkg, false),
DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index ab2d9294db6..b754ec23446 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -87,6 +87,7 @@ struct RISCVCPUConfig {
bool ext_zve64d;
bool ext_zvbb;
bool ext_zvbc;
+ bool ext_zvkg;
bool ext_zvkned;
bool ext_zvknha;
bool ext_zvknhb;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 6d21347c39b..ceec97e1657 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -1273,3 +1273,6 @@ DEF_HELPER_5(vsha2cl64_vv, void, ptr, ptr, ptr, env, i32)
DEF_HELPER_5(vsm3me_vv, void, ptr, ptr, ptr, env, i32)
DEF_HELPER_5(vsm3c_vi, void, ptr, ptr, i32, env, i32)
+
+DEF_HELPER_5(vghsh_vv, void, ptr, ptr, ptr, env, i32)
+DEF_HELPER_4(vgmul_vv, void, ptr, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 4050e843f7e..0fae01c6bb3 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -995,3 +995,7 @@ vsha2cl_vv 101111 1 ..... ..... 010 ..... 1110111 @r_vm_1
# *** Zvksh vector crypto extension ***
vsm3me_vv 100000 1 ..... ..... 010 ..... 1110111 @r_vm_1
vsm3c_vi 101011 1 ..... ..... 010 ..... 1110111 @r_vm_1
+
+# *** Zvkg vector crypto extension ***
+vghsh_vv 101100 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vgmul_vv 101000 1 ..... 10001 010 ..... 1110111 @r2_vm_1
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index 6469dd2f024..af7cd62e7d0 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -531,3 +531,33 @@ static inline bool vsm3c_check(DisasContext *s, arg_rmrr *a)
GEN_VV_UNMASKED_TRANS(vsm3me_vv, vsm3me_check, ZVKSH_EGS)
GEN_VI_UNMASKED_TRANS(vsm3c_vi, vsm3c_check, ZVKSH_EGS)
+
+/*
+ * Zvkg
+ */
+
+#define ZVKG_EGS 4
+
+static bool vgmul_check(DisasContext *s, arg_rmr *a)
+{
+ int egw_bytes = ZVKG_EGS << s->sew;
+ return s->cfg_ptr->ext_zvkg == true &&
+ vext_check_isa_ill(s) &&
+ require_rvv(s) &&
+ MAXSZ(s) >= egw_bytes &&
+ vext_check_ss(s, a->rd, a->rs2, a->vm) &&
+ s->sew == MO_32;
+}
+
+GEN_V_UNMASKED_TRANS(vgmul_vv, vgmul_check, ZVKG_EGS)
+
+static bool vghsh_check(DisasContext *s, arg_rmrr *a)
+{
+ int egw_bytes = ZVKG_EGS << s->sew;
+ return s->cfg_ptr->ext_zvkg == true &&
+ opivv_check(s, a) &&
+ MAXSZ(s) >= egw_bytes &&
+ s->sew == MO_32;
+}
+
+GEN_VV_UNMASKED_TRANS(vghsh_vv, vghsh_check, ZVKG_EGS)
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
index e8bbb698c17..a5e2f7fbb04 100644
--- a/target/riscv/vcrypto_helper.c
+++ b/target/riscv/vcrypto_helper.c
@@ -769,3 +769,75 @@ void HELPER(vsm3c_vi)(void *vd_vptr, void *vs2_vptr, uint32_t uimm,
vext_set_elems_1s(vd_vptr, vta, env->vl * esz, total_elems * esz);
env->vstart = 0;
}
+
+void HELPER(vghsh_vv)(void *vd_vptr, void *vs1_vptr, void *vs2_vptr,
+ CPURISCVState *env, uint32_t desc)
+{
+ uint64_t *vd = vd_vptr;
+ uint64_t *vs1 = vs1_vptr;
+ uint64_t *vs2 = vs2_vptr;
+ uint32_t vta = vext_vta(desc);
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
+
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+ uint64_t Y[2] = {vd[i * 2 + 0], vd[i * 2 + 1]};
+ uint64_t H[2] = {brev8(vs2[i * 2 + 0]), brev8(vs2[i * 2 + 1])};
+ uint64_t X[2] = {vs1[i * 2 + 0], vs1[i * 2 + 1]};
+ uint64_t Z[2] = {0, 0};
+
+ uint64_t S[2] = {brev8(Y[0] ^ X[0]), brev8(Y[1] ^ X[1])};
+
+ for (int j = 0; j < 128; j++) {
+ if ((S[j / 64] >> (j % 64)) & 1) {
+ Z[0] ^= H[0];
+ Z[1] ^= H[1];
+ }
+ bool reduce = ((H[1] >> 63) & 1);
+ H[1] = H[1] << 1 | H[0] >> 63;
+ H[0] = H[0] << 1;
+ if (reduce) {
+ H[0] ^= 0x87;
+ }
+ }
+
+ vd[i * 2 + 0] = brev8(Z[0]);
+ vd[i * 2 + 1] = brev8(Z[1]);
+ }
+ /* set tail elements to 1s */
+ vext_set_elems_1s(vd, vta, env->vl * 4, total_elems * 4);
+ env->vstart = 0;
+}
+
+void HELPER(vgmul_vv)(void *vd_vptr, void *vs2_vptr, CPURISCVState *env,
+ uint32_t desc)
+{
+ uint64_t *vd = vd_vptr;
+ uint64_t *vs2 = vs2_vptr;
+ uint32_t vta = vext_vta(desc);
+ uint32_t total_elems = vext_get_total_elems(env, desc, 4);
+
+ for (uint32_t i = env->vstart / 4; i < env->vl / 4; i++) {
+ uint64_t Y[2] = {brev8(vd[i * 2 + 0]), brev8(vd[i * 2 + 1])};
+ uint64_t H[2] = {brev8(vs2[i * 2 + 0]), brev8(vs2[i * 2 + 1])};
+ uint64_t Z[2] = {0, 0};
+
+ for (int j = 0; j < 128; j++) {
+ if ((Y[j / 64] >> (j % 64)) & 1) {
+ Z[0] ^= H[0];
+ Z[1] ^= H[1];
+ }
+ bool reduce = ((H[1] >> 63) & 1);
+ H[1] = H[1] << 1 | H[0] >> 63;
+ H[0] = H[0] << 1;
+ if (reduce) {
+ H[0] ^= 0x87;
+ }
+ }
+
+ vd[i * 2 + 0] = brev8(Z[0]);
+ vd[i * 2 + 1] = brev8(Z[1]);
+ }
+ /* set tail elements to 1s */
+ vext_set_elems_1s(vd, vta, env->vl * 4, total_elems * 4);
+ env->vstart = 0;
+}
--
2.34.1
* [PATCH v8 13/15] crypto: Create sm4_subword
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
` (11 preceding siblings ...)
2023-07-11 16:59 ` [PATCH v8 12/15] target/riscv: Add Zvkg " Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-07-11 16:59 ` [PATCH v8 14/15] crypto: Add SM4 constant parameter CK Max Chou
` (2 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Max Chou, Frank Chang, Richard Henderson,
Daniel P. Berrangé, Peter Maydell, qemu-arm
Allows sharing of sm4_subword between different targets.
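For reference, sm4_subword applies the SM4 S-box to each byte of a 32-bit word; callers then apply one of SM4's two linear layers, as the Arm helpers below already do. A sketch of the two standard compositions, using rol32 from QEMU's bitops; the function names are illustrative and are not added by this patch:

    static uint32_t sm4_round_t(uint32_t x)   /* round function: L(tau(x)) */
    {
        uint32_t s = sm4_subword(x);
        return s ^ rol32(s, 2) ^ rol32(s, 10) ^ rol32(s, 18) ^ rol32(s, 24);
    }

    static uint32_t sm4_key_t(uint32_t x)     /* key schedule: L'(tau(x)) */
    {
        uint32_t s = sm4_subword(x);
        return s ^ rol32(s, 13) ^ rol32(s, 23);
    }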
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Chou <max.chou@sifive.com>
---
include/crypto/sm4.h | 8 ++++++++
target/arm/tcg/crypto_helper.c | 10 ++--------
2 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/include/crypto/sm4.h b/include/crypto/sm4.h
index 9bd3ebc62e8..de8245d8a71 100644
--- a/include/crypto/sm4.h
+++ b/include/crypto/sm4.h
@@ -3,4 +3,12 @@
extern const uint8_t sm4_sbox[256];
+static inline uint32_t sm4_subword(uint32_t word)
+{
+ return sm4_sbox[word & 0xff] |
+ sm4_sbox[(word >> 8) & 0xff] << 8 |
+ sm4_sbox[(word >> 16) & 0xff] << 16 |
+ sm4_sbox[(word >> 24) & 0xff] << 24;
+}
+
#endif
diff --git a/target/arm/tcg/crypto_helper.c b/target/arm/tcg/crypto_helper.c
index fdd70abbfd6..7cadd61e124 100644
--- a/target/arm/tcg/crypto_helper.c
+++ b/target/arm/tcg/crypto_helper.c
@@ -614,10 +614,7 @@ static void do_crypto_sm4e(uint64_t *rd, uint64_t *rn, uint64_t *rm)
CR_ST_WORD(d, (i + 3) % 4) ^
CR_ST_WORD(n, i);
- t = sm4_sbox[t & 0xff] |
- sm4_sbox[(t >> 8) & 0xff] << 8 |
- sm4_sbox[(t >> 16) & 0xff] << 16 |
- sm4_sbox[(t >> 24) & 0xff] << 24;
+ t = sm4_subword(t);
CR_ST_WORD(d, i) ^= t ^ rol32(t, 2) ^ rol32(t, 10) ^ rol32(t, 18) ^
rol32(t, 24);
@@ -651,10 +648,7 @@ static void do_crypto_sm4ekey(uint64_t *rd, uint64_t *rn, uint64_t *rm)
CR_ST_WORD(d, (i + 3) % 4) ^
CR_ST_WORD(m, i);
- t = sm4_sbox[t & 0xff] |
- sm4_sbox[(t >> 8) & 0xff] << 8 |
- sm4_sbox[(t >> 16) & 0xff] << 16 |
- sm4_sbox[(t >> 24) & 0xff] << 24;
+ t = sm4_subword(t);
CR_ST_WORD(d, i) ^= t ^ rol32(t, 13) ^ rol32(t, 23);
}
--
2.34.1
* [PATCH v8 14/15] crypto: Add SM4 constant parameter CK
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
` (12 preceding siblings ...)
2023-07-11 16:59 ` [PATCH v8 13/15] crypto: Create sm4_subword Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-08-02 14:45 ` Alistair Francis
2023-07-11 16:59 ` [PATCH v8 15/15] target/riscv: Add Zvksed ISA extension support Max Chou
2023-08-02 14:54 ` [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Alistair Francis
15 siblings, 1 reply; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Max Chou, Frank Chang,
Daniel P. Berrangé
Adds the sm4_ck constant array (the SM4 key-schedule constants CK) for use in SM4 cryptography across different targets.
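For context, sm4_ck holds the 32 key-schedule constants CK[0..31]; each expansion step mixes one of them into the previous four round keys. A minimal sketch of one such step, matching how the Zvksed vsm4k.vi helper added later in this series consumes it; the function name is illustrative:

    /* Next SM4 round key from the previous four (k[0..3]) and CK[i] (sketch) */
    static uint32_t sm4_next_rk(const uint32_t k[4], int i)
    {
        uint32_t b = k[1] ^ k[2] ^ k[3] ^ sm4_ck[i];
        uint32_t s = sm4_subword(b);
        return k[0] ^ (s ^ rol32(s, 13) ^ rol32(s, 23));
    }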
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Signed-off-by: Max Chou <max.chou@sifive.com>
---
crypto/sm4.c | 10 ++++++++++
include/crypto/sm4.h | 1 +
2 files changed, 11 insertions(+)
diff --git a/crypto/sm4.c b/crypto/sm4.c
index 9f0cd452c78..2987306cf7a 100644
--- a/crypto/sm4.c
+++ b/crypto/sm4.c
@@ -47,3 +47,13 @@ uint8_t const sm4_sbox[] = {
0x79, 0xee, 0x5f, 0x3e, 0xd7, 0xcb, 0x39, 0x48,
};
+uint32_t const sm4_ck[] = {
+ 0x00070e15, 0x1c232a31, 0x383f464d, 0x545b6269,
+ 0x70777e85, 0x8c939aa1, 0xa8afb6bd, 0xc4cbd2d9,
+ 0xe0e7eef5, 0xfc030a11, 0x181f262d, 0x343b4249,
+ 0x50575e65, 0x6c737a81, 0x888f969d, 0xa4abb2b9,
+ 0xc0c7ced5, 0xdce3eaf1, 0xf8ff060d, 0x141b2229,
+ 0x30373e45, 0x4c535a61, 0x686f767d, 0x848b9299,
+ 0xa0a7aeb5, 0xbcc3cad1, 0xd8dfe6ed, 0xf4fb0209,
+ 0x10171e25, 0x2c333a41, 0x484f565d, 0x646b7279
+};
diff --git a/include/crypto/sm4.h b/include/crypto/sm4.h
index de8245d8a71..382b26d9224 100644
--- a/include/crypto/sm4.h
+++ b/include/crypto/sm4.h
@@ -2,6 +2,7 @@
#define QEMU_SM4_H
extern const uint8_t sm4_sbox[256];
+extern const uint32_t sm4_ck[32];
static inline uint32_t sm4_subword(uint32_t word)
{
--
2.34.1
* [PATCH v8 15/15] target/riscv: Add Zvksed ISA extension support
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
` (13 preceding siblings ...)
2023-07-11 16:59 ` [PATCH v8 14/15] crypto: Add SM4 constant parameter CK Max Chou
@ 2023-07-11 16:59 ` Max Chou
2023-08-02 14:54 ` [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Alistair Francis
15 siblings, 0 replies; 23+ messages in thread
From: Max Chou @ 2023-07-11 16:59 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: dbarboza, alistair23, Max Chou, Frank Chang, Palmer Dabbelt,
Alistair Francis, Bin Meng, Weiwei Li, Liu Zhiwei,
Lawrence Hunter, Nazar Kazakov, Kiran Ostrolenk
This commit adds support for the Zvksed vector-crypto extension, which
consists of the following instructions:
* vsm4k.vi
* vsm4r.[vv,vs]
Translation functions are defined in
`target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
`target/riscv/vcrypto_helper.c`.
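For context, the two register forms of vsm4r share the same four-round computation and differ only in where the round keys come from, which is visible in the helpers below. A sketch of that distinction; the code is illustrative and not part of the patch, and EGS is 4 words per element group:

    /* Round-key selection for element group `group` (sketch) */
    static void sm4_load_round_keys(uint32_t rk[4], const uint32_t *vs2,
                                    uint32_t group, bool vs_form)
    {
        /* .vs always uses element group 0 of vs2; .vv uses the matching group */
        uint32_t base = vs_form ? 0 : group * 4;
        for (int j = 0; j < 4; j++) {
            rk[j] = vs2[base + j];
        }
    }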
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
[lawrence.hunter@codethink.co.uk: Moved SM4 functions from
crypto_helper.c to vcrypto_helper.c]
[nazar.kazakov@codethink.co.uk: Added alignment checks, refactored code to
use macros, and minor style changes]
Signed-off-by: Max Chou <max.chou@sifive.com>
---
target/riscv/cpu.c | 5 +-
target/riscv/cpu_cfg.h | 1 +
target/riscv/helper.h | 4 +
target/riscv/insn32.decode | 5 +
target/riscv/insn_trans/trans_rvvk.c.inc | 43 ++++++++
target/riscv/vcrypto_helper.c | 127 +++++++++++++++++++++++
6 files changed, 184 insertions(+), 1 deletion(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 075692582dd..41f12f3d7a9 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -132,6 +132,7 @@ static const struct isa_ext_data isa_edata_arr[] = {
ISA_EXT_DATA_ENTRY(zvkned, PRIV_VERSION_1_12_0, ext_zvkned),
ISA_EXT_DATA_ENTRY(zvknha, PRIV_VERSION_1_12_0, ext_zvknha),
ISA_EXT_DATA_ENTRY(zvknhb, PRIV_VERSION_1_12_0, ext_zvknhb),
+ ISA_EXT_DATA_ENTRY(zvksed, PRIV_VERSION_1_12_0, ext_zvksed),
ISA_EXT_DATA_ENTRY(zvksh, PRIV_VERSION_1_12_0, ext_zvksh),
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
@@ -1280,7 +1281,8 @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
* in qemu
*/
if ((cpu->cfg.ext_zvbb || cpu->cfg.ext_zvkg || cpu->cfg.ext_zvkned ||
- cpu->cfg.ext_zvknha || cpu->cfg.ext_zvksh) && !cpu->cfg.ext_zve32f) {
+ cpu->cfg.ext_zvknha || cpu->cfg.ext_zvksed || cpu->cfg.ext_zvksh) &&
+ !cpu->cfg.ext_zve32f) {
error_setg(errp,
"Vector crypto extensions require V or Zve* extensions");
return;
@@ -1877,6 +1879,7 @@ static Property riscv_cpu_extensions[] = {
DEFINE_PROP_BOOL("x-zvkned", RISCVCPU, cfg.ext_zvkned, false),
DEFINE_PROP_BOOL("x-zvknha", RISCVCPU, cfg.ext_zvknha, false),
DEFINE_PROP_BOOL("x-zvknhb", RISCVCPU, cfg.ext_zvknhb, false),
+ DEFINE_PROP_BOOL("x-zvksed", RISCVCPU, cfg.ext_zvksed, false),
DEFINE_PROP_BOOL("x-zvksh", RISCVCPU, cfg.ext_zvksh, false),
DEFINE_PROP_END_OF_LIST(),
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index b754ec23446..61f6238756f 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -91,6 +91,7 @@ struct RISCVCPUConfig {
bool ext_zvkned;
bool ext_zvknha;
bool ext_zvknhb;
+ bool ext_zvksed;
bool ext_zvksh;
bool ext_zmmul;
bool ext_zvfbfmin;
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index ceec97e1657..8a635238514 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -1276,3 +1276,7 @@ DEF_HELPER_5(vsm3c_vi, void, ptr, ptr, i32, env, i32)
DEF_HELPER_5(vghsh_vv, void, ptr, ptr, ptr, env, i32)
DEF_HELPER_4(vgmul_vv, void, ptr, ptr, env, i32)
+
+DEF_HELPER_5(vsm4k_vi, void, ptr, ptr, i32, env, i32)
+DEF_HELPER_4(vsm4r_vv, void, ptr, ptr, env, i32)
+DEF_HELPER_4(vsm4r_vs, void, ptr, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 0fae01c6bb3..33597fe2bb1 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -999,3 +999,8 @@ vsm3c_vi 101011 1 ..... ..... 010 ..... 1110111 @r_vm_1
# *** Zvkg vector crypto extension ***
vghsh_vv 101100 1 ..... ..... 010 ..... 1110111 @r_vm_1
vgmul_vv 101000 1 ..... 10001 010 ..... 1110111 @r2_vm_1
+
+# *** Zvksed vector crypto extension ***
+vsm4k_vi 100001 1 ..... ..... 010 ..... 1110111 @r_vm_1
+vsm4r_vv 101000 1 ..... 10000 010 ..... 1110111 @r2_vm_1
+vsm4r_vs 101001 1 ..... 10000 010 ..... 1110111 @r2_vm_1
diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
index af7cd62e7d0..c00c70dfc62 100644
--- a/target/riscv/insn_trans/trans_rvvk.c.inc
+++ b/target/riscv/insn_trans/trans_rvvk.c.inc
@@ -561,3 +561,46 @@ static bool vghsh_check(DisasContext *s, arg_rmrr *a)
}
GEN_VV_UNMASKED_TRANS(vghsh_vv, vghsh_check, ZVKG_EGS)
+
+/*
+ * Zvksed
+ */
+
+#define ZVKSED_EGS 4
+
+static bool zvksed_check(DisasContext *s)
+{
+ int egw_bytes = ZVKSED_EGS << s->sew;
+ return s->cfg_ptr->ext_zvksed == true &&
+ require_rvv(s) &&
+ vext_check_isa_ill(s) &&
+ MAXSZ(s) >= egw_bytes &&
+ s->sew == MO_32;
+}
+
+static bool vsm4k_vi_check(DisasContext *s, arg_rmrr *a)
+{
+ return zvksed_check(s) &&
+ require_align(a->rd, s->lmul) &&
+ require_align(a->rs2, s->lmul);
+}
+
+GEN_VI_UNMASKED_TRANS(vsm4k_vi, vsm4k_vi_check, ZVKSED_EGS)
+
+static bool vsm4r_vv_check(DisasContext *s, arg_rmr *a)
+{
+ return zvksed_check(s) &&
+ require_align(a->rd, s->lmul) &&
+ require_align(a->rs2, s->lmul);
+}
+
+GEN_V_UNMASKED_TRANS(vsm4r_vv, vsm4r_vv_check, ZVKSED_EGS)
+
+static bool vsm4r_vs_check(DisasContext *s, arg_rmr *a)
+{
+ return zvksed_check(s) &&
+ !is_overlapped(a->rd, 1 << MAX(s->lmul, 0), a->rs2, 1) &&
+ require_align(a->rd, s->lmul);
+}
+
+GEN_V_UNMASKED_TRANS(vsm4r_vs, vsm4r_vs_check, ZVKSED_EGS)
diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
index a5e2f7fbb04..e2d719b13b6 100644
--- a/target/riscv/vcrypto_helper.c
+++ b/target/riscv/vcrypto_helper.c
@@ -24,6 +24,7 @@
#include "cpu.h"
#include "crypto/aes.h"
#include "crypto/aes-round.h"
+#include "crypto/sm4.h"
#include "exec/memop.h"
#include "exec/exec-all.h"
#include "exec/helper-proto.h"
@@ -841,3 +842,129 @@ void HELPER(vgmul_vv)(void *vd_vptr, void *vs2_vptr, CPURISCVState *env,
vext_set_elems_1s(vd, vta, env->vl * 4, total_elems * 4);
env->vstart = 0;
}
+
+void HELPER(vsm4k_vi)(void *vd, void *vs2, uint32_t uimm5, CPURISCVState *env,
+ uint32_t desc)
+{
+ const uint32_t egs = 4;
+ uint32_t rnd = uimm5 & 0x7;
+ uint32_t group_start = env->vstart / egs;
+ uint32_t group_end = env->vl / egs;
+ uint32_t esz = sizeof(uint32_t);
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+
+ for (uint32_t i = group_start; i < group_end; ++i) {
+ uint32_t vstart = i * egs;
+ uint32_t vend = (i + 1) * egs;
+ uint32_t rk[4] = {0};
+ uint32_t tmp[8] = {0};
+
+ for (uint32_t j = vstart; j < vend; ++j) {
+ rk[j - vstart] = *((uint32_t *)vs2 + H4(j));
+ }
+
+ for (uint32_t j = 0; j < egs; ++j) {
+ tmp[j] = rk[j];
+ }
+
+ for (uint32_t j = 0; j < egs; ++j) {
+ uint32_t b, s;
+ b = tmp[j + 1] ^ tmp[j + 2] ^ tmp[j + 3] ^ sm4_ck[rnd * 4 + j];
+
+ s = sm4_subword(b);
+
+ tmp[j + 4] = tmp[j] ^ (s ^ rol32(s, 13) ^ rol32(s, 23));
+ }
+
+ for (uint32_t j = vstart; j < vend; ++j) {
+ *((uint32_t *)vd + H4(j)) = tmp[egs + (j - vstart)];
+ }
+ }
+
+ env->vstart = 0;
+ /* set tail elements to 1s */
+ vext_set_elems_1s(vd, vext_vta(desc), env->vl * esz, total_elems * esz);
+}
+
+static void do_sm4_round(uint32_t *rk, uint32_t *buf)
+{
+ const uint32_t egs = 4;
+ uint32_t s, b;
+
+ for (uint32_t j = egs; j < egs * 2; ++j) {
+ b = buf[j - 3] ^ buf[j - 2] ^ buf[j - 1] ^ rk[j - 4];
+
+ s = sm4_subword(b);
+
+ buf[j] = buf[j - 4] ^ (s ^ rol32(s, 2) ^ rol32(s, 10) ^ rol32(s, 18) ^
+ rol32(s, 24));
+ }
+}
+
+void HELPER(vsm4r_vv)(void *vd, void *vs2, CPURISCVState *env, uint32_t desc)
+{
+ const uint32_t egs = 4;
+ uint32_t group_start = env->vstart / egs;
+ uint32_t group_end = env->vl / egs;
+ uint32_t esz = sizeof(uint32_t);
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+
+ for (uint32_t i = group_start; i < group_end; ++i) {
+ uint32_t vstart = i * egs;
+ uint32_t vend = (i + 1) * egs;
+ uint32_t rk[4] = {0};
+ uint32_t tmp[8] = {0};
+
+ for (uint32_t j = vstart; j < vend; ++j) {
+ rk[j - vstart] = *((uint32_t *)vs2 + H4(j));
+ }
+
+ for (uint32_t j = vstart; j < vend; ++j) {
+ tmp[j - vstart] = *((uint32_t *)vd + H4(j));
+ }
+
+ do_sm4_round(rk, tmp);
+
+ for (uint32_t j = vstart; j < vend; ++j) {
+ *((uint32_t *)vd + H4(j)) = tmp[egs + (j - vstart)];
+ }
+ }
+
+ env->vstart = 0;
+ /* set tail elements to 1s */
+ vext_set_elems_1s(vd, vext_vta(desc), env->vl * esz, total_elems * esz);
+}
+
+void HELPER(vsm4r_vs)(void *vd, void *vs2, CPURISCVState *env, uint32_t desc)
+{
+ const uint32_t egs = 4;
+ uint32_t group_start = env->vstart / egs;
+ uint32_t group_end = env->vl / egs;
+ uint32_t esz = sizeof(uint32_t);
+ uint32_t total_elems = vext_get_total_elems(env, desc, esz);
+
+ for (uint32_t i = group_start; i < group_end; ++i) {
+ uint32_t vstart = i * egs;
+ uint32_t vend = (i + 1) * egs;
+ uint32_t rk[4] = {0};
+ uint32_t tmp[8] = {0};
+
+ for (uint32_t j = 0; j < egs; ++j) {
+ rk[j] = *((uint32_t *)vs2 + H4(j));
+ }
+
+ for (uint32_t j = vstart; j < vend; ++j) {
+ tmp[j - vstart] = *((uint32_t *)vd + H4(j));
+ }
+
+ do_sm4_round(rk, tmp);
+
+ for (uint32_t j = vstart; j < vend; ++j) {
+ *((uint32_t *)vd + H4(j)) = tmp[egs + (j - vstart)];
+ }
+ }
+
+ env->vstart = 0;
+ /* set tail elements to 1s */
+ vext_set_elems_1s(vd, vext_vta(desc), env->vl * esz, total_elems * esz);
+}
--
2.34.1
* Re: [PATCH v8 01/15] target/riscv: Refactor some of the generic vector functionality
2023-07-11 16:59 ` [PATCH v8 01/15] target/riscv: Refactor some of the generic vector functionality Max Chou
@ 2023-07-19 0:54 ` Alistair Francis
0 siblings, 0 replies; 23+ messages in thread
From: Alistair Francis @ 2023-07-19 0:54 UTC (permalink / raw)
To: Max Chou
Cc: qemu-devel, qemu-riscv, dbarboza, Kiran Ostrolenk, Weiwei Li,
Palmer Dabbelt, Alistair Francis, Bin Meng, Liu Zhiwei
On Wed, Jul 12, 2023 at 2:59 AM Max Chou <max.chou@sifive.com> wrote:
>
> From: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
>
> Take some functions/macros out of `vector_helper` and put them in a new
> module called `vector_internals`. This ensures they can be used by both
> vector and vector-crypto helpers (the latter implemented in subsequent
> commits).
>
> Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
> Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
> Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/meson.build | 1 +
> target/riscv/vector_helper.c | 201 +-------------------------------
> target/riscv/vector_internals.c | 81 +++++++++++++
> target/riscv/vector_internals.h | 182 +++++++++++++++++++++++++++++
> 4 files changed, 265 insertions(+), 200 deletions(-)
> create mode 100644 target/riscv/vector_internals.c
> create mode 100644 target/riscv/vector_internals.h
>
> diff --git a/target/riscv/meson.build b/target/riscv/meson.build
> index 7f56c5f88d4..c3801ee5e04 100644
> --- a/target/riscv/meson.build
> +++ b/target/riscv/meson.build
> @@ -16,6 +16,7 @@ riscv_ss.add(files(
> 'gdbstub.c',
> 'op_helper.c',
> 'vector_helper.c',
> + 'vector_internals.c',
> 'bitmanip_helper.c',
> 'translate.c',
> 'm128_helper.c',
> diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
> index 71bb9b4457b..6434fd2f7e8 100644
> --- a/target/riscv/vector_helper.c
> +++ b/target/riscv/vector_helper.c
> @@ -26,6 +26,7 @@
> #include "fpu/softfloat.h"
> #include "tcg/tcg-gvec-desc.h"
> #include "internals.h"
> +#include "vector_internals.h"
> #include <math.h>
>
> target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
> @@ -72,68 +73,6 @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
> return vl;
> }
>
> -/*
> - * Note that vector data is stored in host-endian 64-bit chunks,
> - * so addressing units smaller than that needs a host-endian fixup.
> - */
> -#if HOST_BIG_ENDIAN
> -#define H1(x) ((x) ^ 7)
> -#define H1_2(x) ((x) ^ 6)
> -#define H1_4(x) ((x) ^ 4)
> -#define H2(x) ((x) ^ 3)
> -#define H4(x) ((x) ^ 1)
> -#define H8(x) ((x))
> -#else
> -#define H1(x) (x)
> -#define H1_2(x) (x)
> -#define H1_4(x) (x)
> -#define H2(x) (x)
> -#define H4(x) (x)
> -#define H8(x) (x)
> -#endif
> -
> -static inline uint32_t vext_nf(uint32_t desc)
> -{
> - return FIELD_EX32(simd_data(desc), VDATA, NF);
> -}
> -
> -static inline uint32_t vext_vm(uint32_t desc)
> -{
> - return FIELD_EX32(simd_data(desc), VDATA, VM);
> -}
> -
> -/*
> - * Encode LMUL to lmul as following:
> - * LMUL vlmul lmul
> - * 1 000 0
> - * 2 001 1
> - * 4 010 2
> - * 8 011 3
> - * - 100 -
> - * 1/8 101 -3
> - * 1/4 110 -2
> - * 1/2 111 -1
> - */
> -static inline int32_t vext_lmul(uint32_t desc)
> -{
> - return sextract32(FIELD_EX32(simd_data(desc), VDATA, LMUL), 0, 3);
> -}
> -
> -static inline uint32_t vext_vta(uint32_t desc)
> -{
> - return FIELD_EX32(simd_data(desc), VDATA, VTA);
> -}
> -
> -static inline uint32_t vext_vma(uint32_t desc)
> -{
> - return FIELD_EX32(simd_data(desc), VDATA, VMA);
> -}
> -
> -static inline uint32_t vext_vta_all_1s(uint32_t desc)
> -{
> - return FIELD_EX32(simd_data(desc), VDATA, VTA_ALL_1S);
> -}
> -
> /*
> * Get the maximum number of elements can be operated.
> *
> @@ -152,21 +91,6 @@ static inline uint32_t vext_max_elems(uint32_t desc, uint32_t log2_esz)
> return scale < 0 ? vlenb >> -scale : vlenb << scale;
> }
>
> -/*
> - * Get number of total elements, including prestart, body and tail elements.
> - * Note that when LMUL < 1, the tail includes the elements past VLMAX that
> - * are held in the same vector register.
> - */
> -static inline uint32_t vext_get_total_elems(CPURISCVState *env, uint32_t desc,
> - uint32_t esz)
> -{
> - uint32_t vlenb = simd_maxsz(desc);
> - uint32_t sew = 1 << FIELD_EX64(env->vtype, VTYPE, VSEW);
> - int8_t emul = ctzl(esz) - ctzl(sew) + vext_lmul(desc) < 0 ? 0 :
> - ctzl(esz) - ctzl(sew) + vext_lmul(desc);
> - return (vlenb << emul) / esz;
> -}
> -
> static inline target_ulong adjust_addr(CPURISCVState *env, target_ulong addr)
> {
> return (addr & ~env->cur_pmmask) | env->cur_pmbase;
> @@ -199,20 +123,6 @@ static void probe_pages(CPURISCVState *env, target_ulong addr,
> }
> }
>
> -/* set agnostic elements to 1s */
> -static void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
> - uint32_t tot)
> -{
> - if (is_agnostic == 0) {
> - /* policy undisturbed */
> - return;
> - }
> - if (tot - cnt == 0) {
> - return;
> - }
> - memset(base + cnt, -1, tot - cnt);
> -}
> -
> static inline void vext_set_elem_mask(void *v0, int index,
> uint8_t value)
> {
> @@ -222,18 +132,6 @@ static inline void vext_set_elem_mask(void *v0, int index,
> ((uint64_t *)v0)[idx] = deposit64(old, pos, 1, value);
> }
>
> -/*
> - * Earlier designs (pre-0.9) had a varying number of bits
> - * per mask value (MLEN). In the 0.9 design, MLEN=1.
> - * (Section 4.5)
> - */
> -static inline int vext_elem_mask(void *v0, int index)
> -{
> - int idx = index / 64;
> - int pos = index % 64;
> - return (((uint64_t *)v0)[idx] >> pos) & 1;
> -}
> -
> /* elements operations for load and store */
> typedef void vext_ldst_elem_fn(CPURISCVState *env, target_ulong addr,
> uint32_t idx, void *vd, uintptr_t retaddr);
> @@ -728,18 +626,11 @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
> * Vector Integer Arithmetic Instructions
> */
>
> -/* expand macro args before macro */
> -#define RVVCALL(macro, ...) macro(__VA_ARGS__)
> -
> /* (TD, T1, T2, TX1, TX2) */
> #define OP_SSS_B int8_t, int8_t, int8_t, int8_t, int8_t
> #define OP_SSS_H int16_t, int16_t, int16_t, int16_t, int16_t
> #define OP_SSS_W int32_t, int32_t, int32_t, int32_t, int32_t
> #define OP_SSS_D int64_t, int64_t, int64_t, int64_t, int64_t
> -#define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
> -#define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
> -#define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
> -#define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t
> #define OP_SUS_B int8_t, uint8_t, int8_t, uint8_t, int8_t
> #define OP_SUS_H int16_t, uint16_t, int16_t, uint16_t, int16_t
> #define OP_SUS_W int32_t, uint32_t, int32_t, uint32_t, int32_t
> @@ -763,16 +654,6 @@ GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
> #define NOP_UUU_H uint16_t, uint16_t, uint32_t, uint16_t, uint32_t
> #define NOP_UUU_W uint32_t, uint32_t, uint64_t, uint32_t, uint64_t
>
> -/* operation of two vector elements */
> -typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);
> -
> -#define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \
> -static void do_##NAME(void *vd, void *vs1, void *vs2, int i) \
> -{ \
> - TX1 s1 = *((T1 *)vs1 + HS1(i)); \
> - TX2 s2 = *((T2 *)vs2 + HS2(i)); \
> - *((TD *)vd + HD(i)) = OP(s2, s1); \
> -}
> #define DO_SUB(N, M) (N - M)
> #define DO_RSUB(N, M) (M - N)
>
> @@ -785,40 +666,6 @@ RVVCALL(OPIVV2, vsub_vv_h, OP_SSS_H, H2, H2, H2, DO_SUB)
> RVVCALL(OPIVV2, vsub_vv_w, OP_SSS_W, H4, H4, H4, DO_SUB)
> RVVCALL(OPIVV2, vsub_vv_d, OP_SSS_D, H8, H8, H8, DO_SUB)
>
> -static void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
> - CPURISCVState *env, uint32_t desc,
> - opivv2_fn *fn, uint32_t esz)
> -{
> - uint32_t vm = vext_vm(desc);
> - uint32_t vl = env->vl;
> - uint32_t total_elems = vext_get_total_elems(env, desc, esz);
> - uint32_t vta = vext_vta(desc);
> - uint32_t vma = vext_vma(desc);
> - uint32_t i;
> -
> - for (i = env->vstart; i < vl; i++) {
> - if (!vm && !vext_elem_mask(v0, i)) {
> - /* set masked-off elements to 1s */
> - vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
> - continue;
> - }
> - fn(vd, vs1, vs2, i);
> - }
> - env->vstart = 0;
> - /* set tail elements to 1s */
> - vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
> -}
> -
> -/* generate the helpers for OPIVV */
> -#define GEN_VEXT_VV(NAME, ESZ) \
> -void HELPER(NAME)(void *vd, void *v0, void *vs1, \
> - void *vs2, CPURISCVState *env, \
> - uint32_t desc) \
> -{ \
> - do_vext_vv(vd, v0, vs1, vs2, env, desc, \
> - do_##NAME, ESZ); \
> -}
> -
> GEN_VEXT_VV(vadd_vv_b, 1)
> GEN_VEXT_VV(vadd_vv_h, 2)
> GEN_VEXT_VV(vadd_vv_w, 4)
> @@ -828,18 +675,6 @@ GEN_VEXT_VV(vsub_vv_h, 2)
> GEN_VEXT_VV(vsub_vv_w, 4)
> GEN_VEXT_VV(vsub_vv_d, 8)
>
> -typedef void opivx2_fn(void *vd, target_long s1, void *vs2, int i);
> -
> -/*
> - * (T1)s1 gives the real operator type.
> - * (TX1)(T1)s1 expands the operator type of widen or narrow operations.
> - */
> -#define OPIVX2(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \
> -static void do_##NAME(void *vd, target_long s1, void *vs2, int i) \
> -{ \
> - TX2 s2 = *((T2 *)vs2 + HS2(i)); \
> - *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)s1); \
> -}
>
> RVVCALL(OPIVX2, vadd_vx_b, OP_SSS_B, H1, H1, DO_ADD)
> RVVCALL(OPIVX2, vadd_vx_h, OP_SSS_H, H2, H2, DO_ADD)
> @@ -854,40 +689,6 @@ RVVCALL(OPIVX2, vrsub_vx_h, OP_SSS_H, H2, H2, DO_RSUB)
> RVVCALL(OPIVX2, vrsub_vx_w, OP_SSS_W, H4, H4, DO_RSUB)
> RVVCALL(OPIVX2, vrsub_vx_d, OP_SSS_D, H8, H8, DO_RSUB)
>
> -static void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
> - CPURISCVState *env, uint32_t desc,
> - opivx2_fn fn, uint32_t esz)
> -{
> - uint32_t vm = vext_vm(desc);
> - uint32_t vl = env->vl;
> - uint32_t total_elems = vext_get_total_elems(env, desc, esz);
> - uint32_t vta = vext_vta(desc);
> - uint32_t vma = vext_vma(desc);
> - uint32_t i;
> -
> - for (i = env->vstart; i < vl; i++) {
> - if (!vm && !vext_elem_mask(v0, i)) {
> - /* set masked-off elements to 1s */
> - vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
> - continue;
> - }
> - fn(vd, s1, vs2, i);
> - }
> - env->vstart = 0;
> - /* set tail elements to 1s */
> - vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
> -}
> -
> -/* generate the helpers for OPIVX */
> -#define GEN_VEXT_VX(NAME, ESZ) \
> -void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
> - void *vs2, CPURISCVState *env, \
> - uint32_t desc) \
> -{ \
> - do_vext_vx(vd, v0, s1, vs2, env, desc, \
> - do_##NAME, ESZ); \
> -}
> -
> GEN_VEXT_VX(vadd_vx_b, 1)
> GEN_VEXT_VX(vadd_vx_h, 2)
> GEN_VEXT_VX(vadd_vx_w, 4)
> diff --git a/target/riscv/vector_internals.c b/target/riscv/vector_internals.c
> new file mode 100644
> index 00000000000..9cf5c17cdea
> --- /dev/null
> +++ b/target/riscv/vector_internals.c
> @@ -0,0 +1,81 @@
> +/*
> + * RISC-V Vector Extension Internals
> + *
> + * Copyright (c) 2020 T-Head Semiconductor Co., Ltd. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2 or later, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include "vector_internals.h"
> +
> +/* set agnostic elements to 1s */
> +void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
> + uint32_t tot)
> +{
> + if (is_agnostic == 0) {
> + /* policy undisturbed */
> + return;
> + }
> + if (tot - cnt == 0) {
> + return ;
> + }
> + memset(base + cnt, -1, tot - cnt);
> +}
> +
> +void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
> + CPURISCVState *env, uint32_t desc,
> + opivv2_fn *fn, uint32_t esz)
> +{
> + uint32_t vm = vext_vm(desc);
> + uint32_t vl = env->vl;
> + uint32_t total_elems = vext_get_total_elems(env, desc, esz);
> + uint32_t vta = vext_vta(desc);
> + uint32_t vma = vext_vma(desc);
> + uint32_t i;
> +
> + for (i = env->vstart; i < vl; i++) {
> + if (!vm && !vext_elem_mask(v0, i)) {
> + /* set masked-off elements to 1s */
> + vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
> + continue;
> + }
> + fn(vd, vs1, vs2, i);
> + }
> + env->vstart = 0;
> + /* set tail elements to 1s */
> + vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
> +}
> +
> +void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
> + CPURISCVState *env, uint32_t desc,
> + opivx2_fn fn, uint32_t esz)
> +{
> + uint32_t vm = vext_vm(desc);
> + uint32_t vl = env->vl;
> + uint32_t total_elems = vext_get_total_elems(env, desc, esz);
> + uint32_t vta = vext_vta(desc);
> + uint32_t vma = vext_vma(desc);
> + uint32_t i;
> +
> + for (i = env->vstart; i < vl; i++) {
> + if (!vm && !vext_elem_mask(v0, i)) {
> + /* set masked-off elements to 1s */
> + vext_set_elems_1s(vd, vma, i * esz, (i + 1) * esz);
> + continue;
> + }
> + fn(vd, s1, vs2, i);
> + }
> + env->vstart = 0;
> + /* set tail elements to 1s */
> + vext_set_elems_1s(vd, vta, vl * esz, total_elems * esz);
> +}
> diff --git a/target/riscv/vector_internals.h b/target/riscv/vector_internals.h
> new file mode 100644
> index 00000000000..749d138bebe
> --- /dev/null
> +++ b/target/riscv/vector_internals.h
> @@ -0,0 +1,182 @@
> +/*
> + * RISC-V Vector Extension Internals
> + *
> + * Copyright (c) 2020 T-Head Semiconductor Co., Ltd. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2 or later, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef TARGET_RISCV_VECTOR_INTERNALS_H
> +#define TARGET_RISCV_VECTOR_INTERNALS_H
> +
> +#include "qemu/osdep.h"
> +#include "qemu/bitops.h"
> +#include "cpu.h"
> +#include "tcg/tcg-gvec-desc.h"
> +#include "internals.h"
> +
> +static inline uint32_t vext_nf(uint32_t desc)
> +{
> + return FIELD_EX32(simd_data(desc), VDATA, NF);
> +}
> +
> +/*
> + * Note that vector data is stored in host-endian 64-bit chunks,
> + * so addressing units smaller than that needs a host-endian fixup.
> + */
> +#if HOST_BIG_ENDIAN
> +#define H1(x) ((x) ^ 7)
> +#define H1_2(x) ((x) ^ 6)
> +#define H1_4(x) ((x) ^ 4)
> +#define H2(x) ((x) ^ 3)
> +#define H4(x) ((x) ^ 1)
> +#define H8(x) ((x))
> +#else
> +#define H1(x) (x)
> +#define H1_2(x) (x)
> +#define H1_4(x) (x)
> +#define H2(x) (x)
> +#define H4(x) (x)
> +#define H8(x) (x)
> +#endif
> +
> +/*
> + * Encode LMUL to lmul as following:
> + * LMUL vlmul lmul
> + * 1 000 0
> + * 2 001 1
> + * 4 010 2
> + * 8 011 3
> + * - 100 -
> + * 1/8 101 -3
> + * 1/4 110 -2
> + * 1/2 111 -1
> + */
> +static inline int32_t vext_lmul(uint32_t desc)
> +{
> + return sextract32(FIELD_EX32(simd_data(desc), VDATA, LMUL), 0, 3);
> +}
> +
> +static inline uint32_t vext_vm(uint32_t desc)
> +{
> + return FIELD_EX32(simd_data(desc), VDATA, VM);
> +}
> +
> +static inline uint32_t vext_vma(uint32_t desc)
> +{
> + return FIELD_EX32(simd_data(desc), VDATA, VMA);
> +}
> +
> +static inline uint32_t vext_vta(uint32_t desc)
> +{
> + return FIELD_EX32(simd_data(desc), VDATA, VTA);
> +}
> +
> +static inline uint32_t vext_vta_all_1s(uint32_t desc)
> +{
> + return FIELD_EX32(simd_data(desc), VDATA, VTA_ALL_1S);
> +}
> +
> +/*
> + * Earlier designs (pre-0.9) had a varying number of bits
> + * per mask value (MLEN). In the 0.9 design, MLEN=1.
> + * (Section 4.5)
> + */
> +static inline int vext_elem_mask(void *v0, int index)
> +{
> + int idx = index / 64;
> + int pos = index % 64;
> + return (((uint64_t *)v0)[idx] >> pos) & 1;
> +}
> +
> +/*
> + * Get number of total elements, including prestart, body and tail elements.
> + * Note that when LMUL < 1, the tail includes the elements past VLMAX that
> + * are held in the same vector register.
> + */
> +static inline uint32_t vext_get_total_elems(CPURISCVState *env, uint32_t desc,
> + uint32_t esz)
> +{
> + uint32_t vlenb = simd_maxsz(desc);
> + uint32_t sew = 1 << FIELD_EX64(env->vtype, VTYPE, VSEW);
> + int8_t emul = ctzl(esz) - ctzl(sew) + vext_lmul(desc) < 0 ? 0 :
> + ctzl(esz) - ctzl(sew) + vext_lmul(desc);
> + return (vlenb << emul) / esz;
> +}
> +
> +/* set agnostic elements to 1s */
> +void vext_set_elems_1s(void *base, uint32_t is_agnostic, uint32_t cnt,
> + uint32_t tot);
> +
> +/* expand macro args before macro */
> +#define RVVCALL(macro, ...) macro(__VA_ARGS__)
> +
> +/* (TD, T1, T2, TX1, TX2) */
> +#define OP_UUU_B uint8_t, uint8_t, uint8_t, uint8_t, uint8_t
> +#define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t
> +#define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t
> +#define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t
> +
> +/* operation of two vector elements */
> +typedef void opivv2_fn(void *vd, void *vs1, void *vs2, int i);
> +
> +#define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \
> +static void do_##NAME(void *vd, void *vs1, void *vs2, int i) \
> +{ \
> + TX1 s1 = *((T1 *)vs1 + HS1(i)); \
> + TX2 s2 = *((T2 *)vs2 + HS2(i)); \
> + *((TD *)vd + HD(i)) = OP(s2, s1); \
> +}
> +
> +void do_vext_vv(void *vd, void *v0, void *vs1, void *vs2,
> + CPURISCVState *env, uint32_t desc,
> + opivv2_fn *fn, uint32_t esz);
> +
> +/* generate the helpers for OPIVV */
> +#define GEN_VEXT_VV(NAME, ESZ) \
> +void HELPER(NAME)(void *vd, void *v0, void *vs1, \
> + void *vs2, CPURISCVState *env, \
> + uint32_t desc) \
> +{ \
> + do_vext_vv(vd, v0, vs1, vs2, env, desc, \
> + do_##NAME, ESZ); \
> +}
> +
> +typedef void opivx2_fn(void *vd, target_long s1, void *vs2, int i);
> +
> +/*
> + * (T1)s1 gives the real operator type.
> + * (TX1)(T1)s1 expands the operator type of widen or narrow operations.
> + */
> +#define OPIVX2(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \
> +static void do_##NAME(void *vd, target_long s1, void *vs2, int i) \
> +{ \
> + TX2 s2 = *((T2 *)vs2 + HS2(i)); \
> + *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)s1); \
> +}
> +
> +void do_vext_vx(void *vd, void *v0, target_long s1, void *vs2,
> + CPURISCVState *env, uint32_t desc,
> + opivx2_fn fn, uint32_t esz);
> +
> +/* generate the helpers for OPIVX */
> +#define GEN_VEXT_VX(NAME, ESZ) \
> +void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \
> + void *vs2, CPURISCVState *env, \
> + uint32_t desc) \
> +{ \
> + do_vext_vx(vd, v0, s1, vs2, env, desc, \
> + do_##NAME, ESZ); \
> +}
> +
> +#endif /* TARGET_RISCV_VECTOR_INTERNALS_H */
> --
> 2.34.1
>
* Re: [PATCH v8 03/15] target/riscv: Remove redundant "cpu_vl == 0" checks
2023-07-11 16:59 ` [PATCH v8 03/15] target/riscv: Remove redundant "cpu_vl == 0" checks Max Chou
@ 2023-08-02 14:22 ` Alistair Francis
0 siblings, 0 replies; 23+ messages in thread
From: Alistair Francis @ 2023-08-02 14:22 UTC (permalink / raw)
To: Max Chou
Cc: qemu-devel, qemu-riscv, dbarboza, Nazar Kazakov, Weiwei Li,
Palmer Dabbelt, Alistair Francis, Bin Meng, Liu Zhiwei,
Richard Henderson, Junqiang Wang
On Tue, Jul 11, 2023 at 1:00 PM Max Chou <max.chou@sifive.com> wrote:
>
> From: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
>
> Remove the redundant "vl == 0" check: when vl == 0, it is already covered by the existing vstart >= vl check.
>
> Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
> Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
> Signed-off-by: Max Chou <max.chou@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/insn_trans/trans_rvv.c.inc | 31 +------------------------
> 1 file changed, 1 insertion(+), 30 deletions(-)
>
> diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
> index 4a8e62a8bef..7e194aae34a 100644
> --- a/target/riscv/insn_trans/trans_rvv.c.inc
> +++ b/target/riscv/insn_trans/trans_rvv.c.inc
> @@ -617,7 +617,6 @@ static bool ldst_us_trans(uint32_t vd, uint32_t rs1, uint32_t data,
> TCGv_i32 desc;
>
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> dest = tcg_temp_new_ptr();
> @@ -786,7 +785,6 @@ static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2,
> TCGv_i32 desc;
>
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> dest = tcg_temp_new_ptr();
> @@ -893,7 +891,6 @@ static bool ldst_index_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
> TCGv_i32 desc;
>
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> dest = tcg_temp_new_ptr();
> @@ -1034,7 +1031,6 @@ static bool ldff_trans(uint32_t vd, uint32_t rs1, uint32_t data,
> TCGv_i32 desc;
>
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> dest = tcg_temp_new_ptr();
> @@ -1191,7 +1187,6 @@ do_opivv_gvec(DisasContext *s, arg_rmrr *a, GVecGen3Fn *gvec_fn,
> return false;
> }
>
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> if (a->vm && s->vl_eq_vlmax && !(s->vta && s->lmul < 0)) {
> @@ -1241,7 +1236,6 @@ static bool opivx_trans(uint32_t vd, uint32_t rs1, uint32_t vs2, uint32_t vm,
> uint32_t data = 0;
>
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> dest = tcg_temp_new_ptr();
> @@ -1405,7 +1399,6 @@ static bool opivi_trans(uint32_t vd, uint32_t imm, uint32_t vs2, uint32_t vm,
> uint32_t data = 0;
>
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> dest = tcg_temp_new_ptr();
> @@ -1492,7 +1485,6 @@ static bool do_opivv_widen(DisasContext *s, arg_rmrr *a,
> if (checkfn(s, a)) {
> uint32_t data = 0;
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> data = FIELD_DP32(data, VDATA, VM, a->vm);
> @@ -1575,7 +1567,6 @@ static bool do_opiwv_widen(DisasContext *s, arg_rmrr *a,
> if (opiwv_widen_check(s, a)) {
> uint32_t data = 0;
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> data = FIELD_DP32(data, VDATA, VM, a->vm);
> @@ -1648,7 +1639,6 @@ static bool opivv_trans(uint32_t vd, uint32_t vs1, uint32_t vs2, uint32_t vm,
> {
> uint32_t data = 0;
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> data = FIELD_DP32(data, VDATA, VM, vm);
> @@ -1842,7 +1832,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
> gen_helper_##NAME##_w, \
> }; \
> TCGLabel *over = gen_new_label(); \
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
> \
> data = FIELD_DP32(data, VDATA, VM, a->vm); \
> @@ -2054,7 +2043,6 @@ static bool trans_vmv_v_v(DisasContext *s, arg_vmv_v_v *a)
> gen_helper_vmv_v_v_w, gen_helper_vmv_v_v_d,
> };
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> tcg_gen_gvec_2_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, a->rs1),
> @@ -2078,7 +2066,6 @@ static bool trans_vmv_v_x(DisasContext *s, arg_vmv_v_x *a)
> vext_check_ss(s, a->rd, 0, 1)) {
> TCGv s1;
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> s1 = get_gpr(s, a->rs1, EXT_SIGN);
> @@ -2140,7 +2127,6 @@ static bool trans_vmv_v_i(DisasContext *s, arg_vmv_v_i *a)
> gen_helper_vmv_v_x_w, gen_helper_vmv_v_x_d,
> };
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> s1 = tcg_constant_i64(simm);
> @@ -2288,7 +2274,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
> }; \
> TCGLabel *over = gen_new_label(); \
> gen_set_rm(s, RISCV_FRM_DYN); \
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
> \
> data = FIELD_DP32(data, VDATA, VM, a->vm); \
> @@ -2323,7 +2308,6 @@ static bool opfvf_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
> TCGv_i64 t1;
>
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> dest = tcg_temp_new_ptr();
> @@ -2408,7 +2392,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
> }; \
> TCGLabel *over = gen_new_label(); \
> gen_set_rm(s, RISCV_FRM_DYN); \
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);\
> \
> data = FIELD_DP32(data, VDATA, VM, a->vm); \
> @@ -2483,7 +2466,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
> }; \
> TCGLabel *over = gen_new_label(); \
> gen_set_rm(s, RISCV_FRM_DYN); \
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
> \
> data = FIELD_DP32(data, VDATA, VM, a->vm); \
> @@ -2601,7 +2583,6 @@ static bool do_opfv(DisasContext *s, arg_rmr *a,
> uint32_t data = 0;
> TCGLabel *over = gen_new_label();
> gen_set_rm_chkfrm(s, rm);
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> data = FIELD_DP32(data, VDATA, VM, a->vm);
> @@ -2713,7 +2694,6 @@ static bool trans_vfmv_v_f(DisasContext *s, arg_vfmv_v_f *a)
> gen_helper_vmv_v_x_d,
> };
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> t1 = tcg_temp_new_i64();
> @@ -2792,7 +2772,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
> }; \
> TCGLabel *over = gen_new_label(); \
> gen_set_rm_chkfrm(s, FRM); \
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
> \
> data = FIELD_DP32(data, VDATA, VM, a->vm); \
> @@ -2844,7 +2823,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
> }; \
> TCGLabel *over = gen_new_label(); \
> gen_set_rm(s, RISCV_FRM_DYN); \
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
> \
> data = FIELD_DP32(data, VDATA, VM, a->vm); \
> @@ -2912,7 +2890,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
> }; \
> TCGLabel *over = gen_new_label(); \
> gen_set_rm_chkfrm(s, FRM); \
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
> \
> data = FIELD_DP32(data, VDATA, VM, a->vm); \
> @@ -2962,7 +2939,6 @@ static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
> }; \
> TCGLabel *over = gen_new_label(); \
> gen_set_rm_chkfrm(s, FRM); \
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
> \
> data = FIELD_DP32(data, VDATA, VM, a->vm); \
> @@ -3053,7 +3029,6 @@ static bool trans_##NAME(DisasContext *s, arg_r *a) \
> uint32_t data = 0; \
> gen_helper_gvec_4_ptr *fn = gen_helper_##NAME; \
> TCGLabel *over = gen_new_label(); \
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over); \
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
> \
> data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
> @@ -3224,7 +3199,6 @@ static bool trans_vid_v(DisasContext *s, arg_vid_v *a)
> require_vm(a->vm, a->rd)) {
> uint32_t data = 0;
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> data = FIELD_DP32(data, VDATA, VM, a->vm);
> @@ -3411,7 +3385,6 @@ static bool trans_vmv_s_x(DisasContext *s, arg_vmv_s_x *a)
> TCGv s1;
> TCGLabel *over = gen_new_label();
>
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> t1 = tcg_temp_new_i64();
> @@ -3468,8 +3441,7 @@ static bool trans_vfmv_s_f(DisasContext *s, arg_vfmv_s_f *a)
> TCGv_i64 t1;
> TCGLabel *over = gen_new_label();
>
> - /* if vl == 0 or vstart >= vl, skip vector register write back */
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> + /* if vstart >= vl, skip vector register write back */
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> /* NaN-box f[rs1] */
> @@ -3720,7 +3692,6 @@ static bool int_ext_op(DisasContext *s, arg_rmr *a, uint8_t seq)
> uint32_t data = 0;
> gen_helper_gvec_3_ptr *fn;
> TCGLabel *over = gen_new_label();
> - tcg_gen_brcondi_tl(TCG_COND_EQ, cpu_vl, 0, over);
> tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over);
>
> static gen_helper_gvec_3_ptr * const fns[6][4] = {
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v8 04/15] target/riscv: Add Zvbc ISA extension support
2023-07-11 16:59 ` [PATCH v8 04/15] target/riscv: Add Zvbc ISA extension support Max Chou
@ 2023-08-02 14:29 ` Alistair Francis
0 siblings, 0 replies; 23+ messages in thread
From: Alistair Francis @ 2023-08-02 14:29 UTC (permalink / raw)
To: Max Chou
Cc: qemu-devel, qemu-riscv, dbarboza, Lawrence Hunter, Nazar Kazakov,
Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li, Liu Zhiwei,
Kiran Ostrolenk
On Tue, Jul 11, 2023 at 1:00 PM Max Chou <max.chou@sifive.com> wrote:
>
> From: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
>
> This commit adds support for the Zvbc vector-crypto extension, which
> consists of the following instructions:
>
> * vclmulh.[vx,vv]
> * vclmul.[vx,vv]
>
> Translation functions are defined in
> `target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
> `target/riscv/vcrypto_helper.c`.
>
> Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
> Co-authored-by: Max Chou <max.chou@sifive.com>
> Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
> Signed-off-by: Lawrence Hunter <lawrence.hunter@codethink.co.uk>
> Signed-off-by: Max Chou <max.chou@sifive.com>
> [max.chou@sifive.com: Exposed x-zvbc property]
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/cpu.c | 9 ++++
> target/riscv/cpu_cfg.h | 1 +
> target/riscv/helper.h | 6 +++
> target/riscv/insn32.decode | 6 +++
> target/riscv/insn_trans/trans_rvvk.c.inc | 62 ++++++++++++++++++++++++
> target/riscv/meson.build | 3 +-
> target/riscv/translate.c | 1 +
> target/riscv/vcrypto_helper.c | 59 ++++++++++++++++++++++
> 8 files changed, 146 insertions(+), 1 deletion(-)
> create mode 100644 target/riscv/insn_trans/trans_rvvk.c.inc
> create mode 100644 target/riscv/vcrypto_helper.c
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 9339c0241d6..d1dc78d483f 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -119,6 +119,7 @@ static const struct isa_ext_data isa_edata_arr[] = {
> ISA_EXT_DATA_ENTRY(zksed, PRIV_VERSION_1_12_0, ext_zksed),
> ISA_EXT_DATA_ENTRY(zksh, PRIV_VERSION_1_12_0, ext_zksh),
> ISA_EXT_DATA_ENTRY(zkt, PRIV_VERSION_1_12_0, ext_zkt),
> + ISA_EXT_DATA_ENTRY(zvbc, PRIV_VERSION_1_12_0, ext_zvbc),
> ISA_EXT_DATA_ENTRY(zve32f, PRIV_VERSION_1_10_0, ext_zve32f),
> ISA_EXT_DATA_ENTRY(zve64f, PRIV_VERSION_1_10_0, ext_zve64f),
> ISA_EXT_DATA_ENTRY(zve64d, PRIV_VERSION_1_10_0, ext_zve64d),
> @@ -1268,6 +1269,11 @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
> return;
> }
>
> + if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
> + error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
> + return;
> + }
> +
> if (cpu->cfg.ext_zk) {
> cpu->cfg.ext_zkn = true;
> cpu->cfg.ext_zkr = true;
> @@ -1845,6 +1851,9 @@ static Property riscv_cpu_extensions[] = {
> DEFINE_PROP_BOOL("x-zvfbfmin", RISCVCPU, cfg.ext_zvfbfmin, false),
> DEFINE_PROP_BOOL("x-zvfbfwma", RISCVCPU, cfg.ext_zvfbfwma, false),
>
> + /* Vector cryptography extensions */
> + DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
> +
> DEFINE_PROP_END_OF_LIST(),
> };
>
> diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
> index 2bd9510ba3e..d25b36a5128 100644
> --- a/target/riscv/cpu_cfg.h
> +++ b/target/riscv/cpu_cfg.h
> @@ -85,6 +85,7 @@ struct RISCVCPUConfig {
> bool ext_zve32f;
> bool ext_zve64f;
> bool ext_zve64d;
> + bool ext_zvbc;
> bool ext_zmmul;
> bool ext_zvfbfmin;
> bool ext_zvfbfwma;
> diff --git a/target/riscv/helper.h b/target/riscv/helper.h
> index c95adaf08ac..6776777c4eb 100644
> --- a/target/riscv/helper.h
> +++ b/target/riscv/helper.h
> @@ -1182,3 +1182,9 @@ DEF_HELPER_5(vfwcvtbf16_f_f_v, void, ptr, ptr, ptr, env, i32)
>
> DEF_HELPER_6(vfwmaccbf16_vv, void, ptr, ptr, ptr, ptr, env, i32)
> DEF_HELPER_6(vfwmaccbf16_vf, void, ptr, ptr, i64, ptr, env, i32)
> +
> +/* Vector crypto functions */
> +DEF_HELPER_6(vclmul_vv, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vclmul_vx, void, ptr, ptr, tl, ptr, env, i32)
> +DEF_HELPER_6(vclmulh_vv, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vclmulh_vx, void, ptr, ptr, tl, ptr, env, i32)
> diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
> index e341fa92139..dd50d5a48c7 100644
> --- a/target/riscv/insn32.decode
> +++ b/target/riscv/insn32.decode
> @@ -946,3 +946,9 @@ vfwcvtbf16_f_f_v 010010 . ..... 01101 001 ..... 1010111 @r2_vm
> # *** Zvfbfwma Standard Extension ***
> vfwmaccbf16_vv 111011 . ..... ..... 001 ..... 1010111 @r_vm
> vfwmaccbf16_vf 111011 . ..... ..... 101 ..... 1010111 @r_vm
> +
> +# *** Zvbc vector crypto extension ***
> +vclmul_vv 001100 . ..... ..... 010 ..... 1010111 @r_vm
> +vclmul_vx 001100 . ..... ..... 110 ..... 1010111 @r_vm
> +vclmulh_vv 001101 . ..... ..... 010 ..... 1010111 @r_vm
> +vclmulh_vx 001101 . ..... ..... 110 ..... 1010111 @r_vm
> diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
> new file mode 100644
> index 00000000000..552b08a2fd1
> --- /dev/null
> +++ b/target/riscv/insn_trans/trans_rvvk.c.inc
> @@ -0,0 +1,62 @@
> +/*
> + * RISC-V translation routines for the vector crypto extension.
> + *
> + * Copyright (C) 2023 SiFive, Inc.
> + * Written by Codethink Ltd and SiFive.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2 or later, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +/*
> + * Zvbc
> + */
> +
> +#define GEN_VV_MASKED_TRANS(NAME, CHECK) \
> + static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
> + { \
> + if (CHECK(s, a)) { \
> + return opivv_trans(a->rd, a->rs1, a->rs2, a->vm, \
> + gen_helper_##NAME, s); \
> + } \
> + return false; \
> + }
> +
> +static bool vclmul_vv_check(DisasContext *s, arg_rmrr *a)
> +{
> + return opivv_check(s, a) &&
> + s->cfg_ptr->ext_zvbc == true &&
> + s->sew == MO_64;
> +}
> +
> +GEN_VV_MASKED_TRANS(vclmul_vv, vclmul_vv_check)
> +GEN_VV_MASKED_TRANS(vclmulh_vv, vclmul_vv_check)
> +
> +#define GEN_VX_MASKED_TRANS(NAME, CHECK) \
> + static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
> + { \
> + if (CHECK(s, a)) { \
> + return opivx_trans(a->rd, a->rs1, a->rs2, a->vm, \
> + gen_helper_##NAME, s); \
> + } \
> + return false; \
> + }
> +
> +static bool vclmul_vx_check(DisasContext *s, arg_rmrr *a)
> +{
> + return opivx_check(s, a) &&
> + s->cfg_ptr->ext_zvbc == true &&
> + s->sew == MO_64;
> +}
> +
> +GEN_VX_MASKED_TRANS(vclmul_vx, vclmul_vx_check)
> +GEN_VX_MASKED_TRANS(vclmulh_vx, vclmul_vx_check)
> diff --git a/target/riscv/meson.build b/target/riscv/meson.build
> index c3801ee5e04..660078bda1f 100644
> --- a/target/riscv/meson.build
> +++ b/target/riscv/meson.build
> @@ -21,7 +21,8 @@ riscv_ss.add(files(
> 'translate.c',
> 'm128_helper.c',
> 'crypto_helper.c',
> - 'zce_helper.c'
> + 'zce_helper.c',
> + 'vcrypto_helper.c'
> ))
> riscv_ss.add(when: 'CONFIG_KVM', if_true: files('kvm.c'), if_false: files('kvm-stub.c'))
>
> diff --git a/target/riscv/translate.c b/target/riscv/translate.c
> index 697df1be9e2..7dbf173adb5 100644
> --- a/target/riscv/translate.c
> +++ b/target/riscv/translate.c
> @@ -1094,6 +1094,7 @@ static uint32_t opcode_at(DisasContextBase *dcbase, target_ulong pc)
> #include "insn_trans/trans_rvzfa.c.inc"
> #include "insn_trans/trans_rvzfh.c.inc"
> #include "insn_trans/trans_rvk.c.inc"
> +#include "insn_trans/trans_rvvk.c.inc"
> #include "insn_trans/trans_privileged.c.inc"
> #include "insn_trans/trans_svinval.c.inc"
> #include "insn_trans/trans_rvbf16.c.inc"
> diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
> new file mode 100644
> index 00000000000..8b7c63d4997
> --- /dev/null
> +++ b/target/riscv/vcrypto_helper.c
> @@ -0,0 +1,59 @@
> +/*
> + * RISC-V Vector Crypto Extension Helpers for QEMU.
> + *
> + * Copyright (C) 2023 SiFive, Inc.
> + * Written by Codethink Ltd and SiFive.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2 or later, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "qemu/host-utils.h"
> +#include "qemu/bitops.h"
> +#include "cpu.h"
> +#include "exec/memop.h"
> +#include "exec/exec-all.h"
> +#include "exec/helper-proto.h"
> +#include "internals.h"
> +#include "vector_internals.h"
> +
> +static uint64_t clmul64(uint64_t y, uint64_t x)
> +{
> + uint64_t result = 0;
> + for (int j = 63; j >= 0; j--) {
> + if ((y >> j) & 1) {
> + result ^= (x << j);
> + }
> + }
> + return result;
> +}
> +
> +static uint64_t clmulh64(uint64_t y, uint64_t x)
> +{
> + uint64_t result = 0;
> + for (int j = 63; j >= 1; j--) {
> + if ((y >> j) & 1) {
> + result ^= (x >> (64 - j));
> + }
> + }
> + return result;
> +}
> +
> +RVVCALL(OPIVV2, vclmul_vv, OP_UUU_D, H8, H8, H8, clmul64)
> +GEN_VEXT_VV(vclmul_vv, 8)
> +RVVCALL(OPIVX2, vclmul_vx, OP_UUU_D, H8, H8, clmul64)
> +GEN_VEXT_VX(vclmul_vx, 8)
> +RVVCALL(OPIVV2, vclmulh_vv, OP_UUU_D, H8, H8, H8, clmulh64)
> +GEN_VEXT_VV(vclmulh_vv, 8)
> +RVVCALL(OPIVX2, vclmulh_vx, OP_UUU_D, H8, H8, clmulh64)
> +GEN_VEXT_VX(vclmulh_vx, 8)
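A quick sanity check for the pair of helpers, assuming clmul64() and
clmulh64() above are copied into a standalone test file (this is only an
illustration, not part of the series): together they form the low and high
halves of the 128-bit carry-less product.
    #include <assert.h>
    #include <stdint.h>
    /* clmul64() and clmulh64() copied verbatim from the patch */
    int main(void)
    {
        /* (x + 1) * (x + 1) = x^2 + 1 over GF(2): 0b11 clmul 0b11 = 0b101 */
        assert(clmul64(3, 3) == 5);
        assert(clmulh64(3, 3) == 0);
        /* x^63 * x^63 = x^126: the result sits entirely in the high half */
        assert(clmul64(1ULL << 63, 1ULL << 63) == 0);
        assert(clmulh64(1ULL << 63, 1ULL << 63) == 1ULL << 62);
        return 0;
    }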
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v8 08/15] target/riscv: Add Zvbb ISA extension support
2023-07-11 16:59 ` [PATCH v8 08/15] target/riscv: Add Zvbb ISA extension support Max Chou
@ 2023-08-02 14:38 ` Alistair Francis
0 siblings, 0 replies; 23+ messages in thread
From: Alistair Francis @ 2023-08-02 14:38 UTC (permalink / raw)
To: Max Chou
Cc: qemu-devel, qemu-riscv, dbarboza, Dickon Hood, Nazar Kazakov,
William Salmon, Kiran Ostrolenk, Palmer Dabbelt, Alistair Francis,
Bin Meng, Weiwei Li, Liu Zhiwei, Lawrence Hunter
On Tue, Jul 11, 2023 at 1:00 PM Max Chou <max.chou@sifive.com> wrote:
>
> From: Dickon Hood <dickon.hood@codethink.co.uk>
>
> This commit adds support for the Zvbb vector-crypto extension, which
> consists of the following instructions:
>
> * vrol.[vv,vx]
> * vror.[vv,vx,vi]
> * vbrev8.v
> * vrev8.v
> * vandn.[vv,vx]
> * vbrev.v
> * vclz.v
> * vctz.v
> * vcpop.v
> * vwsll.[vv,vx,vi]
>
> Translation functions are defined in
> `target/riscv/insn_trans/trans_rvvk.c.inc` and helpers are defined in
> `target/riscv/vcrypto_helper.c`.
>
> Co-authored-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
> Co-authored-by: William Salmon <will.salmon@codethink.co.uk>
> Co-authored-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
> [max.chou@sifive.com: Fix imm mode of vror.vi]
> Signed-off-by: Nazar Kazakov <nazar.kazakov@codethink.co.uk>
> Signed-off-by: William Salmon <will.salmon@codethink.co.uk>
> Signed-off-by: Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
> Signed-off-by: Dickon Hood <dickon.hood@codethink.co.uk>
> Signed-off-by: Max Chou <max.chou@sifive.com>
> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> [max.chou@sifive.com: Exposed x-zvbb property]
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/cpu.c | 12 ++
> target/riscv/cpu_cfg.h | 1 +
> target/riscv/helper.h | 62 +++++++++
> target/riscv/insn32.decode | 20 +++
> target/riscv/insn_trans/trans_rvvk.c.inc | 164 +++++++++++++++++++++++
> target/riscv/vcrypto_helper.c | 138 +++++++++++++++++++
> 6 files changed, 397 insertions(+)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index d1dc78d483f..e39a6118bbc 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -119,6 +119,7 @@ static const struct isa_ext_data isa_edata_arr[] = {
> ISA_EXT_DATA_ENTRY(zksed, PRIV_VERSION_1_12_0, ext_zksed),
> ISA_EXT_DATA_ENTRY(zksh, PRIV_VERSION_1_12_0, ext_zksh),
> ISA_EXT_DATA_ENTRY(zkt, PRIV_VERSION_1_12_0, ext_zkt),
> + ISA_EXT_DATA_ENTRY(zvbb, PRIV_VERSION_1_12_0, ext_zvbb),
> ISA_EXT_DATA_ENTRY(zvbc, PRIV_VERSION_1_12_0, ext_zvbc),
> ISA_EXT_DATA_ENTRY(zve32f, PRIV_VERSION_1_10_0, ext_zve32f),
> ISA_EXT_DATA_ENTRY(zve64f, PRIV_VERSION_1_10_0, ext_zve64f),
> @@ -1269,6 +1270,16 @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
> return;
> }
>
> + /*
> + * In principle Zve*x would also suffice here, were they supported
> + * in qemu
> + */
> + if (cpu->cfg.ext_zvbb && !cpu->cfg.ext_zve32f) {
> + error_setg(errp,
> + "Vector crypto extensions require V or Zve* extensions");
> + return;
> + }
> +
> if (cpu->cfg.ext_zvbc && !cpu->cfg.ext_zve64f) {
> error_setg(errp, "Zvbc extension requires V or Zve64{f,d} extensions");
> return;
> @@ -1852,6 +1863,7 @@ static Property riscv_cpu_extensions[] = {
> DEFINE_PROP_BOOL("x-zvfbfwma", RISCVCPU, cfg.ext_zvfbfwma, false),
>
> /* Vector cryptography extensions */
> + DEFINE_PROP_BOOL("x-zvbb", RISCVCPU, cfg.ext_zvbb, false),
> DEFINE_PROP_BOOL("x-zvbc", RISCVCPU, cfg.ext_zvbc, false),
>
> DEFINE_PROP_END_OF_LIST(),
> diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
> index d25b36a5128..0e31ebeed93 100644
> --- a/target/riscv/cpu_cfg.h
> +++ b/target/riscv/cpu_cfg.h
> @@ -85,6 +85,7 @@ struct RISCVCPUConfig {
> bool ext_zve32f;
> bool ext_zve64f;
> bool ext_zve64d;
> + bool ext_zvbb;
> bool ext_zvbc;
> bool ext_zmmul;
> bool ext_zvfbfmin;
> diff --git a/target/riscv/helper.h b/target/riscv/helper.h
> index 6776777c4eb..3db25ed2a20 100644
> --- a/target/riscv/helper.h
> +++ b/target/riscv/helper.h
> @@ -1188,3 +1188,65 @@ DEF_HELPER_6(vclmul_vv, void, ptr, ptr, ptr, ptr, env, i32)
> DEF_HELPER_6(vclmul_vx, void, ptr, ptr, tl, ptr, env, i32)
> DEF_HELPER_6(vclmulh_vv, void, ptr, ptr, ptr, ptr, env, i32)
> DEF_HELPER_6(vclmulh_vx, void, ptr, ptr, tl, ptr, env, i32)
> +
> +DEF_HELPER_6(vror_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vror_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vror_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vror_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
> +
> +DEF_HELPER_6(vror_vx_b, void, ptr, ptr, tl, ptr, env, i32)
> +DEF_HELPER_6(vror_vx_h, void, ptr, ptr, tl, ptr, env, i32)
> +DEF_HELPER_6(vror_vx_w, void, ptr, ptr, tl, ptr, env, i32)
> +DEF_HELPER_6(vror_vx_d, void, ptr, ptr, tl, ptr, env, i32)
> +
> +DEF_HELPER_6(vrol_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vrol_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vrol_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vrol_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
> +
> +DEF_HELPER_6(vrol_vx_b, void, ptr, ptr, tl, ptr, env, i32)
> +DEF_HELPER_6(vrol_vx_h, void, ptr, ptr, tl, ptr, env, i32)
> +DEF_HELPER_6(vrol_vx_w, void, ptr, ptr, tl, ptr, env, i32)
> +DEF_HELPER_6(vrol_vx_d, void, ptr, ptr, tl, ptr, env, i32)
> +
> +DEF_HELPER_5(vrev8_v_b, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vrev8_v_h, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vrev8_v_w, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vrev8_v_d, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vbrev8_v_b, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vbrev8_v_h, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vbrev8_v_w, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vbrev8_v_d, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vbrev_v_b, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vbrev_v_h, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vbrev_v_w, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vbrev_v_d, void, ptr, ptr, ptr, env, i32)
> +
> +DEF_HELPER_5(vclz_v_b, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vclz_v_h, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vclz_v_w, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vclz_v_d, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vctz_v_b, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vctz_v_h, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vctz_v_w, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vctz_v_d, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vcpop_v_b, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vcpop_v_h, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vcpop_v_w, void, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_5(vcpop_v_d, void, ptr, ptr, ptr, env, i32)
> +
> +DEF_HELPER_6(vwsll_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vwsll_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vwsll_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vwsll_vx_b, void, ptr, ptr, tl, ptr, env, i32)
> +DEF_HELPER_6(vwsll_vx_h, void, ptr, ptr, tl, ptr, env, i32)
> +DEF_HELPER_6(vwsll_vx_w, void, ptr, ptr, tl, ptr, env, i32)
> +
> +DEF_HELPER_6(vandn_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vandn_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vandn_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vandn_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
> +DEF_HELPER_6(vandn_vx_b, void, ptr, ptr, tl, ptr, env, i32)
> +DEF_HELPER_6(vandn_vx_h, void, ptr, ptr, tl, ptr, env, i32)
> +DEF_HELPER_6(vandn_vx_w, void, ptr, ptr, tl, ptr, env, i32)
> +DEF_HELPER_6(vandn_vx_d, void, ptr, ptr, tl, ptr, env, i32)
> diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
> index dd50d5a48c7..b982a8325bb 100644
> --- a/target/riscv/insn32.decode
> +++ b/target/riscv/insn32.decode
> @@ -37,6 +37,7 @@
> %imm_u 12:s20 !function=ex_shift_12
> %imm_bs 30:2 !function=ex_shift_3
> %imm_rnum 20:4
> +%imm_z6 26:1 15:5
>
> # Argument sets:
> &empty
> @@ -82,6 +83,7 @@
> @r_vm ...... vm:1 ..... ..... ... ..... ....... &rmrr %rs2 %rs1 %rd
> @r_vm_1 ...... . ..... ..... ... ..... ....... &rmrr vm=1 %rs2 %rs1 %rd
> @r_vm_0 ...... . ..... ..... ... ..... ....... &rmrr vm=0 %rs2 %rs1 %rd
> +@r2_zimm6 ..... . vm:1 ..... ..... ... ..... ....... &rmrr %rs2 rs1=%imm_z6 %rd
> @r2_zimm11 . zimm:11 ..... ... ..... ....... %rs1 %rd
> @r2_zimm10 .. zimm:10 ..... ... ..... ....... %rs1 %rd
> @r2_s ....... ..... ..... ... ..... ....... %rs2 %rs1
> @@ -952,3 +954,21 @@ vclmul_vv 001100 . ..... ..... 010 ..... 1010111 @r_vm
> vclmul_vx 001100 . ..... ..... 110 ..... 1010111 @r_vm
> vclmulh_vv 001101 . ..... ..... 010 ..... 1010111 @r_vm
> vclmulh_vx 001101 . ..... ..... 110 ..... 1010111 @r_vm
> +
> +# *** Zvbb vector crypto extension ***
> +vrol_vv 010101 . ..... ..... 000 ..... 1010111 @r_vm
> +vrol_vx 010101 . ..... ..... 100 ..... 1010111 @r_vm
> +vror_vv 010100 . ..... ..... 000 ..... 1010111 @r_vm
> +vror_vx 010100 . ..... ..... 100 ..... 1010111 @r_vm
> +vror_vi 01010. . ..... ..... 011 ..... 1010111 @r2_zimm6
> +vbrev8_v 010010 . ..... 01000 010 ..... 1010111 @r2_vm
> +vrev8_v 010010 . ..... 01001 010 ..... 1010111 @r2_vm
> +vandn_vv 000001 . ..... ..... 000 ..... 1010111 @r_vm
> +vandn_vx 000001 . ..... ..... 100 ..... 1010111 @r_vm
> +vbrev_v 010010 . ..... 01010 010 ..... 1010111 @r2_vm
> +vclz_v 010010 . ..... 01100 010 ..... 1010111 @r2_vm
> +vctz_v 010010 . ..... 01101 010 ..... 1010111 @r2_vm
> +vcpop_v 010010 . ..... 01110 010 ..... 1010111 @r2_vm
> +vwsll_vv 110101 . ..... ..... 000 ..... 1010111 @r_vm
> +vwsll_vx 110101 . ..... ..... 100 ..... 1010111 @r_vm
> +vwsll_vi 110101 . ..... ..... 011 ..... 1010111 @r_vm
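One decode detail that is easy to miss: vror.vi takes a 6-bit rotate
amount, and %imm_z6 26:1 15:5 assembles it from instruction bit 26 (the
most significant bit) and bits 19:15. A hand-written sketch of the same
field split (illustrative only, not generated decoder code):
    static inline uint32_t extract_field(uint32_t insn, int pos, int len)
    {
        return (insn >> pos) & ((1u << len) - 1);
    }
    /* imm[5] comes from insn[26], imm[4:0] from insn[19:15] */
    static uint32_t decode_imm_z6(uint32_t insn)
    {
        return (extract_field(insn, 26, 1) << 5) | extract_field(insn, 15, 5);
    }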
> diff --git a/target/riscv/insn_trans/trans_rvvk.c.inc b/target/riscv/insn_trans/trans_rvvk.c.inc
> index 552b08a2fd1..0e4b3376134 100644
> --- a/target/riscv/insn_trans/trans_rvvk.c.inc
> +++ b/target/riscv/insn_trans/trans_rvvk.c.inc
> @@ -60,3 +60,167 @@ static bool vclmul_vx_check(DisasContext *s, arg_rmrr *a)
>
> GEN_VX_MASKED_TRANS(vclmul_vx, vclmul_vx_check)
> GEN_VX_MASKED_TRANS(vclmulh_vx, vclmul_vx_check)
> +
> +/*
> + * Zvbb
> + */
> +
> +#define GEN_OPIVI_GVEC_TRANS_CHECK(NAME, IMM_MODE, OPIVX, SUF, CHECK) \
> + static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
> + { \
> + if (CHECK(s, a)) { \
> + static gen_helper_opivx *const fns[4] = { \
> + gen_helper_##OPIVX##_b, \
> + gen_helper_##OPIVX##_h, \
> + gen_helper_##OPIVX##_w, \
> + gen_helper_##OPIVX##_d, \
> + }; \
> + return do_opivi_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew], \
> + IMM_MODE); \
> + } \
> + return false; \
> + }
> +
> +#define GEN_OPIVV_GVEC_TRANS_CHECK(NAME, SUF, CHECK) \
> + static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
> + { \
> + if (CHECK(s, a)) { \
> + static gen_helper_gvec_4_ptr *const fns[4] = { \
> + gen_helper_##NAME##_b, \
> + gen_helper_##NAME##_h, \
> + gen_helper_##NAME##_w, \
> + gen_helper_##NAME##_d, \
> + }; \
> + return do_opivv_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
> + } \
> + return false; \
> + }
> +
> +#define GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(NAME, SUF, CHECK) \
> + static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
> + { \
> + if (CHECK(s, a)) { \
> + static gen_helper_opivx *const fns[4] = { \
> + gen_helper_##NAME##_b, \
> + gen_helper_##NAME##_h, \
> + gen_helper_##NAME##_w, \
> + gen_helper_##NAME##_d, \
> + }; \
> + return do_opivx_gvec_shift(s, a, tcg_gen_gvec_##SUF, \
> + fns[s->sew]); \
> + } \
> + return false; \
> + }
> +
> +static bool zvbb_vv_check(DisasContext *s, arg_rmrr *a)
> +{
> + return opivv_check(s, a) && s->cfg_ptr->ext_zvbb == true;
> +}
> +
> +static bool zvbb_vx_check(DisasContext *s, arg_rmrr *a)
> +{
> + return opivx_check(s, a) && s->cfg_ptr->ext_zvbb == true;
> +}
> +
> +/* vrol.v[vx] */
> +GEN_OPIVV_GVEC_TRANS_CHECK(vrol_vv, rotlv, zvbb_vv_check)
> +GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(vrol_vx, rotls, zvbb_vx_check)
> +
> +/* vror.v[vxi] */
> +GEN_OPIVV_GVEC_TRANS_CHECK(vror_vv, rotrv, zvbb_vv_check)
> +GEN_OPIVX_GVEC_SHIFT_TRANS_CHECK(vror_vx, rotrs, zvbb_vx_check)
> +GEN_OPIVI_GVEC_TRANS_CHECK(vror_vi, IMM_TRUNC_SEW, vror_vx, rotri, zvbb_vx_check)
> +
> +#define GEN_OPIVX_GVEC_TRANS_CHECK(NAME, SUF, CHECK) \
> + static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
> + { \
> + if (CHECK(s, a)) { \
> + static gen_helper_opivx *const fns[4] = { \
> + gen_helper_##NAME##_b, \
> + gen_helper_##NAME##_h, \
> + gen_helper_##NAME##_w, \
> + gen_helper_##NAME##_d, \
> + }; \
> + return do_opivx_gvec(s, a, tcg_gen_gvec_##SUF, fns[s->sew]); \
> + } \
> + return false; \
> + }
> +
> +/* vandn.v[vx] */
> +GEN_OPIVV_GVEC_TRANS_CHECK(vandn_vv, andc, zvbb_vv_check)
> +GEN_OPIVX_GVEC_TRANS_CHECK(vandn_vx, andcs, zvbb_vx_check)
> +
> +#define GEN_OPIV_TRANS(NAME, CHECK) \
> + static bool trans_##NAME(DisasContext *s, arg_rmr *a) \
> + { \
> + if (CHECK(s, a)) { \
> + uint32_t data = 0; \
> + static gen_helper_gvec_3_ptr *const fns[4] = { \
> + gen_helper_##NAME##_b, \
> + gen_helper_##NAME##_h, \
> + gen_helper_##NAME##_w, \
> + gen_helper_##NAME##_d, \
> + }; \
> + TCGLabel *over = gen_new_label(); \
> + tcg_gen_brcond_tl(TCG_COND_GEU, cpu_vstart, cpu_vl, over); \
> + \
> + data = FIELD_DP32(data, VDATA, VM, a->vm); \
> + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
> + data = FIELD_DP32(data, VDATA, VTA, s->vta); \
> + data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
> + data = FIELD_DP32(data, VDATA, VMA, s->vma); \
> + tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \
> + vreg_ofs(s, a->rs2), cpu_env, \
> + s->cfg_ptr->vlen / 8, s->cfg_ptr->vlen / 8, \
> + data, fns[s->sew]); \
> + mark_vs_dirty(s); \
> + gen_set_label(over); \
> + return true; \
> + } \
> + return false; \
> + }
> +
> +static bool zvbb_opiv_check(DisasContext *s, arg_rmr *a)
> +{
> + return s->cfg_ptr->ext_zvbb == true &&
> + require_rvv(s) &&
> + vext_check_isa_ill(s) &&
> + vext_check_ss(s, a->rd, a->rs2, a->vm);
> +}
> +
> +GEN_OPIV_TRANS(vbrev8_v, zvbb_opiv_check)
> +GEN_OPIV_TRANS(vrev8_v, zvbb_opiv_check)
> +GEN_OPIV_TRANS(vbrev_v, zvbb_opiv_check)
> +GEN_OPIV_TRANS(vclz_v, zvbb_opiv_check)
> +GEN_OPIV_TRANS(vctz_v, zvbb_opiv_check)
> +GEN_OPIV_TRANS(vcpop_v, zvbb_opiv_check)
> +
> +static bool vwsll_vv_check(DisasContext *s, arg_rmrr *a)
> +{
> + return s->cfg_ptr->ext_zvbb && opivv_widen_check(s, a);
> +}
> +
> +static bool vwsll_vx_check(DisasContext *s, arg_rmrr *a)
> +{
> + return s->cfg_ptr->ext_zvbb && opivx_widen_check(s, a);
> +}
> +
> +/* OPIVI without GVEC IR */
> +#define GEN_OPIVI_WIDEN_TRANS(NAME, IMM_MODE, OPIVX, CHECK) \
> + static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
> + { \
> + if (CHECK(s, a)) { \
> + static gen_helper_opivx *const fns[3] = { \
> + gen_helper_##OPIVX##_b, \
> + gen_helper_##OPIVX##_h, \
> + gen_helper_##OPIVX##_w, \
> + }; \
> + return opivi_trans(a->rd, a->rs1, a->rs2, a->vm, fns[s->sew], s, \
> + IMM_MODE); \
> + } \
> + return false; \
> + }
> +
> +GEN_OPIVV_WIDEN_TRANS(vwsll_vv, vwsll_vv_check)
> +GEN_OPIVX_WIDEN_TRANS(vwsll_vx, vwsll_vx_check)
> +GEN_OPIVI_WIDEN_TRANS(vwsll_vi, IMM_ZX, vwsll_vx, vwsll_vx_check)
> diff --git a/target/riscv/vcrypto_helper.c b/target/riscv/vcrypto_helper.c
> index 8b7c63d4997..11239b59d6f 100644
> --- a/target/riscv/vcrypto_helper.c
> +++ b/target/riscv/vcrypto_helper.c
> @@ -20,6 +20,7 @@
> #include "qemu/osdep.h"
> #include "qemu/host-utils.h"
> #include "qemu/bitops.h"
> +#include "qemu/bswap.h"
> #include "cpu.h"
> #include "exec/memop.h"
> #include "exec/exec-all.h"
> @@ -57,3 +58,140 @@ RVVCALL(OPIVV2, vclmulh_vv, OP_UUU_D, H8, H8, H8, clmulh64)
> GEN_VEXT_VV(vclmulh_vv, 8)
> RVVCALL(OPIVX2, vclmulh_vx, OP_UUU_D, H8, H8, clmulh64)
> GEN_VEXT_VX(vclmulh_vx, 8)
> +
> +RVVCALL(OPIVV2, vror_vv_b, OP_UUU_B, H1, H1, H1, ror8)
> +RVVCALL(OPIVV2, vror_vv_h, OP_UUU_H, H2, H2, H2, ror16)
> +RVVCALL(OPIVV2, vror_vv_w, OP_UUU_W, H4, H4, H4, ror32)
> +RVVCALL(OPIVV2, vror_vv_d, OP_UUU_D, H8, H8, H8, ror64)
> +GEN_VEXT_VV(vror_vv_b, 1)
> +GEN_VEXT_VV(vror_vv_h, 2)
> +GEN_VEXT_VV(vror_vv_w, 4)
> +GEN_VEXT_VV(vror_vv_d, 8)
> +
> +RVVCALL(OPIVX2, vror_vx_b, OP_UUU_B, H1, H1, ror8)
> +RVVCALL(OPIVX2, vror_vx_h, OP_UUU_H, H2, H2, ror16)
> +RVVCALL(OPIVX2, vror_vx_w, OP_UUU_W, H4, H4, ror32)
> +RVVCALL(OPIVX2, vror_vx_d, OP_UUU_D, H8, H8, ror64)
> +GEN_VEXT_VX(vror_vx_b, 1)
> +GEN_VEXT_VX(vror_vx_h, 2)
> +GEN_VEXT_VX(vror_vx_w, 4)
> +GEN_VEXT_VX(vror_vx_d, 8)
> +
> +RVVCALL(OPIVV2, vrol_vv_b, OP_UUU_B, H1, H1, H1, rol8)
> +RVVCALL(OPIVV2, vrol_vv_h, OP_UUU_H, H2, H2, H2, rol16)
> +RVVCALL(OPIVV2, vrol_vv_w, OP_UUU_W, H4, H4, H4, rol32)
> +RVVCALL(OPIVV2, vrol_vv_d, OP_UUU_D, H8, H8, H8, rol64)
> +GEN_VEXT_VV(vrol_vv_b, 1)
> +GEN_VEXT_VV(vrol_vv_h, 2)
> +GEN_VEXT_VV(vrol_vv_w, 4)
> +GEN_VEXT_VV(vrol_vv_d, 8)
> +
> +RVVCALL(OPIVX2, vrol_vx_b, OP_UUU_B, H1, H1, rol8)
> +RVVCALL(OPIVX2, vrol_vx_h, OP_UUU_H, H2, H2, rol16)
> +RVVCALL(OPIVX2, vrol_vx_w, OP_UUU_W, H4, H4, rol32)
> +RVVCALL(OPIVX2, vrol_vx_d, OP_UUU_D, H8, H8, rol64)
> +GEN_VEXT_VX(vrol_vx_b, 1)
> +GEN_VEXT_VX(vrol_vx_h, 2)
> +GEN_VEXT_VX(vrol_vx_w, 4)
> +GEN_VEXT_VX(vrol_vx_d, 8)
> +
> +static uint64_t brev8(uint64_t val)
> +{
> + val = ((val & 0x5555555555555555ull) << 1) |
> + ((val & 0xAAAAAAAAAAAAAAAAull) >> 1);
> + val = ((val & 0x3333333333333333ull) << 2) |
> + ((val & 0xCCCCCCCCCCCCCCCCull) >> 2);
> + val = ((val & 0x0F0F0F0F0F0F0F0Full) << 4) |
> + ((val & 0xF0F0F0F0F0F0F0F0ull) >> 4);
> +
> + return val;
> +}
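brev8() reverses the bit order within each byte independently, i.e. it is
the per-byte counterpart of a full bit reversal. A couple of spot checks,
assuming the function above is dropped into a standalone test:
    assert(brev8(0x01) == 0x80);        /* bit 0 moves to bit 7 */
    assert(brev8(0xF0) == 0x0F);        /* high nibble mirrors to low nibble */
    assert(brev8(0x0180) == 0x8001);    /* each byte is reversed on its own */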
> +
> +RVVCALL(OPIVV1, vbrev8_v_b, OP_UU_B, H1, H1, brev8)
> +RVVCALL(OPIVV1, vbrev8_v_h, OP_UU_H, H2, H2, brev8)
> +RVVCALL(OPIVV1, vbrev8_v_w, OP_UU_W, H4, H4, brev8)
> +RVVCALL(OPIVV1, vbrev8_v_d, OP_UU_D, H8, H8, brev8)
> +GEN_VEXT_V(vbrev8_v_b, 1)
> +GEN_VEXT_V(vbrev8_v_h, 2)
> +GEN_VEXT_V(vbrev8_v_w, 4)
> +GEN_VEXT_V(vbrev8_v_d, 8)
> +
> +#define DO_IDENTITY(a) (a)
> +RVVCALL(OPIVV1, vrev8_v_b, OP_UU_B, H1, H1, DO_IDENTITY)
> +RVVCALL(OPIVV1, vrev8_v_h, OP_UU_H, H2, H2, bswap16)
> +RVVCALL(OPIVV1, vrev8_v_w, OP_UU_W, H4, H4, bswap32)
> +RVVCALL(OPIVV1, vrev8_v_d, OP_UU_D, H8, H8, bswap64)
> +GEN_VEXT_V(vrev8_v_b, 1)
> +GEN_VEXT_V(vrev8_v_h, 2)
> +GEN_VEXT_V(vrev8_v_w, 4)
> +GEN_VEXT_V(vrev8_v_d, 8)
> +
> +#define DO_ANDN(a, b) ((a) & ~(b))
> +RVVCALL(OPIVV2, vandn_vv_b, OP_UUU_B, H1, H1, H1, DO_ANDN)
> +RVVCALL(OPIVV2, vandn_vv_h, OP_UUU_H, H2, H2, H2, DO_ANDN)
> +RVVCALL(OPIVV2, vandn_vv_w, OP_UUU_W, H4, H4, H4, DO_ANDN)
> +RVVCALL(OPIVV2, vandn_vv_d, OP_UUU_D, H8, H8, H8, DO_ANDN)
> +GEN_VEXT_VV(vandn_vv_b, 1)
> +GEN_VEXT_VV(vandn_vv_h, 2)
> +GEN_VEXT_VV(vandn_vv_w, 4)
> +GEN_VEXT_VV(vandn_vv_d, 8)
> +
> +RVVCALL(OPIVX2, vandn_vx_b, OP_UUU_B, H1, H1, DO_ANDN)
> +RVVCALL(OPIVX2, vandn_vx_h, OP_UUU_H, H2, H2, DO_ANDN)
> +RVVCALL(OPIVX2, vandn_vx_w, OP_UUU_W, H4, H4, DO_ANDN)
> +RVVCALL(OPIVX2, vandn_vx_d, OP_UUU_D, H8, H8, DO_ANDN)
> +GEN_VEXT_VX(vandn_vx_b, 1)
> +GEN_VEXT_VX(vandn_vx_h, 2)
> +GEN_VEXT_VX(vandn_vx_w, 4)
> +GEN_VEXT_VX(vandn_vx_d, 8)
> +
> +RVVCALL(OPIVV1, vbrev_v_b, OP_UU_B, H1, H1, revbit8)
> +RVVCALL(OPIVV1, vbrev_v_h, OP_UU_H, H2, H2, revbit16)
> +RVVCALL(OPIVV1, vbrev_v_w, OP_UU_W, H4, H4, revbit32)
> +RVVCALL(OPIVV1, vbrev_v_d, OP_UU_D, H8, H8, revbit64)
> +GEN_VEXT_V(vbrev_v_b, 1)
> +GEN_VEXT_V(vbrev_v_h, 2)
> +GEN_VEXT_V(vbrev_v_w, 4)
> +GEN_VEXT_V(vbrev_v_d, 8)
> +
> +RVVCALL(OPIVV1, vclz_v_b, OP_UU_B, H1, H1, clz8)
> +RVVCALL(OPIVV1, vclz_v_h, OP_UU_H, H2, H2, clz16)
> +RVVCALL(OPIVV1, vclz_v_w, OP_UU_W, H4, H4, clz32)
> +RVVCALL(OPIVV1, vclz_v_d, OP_UU_D, H8, H8, clz64)
> +GEN_VEXT_V(vclz_v_b, 1)
> +GEN_VEXT_V(vclz_v_h, 2)
> +GEN_VEXT_V(vclz_v_w, 4)
> +GEN_VEXT_V(vclz_v_d, 8)
> +
> +RVVCALL(OPIVV1, vctz_v_b, OP_UU_B, H1, H1, ctz8)
> +RVVCALL(OPIVV1, vctz_v_h, OP_UU_H, H2, H2, ctz16)
> +RVVCALL(OPIVV1, vctz_v_w, OP_UU_W, H4, H4, ctz32)
> +RVVCALL(OPIVV1, vctz_v_d, OP_UU_D, H8, H8, ctz64)
> +GEN_VEXT_V(vctz_v_b, 1)
> +GEN_VEXT_V(vctz_v_h, 2)
> +GEN_VEXT_V(vctz_v_w, 4)
> +GEN_VEXT_V(vctz_v_d, 8)
> +
> +RVVCALL(OPIVV1, vcpop_v_b, OP_UU_B, H1, H1, ctpop8)
> +RVVCALL(OPIVV1, vcpop_v_h, OP_UU_H, H2, H2, ctpop16)
> +RVVCALL(OPIVV1, vcpop_v_w, OP_UU_W, H4, H4, ctpop32)
> +RVVCALL(OPIVV1, vcpop_v_d, OP_UU_D, H8, H8, ctpop64)
> +GEN_VEXT_V(vcpop_v_b, 1)
> +GEN_VEXT_V(vcpop_v_h, 2)
> +GEN_VEXT_V(vcpop_v_w, 4)
> +GEN_VEXT_V(vcpop_v_d, 8)
> +
> +#define DO_SLL(N, M) (N << (M & (sizeof(N) * 8 - 1)))
> +RVVCALL(OPIVV2, vwsll_vv_b, WOP_UUU_B, H2, H1, H1, DO_SLL)
> +RVVCALL(OPIVV2, vwsll_vv_h, WOP_UUU_H, H4, H2, H2, DO_SLL)
> +RVVCALL(OPIVV2, vwsll_vv_w, WOP_UUU_W, H8, H4, H4, DO_SLL)
> +GEN_VEXT_VV(vwsll_vv_b, 2)
> +GEN_VEXT_VV(vwsll_vv_h, 4)
> +GEN_VEXT_VV(vwsll_vv_w, 8)
> +
> +RVVCALL(OPIVX2, vwsll_vx_b, WOP_UUU_B, H2, H1, DO_SLL)
> +RVVCALL(OPIVX2, vwsll_vx_h, WOP_UUU_H, H4, H2, DO_SLL)
> +RVVCALL(OPIVX2, vwsll_vx_w, WOP_UUU_W, H8, H4, DO_SLL)
> +GEN_VEXT_VX(vwsll_vx_b, 2)
> +GEN_VEXT_VX(vwsll_vx_h, 4)
> +GEN_VEXT_VX(vwsll_vx_w, 8)
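One subtlety in DO_SLL as instantiated here (my reading, so please check
it against the spec wording): because the WOP_UUU_* type lists make N the
already-widened type, sizeof(N) * 8 - 1 masks the shift amount modulo
2*SEW, which is what vwsll requires. A tiny sketch for SEW=8:
    uint16_t widened = (uint16_t)(uint8_t)0x01;  /* element widened to 2*SEW */
    unsigned shamt = 12;                         /* >= SEW but < 2*SEW */
    uint16_t result = widened << (shamt & 15);   /* 0x1000, not masked to a shift of 4 */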
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v8 14/15] crypto: Add SM4 constant parameter CK
2023-07-11 16:59 ` [PATCH v8 14/15] crypto: Add SM4 constant parameter CK Max Chou
@ 2023-08-02 14:45 ` Alistair Francis
0 siblings, 0 replies; 23+ messages in thread
From: Alistair Francis @ 2023-08-02 14:45 UTC (permalink / raw)
To: Max Chou
Cc: qemu-devel, qemu-riscv, dbarboza, Frank Chang,
Daniel P. Berrangé
On Tue, Jul 11, 2023 at 1:01 PM Max Chou <max.chou@sifive.com> wrote:
>
> Add the sm4_ck constant for use in SM4 cryptography across different targets.
>
> Signed-off-by: Max Chou <max.chou@sifive.com>
> Reviewed-by: Frank Chang <frank.chang@sifive.com>
> Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> crypto/sm4.c | 10 ++++++++++
> include/crypto/sm4.h | 1 +
> 2 files changed, 11 insertions(+)
>
> diff --git a/crypto/sm4.c b/crypto/sm4.c
> index 9f0cd452c78..2987306cf7a 100644
> --- a/crypto/sm4.c
> +++ b/crypto/sm4.c
> @@ -47,3 +47,13 @@ uint8_t const sm4_sbox[] = {
> 0x79, 0xee, 0x5f, 0x3e, 0xd7, 0xcb, 0x39, 0x48,
> };
>
> +uint32_t const sm4_ck[] = {
> + 0x00070e15, 0x1c232a31, 0x383f464d, 0x545b6269,
> + 0x70777e85, 0x8c939aa1, 0xa8afb6bd, 0xc4cbd2d9,
> + 0xe0e7eef5, 0xfc030a11, 0x181f262d, 0x343b4249,
> + 0x50575e65, 0x6c737a81, 0x888f969d, 0xa4abb2b9,
> + 0xc0c7ced5, 0xdce3eaf1, 0xf8ff060d, 0x141b2229,
> + 0x30373e45, 0x4c535a61, 0x686f767d, 0x848b9299,
> + 0xa0a7aeb5, 0xbcc3cad1, 0xd8dfe6ed, 0xf4fb0209,
> + 0x10171e25, 0x2c333a41, 0x484f565d, 0x646b7279
> +};
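For readers less familiar with SM4: these are the standard CK round
constants of the key schedule. As orientation only (this is not the QEMU
vector implementation; FK and the rotate helper are defined locally for
the sketch), the table is consumed together with sm4_subword() roughly
like this:
    static const uint32_t FK[4] = {
        0xa3b1bac6, 0x56aa3350, 0x677d9197, 0xb27022dc
    };
    static inline uint32_t rol32_local(uint32_t x, unsigned n)
    {
        return (x << n) | (x >> (32 - n));
    }
    /* derive the 32 round keys rk[] from the 128-bit master key mk[] */
    static void sm4_expand_key_sketch(const uint32_t mk[4], uint32_t rk[32])
    {
        uint32_t k[36];
        for (int i = 0; i < 4; i++) {
            k[i] = mk[i] ^ FK[i];
        }
        for (int i = 0; i < 32; i++) {
            uint32_t t = k[i + 1] ^ k[i + 2] ^ k[i + 3] ^ sm4_ck[i];
            t = sm4_subword(t);                           /* sbox on each byte */
            t ^= rol32_local(t, 13) ^ rol32_local(t, 23); /* linear transform L' */
            k[i + 4] = k[i] ^ t;
            rk[i] = k[i + 4];
        }
    }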
> diff --git a/include/crypto/sm4.h b/include/crypto/sm4.h
> index de8245d8a71..382b26d9224 100644
> --- a/include/crypto/sm4.h
> +++ b/include/crypto/sm4.h
> @@ -2,6 +2,7 @@
> #define QEMU_SM4_H
>
> extern const uint8_t sm4_sbox[256];
> +extern const uint32_t sm4_ck[32];
>
> static inline uint32_t sm4_subword(uint32_t word)
> {
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
` (14 preceding siblings ...)
2023-07-11 16:59 ` [PATCH v8 15/15] target/riscv: Add Zvksed ISA extension support Max Chou
@ 2023-08-02 14:54 ` Alistair Francis
2023-08-03 4:16 ` Max Chou
15 siblings, 1 reply; 23+ messages in thread
From: Alistair Francis @ 2023-08-02 14:54 UTC (permalink / raw)
To: Max Chou; +Cc: qemu-devel, qemu-riscv, dbarboza
On Tue, Jul 11, 2023 at 12:59 PM Max Chou <max.chou@sifive.com> wrote:
>
> This patchset provides an implementation for Zvbb, Zvbc, Zvkned, Zvknh,
> Zvksh, Zvkg, and Zvksed of the draft RISC-V vector cryptography
> extensions as per the v20230620 version of the specification(1)(168e7b4).
> This is an update to the patchset submitted to qemu-devel on
> Wed, 12 Jul 2023 00:31:21 +0800
>
> v2:
>
> Squashed commits into one commit per extension with separate
> commits for each refactoring.
> Unified trans_rvzvk*.c.inc files into one trans_rvvk.c.inc.
> Style fixes in insn32.decode and other files.
> Added macros for EGS values in translation functions.
> Updated from v20230303 to v20230407 of the spec:
> Zvkb has been split into Zvbb and Zvbc.
> vbrev, vclz, vctz, vcpop and vwsll have been added to Zvbb.
>
> v3:
>
> New patch 03/19 removes redundant “cpu_vl == 0” checks from
> trans_rvv.c.inc.
> Introduction of new tcg ops has been factored out of patch 11/19
> and into 09/19.
> These ops are now added to non riscv-specific files.
>
> v4:
>
> New patch 08/17 fixes the tcg_gen_gvec_andcs temporary variable
> issue.
> Patch 09/17 fixes imm mode for vror.vi.
> Rebased to riscv-to-apply.next branch (de395bb):
> Replace vstart constraint checking by TCG op.
> Verified by code examples provided by vector crypto spec repository
> (riscv-crypto).
>
> v5:
>
> Imported aes-round.h for Zvkned extension.
> Rebased to 20230620110758.787479-1-richard.henderson@linaro.org
> Exposed the properties of v4 patch 17/17 to the patches that the
> extension was added.
> Removed v4 patch 08/17 that is queued to tcg-next.
>
> v6:
>
> Resent the same content because v5 was split by broken
> git-send-email
>
> v7:
>
> Fixed endian issues
> Replace the TCG ops of vstart & vl EGS checking by helper function
> Changed the SEW selection of vsha2c[hl].vv to TCG translation
>
> v8:
>
> Rebased to the riscv-to-apply.next branch
> Fixed cross win32 build issue
> Removed redundant swap in AES key expansion (vaeskf1 & vaeskf2)
>
>
> As v20230620 is a frozen version, we are not expecting any significant
> changes to the specification or this patch series.
>
> Please note that the Zvkt data-independent execution latency extension
> (and all extensions including it) has not been implemented, and we
> would recommend not using these patches in an environment where timing
> attacks are an issue.
>
> Work performed by Dickon, Lawrence, Nazar, Kiran, and William from
> Codethink sponsored by SiFive, as well as Max Chou and Frank Chang
> from SiFive.
>
> https://github.com/riscv/riscv-crypto/releases
>
> Thanks to those who have already reviewed:
>
> Daniel Henrique Barboza dbarboza@ventanamicro.com
> [PATCH v4 09/17] target/riscv: Add Zvbb ISA extension support
> [PATCH v4 10/17] target/riscv: Add Zvkned ISA extension support
> [PATCH v4 11/17] target/riscv: Add Zvknh ISA extension support
> [PATCH v4 12/17] target/riscv: Add Zvksh ISA extension support
> [PATCH v4 13/17] target/riscv: Add Zvkg ISA extension support
> Weiwei Li liweiwei@iscas.ac.cn
> [PATCH v3 01/19] target/riscv: Refactor some of the generic vector functionality
> [PATCH v3 02/19] target/riscv: Refactor vector-vector translation macro
> [PATCH v3 03/19] target/riscv: Remove redundant "cpu_vl == 0" checks
> [PATCH v3 05/19] target/riscv: Move vector translation checks
> [PATCH v3 06/19] target/riscv: Refactor translation of vector-widening instruction
> [PATCH v3 07/19] target/riscv: Refactor some of the generic vector functionality
> [PATCH v3 19/19] target/riscv: Expose Zvk* and Zvb[b, c] cpu properties
> Richard Henderson richard.henderson@linaro.org
> [PATCH v2 02/17] target/riscv: Refactor vector-vector translation macro
> [PATCH v2 04/17] target/riscv: Move vector translation checks
> [PATCH v2 05/17] target/riscv: Refactor translation of vector-widening instruction
> [PATCH v2 07/17] qemu/bitops.h: Limit rotate amounts
> [PATCH v2 08/17] qemu/host-utils.h: Add clz and ctz functions for lower-bit integers
> [PATCH v2 14/17] crypto: Create sm4_subword
> Alistair Francis alistair.francis@wdc.com
> [PATCH v2 02/17] target/riscv: Refactor vector-vector translation macro
> Philipp Tomsich philipp.tomsich@vrull.eu
> Various v1 reviews
> Christoph Müllner christoph.muellner@vrull.eu
> Various v1 reviews
>
> Dickon Hood (2):
> target/riscv: Refactor translation of vector-widening instruction
> target/riscv: Add Zvbb ISA extension support
>
> Kiran Ostrolenk (4):
> target/riscv: Refactor some of the generic vector functionality
> target/riscv: Refactor vector-vector translation macro
> target/riscv: Refactor some of the generic vector functionality
> target/riscv: Add Zvknh ISA extension support
>
> Lawrence Hunter (2):
> target/riscv: Add Zvbc ISA extension support
> target/riscv: Add Zvksh ISA extension support
>
> Max Chou (3):
> crypto: Create sm4_subword
> crypto: Add SM4 constant parameter CK
> target/riscv: Add Zvksed ISA extension support
>
> Nazar Kazakov (4):
> target/riscv: Remove redundant "cpu_vl == 0" checks
> target/riscv: Move vector translation checks
> target/riscv: Add Zvkned ISA extension support
> target/riscv: Add Zvkg ISA extension support
Thanks!
Applied to riscv-to-apply.next
Alistair
>
> crypto/sm4.c | 10 +
> include/crypto/sm4.h | 9 +
> target/arm/tcg/crypto_helper.c | 10 +-
> target/riscv/cpu.c | 37 +
> target/riscv/cpu_cfg.h | 8 +
> target/riscv/helper.h | 98 +++
> target/riscv/insn32.decode | 58 ++
> target/riscv/insn_trans/trans_rvv.c.inc | 171 ++--
> target/riscv/insn_trans/trans_rvvk.c.inc | 606 ++++++++++++++
> target/riscv/meson.build | 4 +-
> target/riscv/translate.c | 1 +
> target/riscv/vcrypto_helper.c | 970 +++++++++++++++++++++++
> target/riscv/vector_helper.c | 243 +-----
> target/riscv/vector_internals.c | 81 ++
> target/riscv/vector_internals.h | 228 ++++++
> 15 files changed, 2179 insertions(+), 355 deletions(-)
> create mode 100644 target/riscv/insn_trans/trans_rvvk.c.inc
> create mode 100644 target/riscv/vcrypto_helper.c
> create mode 100644 target/riscv/vector_internals.c
> create mode 100644 target/riscv/vector_internals.h
>
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support
2023-08-02 14:54 ` [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Alistair Francis
@ 2023-08-03 4:16 ` Max Chou
0 siblings, 0 replies; 23+ messages in thread
From: Max Chou @ 2023-08-03 4:16 UTC (permalink / raw)
To: Alistair Francis; +Cc: qemu-devel, qemu-riscv, dbarboza
On 2023/8/2 10:54 PM, Alistair Francis wrote:
> On Tue, Jul 11, 2023 at 12:59 PM Max Chou <max.chou@sifive.com> wrote:
>> This patchset provides an implementation for Zvbb, Zvbc, Zvkned, Zvknh,
>> Zvksh, Zvkg, and Zvksed of the draft RISC-V vector cryptography
>> extensions as per the v20230620 version of the specification(1)(168e7b4).
>> This is an update to the patchset submitted to qemu-devel on
>> Wed, 12 Jul 2023 00:31:21 +0800
>>
>> v2:
>>
>> Squashed commits into one commit per extension with separate
>> commits for each refactoring.
>> Unified trans_rvzvk*.c.inc files into one trans_rvvk.c.inc.
>> Style fixes in insn32.decode and other files.
>> Added macros for EGS values in translation functions.
>> Updated from v20230303 to v20230407 of the spec:
>> Zvkb has been split into Zvbb and Zvbc.
>> vbrev, vclz, vctz, vcpop and vwsll have been added to Zvbb.
>>
>> v3:
>>
>> New patch 03/19 removes redundant “cpu_vl == 0” checks from
>> trans_rvv.c.inc.
>> Introduction of new tcg ops has been factored out of patch 11/19
>> and into 09/19.
>> These ops are now added to non riscv-specific files.
>>
>> v4:
>>
>> New patch 08/17 fixes the tcg_gen_gvec_andcs temporary variable
>> issue.
>> Patch 09/17 fixes imm mode for vror.vi.
>> Rebased to riscv-to-apply.next branch (de395bb):
>> Replace vstart constraint checking by TCG op.
>> Verified by code examples provided by vector crypto spec repository
>> (riscv-crypto).
>>
>> v5:
>>
>> Imported aes-round.h for Zvkned extension.
>> Rebased to 20230620110758.787479-1-richard.henderson@linaro.org
>> Exposed the properties of v4 patch 17/17 to the patches that the
>> extension was added.
>> Removed v4 patch 08/17 that is queued to tcg-next.
>>
>> v6:
>>
>> Resent the same content because v5 was split by broken
>> git-send-email
>>
>> v7:
>>
>> Fixed endian issues
>> Replace the TCG ops of vstart & vl EGS checking by helper function
>> Changed the SEW selection of vsha2c[hl].vv to TCG translation
>>
>> v8:
>>
>> Rebased to the riscv-to-apply.next branch
>> Fixed cross win32 build issue
>> Removed redundant swap in AES key expansion (vaeskf1 & vaeskf2)
>>
>>
>> As v20230620 is a frozen version, we are not expecting any significant
>> changes to the specification or this patch series.
>>
>> Please note that the Zvkt data-independent execution latency extension
>> (and all extensions including it) has not been implemented, and we
>> would recommend not using these patches in an environment where timing
>> attacks are an issue.
>>
>> Work performed by Dickon, Lawrence, Nazar, Kiran, and William from
>> Codethink sponsored by SiFive, as well as Max Chou and Frank Chang
>> from SiFive.
>>
>> https://github.com/riscv/riscv-crypto/releases
>>
>> Thanks to those who have already reviewed:
>>
>> Daniel Henrique Barboza dbarboza@ventanamicro.com
>> [PATCH v4 09/17] target/riscv: Add Zvbb ISA extension support
>> [PATCH v4 10/17] target/riscv: Add Zvkned ISA extension support
>> [PATCH v4 11/17] target/riscv: Add Zvknh ISA extension support
>> [PATCH v4 12/17] target/riscv: Add Zvksh ISA extension support
>> [PATCH v4 13/17] target/riscv: Add Zvkg ISA extension support
>> Weiwei Li liweiwei@iscas.ac.cn
>> [PATCH v3 01/19] target/riscv: Refactor some of the generic vector functionality
>> [PATCH v3 02/19] target/riscv: Refactor vector-vector translation macro
>> [PATCH v3 03/19] target/riscv: Remove redundant "cpu_vl == 0" checks
>> [PATCH v3 05/19] target/riscv: Move vector translation checks
>> [PATCH v3 06/19] target/riscv: Refactor translation of vector-widening instruction
>> [PATCH v3 07/19] target/riscv: Refactor some of the generic vector functionality
>> [PATCH v3 19/19] target/riscv: Expose Zvk* and Zvb[b, c] cpu properties
>> Richard Henderson richard.henderson@linaro.org
>> [PATCH v2 02/17] target/riscv: Refactor vector-vector translation macro
>> [PATCH v2 04/17] target/riscv: Move vector translation checks
>> [PATCH v2 05/17] target/riscv: Refactor translation of vector-widening instruction
>> [PATCH v2 07/17] qemu/bitops.h: Limit rotate amounts
>> [PATCH v2 08/17] qemu/host-utils.h: Add clz and ctz functions for lower-bit integers
>> [PATCH v2 14/17] crypto: Create sm4_subword
>> Alistair Francis alistair.francis@wdc.com
>> [PATCH v2 02/17] target/riscv: Refactor vector-vector translation macro
>> Philipp Tomsich philipp.tomsich@vrull.eu
>> Various v1 reviews
>> Christoph Müllner christoph.muellner@vrull.eu
>> Various v1 reviews
>>
>> Dickon Hood (2):
>> target/riscv: Refactor translation of vector-widening instruction
>> target/riscv: Add Zvbb ISA extension support
>>
>> Kiran Ostrolenk (4):
>> target/riscv: Refactor some of the generic vector functionality
>> target/riscv: Refactor vector-vector translation macro
>> target/riscv: Refactor some of the generic vector functionality
>> target/riscv: Add Zvknh ISA extension support
>>
>> Lawrence Hunter (2):
>> target/riscv: Add Zvbc ISA extension support
>> target/riscv: Add Zvksh ISA extension support
>>
>> Max Chou (3):
>> crypto: Create sm4_subword
>> crypto: Add SM4 constant parameter CK
>> target/riscv: Add Zvksed ISA extension support
>>
>> Nazar Kazakov (4):
>> target/riscv: Remove redundant "cpu_vl == 0" checks
>> target/riscv: Move vector translation checks
>> target/riscv: Add Zvkned ISA extension support
>> target/riscv: Add Zvkg ISA extension support
> Thanks!
>
> Applied to riscv-to-apply.next
>
> Alistair
Thanks!
We will send another patch set with the corresponding updates after the
vector cryptography spec is ratified (e.g. removing the `x-` prefix from
the cpu options), along with disassembler support and support for `Zvkt`
and the other shorthand extensions.
Max
>
>> crypto/sm4.c | 10 +
>> include/crypto/sm4.h | 9 +
>> target/arm/tcg/crypto_helper.c | 10 +-
>> target/riscv/cpu.c | 37 +
>> target/riscv/cpu_cfg.h | 8 +
>> target/riscv/helper.h | 98 +++
>> target/riscv/insn32.decode | 58 ++
>> target/riscv/insn_trans/trans_rvv.c.inc | 171 ++--
>> target/riscv/insn_trans/trans_rvvk.c.inc | 606 ++++++++++++++
>> target/riscv/meson.build | 4 +-
>> target/riscv/translate.c | 1 +
>> target/riscv/vcrypto_helper.c | 970 +++++++++++++++++++++++
>> target/riscv/vector_helper.c | 243 +-----
>> target/riscv/vector_internals.c | 81 ++
>> target/riscv/vector_internals.h | 228 ++++++
>> 15 files changed, 2179 insertions(+), 355 deletions(-)
>> create mode 100644 target/riscv/insn_trans/trans_rvvk.c.inc
>> create mode 100644 target/riscv/vcrypto_helper.c
>> create mode 100644 target/riscv/vector_internals.c
>> create mode 100644 target/riscv/vector_internals.h
>>
>> --
>> 2.34.1
>>
^ permalink raw reply [flat|nested] 23+ messages in thread
end of thread
Thread overview: 23+ messages
2023-07-11 16:58 [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Max Chou
2023-07-11 16:59 ` [PATCH v8 01/15] target/riscv: Refactor some of the generic vector functionality Max Chou
2023-07-19 0:54 ` Alistair Francis
2023-07-11 16:59 ` [PATCH v8 02/15] target/riscv: Refactor vector-vector translation macro Max Chou
2023-07-11 16:59 ` [PATCH v8 03/15] target/riscv: Remove redundant "cpu_vl == 0" checks Max Chou
2023-08-02 14:22 ` Alistair Francis
2023-07-11 16:59 ` [PATCH v8 04/15] target/riscv: Add Zvbc ISA extension support Max Chou
2023-08-02 14:29 ` Alistair Francis
2023-07-11 16:59 ` [PATCH v8 05/15] target/riscv: Move vector translation checks Max Chou
2023-07-11 16:59 ` [PATCH v8 06/15] target/riscv: Refactor translation of vector-widening instruction Max Chou
2023-07-11 16:59 ` [PATCH v8 07/15] target/riscv: Refactor some of the generic vector functionality Max Chou
2023-07-11 16:59 ` [PATCH v8 08/15] target/riscv: Add Zvbb ISA extension support Max Chou
2023-08-02 14:38 ` Alistair Francis
2023-07-11 16:59 ` [PATCH v8 09/15] target/riscv: Add Zvkned " Max Chou
2023-07-11 16:59 ` [PATCH v8 10/15] target/riscv: Add Zvknh " Max Chou
2023-07-11 16:59 ` [PATCH v8 11/15] target/riscv: Add Zvksh " Max Chou
2023-07-11 16:59 ` [PATCH v8 12/15] target/riscv: Add Zvkg " Max Chou
2023-07-11 16:59 ` [PATCH v8 13/15] crypto: Create sm4_subword Max Chou
2023-07-11 16:59 ` [PATCH v8 14/15] crypto: Add SM4 constant parameter CK Max Chou
2023-08-02 14:45 ` Alistair Francis
2023-07-11 16:59 ` [PATCH v8 15/15] target/riscv: Add Zvksed ISA extension support Max Chou
2023-08-02 14:54 ` [PATCH v8 00/15] Add RISC-V vector cryptographic instruction set support Alistair Francis
2023-08-03 4:16 ` Max Chou