* [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions
@ 2024-09-18 17:14 Max Chou
2024-09-18 17:14 ` [PATCH v6 1/7] target/riscv: Set vdata.vm field for vector load/store whole register instructions Max Chou
` (8 more replies)
0 siblings, 9 replies; 18+ messages in thread
From: Max Chou @ 2024-09-18 17:14 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li,
Daniel Henrique Barboza, Liu Zhiwei, richard.henderson, negge,
Max Chou
Hi,
This version fixes several issues found in v5:
- The cross page bound checking issue
- The mismatched vl comparison in the early exit check of vext_ldst_us
- The endianness issue when the host is big endian
Thanks to Richard Henderson's suggestions, this version unrolls the loop
in the helper functions of the unmasked vector unit-stride load/store
instructions, etc.
This version also extends the optimizations to the unmasked vector
fault-only-first load instruction.
Some performance results of this version:
1. Test case provided in
https://gitlab.com/qemu-project/qemu/-/issues/2137#note_1757501369
- QEMU user mode (vlen=128):
- Original: ~11.8 sec
- v5: ~1.3 sec
- v6: ~1.2 sec
- QEMU system mode (vlen=128):
- Original: ~29.4 sec
- v5: ~1.6 sec
- v6: ~1.6 sec
2. SPEC CPU2006 INT (test input)
- QEMU user mode (vlen=128)
- Original: ~459.1 sec
- v5: ~300.0 sec
- v6: ~280.6 sec
3. SPEC CPU2017 intspeed (test input)
- QEMU user mode (vlen=128)
- Original: ~2475.9 sec
- v5: ~1702.6 sec
- v6: ~1663.4 sec
This version is based on the riscv-to-apply.next branch (commit 90d5d3c).
Changes from v5:
- patch 2
- Replace the VSTART_CHECK_EARLY_EXIT function by checking the
correct evl in vext_ldst_us.
- patch 3
- Unroll the memory load/store loop
- Fix the bound checking issue in cross page elements
- Fix the endian issue in GEN_VEXT_LD_ELEM/GEN_VEXT_ST_ELEM
- Pass in mmu_index for vext_page_ldst_us
- Reduce the flag & host checking
- patch 4
- Unroll the memory load/store loop
- Fix the bound checking issue in cross page elements
- patch 5
- Extend optimizations to unmasked vector fault-only-first load
instruction
- patch 6
- Switch to memcpy only when doing byte load/store
- patch 7
- Inline the vext_page_ldst_us function
Previous versions:
- v1: https://lore.kernel.org/all/20240215192823.729209-1-max.chou@sifive.com/
- v2: https://lore.kernel.org/all/20240531174504.281461-1-max.chou@sifive.com/
- v3: https://lore.kernel.org/all/20240613141906.1276105-1-max.chou@sifive.com/
- v4: https://lore.kernel.org/all/20240613175122.1299212-1-max.chou@sifive.com/
- v5: https://lore.kernel.org/all/20240717133936.713642-1-max.chou@sifive.com/
Max Chou (7):
target/riscv: Set vdata.vm field for vector load/store whole register
instructions
target/riscv: rvv: Replace VSTART_CHECK_EARLY_EXIT in vext_ldst_us
target/riscv: rvv: Provide a fast path using direct access to host ram
for unmasked unit-stride load/store
target/riscv: rvv: Provide a fast path using direct access to host ram
for unit-stride whole register load/store
target/riscv: rvv: Provide a fast path using direct access to host ram
for unit-stride load-only-first load instructions
target/riscv: rvv: Provide group continuous ld/st flow for unit-stride
ld/st instructions
target/riscv: Inline unit-stride ld/st and corresponding functions for
performance
target/riscv/insn_trans/trans_rvv.c.inc | 3 +
target/riscv/vector_helper.c | 598 ++++++++++++++++--------
2 files changed, 400 insertions(+), 201 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v6 1/7] target/riscv: Set vdata.vm field for vector load/store whole register instructions
2024-09-18 17:14 [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions Max Chou
@ 2024-09-18 17:14 ` Max Chou
2024-10-29 18:58 ` Daniel Henrique Barboza
2024-09-18 17:14 ` [PATCH v6 2/7] target/riscv: rvv: Replace VSTART_CHECK_EARLY_EXIT in vext_ldst_us Max Chou
` (7 subsequent siblings)
8 siblings, 1 reply; 18+ messages in thread
From: Max Chou @ 2024-09-18 17:14 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li,
Daniel Henrique Barboza, Liu Zhiwei, richard.henderson, negge,
Max Chou
The vm field in the encoding of the vector load/store whole register
instructions is always 1.
The helper functions of the vector load/store whole register instructions
may need the vdata.vm field to perform some optimizations.
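As an illustrative sketch (not part of the patch), a helper can then read
the bit back from the descriptor through the existing VDATA accessors in
vector_helper.c:

    /* Sketch: recover the vm bit that the translator now deposits. */
    uint32_t vm = FIELD_EX32(simd_data(desc), VDATA, VM);  /* == 1 here */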
Signed-off-by: Max Chou <max.chou@sifive.com>
---
target/riscv/insn_trans/trans_rvv.c.inc | 3 +++
1 file changed, 3 insertions(+)
diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
index 3a3896ba06c..14e10568bd7 100644
--- a/target/riscv/insn_trans/trans_rvv.c.inc
+++ b/target/riscv/insn_trans/trans_rvv.c.inc
@@ -770,6 +770,7 @@ static bool ld_us_mask_op(DisasContext *s, arg_vlm_v *a, uint8_t eew)
/* Mask destination register are always tail-agnostic */
data = FIELD_DP32(data, VDATA, VTA, s->cfg_vta_all_1s);
data = FIELD_DP32(data, VDATA, VMA, s->vma);
+ data = FIELD_DP32(data, VDATA, VM, 1);
return ldst_us_trans(a->rd, a->rs1, data, fn, s, false);
}
@@ -787,6 +788,7 @@ static bool st_us_mask_op(DisasContext *s, arg_vsm_v *a, uint8_t eew)
/* EMUL = 1, NFIELDS = 1 */
data = FIELD_DP32(data, VDATA, LMUL, 0);
data = FIELD_DP32(data, VDATA, NF, 1);
+ data = FIELD_DP32(data, VDATA, VM, 1);
return ldst_us_trans(a->rd, a->rs1, data, fn, s, true);
}
@@ -1106,6 +1108,7 @@ static bool ldst_whole_trans(uint32_t vd, uint32_t rs1, uint32_t nf,
TCGv_i32 desc;
uint32_t data = FIELD_DP32(0, VDATA, NF, nf);
+ data = FIELD_DP32(data, VDATA, VM, 1);
dest = tcg_temp_new_ptr();
desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlenb,
s->cfg_ptr->vlenb, data));
--
2.34.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v6 2/7] target/riscv: rvv: Replace VSTART_CHECK_EARLY_EXIT in vext_ldst_us
2024-09-18 17:14 [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions Max Chou
2024-09-18 17:14 ` [PATCH v6 1/7] target/riscv: Set vdata.vm field for vector load/store whole register instructions Max Chou
@ 2024-09-18 17:14 ` Max Chou
2024-10-29 18:59 ` Daniel Henrique Barboza
2024-09-18 17:14 ` [PATCH v6 3/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unmasked unit-stride load/store Max Chou
` (6 subsequent siblings)
8 siblings, 1 reply; 18+ messages in thread
From: Max Chou @ 2024-09-18 17:14 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li,
Daniel Henrique Barboza, Liu Zhiwei, richard.henderson, negge,
Max Chou
Because the effective vl (evl) used by vext_ldst_us may differ from
env->vl (e.g. for vlm.v/vsm.v), the VSTART_CHECK_EARLY_EXIT check should
be replaced by an early-exit check against evl in vext_ldst_us.
Signed-off-by: Max Chou <max.chou@sifive.com>
---
target/riscv/vector_helper.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 87d2399c7e3..967bb2687ae 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -276,7 +276,10 @@ vext_ldst_us(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
uint32_t max_elems = vext_max_elems(desc, log2_esz);
uint32_t esz = 1 << log2_esz;
- VSTART_CHECK_EARLY_EXIT(env);
+ if (env->vstart >= evl) {
+ env->vstart = 0;
+ return;
+ }
/* load bytes from guest memory */
for (i = env->vstart; i < evl; env->vstart = ++i) {
--
2.34.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v6 3/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unmasked unit-stride load/store
2024-09-18 17:14 [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions Max Chou
2024-09-18 17:14 ` [PATCH v6 1/7] target/riscv: Set vdata.vm field for vector load/store whole register instructions Max Chou
2024-09-18 17:14 ` [PATCH v6 2/7] target/riscv: rvv: Replace VSTART_CHECK_EARLY_EXIT in vext_ldst_us Max Chou
@ 2024-09-18 17:14 ` Max Chou
2024-10-30 16:26 ` Daniel Henrique Barboza
2024-09-18 17:14 ` [PATCH v6 4/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unit-stride whole register load/store Max Chou
` (5 subsequent siblings)
8 siblings, 1 reply; 18+ messages in thread
From: Max Chou @ 2024-09-18 17:14 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li,
Daniel Henrique Barboza, Liu Zhiwei, richard.henderson, negge,
Max Chou
This commit follows the sve_ldN_r/sve_stN_r helper functions in the ARM
target to optimize the unmasked vector unit-stride load/store
implementation with the following optimizations:
* Compute the page boundary
* Probe the pages and resolve the host memory address up front when
  possible
* Provide a new interface for direct host memory access
* Fall back to the original slow TLB access path for elements that cross
  a page boundary, or when the page permission/PMP/watchpoint checks do
  not allow direct access
The original element load/store interface is replaced by new element
load/store functions with _tlb and _host suffixes, which perform the
element load/store through the original softmmu flow and the direct host
memory access flow, respectively.
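A condensed, illustrative sketch of the per-page control flow implemented
below (pseudo-C only, shown for NF = 1 with abbreviated arguments; the
real code is in vext_page_ldst_us()):

    /* Probe the page once; this also checks permissions/PMP/watchpoints. */
    flags = probe_access_flags(env, addr, size, access_type,
                               mmu_index, true, &host, ra);
    if (flags == 0) {
        /* Plain RAM page: use the new *_host element accessors. */
        for (i = env->vstart; i < evl; i++, host += esz) {
            ldst_host(vd, i, host);
        }
    } else {
        /* MMIO, watchpoint, etc.: fall back to the *_tlb (softmmu) path. */
        for (i = env->vstart; i < evl; i++, addr += esz) {
            ldst_tlb(env, adjust_addr(env, addr), i, vd, ra);
        }
    }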
Signed-off-by: Max Chou <max.chou@sifive.com>
---
target/riscv/vector_helper.c | 363 +++++++++++++++++++++--------------
1 file changed, 224 insertions(+), 139 deletions(-)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 967bb2687ae..c2fcf8b3a00 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -147,34 +147,47 @@ static inline void vext_set_elem_mask(void *v0, int index,
}
/* elements operations for load and store */
-typedef void vext_ldst_elem_fn(CPURISCVState *env, abi_ptr addr,
- uint32_t idx, void *vd, uintptr_t retaddr);
+typedef void vext_ldst_elem_fn_tlb(CPURISCVState *env, abi_ptr addr,
+ uint32_t idx, void *vd, uintptr_t retaddr);
+typedef void vext_ldst_elem_fn_host(void *vd, uint32_t idx, void *host);
-#define GEN_VEXT_LD_ELEM(NAME, ETYPE, H, LDSUF) \
-static void NAME(CPURISCVState *env, abi_ptr addr, \
- uint32_t idx, void *vd, uintptr_t retaddr)\
-{ \
- ETYPE *cur = ((ETYPE *)vd + H(idx)); \
- *cur = cpu_##LDSUF##_data_ra(env, addr, retaddr); \
-} \
-
-GEN_VEXT_LD_ELEM(lde_b, int8_t, H1, ldsb)
-GEN_VEXT_LD_ELEM(lde_h, int16_t, H2, ldsw)
-GEN_VEXT_LD_ELEM(lde_w, int32_t, H4, ldl)
-GEN_VEXT_LD_ELEM(lde_d, int64_t, H8, ldq)
-
-#define GEN_VEXT_ST_ELEM(NAME, ETYPE, H, STSUF) \
-static void NAME(CPURISCVState *env, abi_ptr addr, \
- uint32_t idx, void *vd, uintptr_t retaddr)\
-{ \
- ETYPE data = *((ETYPE *)vd + H(idx)); \
- cpu_##STSUF##_data_ra(env, addr, data, retaddr); \
+#define GEN_VEXT_LD_ELEM(NAME, ETYPE, H, LDSUF) \
+static void NAME##_tlb(CPURISCVState *env, abi_ptr addr, \
+ uint32_t idx, void *vd, uintptr_t retaddr) \
+{ \
+ ETYPE *cur = ((ETYPE *)vd + H(idx)); \
+ *cur = cpu_##LDSUF##_data_ra(env, addr, retaddr); \
+} \
+ \
+static void NAME##_host(void *vd, uint32_t idx, void *host) \
+{ \
+ ETYPE *cur = ((ETYPE *)vd + H(idx)); \
+ *cur = (ETYPE)LDSUF##_p(host); \
+}
+
+GEN_VEXT_LD_ELEM(lde_b, uint8_t, H1, ldub)
+GEN_VEXT_LD_ELEM(lde_h, uint16_t, H2, lduw)
+GEN_VEXT_LD_ELEM(lde_w, uint32_t, H4, ldl)
+GEN_VEXT_LD_ELEM(lde_d, uint64_t, H8, ldq)
+
+#define GEN_VEXT_ST_ELEM(NAME, ETYPE, H, STSUF) \
+static void NAME##_tlb(CPURISCVState *env, abi_ptr addr, \
+ uint32_t idx, void *vd, uintptr_t retaddr) \
+{ \
+ ETYPE data = *((ETYPE *)vd + H(idx)); \
+ cpu_##STSUF##_data_ra(env, addr, data, retaddr); \
+} \
+ \
+static void NAME##_host(void *vd, uint32_t idx, void *host) \
+{ \
+ ETYPE data = *((ETYPE *)vd + H(idx)); \
+ STSUF##_p(host, data); \
}
-GEN_VEXT_ST_ELEM(ste_b, int8_t, H1, stb)
-GEN_VEXT_ST_ELEM(ste_h, int16_t, H2, stw)
-GEN_VEXT_ST_ELEM(ste_w, int32_t, H4, stl)
-GEN_VEXT_ST_ELEM(ste_d, int64_t, H8, stq)
+GEN_VEXT_ST_ELEM(ste_b, uint8_t, H1, stb)
+GEN_VEXT_ST_ELEM(ste_h, uint16_t, H2, stw)
+GEN_VEXT_ST_ELEM(ste_w, uint32_t, H4, stl)
+GEN_VEXT_ST_ELEM(ste_d, uint64_t, H8, stq)
static void vext_set_tail_elems_1s(target_ulong vl, void *vd,
uint32_t desc, uint32_t nf,
@@ -197,11 +210,10 @@ static void vext_set_tail_elems_1s(target_ulong vl, void *vd,
* stride: access vector element from strided memory
*/
static void
-vext_ldst_stride(void *vd, void *v0, target_ulong base,
- target_ulong stride, CPURISCVState *env,
- uint32_t desc, uint32_t vm,
- vext_ldst_elem_fn *ldst_elem,
- uint32_t log2_esz, uintptr_t ra)
+vext_ldst_stride(void *vd, void *v0, target_ulong base, target_ulong stride,
+ CPURISCVState *env, uint32_t desc, uint32_t vm,
+ vext_ldst_elem_fn_tlb *ldst_elem, uint32_t log2_esz,
+ uintptr_t ra)
{
uint32_t i, k;
uint32_t nf = vext_nf(desc);
@@ -241,10 +253,10 @@ void HELPER(NAME)(void *vd, void * v0, target_ulong base, \
ctzl(sizeof(ETYPE)), GETPC()); \
}
-GEN_VEXT_LD_STRIDE(vlse8_v, int8_t, lde_b)
-GEN_VEXT_LD_STRIDE(vlse16_v, int16_t, lde_h)
-GEN_VEXT_LD_STRIDE(vlse32_v, int32_t, lde_w)
-GEN_VEXT_LD_STRIDE(vlse64_v, int64_t, lde_d)
+GEN_VEXT_LD_STRIDE(vlse8_v, int8_t, lde_b_tlb)
+GEN_VEXT_LD_STRIDE(vlse16_v, int16_t, lde_h_tlb)
+GEN_VEXT_LD_STRIDE(vlse32_v, int32_t, lde_w_tlb)
+GEN_VEXT_LD_STRIDE(vlse64_v, int64_t, lde_d_tlb)
#define GEN_VEXT_ST_STRIDE(NAME, ETYPE, STORE_FN) \
void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
@@ -256,42 +268,114 @@ void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
ctzl(sizeof(ETYPE)), GETPC()); \
}
-GEN_VEXT_ST_STRIDE(vsse8_v, int8_t, ste_b)
-GEN_VEXT_ST_STRIDE(vsse16_v, int16_t, ste_h)
-GEN_VEXT_ST_STRIDE(vsse32_v, int32_t, ste_w)
-GEN_VEXT_ST_STRIDE(vsse64_v, int64_t, ste_d)
+GEN_VEXT_ST_STRIDE(vsse8_v, int8_t, ste_b_tlb)
+GEN_VEXT_ST_STRIDE(vsse16_v, int16_t, ste_h_tlb)
+GEN_VEXT_ST_STRIDE(vsse32_v, int32_t, ste_w_tlb)
+GEN_VEXT_ST_STRIDE(vsse64_v, int64_t, ste_d_tlb)
/*
* unit-stride: access elements stored contiguously in memory
*/
/* unmasked unit-stride load and store operation */
+static void
+vext_page_ldst_us(CPURISCVState *env, void *vd, target_ulong addr,
+ uint32_t elems, uint32_t nf, uint32_t max_elems,
+ uint32_t log2_esz, bool is_load, int mmu_index,
+ vext_ldst_elem_fn_tlb *ldst_tlb,
+ vext_ldst_elem_fn_host *ldst_host, uintptr_t ra)
+{
+ void *host;
+ int i, k, flags;
+ uint32_t esz = 1 << log2_esz;
+ uint32_t size = (elems * nf) << log2_esz;
+ uint32_t evl = env->vstart + elems;
+ MMUAccessType access_type = is_load ? MMU_DATA_LOAD : MMU_DATA_STORE;
+
+ /* Check page permission/pmp/watchpoint/etc. */
+ flags = probe_access_flags(env, adjust_addr(env, addr), size, access_type,
+ mmu_index, true, &host, ra);
+
+ if (flags == 0) {
+ for (i = env->vstart; i < evl; ++i) {
+ k = 0;
+ while (k < nf) {
+ ldst_host(vd, i + k * max_elems, host);
+ host += esz;
+ k++;
+ }
+ }
+ env->vstart += elems;
+ } else {
+ /* load bytes from guest memory */
+ for (i = env->vstart; i < evl; env->vstart = ++i) {
+ k = 0;
+ while (k < nf) {
+ ldst_tlb(env, adjust_addr(env, addr), i + k * max_elems, vd,
+ ra);
+ addr += esz;
+ k++;
+ }
+ }
+ }
+}
+
static void
vext_ldst_us(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
- vext_ldst_elem_fn *ldst_elem, uint32_t log2_esz, uint32_t evl,
- uintptr_t ra)
+ vext_ldst_elem_fn_tlb *ldst_tlb,
+ vext_ldst_elem_fn_host *ldst_host, uint32_t log2_esz,
+ uint32_t evl, uintptr_t ra, bool is_load)
{
- uint32_t i, k;
+ uint32_t k;
+ target_ulong page_split, elems, addr;
uint32_t nf = vext_nf(desc);
uint32_t max_elems = vext_max_elems(desc, log2_esz);
uint32_t esz = 1 << log2_esz;
+ uint32_t msize = nf * esz;
+ int mmu_index = riscv_env_mmu_index(env, false);
if (env->vstart >= evl) {
env->vstart = 0;
return;
}
- /* load bytes from guest memory */
- for (i = env->vstart; i < evl; env->vstart = ++i) {
- k = 0;
- while (k < nf) {
- target_ulong addr = base + ((i * nf + k) << log2_esz);
- ldst_elem(env, adjust_addr(env, addr), i + k * max_elems, vd, ra);
- k++;
+ /* Calculate the page range of first page */
+ addr = base + ((env->vstart * nf) << log2_esz);
+ page_split = -(addr | TARGET_PAGE_MASK);
+ /* Get number of elements */
+ elems = page_split / msize;
+ if (unlikely(env->vstart + elems >= evl)) {
+ elems = evl - env->vstart;
+ }
+
+ /* Load/store elements in the first page */
+ if (likely(elems)) {
+ vext_page_ldst_us(env, vd, addr, elems, nf, max_elems, log2_esz,
+ is_load, mmu_index, ldst_tlb, ldst_host, ra);
+ }
+
+ /* Load/store elements in the second page */
+ if (unlikely(env->vstart < evl)) {
+ /* Cross page element */
+ if (unlikely(page_split % msize)) {
+ for (k = 0; k < nf; k++) {
+ addr = base + ((env->vstart * nf + k) << log2_esz);
+ ldst_tlb(env, adjust_addr(env, addr),
+ env->vstart + k * max_elems, vd, ra);
+ }
+ env->vstart++;
}
+
+ addr = base + ((env->vstart * nf) << log2_esz);
+ /* Get number of elements of second page */
+ elems = evl - env->vstart;
+
+ /* Load/store elements in the second page */
+ vext_page_ldst_us(env, vd, addr, elems, nf, max_elems, log2_esz,
+ is_load, mmu_index, ldst_tlb, ldst_host, ra);
}
- env->vstart = 0;
+ env->vstart = 0;
vext_set_tail_elems_1s(evl, vd, desc, nf, esz, max_elems);
}
@@ -300,47 +384,47 @@ vext_ldst_us(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
* stride, stride = NF * sizeof (ETYPE)
*/
-#define GEN_VEXT_LD_US(NAME, ETYPE, LOAD_FN) \
-void HELPER(NAME##_mask)(void *vd, void *v0, target_ulong base, \
- CPURISCVState *env, uint32_t desc) \
-{ \
- uint32_t stride = vext_nf(desc) << ctzl(sizeof(ETYPE)); \
- vext_ldst_stride(vd, v0, base, stride, env, desc, false, LOAD_FN, \
- ctzl(sizeof(ETYPE)), GETPC()); \
-} \
- \
-void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
- CPURISCVState *env, uint32_t desc) \
-{ \
- vext_ldst_us(vd, base, env, desc, LOAD_FN, \
- ctzl(sizeof(ETYPE)), env->vl, GETPC()); \
+#define GEN_VEXT_LD_US(NAME, ETYPE, LOAD_FN_TLB, LOAD_FN_HOST) \
+void HELPER(NAME##_mask)(void *vd, void *v0, target_ulong base, \
+ CPURISCVState *env, uint32_t desc) \
+{ \
+ uint32_t stride = vext_nf(desc) << ctzl(sizeof(ETYPE)); \
+ vext_ldst_stride(vd, v0, base, stride, env, desc, false, \
+ LOAD_FN_TLB, ctzl(sizeof(ETYPE)), GETPC()); \
+} \
+ \
+void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
+ CPURISCVState *env, uint32_t desc) \
+{ \
+ vext_ldst_us(vd, base, env, desc, LOAD_FN_TLB, LOAD_FN_HOST, \
+ ctzl(sizeof(ETYPE)), env->vl, GETPC(), true); \
}
-GEN_VEXT_LD_US(vle8_v, int8_t, lde_b)
-GEN_VEXT_LD_US(vle16_v, int16_t, lde_h)
-GEN_VEXT_LD_US(vle32_v, int32_t, lde_w)
-GEN_VEXT_LD_US(vle64_v, int64_t, lde_d)
+GEN_VEXT_LD_US(vle8_v, int8_t, lde_b_tlb, lde_b_host)
+GEN_VEXT_LD_US(vle16_v, int16_t, lde_h_tlb, lde_h_host)
+GEN_VEXT_LD_US(vle32_v, int32_t, lde_w_tlb, lde_w_host)
+GEN_VEXT_LD_US(vle64_v, int64_t, lde_d_tlb, lde_d_host)
-#define GEN_VEXT_ST_US(NAME, ETYPE, STORE_FN) \
+#define GEN_VEXT_ST_US(NAME, ETYPE, STORE_FN_TLB, STORE_FN_HOST) \
void HELPER(NAME##_mask)(void *vd, void *v0, target_ulong base, \
CPURISCVState *env, uint32_t desc) \
{ \
uint32_t stride = vext_nf(desc) << ctzl(sizeof(ETYPE)); \
- vext_ldst_stride(vd, v0, base, stride, env, desc, false, STORE_FN, \
- ctzl(sizeof(ETYPE)), GETPC()); \
+ vext_ldst_stride(vd, v0, base, stride, env, desc, false, \
+ STORE_FN_TLB, ctzl(sizeof(ETYPE)), GETPC()); \
} \
\
void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
CPURISCVState *env, uint32_t desc) \
{ \
- vext_ldst_us(vd, base, env, desc, STORE_FN, \
- ctzl(sizeof(ETYPE)), env->vl, GETPC()); \
+ vext_ldst_us(vd, base, env, desc, STORE_FN_TLB, STORE_FN_HOST, \
+ ctzl(sizeof(ETYPE)), env->vl, GETPC(), false); \
}
-GEN_VEXT_ST_US(vse8_v, int8_t, ste_b)
-GEN_VEXT_ST_US(vse16_v, int16_t, ste_h)
-GEN_VEXT_ST_US(vse32_v, int32_t, ste_w)
-GEN_VEXT_ST_US(vse64_v, int64_t, ste_d)
+GEN_VEXT_ST_US(vse8_v, int8_t, ste_b_tlb, ste_b_host)
+GEN_VEXT_ST_US(vse16_v, int16_t, ste_h_tlb, ste_h_host)
+GEN_VEXT_ST_US(vse32_v, int32_t, ste_w_tlb, ste_w_host)
+GEN_VEXT_ST_US(vse64_v, int64_t, ste_d_tlb, ste_d_host)
/*
* unit stride mask load and store, EEW = 1
@@ -350,8 +434,8 @@ void HELPER(vlm_v)(void *vd, void *v0, target_ulong base,
{
/* evl = ceil(vl/8) */
uint8_t evl = (env->vl + 7) >> 3;
- vext_ldst_us(vd, base, env, desc, lde_b,
- 0, evl, GETPC());
+ vext_ldst_us(vd, base, env, desc, lde_b_tlb, lde_b_host,
+ 0, evl, GETPC(), true);
}
void HELPER(vsm_v)(void *vd, void *v0, target_ulong base,
@@ -359,8 +443,8 @@ void HELPER(vsm_v)(void *vd, void *v0, target_ulong base,
{
/* evl = ceil(vl/8) */
uint8_t evl = (env->vl + 7) >> 3;
- vext_ldst_us(vd, base, env, desc, ste_b,
- 0, evl, GETPC());
+ vext_ldst_us(vd, base, env, desc, ste_b_tlb, ste_b_host,
+ 0, evl, GETPC(), false);
}
/*
@@ -385,7 +469,7 @@ static inline void
vext_ldst_index(void *vd, void *v0, target_ulong base,
void *vs2, CPURISCVState *env, uint32_t desc,
vext_get_index_addr get_index_addr,
- vext_ldst_elem_fn *ldst_elem,
+ vext_ldst_elem_fn_tlb *ldst_elem,
uint32_t log2_esz, uintptr_t ra)
{
uint32_t i, k;
@@ -426,22 +510,22 @@ void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
LOAD_FN, ctzl(sizeof(ETYPE)), GETPC()); \
}
-GEN_VEXT_LD_INDEX(vlxei8_8_v, int8_t, idx_b, lde_b)
-GEN_VEXT_LD_INDEX(vlxei8_16_v, int16_t, idx_b, lde_h)
-GEN_VEXT_LD_INDEX(vlxei8_32_v, int32_t, idx_b, lde_w)
-GEN_VEXT_LD_INDEX(vlxei8_64_v, int64_t, idx_b, lde_d)
-GEN_VEXT_LD_INDEX(vlxei16_8_v, int8_t, idx_h, lde_b)
-GEN_VEXT_LD_INDEX(vlxei16_16_v, int16_t, idx_h, lde_h)
-GEN_VEXT_LD_INDEX(vlxei16_32_v, int32_t, idx_h, lde_w)
-GEN_VEXT_LD_INDEX(vlxei16_64_v, int64_t, idx_h, lde_d)
-GEN_VEXT_LD_INDEX(vlxei32_8_v, int8_t, idx_w, lde_b)
-GEN_VEXT_LD_INDEX(vlxei32_16_v, int16_t, idx_w, lde_h)
-GEN_VEXT_LD_INDEX(vlxei32_32_v, int32_t, idx_w, lde_w)
-GEN_VEXT_LD_INDEX(vlxei32_64_v, int64_t, idx_w, lde_d)
-GEN_VEXT_LD_INDEX(vlxei64_8_v, int8_t, idx_d, lde_b)
-GEN_VEXT_LD_INDEX(vlxei64_16_v, int16_t, idx_d, lde_h)
-GEN_VEXT_LD_INDEX(vlxei64_32_v, int32_t, idx_d, lde_w)
-GEN_VEXT_LD_INDEX(vlxei64_64_v, int64_t, idx_d, lde_d)
+GEN_VEXT_LD_INDEX(vlxei8_8_v, int8_t, idx_b, lde_b_tlb)
+GEN_VEXT_LD_INDEX(vlxei8_16_v, int16_t, idx_b, lde_h_tlb)
+GEN_VEXT_LD_INDEX(vlxei8_32_v, int32_t, idx_b, lde_w_tlb)
+GEN_VEXT_LD_INDEX(vlxei8_64_v, int64_t, idx_b, lde_d_tlb)
+GEN_VEXT_LD_INDEX(vlxei16_8_v, int8_t, idx_h, lde_b_tlb)
+GEN_VEXT_LD_INDEX(vlxei16_16_v, int16_t, idx_h, lde_h_tlb)
+GEN_VEXT_LD_INDEX(vlxei16_32_v, int32_t, idx_h, lde_w_tlb)
+GEN_VEXT_LD_INDEX(vlxei16_64_v, int64_t, idx_h, lde_d_tlb)
+GEN_VEXT_LD_INDEX(vlxei32_8_v, int8_t, idx_w, lde_b_tlb)
+GEN_VEXT_LD_INDEX(vlxei32_16_v, int16_t, idx_w, lde_h_tlb)
+GEN_VEXT_LD_INDEX(vlxei32_32_v, int32_t, idx_w, lde_w_tlb)
+GEN_VEXT_LD_INDEX(vlxei32_64_v, int64_t, idx_w, lde_d_tlb)
+GEN_VEXT_LD_INDEX(vlxei64_8_v, int8_t, idx_d, lde_b_tlb)
+GEN_VEXT_LD_INDEX(vlxei64_16_v, int16_t, idx_d, lde_h_tlb)
+GEN_VEXT_LD_INDEX(vlxei64_32_v, int32_t, idx_d, lde_w_tlb)
+GEN_VEXT_LD_INDEX(vlxei64_64_v, int64_t, idx_d, lde_d_tlb)
#define GEN_VEXT_ST_INDEX(NAME, ETYPE, INDEX_FN, STORE_FN) \
void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
@@ -452,22 +536,22 @@ void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
GETPC()); \
}
-GEN_VEXT_ST_INDEX(vsxei8_8_v, int8_t, idx_b, ste_b)
-GEN_VEXT_ST_INDEX(vsxei8_16_v, int16_t, idx_b, ste_h)
-GEN_VEXT_ST_INDEX(vsxei8_32_v, int32_t, idx_b, ste_w)
-GEN_VEXT_ST_INDEX(vsxei8_64_v, int64_t, idx_b, ste_d)
-GEN_VEXT_ST_INDEX(vsxei16_8_v, int8_t, idx_h, ste_b)
-GEN_VEXT_ST_INDEX(vsxei16_16_v, int16_t, idx_h, ste_h)
-GEN_VEXT_ST_INDEX(vsxei16_32_v, int32_t, idx_h, ste_w)
-GEN_VEXT_ST_INDEX(vsxei16_64_v, int64_t, idx_h, ste_d)
-GEN_VEXT_ST_INDEX(vsxei32_8_v, int8_t, idx_w, ste_b)
-GEN_VEXT_ST_INDEX(vsxei32_16_v, int16_t, idx_w, ste_h)
-GEN_VEXT_ST_INDEX(vsxei32_32_v, int32_t, idx_w, ste_w)
-GEN_VEXT_ST_INDEX(vsxei32_64_v, int64_t, idx_w, ste_d)
-GEN_VEXT_ST_INDEX(vsxei64_8_v, int8_t, idx_d, ste_b)
-GEN_VEXT_ST_INDEX(vsxei64_16_v, int16_t, idx_d, ste_h)
-GEN_VEXT_ST_INDEX(vsxei64_32_v, int32_t, idx_d, ste_w)
-GEN_VEXT_ST_INDEX(vsxei64_64_v, int64_t, idx_d, ste_d)
+GEN_VEXT_ST_INDEX(vsxei8_8_v, int8_t, idx_b, ste_b_tlb)
+GEN_VEXT_ST_INDEX(vsxei8_16_v, int16_t, idx_b, ste_h_tlb)
+GEN_VEXT_ST_INDEX(vsxei8_32_v, int32_t, idx_b, ste_w_tlb)
+GEN_VEXT_ST_INDEX(vsxei8_64_v, int64_t, idx_b, ste_d_tlb)
+GEN_VEXT_ST_INDEX(vsxei16_8_v, int8_t, idx_h, ste_b_tlb)
+GEN_VEXT_ST_INDEX(vsxei16_16_v, int16_t, idx_h, ste_h_tlb)
+GEN_VEXT_ST_INDEX(vsxei16_32_v, int32_t, idx_h, ste_w_tlb)
+GEN_VEXT_ST_INDEX(vsxei16_64_v, int64_t, idx_h, ste_d_tlb)
+GEN_VEXT_ST_INDEX(vsxei32_8_v, int8_t, idx_w, ste_b_tlb)
+GEN_VEXT_ST_INDEX(vsxei32_16_v, int16_t, idx_w, ste_h_tlb)
+GEN_VEXT_ST_INDEX(vsxei32_32_v, int32_t, idx_w, ste_w_tlb)
+GEN_VEXT_ST_INDEX(vsxei32_64_v, int64_t, idx_w, ste_d_tlb)
+GEN_VEXT_ST_INDEX(vsxei64_8_v, int8_t, idx_d, ste_b_tlb)
+GEN_VEXT_ST_INDEX(vsxei64_16_v, int16_t, idx_d, ste_h_tlb)
+GEN_VEXT_ST_INDEX(vsxei64_32_v, int32_t, idx_d, ste_w_tlb)
+GEN_VEXT_ST_INDEX(vsxei64_64_v, int64_t, idx_d, ste_d_tlb)
/*
* unit-stride fault-only-fisrt load instructions
@@ -475,7 +559,7 @@ GEN_VEXT_ST_INDEX(vsxei64_64_v, int64_t, idx_d, ste_d)
static inline void
vext_ldff(void *vd, void *v0, target_ulong base,
CPURISCVState *env, uint32_t desc,
- vext_ldst_elem_fn *ldst_elem,
+ vext_ldst_elem_fn_tlb *ldst_elem,
uint32_t log2_esz, uintptr_t ra)
{
uint32_t i, k, vl = 0;
@@ -561,10 +645,10 @@ void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
ctzl(sizeof(ETYPE)), GETPC()); \
}
-GEN_VEXT_LDFF(vle8ff_v, int8_t, lde_b)
-GEN_VEXT_LDFF(vle16ff_v, int16_t, lde_h)
-GEN_VEXT_LDFF(vle32ff_v, int32_t, lde_w)
-GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d)
+GEN_VEXT_LDFF(vle8ff_v, int8_t, lde_b_tlb)
+GEN_VEXT_LDFF(vle16ff_v, int16_t, lde_h_tlb)
+GEN_VEXT_LDFF(vle32ff_v, int32_t, lde_w_tlb)
+GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d_tlb)
#define DO_SWAP(N, M) (M)
#define DO_AND(N, M) (N & M)
@@ -581,7 +665,8 @@ GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d)
*/
static void
vext_ldst_whole(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
- vext_ldst_elem_fn *ldst_elem, uint32_t log2_esz, uintptr_t ra)
+ vext_ldst_elem_fn_tlb *ldst_elem, uint32_t log2_esz,
+ uintptr_t ra)
{
uint32_t i, k, off, pos;
uint32_t nf = vext_nf(desc);
@@ -625,22 +710,22 @@ void HELPER(NAME)(void *vd, target_ulong base, \
ctzl(sizeof(ETYPE)), GETPC()); \
}
-GEN_VEXT_LD_WHOLE(vl1re8_v, int8_t, lde_b)
-GEN_VEXT_LD_WHOLE(vl1re16_v, int16_t, lde_h)
-GEN_VEXT_LD_WHOLE(vl1re32_v, int32_t, lde_w)
-GEN_VEXT_LD_WHOLE(vl1re64_v, int64_t, lde_d)
-GEN_VEXT_LD_WHOLE(vl2re8_v, int8_t, lde_b)
-GEN_VEXT_LD_WHOLE(vl2re16_v, int16_t, lde_h)
-GEN_VEXT_LD_WHOLE(vl2re32_v, int32_t, lde_w)
-GEN_VEXT_LD_WHOLE(vl2re64_v, int64_t, lde_d)
-GEN_VEXT_LD_WHOLE(vl4re8_v, int8_t, lde_b)
-GEN_VEXT_LD_WHOLE(vl4re16_v, int16_t, lde_h)
-GEN_VEXT_LD_WHOLE(vl4re32_v, int32_t, lde_w)
-GEN_VEXT_LD_WHOLE(vl4re64_v, int64_t, lde_d)
-GEN_VEXT_LD_WHOLE(vl8re8_v, int8_t, lde_b)
-GEN_VEXT_LD_WHOLE(vl8re16_v, int16_t, lde_h)
-GEN_VEXT_LD_WHOLE(vl8re32_v, int32_t, lde_w)
-GEN_VEXT_LD_WHOLE(vl8re64_v, int64_t, lde_d)
+GEN_VEXT_LD_WHOLE(vl1re8_v, int8_t, lde_b_tlb)
+GEN_VEXT_LD_WHOLE(vl1re16_v, int16_t, lde_h_tlb)
+GEN_VEXT_LD_WHOLE(vl1re32_v, int32_t, lde_w_tlb)
+GEN_VEXT_LD_WHOLE(vl1re64_v, int64_t, lde_d_tlb)
+GEN_VEXT_LD_WHOLE(vl2re8_v, int8_t, lde_b_tlb)
+GEN_VEXT_LD_WHOLE(vl2re16_v, int16_t, lde_h_tlb)
+GEN_VEXT_LD_WHOLE(vl2re32_v, int32_t, lde_w_tlb)
+GEN_VEXT_LD_WHOLE(vl2re64_v, int64_t, lde_d_tlb)
+GEN_VEXT_LD_WHOLE(vl4re8_v, int8_t, lde_b_tlb)
+GEN_VEXT_LD_WHOLE(vl4re16_v, int16_t, lde_h_tlb)
+GEN_VEXT_LD_WHOLE(vl4re32_v, int32_t, lde_w_tlb)
+GEN_VEXT_LD_WHOLE(vl4re64_v, int64_t, lde_d_tlb)
+GEN_VEXT_LD_WHOLE(vl8re8_v, int8_t, lde_b_tlb)
+GEN_VEXT_LD_WHOLE(vl8re16_v, int16_t, lde_h_tlb)
+GEN_VEXT_LD_WHOLE(vl8re32_v, int32_t, lde_w_tlb)
+GEN_VEXT_LD_WHOLE(vl8re64_v, int64_t, lde_d_tlb)
#define GEN_VEXT_ST_WHOLE(NAME, ETYPE, STORE_FN) \
void HELPER(NAME)(void *vd, target_ulong base, \
@@ -650,10 +735,10 @@ void HELPER(NAME)(void *vd, target_ulong base, \
ctzl(sizeof(ETYPE)), GETPC()); \
}
-GEN_VEXT_ST_WHOLE(vs1r_v, int8_t, ste_b)
-GEN_VEXT_ST_WHOLE(vs2r_v, int8_t, ste_b)
-GEN_VEXT_ST_WHOLE(vs4r_v, int8_t, ste_b)
-GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
+GEN_VEXT_ST_WHOLE(vs1r_v, int8_t, ste_b_tlb)
+GEN_VEXT_ST_WHOLE(vs2r_v, int8_t, ste_b_tlb)
+GEN_VEXT_ST_WHOLE(vs4r_v, int8_t, ste_b_tlb)
+GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b_tlb)
/*
* Vector Integer Arithmetic Instructions
--
2.34.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v6 4/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unit-stride whole register load/store
2024-09-18 17:14 [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions Max Chou
` (2 preceding siblings ...)
2024-09-18 17:14 ` [PATCH v6 3/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unmasked unit-stride load/store Max Chou
@ 2024-09-18 17:14 ` Max Chou
2024-10-30 16:31 ` Daniel Henrique Barboza
2024-09-18 17:14 ` [PATCH v6 5/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unit-stride load-only-first load instructions Max Chou
` (4 subsequent siblings)
8 siblings, 1 reply; 18+ messages in thread
From: Max Chou @ 2024-09-18 17:14 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li,
Daniel Henrique Barboza, Liu Zhiwei, richard.henderson, negge,
Max Chou
The vector unit-stride whole register load/store instructions are
similar to the unmasked unit-stride load/store instructions, so they are
also suitable for the fast path that directly accesses host RAM.
Because the vector whole register load/store instructions do not need to
handle tail-agnostic elements, the vstart early exit check is removed.
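A worked example of the element count used in the hunk below
(illustrative numbers, assuming VLEN = 128 so vlenb == 16):

    /* vl8re32_v: nf = 8, log2_esz = 2 */
    max_elems = vlenb >> log2_esz;   /* 16 >> 2 = 4  */
    evl       = nf * max_elems;      /*  8 *  4 = 32 */
    /* 32 elements are always transferred, split across at most two pages. */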
Signed-off-by: Max Chou <max.chou@sifive.com>
---
target/riscv/vector_helper.c | 129 +++++++++++++++++++----------------
1 file changed, 70 insertions(+), 59 deletions(-)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index c2fcf8b3a00..824e6401736 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -665,80 +665,91 @@ GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d_tlb)
*/
static void
vext_ldst_whole(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
- vext_ldst_elem_fn_tlb *ldst_elem, uint32_t log2_esz,
- uintptr_t ra)
+ vext_ldst_elem_fn_tlb *ldst_tlb,
+ vext_ldst_elem_fn_host *ldst_host, uint32_t log2_esz,
+ uintptr_t ra, bool is_load)
{
- uint32_t i, k, off, pos;
+ target_ulong page_split, elems, addr;
uint32_t nf = vext_nf(desc);
uint32_t vlenb = riscv_cpu_cfg(env)->vlenb;
uint32_t max_elems = vlenb >> log2_esz;
+ uint32_t evl = nf * max_elems;
+ uint32_t esz = 1 << log2_esz;
+ int mmu_index = riscv_env_mmu_index(env, false);
- if (env->vstart >= ((vlenb * nf) >> log2_esz)) {
- env->vstart = 0;
- return;
+ /* Calculate the page range of first page */
+ addr = base + (env->vstart << log2_esz);
+ page_split = -(addr | TARGET_PAGE_MASK);
+ /* Get number of elements */
+ elems = page_split / esz;
+ if (unlikely(env->vstart + elems >= evl)) {
+ elems = evl - env->vstart;
}
- k = env->vstart / max_elems;
- off = env->vstart % max_elems;
-
- if (off) {
- /* load/store rest of elements of current segment pointed by vstart */
- for (pos = off; pos < max_elems; pos++, env->vstart++) {
- target_ulong addr = base + ((pos + k * max_elems) << log2_esz);
- ldst_elem(env, adjust_addr(env, addr), pos + k * max_elems, vd,
- ra);
- }
- k++;
+ /* Load/store elements in the first page */
+ if (likely(elems)) {
+ vext_page_ldst_us(env, vd, addr, elems, 1, max_elems, log2_esz,
+ is_load, mmu_index, ldst_tlb, ldst_host, ra);
}
- /* load/store elements for rest of segments */
- for (; k < nf; k++) {
- for (i = 0; i < max_elems; i++, env->vstart++) {
- target_ulong addr = base + ((i + k * max_elems) << log2_esz);
- ldst_elem(env, adjust_addr(env, addr), i + k * max_elems, vd, ra);
+ /* Load/store elements in the second page */
+ if (unlikely(env->vstart < evl)) {
+ /* Cross page element */
+ if (unlikely(page_split % esz)) {
+ addr = base + (env->vstart << log2_esz);
+ ldst_tlb(env, adjust_addr(env, addr), env->vstart, vd, ra);
+ env->vstart++;
}
+
+ addr = base + (env->vstart << log2_esz);
+ /* Get number of elements of second page */
+ elems = evl - env->vstart;
+
+ /* Load/store elements in the second page */
+ vext_page_ldst_us(env, vd, addr, elems, 1, max_elems, log2_esz,
+ is_load, mmu_index, ldst_tlb, ldst_host, ra);
}
env->vstart = 0;
}
-#define GEN_VEXT_LD_WHOLE(NAME, ETYPE, LOAD_FN) \
-void HELPER(NAME)(void *vd, target_ulong base, \
- CPURISCVState *env, uint32_t desc) \
-{ \
- vext_ldst_whole(vd, base, env, desc, LOAD_FN, \
- ctzl(sizeof(ETYPE)), GETPC()); \
-}
-
-GEN_VEXT_LD_WHOLE(vl1re8_v, int8_t, lde_b_tlb)
-GEN_VEXT_LD_WHOLE(vl1re16_v, int16_t, lde_h_tlb)
-GEN_VEXT_LD_WHOLE(vl1re32_v, int32_t, lde_w_tlb)
-GEN_VEXT_LD_WHOLE(vl1re64_v, int64_t, lde_d_tlb)
-GEN_VEXT_LD_WHOLE(vl2re8_v, int8_t, lde_b_tlb)
-GEN_VEXT_LD_WHOLE(vl2re16_v, int16_t, lde_h_tlb)
-GEN_VEXT_LD_WHOLE(vl2re32_v, int32_t, lde_w_tlb)
-GEN_VEXT_LD_WHOLE(vl2re64_v, int64_t, lde_d_tlb)
-GEN_VEXT_LD_WHOLE(vl4re8_v, int8_t, lde_b_tlb)
-GEN_VEXT_LD_WHOLE(vl4re16_v, int16_t, lde_h_tlb)
-GEN_VEXT_LD_WHOLE(vl4re32_v, int32_t, lde_w_tlb)
-GEN_VEXT_LD_WHOLE(vl4re64_v, int64_t, lde_d_tlb)
-GEN_VEXT_LD_WHOLE(vl8re8_v, int8_t, lde_b_tlb)
-GEN_VEXT_LD_WHOLE(vl8re16_v, int16_t, lde_h_tlb)
-GEN_VEXT_LD_WHOLE(vl8re32_v, int32_t, lde_w_tlb)
-GEN_VEXT_LD_WHOLE(vl8re64_v, int64_t, lde_d_tlb)
-
-#define GEN_VEXT_ST_WHOLE(NAME, ETYPE, STORE_FN) \
-void HELPER(NAME)(void *vd, target_ulong base, \
- CPURISCVState *env, uint32_t desc) \
-{ \
- vext_ldst_whole(vd, base, env, desc, STORE_FN, \
- ctzl(sizeof(ETYPE)), GETPC()); \
-}
-
-GEN_VEXT_ST_WHOLE(vs1r_v, int8_t, ste_b_tlb)
-GEN_VEXT_ST_WHOLE(vs2r_v, int8_t, ste_b_tlb)
-GEN_VEXT_ST_WHOLE(vs4r_v, int8_t, ste_b_tlb)
-GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b_tlb)
+#define GEN_VEXT_LD_WHOLE(NAME, ETYPE, LOAD_FN_TLB, LOAD_FN_HOST) \
+void HELPER(NAME)(void *vd, target_ulong base, CPURISCVState *env, \
+ uint32_t desc) \
+{ \
+ vext_ldst_whole(vd, base, env, desc, LOAD_FN_TLB, LOAD_FN_HOST, \
+ ctzl(sizeof(ETYPE)), GETPC(), true); \
+}
+
+GEN_VEXT_LD_WHOLE(vl1re8_v, int8_t, lde_b_tlb, lde_b_host)
+GEN_VEXT_LD_WHOLE(vl1re16_v, int16_t, lde_h_tlb, lde_h_host)
+GEN_VEXT_LD_WHOLE(vl1re32_v, int32_t, lde_w_tlb, lde_w_host)
+GEN_VEXT_LD_WHOLE(vl1re64_v, int64_t, lde_d_tlb, lde_d_host)
+GEN_VEXT_LD_WHOLE(vl2re8_v, int8_t, lde_b_tlb, lde_b_host)
+GEN_VEXT_LD_WHOLE(vl2re16_v, int16_t, lde_h_tlb, lde_h_host)
+GEN_VEXT_LD_WHOLE(vl2re32_v, int32_t, lde_w_tlb, lde_w_host)
+GEN_VEXT_LD_WHOLE(vl2re64_v, int64_t, lde_d_tlb, lde_d_host)
+GEN_VEXT_LD_WHOLE(vl4re8_v, int8_t, lde_b_tlb, lde_b_host)
+GEN_VEXT_LD_WHOLE(vl4re16_v, int16_t, lde_h_tlb, lde_h_host)
+GEN_VEXT_LD_WHOLE(vl4re32_v, int32_t, lde_w_tlb, lde_w_host)
+GEN_VEXT_LD_WHOLE(vl4re64_v, int64_t, lde_d_tlb, lde_d_host)
+GEN_VEXT_LD_WHOLE(vl8re8_v, int8_t, lde_b_tlb, lde_b_host)
+GEN_VEXT_LD_WHOLE(vl8re16_v, int16_t, lde_h_tlb, lde_h_host)
+GEN_VEXT_LD_WHOLE(vl8re32_v, int32_t, lde_w_tlb, lde_w_host)
+GEN_VEXT_LD_WHOLE(vl8re64_v, int64_t, lde_d_tlb, lde_d_host)
+
+#define GEN_VEXT_ST_WHOLE(NAME, ETYPE, STORE_FN_TLB, STORE_FN_HOST) \
+void HELPER(NAME)(void *vd, target_ulong base, CPURISCVState *env, \
+ uint32_t desc) \
+{ \
+ vext_ldst_whole(vd, base, env, desc, STORE_FN_TLB, STORE_FN_HOST, \
+ ctzl(sizeof(ETYPE)), GETPC(), false); \
+}
+
+GEN_VEXT_ST_WHOLE(vs1r_v, int8_t, ste_b_tlb, ste_b_host)
+GEN_VEXT_ST_WHOLE(vs2r_v, int8_t, ste_b_tlb, ste_b_host)
+GEN_VEXT_ST_WHOLE(vs4r_v, int8_t, ste_b_tlb, ste_b_host)
+GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b_tlb, ste_b_host)
/*
* Vector Integer Arithmetic Instructions
--
2.34.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v6 5/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unit-stride load-only-first load instructions
2024-09-18 17:14 [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions Max Chou
` (3 preceding siblings ...)
2024-09-18 17:14 ` [PATCH v6 4/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unit-stride whole register load/store Max Chou
@ 2024-09-18 17:14 ` Max Chou
2024-10-30 16:35 ` Daniel Henrique Barboza
2024-09-18 17:14 ` [PATCH v6 6/7] target/riscv: rvv: Provide group continuous ld/st flow for unit-stride ld/st instructions Max Chou
` (3 subsequent siblings)
8 siblings, 1 reply; 18+ messages in thread
From: Max Chou @ 2024-09-18 17:14 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li,
Daniel Henrique Barboza, Liu Zhiwei, richard.henderson, negge,
Max Chou
The unmasked unit-stride fault-only-first load instructions are similar
to the unmasked unit-stride load/store instructions, so they are also
suitable for the fast path that directly accesses host RAM.
Signed-off-by: Max Chou <max.chou@sifive.com>
---
target/riscv/vector_helper.c | 98 ++++++++++++++++++++++++++----------
1 file changed, 71 insertions(+), 27 deletions(-)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 824e6401736..59009a940ff 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -557,18 +557,18 @@ GEN_VEXT_ST_INDEX(vsxei64_64_v, int64_t, idx_d, ste_d_tlb)
* unit-stride fault-only-fisrt load instructions
*/
static inline void
-vext_ldff(void *vd, void *v0, target_ulong base,
- CPURISCVState *env, uint32_t desc,
- vext_ldst_elem_fn_tlb *ldst_elem,
- uint32_t log2_esz, uintptr_t ra)
+vext_ldff(void *vd, void *v0, target_ulong base, CPURISCVState *env,
+ uint32_t desc, vext_ldst_elem_fn_tlb *ldst_tlb,
+ vext_ldst_elem_fn_host *ldst_host, uint32_t log2_esz, uintptr_t ra)
{
uint32_t i, k, vl = 0;
uint32_t nf = vext_nf(desc);
uint32_t vm = vext_vm(desc);
uint32_t max_elems = vext_max_elems(desc, log2_esz);
uint32_t esz = 1 << log2_esz;
+ uint32_t msize = nf * esz;
uint32_t vma = vext_vma(desc);
- target_ulong addr, offset, remain;
+ target_ulong addr, offset, remain, page_split, elems;
int mmu_index = riscv_env_mmu_index(env, false);
VSTART_CHECK_EARLY_EXIT(env);
@@ -617,19 +617,63 @@ ProbeSuccess:
if (vl != 0) {
env->vl = vl;
}
- for (i = env->vstart; i < env->vl; i++) {
- k = 0;
- while (k < nf) {
- if (!vm && !vext_elem_mask(v0, i)) {
- /* set masked-off elements to 1s */
- vext_set_elems_1s(vd, vma, (i + k * max_elems) * esz,
- (i + k * max_elems + 1) * esz);
- k++;
- continue;
+
+ if (env->vstart < env->vl) {
+ if (vm) {
+ /* Calculate the page range of first page */
+ addr = base + ((env->vstart * nf) << log2_esz);
+ page_split = -(addr | TARGET_PAGE_MASK);
+ /* Get number of elements */
+ elems = page_split / msize;
+ if (unlikely(env->vstart + elems >= env->vl)) {
+ elems = env->vl - env->vstart;
+ }
+
+ /* Load/store elements in the first page */
+ if (likely(elems)) {
+ vext_page_ldst_us(env, vd, addr, elems, nf, max_elems,
+ log2_esz, true, mmu_index, ldst_tlb,
+ ldst_host, ra);
+ }
+
+ /* Load/store elements in the second page */
+ if (unlikely(env->vstart < env->vl)) {
+ /* Cross page element */
+ if (unlikely(page_split % msize)) {
+ for (k = 0; k < nf; k++) {
+ addr = base + ((env->vstart * nf + k) << log2_esz);
+ ldst_tlb(env, adjust_addr(env, addr),
+ env->vstart + k * max_elems, vd, ra);
+ }
+ env->vstart++;
+ }
+
+ addr = base + ((env->vstart * nf) << log2_esz);
+ /* Get number of elements of second page */
+ elems = env->vl - env->vstart;
+
+ /* Load/store elements in the second page */
+ vext_page_ldst_us(env, vd, addr, elems, nf, max_elems,
+ log2_esz, true, mmu_index, ldst_tlb,
+ ldst_host, ra);
+ }
+ } else {
+ for (i = env->vstart; i < env->vl; i++) {
+ k = 0;
+ while (k < nf) {
+ if (!vext_elem_mask(v0, i)) {
+ /* set masked-off elements to 1s */
+ vext_set_elems_1s(vd, vma, (i + k * max_elems) * esz,
+ (i + k * max_elems + 1) * esz);
+ k++;
+ continue;
+ }
+ addr = base + ((i * nf + k) << log2_esz);
+ ldst_tlb(env, adjust_addr(env, addr), i + k * max_elems,
+ vd, ra);
+ k++;
+ }
}
- addr = base + ((i * nf + k) << log2_esz);
- ldst_elem(env, adjust_addr(env, addr), i + k * max_elems, vd, ra);
- k++;
}
}
env->vstart = 0;
@@ -637,18 +681,18 @@ ProbeSuccess:
vext_set_tail_elems_1s(env->vl, vd, desc, nf, esz, max_elems);
}
-#define GEN_VEXT_LDFF(NAME, ETYPE, LOAD_FN) \
-void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
- CPURISCVState *env, uint32_t desc) \
-{ \
- vext_ldff(vd, v0, base, env, desc, LOAD_FN, \
- ctzl(sizeof(ETYPE)), GETPC()); \
+#define GEN_VEXT_LDFF(NAME, ETYPE, LOAD_FN_TLB, LOAD_FN_HOST) \
+void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
+ CPURISCVState *env, uint32_t desc) \
+{ \
+ vext_ldff(vd, v0, base, env, desc, LOAD_FN_TLB, \
+ LOAD_FN_HOST, ctzl(sizeof(ETYPE)), GETPC()); \
}
-GEN_VEXT_LDFF(vle8ff_v, int8_t, lde_b_tlb)
-GEN_VEXT_LDFF(vle16ff_v, int16_t, lde_h_tlb)
-GEN_VEXT_LDFF(vle32ff_v, int32_t, lde_w_tlb)
-GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d_tlb)
+GEN_VEXT_LDFF(vle8ff_v, int8_t, lde_b_tlb, lde_b_host)
+GEN_VEXT_LDFF(vle16ff_v, int16_t, lde_h_tlb, lde_h_host)
+GEN_VEXT_LDFF(vle32ff_v, int32_t, lde_w_tlb, lde_w_host)
+GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d_tlb, lde_d_host)
#define DO_SWAP(N, M) (M)
#define DO_AND(N, M) (N & M)
--
2.34.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v6 6/7] target/riscv: rvv: Provide group continuous ld/st flow for unit-stride ld/st instructions
2024-09-18 17:14 [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions Max Chou
` (4 preceding siblings ...)
2024-09-18 17:14 ` [PATCH v6 5/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unit-stride load-only-first load instructions Max Chou
@ 2024-09-18 17:14 ` Max Chou
2024-10-29 19:07 ` Daniel Henrique Barboza
2024-09-18 17:14 ` [PATCH v6 7/7] target/riscv: Inline unit-stride ld/st and corresponding functions for performance Max Chou
` (2 subsequent siblings)
8 siblings, 1 reply; 18+ messages in thread
From: Max Chou @ 2024-09-18 17:14 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li,
Daniel Henrique Barboza, Liu Zhiwei, richard.henderson, negge,
Max Chou
The vector unmasked unit-stride and whole register load/store
instructions access contiguous memory. If the host and guest
architectures have the same endianness, we can group the element
loads/stores to transfer more data at a time.
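A condensed sketch of the grouping done by vext_continus_ldst_host() in
the hunk below (illustrative only; this is the little-endian host case
with esz == 1, where a single memcpy replaces the per-element loop):

    uint32_t byte_offset = reg_start * esz;
    uint32_t size = (evl - reg_start) * esz;
    if (is_load) {
        memcpy(vd + byte_offset, host, size);   /* guest memory -> vreg */
    } else {
        memcpy(host, vd + byte_offset, size);   /* vreg -> guest memory */
    }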
Signed-off-by: Max Chou <max.chou@sifive.com>
---
target/riscv/vector_helper.c | 77 +++++++++++++++++++++++++++++-------
1 file changed, 63 insertions(+), 14 deletions(-)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 59009a940ff..654d5e111f3 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -189,6 +189,45 @@ GEN_VEXT_ST_ELEM(ste_h, uint16_t, H2, stw)
GEN_VEXT_ST_ELEM(ste_w, uint32_t, H4, stl)
GEN_VEXT_ST_ELEM(ste_d, uint64_t, H8, stq)
+static inline QEMU_ALWAYS_INLINE void
+vext_continus_ldst_tlb(CPURISCVState *env, vext_ldst_elem_fn_tlb *ldst_tlb,
+ void *vd, uint32_t evl, target_ulong addr,
+ uint32_t reg_start, uintptr_t ra, uint32_t esz,
+ bool is_load)
+{
+ uint32_t i;
+ for (i = env->vstart; i < evl; env->vstart = ++i, addr += esz) {
+ ldst_tlb(env, adjust_addr(env, addr), i, vd, ra);
+ }
+}
+
+static inline QEMU_ALWAYS_INLINE void
+vext_continus_ldst_host(CPURISCVState *env, vext_ldst_elem_fn_host *ldst_host,
+ void *vd, uint32_t evl, uint32_t reg_start, void *host,
+ uint32_t esz, bool is_load)
+{
+#if HOST_BIG_ENDIAN
+ for (; reg_start < evl; reg_start++, host += esz) {
+ ldst_host(vd, reg_start, host);
+ }
+#else
+ if (esz == 1) {
+ uint32_t byte_offset = reg_start * esz;
+ uint32_t size = (evl - reg_start) * esz;
+
+ if (is_load) {
+ memcpy(vd + byte_offset, host, size);
+ } else {
+ memcpy(host, vd + byte_offset, size);
+ }
+ } else {
+ for (; reg_start < evl; reg_start++, host += esz) {
+ ldst_host(vd, reg_start, host);
+ }
+ }
+#endif
+}
+
static void vext_set_tail_elems_1s(target_ulong vl, void *vd,
uint32_t desc, uint32_t nf,
uint32_t esz, uint32_t max_elems)
@@ -297,24 +336,34 @@ vext_page_ldst_us(CPURISCVState *env, void *vd, target_ulong addr,
mmu_index, true, &host, ra);
if (flags == 0) {
- for (i = env->vstart; i < evl; ++i) {
- k = 0;
- while (k < nf) {
- ldst_host(vd, i + k * max_elems, host);
- host += esz;
- k++;
+ if (nf == 1) {
+ vext_continus_ldst_host(env, ldst_host, vd, evl, env->vstart, host,
+ esz, is_load);
+ } else {
+ for (i = env->vstart; i < evl; ++i) {
+ k = 0;
+ while (k < nf) {
+ ldst_host(vd, i + k * max_elems, host);
+ host += esz;
+ k++;
+ }
}
}
env->vstart += elems;
} else {
- /* load bytes from guest memory */
- for (i = env->vstart; i < evl; env->vstart = ++i) {
- k = 0;
- while (k < nf) {
- ldst_tlb(env, adjust_addr(env, addr), i + k * max_elems, vd,
- ra);
- addr += esz;
- k++;
+ if (nf == 1) {
+ vext_continus_ldst_tlb(env, ldst_tlb, vd, evl, addr, env->vstart,
+ ra, esz, is_load);
+ } else {
+ /* load bytes from guest memory */
+ for (i = env->vstart; i < evl; env->vstart = ++i) {
+ k = 0;
+ while (k < nf) {
+ ldst_tlb(env, adjust_addr(env, addr), i + k * max_elems,
+ vd, ra);
+ addr += esz;
+ k++;
+ }
}
}
}
--
2.34.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v6 7/7] target/riscv: Inline unit-stride ld/st and corresponding functions for performance
2024-09-18 17:14 [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions Max Chou
` (5 preceding siblings ...)
2024-09-18 17:14 ` [PATCH v6 6/7] target/riscv: rvv: Provide group continuous ld/st flow for unit-stride ld/st instructions Max Chou
@ 2024-09-18 17:14 ` Max Chou
2024-10-30 16:37 ` Daniel Henrique Barboza
2024-10-15 9:15 ` [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions Max Chou
2024-11-05 3:37 ` Alistair Francis
8 siblings, 1 reply; 18+ messages in thread
From: Max Chou @ 2024-09-18 17:14 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li,
Daniel Henrique Barboza, Liu Zhiwei, richard.henderson, negge,
Max Chou
In the vector unit-stride load/store helper functions, the vext_ldst_us
and vext_ldst_whole functions account for most of the execution time.
Inlining these functions avoids the function call overhead and improves
the helper performance.
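The benefit presumably comes mostly from the element accessors: each
GEN_VEXT_* helper passes constant ldst_tlb/ldst_host function pointers,
so once vext_ldst_us/vext_ldst_whole are force-inlined the compiler can
turn those indirect calls into direct (and further inlinable) calls. For
illustration, the vle8_v helper generated by GEN_VEXT_LD_US effectively
expands to:

    void HELPER(vle8_v)(void *vd, void *v0, target_ulong base,
                        CPURISCVState *env, uint32_t desc)
    {
        /* lde_b_tlb/lde_b_host are compile-time constants here, so after
         * inlining the per-element calls can become direct calls. */
        vext_ldst_us(vd, base, env, desc, lde_b_tlb, lde_b_host,
                     ctzl(sizeof(int8_t)), env->vl, GETPC(), true);
    }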
Signed-off-by: Max Chou <max.chou@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
target/riscv/vector_helper.c | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 654d5e111f3..0d5ed950486 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -152,14 +152,16 @@ typedef void vext_ldst_elem_fn_tlb(CPURISCVState *env, abi_ptr addr,
typedef void vext_ldst_elem_fn_host(void *vd, uint32_t idx, void *host);
#define GEN_VEXT_LD_ELEM(NAME, ETYPE, H, LDSUF) \
-static void NAME##_tlb(CPURISCVState *env, abi_ptr addr, \
+static inline QEMU_ALWAYS_INLINE \
+void NAME##_tlb(CPURISCVState *env, abi_ptr addr, \
uint32_t idx, void *vd, uintptr_t retaddr) \
{ \
ETYPE *cur = ((ETYPE *)vd + H(idx)); \
*cur = cpu_##LDSUF##_data_ra(env, addr, retaddr); \
} \
\
-static void NAME##_host(void *vd, uint32_t idx, void *host) \
+static inline QEMU_ALWAYS_INLINE \
+void NAME##_host(void *vd, uint32_t idx, void *host) \
{ \
ETYPE *cur = ((ETYPE *)vd + H(idx)); \
*cur = (ETYPE)LDSUF##_p(host); \
@@ -171,14 +173,16 @@ GEN_VEXT_LD_ELEM(lde_w, uint32_t, H4, ldl)
GEN_VEXT_LD_ELEM(lde_d, uint64_t, H8, ldq)
#define GEN_VEXT_ST_ELEM(NAME, ETYPE, H, STSUF) \
-static void NAME##_tlb(CPURISCVState *env, abi_ptr addr, \
+static inline QEMU_ALWAYS_INLINE \
+void NAME##_tlb(CPURISCVState *env, abi_ptr addr, \
uint32_t idx, void *vd, uintptr_t retaddr) \
{ \
ETYPE data = *((ETYPE *)vd + H(idx)); \
cpu_##STSUF##_data_ra(env, addr, data, retaddr); \
} \
\
-static void NAME##_host(void *vd, uint32_t idx, void *host) \
+static inline QEMU_ALWAYS_INLINE \
+void NAME##_host(void *vd, uint32_t idx, void *host) \
{ \
ETYPE data = *((ETYPE *)vd + H(idx)); \
STSUF##_p(host, data); \
@@ -317,7 +321,7 @@ GEN_VEXT_ST_STRIDE(vsse64_v, int64_t, ste_d_tlb)
*/
/* unmasked unit-stride load and store operation */
-static void
+static inline QEMU_ALWAYS_INLINE void
vext_page_ldst_us(CPURISCVState *env, void *vd, target_ulong addr,
uint32_t elems, uint32_t nf, uint32_t max_elems,
uint32_t log2_esz, bool is_load, int mmu_index,
@@ -369,7 +373,7 @@ vext_page_ldst_us(CPURISCVState *env, void *vd, target_ulong addr,
}
}
-static void
+static inline QEMU_ALWAYS_INLINE void
vext_ldst_us(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
vext_ldst_elem_fn_tlb *ldst_tlb,
vext_ldst_elem_fn_host *ldst_host, uint32_t log2_esz,
@@ -756,7 +760,7 @@ GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d_tlb, lde_d_host)
/*
* load and store whole register instructions
*/
-static void
+static inline QEMU_ALWAYS_INLINE void
vext_ldst_whole(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
vext_ldst_elem_fn_tlb *ldst_tlb,
vext_ldst_elem_fn_host *ldst_host, uint32_t log2_esz,
--
2.34.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions
2024-09-18 17:14 [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions Max Chou
` (6 preceding siblings ...)
2024-09-18 17:14 ` [PATCH v6 7/7] target/riscv: Inline unit-stride ld/st and corresponding functions for performance Max Chou
@ 2024-10-15 9:15 ` Max Chou
2024-11-05 3:37 ` Alistair Francis
8 siblings, 0 replies; 18+ messages in thread
From: Max Chou @ 2024-10-15 9:15 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li,
Daniel Henrique Barboza, Liu Zhiwei, richard.henderson, negge
ping.
On 2024/9/19 1:14 AM, Max Chou wrote:
> Hi,
>
> This version fixes several issues found in v5:
> - The cross page bound checking issue
> - The mismatched vl comparison in the early exit check of vext_ldst_us
> - The endianness issue when the host is big endian
>
> Thanks to Richard Henderson's suggestions, this version unrolls the loop
> in the helper functions of the unmasked vector unit-stride load/store
> instructions, etc.
>
> This version also extends the optimizations to the unmasked vector
> fault-only-first load instruction.
>
> Some performance results of this version:
> 1. Test case provided in
> https://gitlab.com/qemu-project/qemu/-/issues/2137#note_1757501369
> - QEMU user mode (vlen=128):
> - Original: ~11.8 sec
> - v5: ~1.3 sec
> - v6: ~1.2 sec
> - QEMU system mode (vlen=128):
> - Original: ~29.4 sec
> - v5: ~1.6 sec
> - v6: ~1.6 sec
> 2. SPEC CPU2006 INT (test input)
> - QEMU user mode (vlen=128)
> - Original: ~459.1 sec
> - v5: ~300.0 sec
> - v6: ~280.6 sec
> 3. SPEC CPU2017 intspeed (test input)
> - QEMU user mode (vlen=128)
> - Original: ~2475.9 sec
> - v5: ~1702.6 sec
> - v6: ~1663.4 sec
>
>
> This version is based on the riscv-to-apply.next branch (commit 90d5d3c).
>
> Changes from v5:
> - patch 2
> - Replace the VSTART_CHECK_EARLY_EXIT function by checking the
> correct evl in vext_ldst_us.
> - patch 3
> - Unroll the memory load/store loop
> - Fix the bound checking issue in cross page elements
> - Fix the endian issue in GEN_VEXT_LD_ELEM/GEN_VEXT_ST_ELEM
> - Pass in mmu_index for vext_page_ldst_us
> - Reduce the flag & host checking
> - patch 4
> - Unroll the memory load/store loop
> - Fix the bound checking issue in cross page elements
> - patch 5
> - Extend optimizations to unmasked vector fault-only-first load
> instruction
> - patch 6
> - Switch to memcpy only when doing byte load/store
> - patch 7
> - Inline the vext_page_ldst_us function
>
> Previous versions:
> - v1: https://lore.kernel.org/all/20240215192823.729209-1-max.chou@sifive.com/
> - v2: https://lore.kernel.org/all/20240531174504.281461-1-max.chou@sifive.com/
> - v3: https://lore.kernel.org/all/20240613141906.1276105-1-max.chou@sifive.com/
> - v4: https://lore.kernel.org/all/20240613175122.1299212-1-max.chou@sifive.com/
> - v5: https://lore.kernel.org/all/20240717133936.713642-1-max.chou@sifive.com/
>
> Max Chou (7):
> target/riscv: Set vdata.vm field for vector load/store whole register
> instructions
> target/riscv: rvv: Replace VSTART_CHECK_EARLY_EXIT in vext_ldst_us
> target/riscv: rvv: Provide a fast path using direct access to host ram
> for unmasked unit-stride load/store
> target/riscv: rvv: Provide a fast path using direct access to host ram
> for unit-stride whole register load/store
> target/riscv: rvv: Provide a fast path using direct access to host ram
> for unit-stride load-only-first load instructions
> target/riscv: rvv: Provide group continuous ld/st flow for unit-stride
> ld/st instructions
> target/riscv: Inline unit-stride ld/st and corresponding functions for
> performance
>
> target/riscv/insn_trans/trans_rvv.c.inc | 3 +
> target/riscv/vector_helper.c | 598 ++++++++++++++++--------
> 2 files changed, 400 insertions(+), 201 deletions(-)
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v6 1/7] target/riscv: Set vdata.vm field for vector load/store whole register instructions
2024-09-18 17:14 ` [PATCH v6 1/7] target/riscv: Set vdata.vm field for vector load/store whole register instructions Max Chou
@ 2024-10-29 18:58 ` Daniel Henrique Barboza
2024-10-30 11:30 ` Richard Henderson
0 siblings, 1 reply; 18+ messages in thread
From: Daniel Henrique Barboza @ 2024-10-29 18:58 UTC (permalink / raw)
To: Max Chou, qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li, Liu Zhiwei,
richard.henderson, negge
On 9/18/24 2:14 PM, Max Chou wrote:
> The vm field in the encoding of the vector load/store whole register
> instructions is always 1.
> The helper functions of the vector load/store whole register instructions
> may need the vdata.vm field to perform some optimizations.
>
> Signed-off-by: Max Chou <max.chou@sifive.com>
> ---
I wonder if we should always encode 'vm' in vdata for all insns. It seems
like the helpers are passing 'vm' around anyway ...
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> target/riscv/insn_trans/trans_rvv.c.inc | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/target/riscv/insn_trans/trans_rvv.c.inc b/target/riscv/insn_trans/trans_rvv.c.inc
> index 3a3896ba06c..14e10568bd7 100644
> --- a/target/riscv/insn_trans/trans_rvv.c.inc
> +++ b/target/riscv/insn_trans/trans_rvv.c.inc
> @@ -770,6 +770,7 @@ static bool ld_us_mask_op(DisasContext *s, arg_vlm_v *a, uint8_t eew)
> /* Mask destination register are always tail-agnostic */
> data = FIELD_DP32(data, VDATA, VTA, s->cfg_vta_all_1s);
> data = FIELD_DP32(data, VDATA, VMA, s->vma);
> + data = FIELD_DP32(data, VDATA, VM, 1);
> return ldst_us_trans(a->rd, a->rs1, data, fn, s, false);
> }
>
> @@ -787,6 +788,7 @@ static bool st_us_mask_op(DisasContext *s, arg_vsm_v *a, uint8_t eew)
> /* EMUL = 1, NFIELDS = 1 */
> data = FIELD_DP32(data, VDATA, LMUL, 0);
> data = FIELD_DP32(data, VDATA, NF, 1);
> + data = FIELD_DP32(data, VDATA, VM, 1);
> return ldst_us_trans(a->rd, a->rs1, data, fn, s, true);
> }
>
> @@ -1106,6 +1108,7 @@ static bool ldst_whole_trans(uint32_t vd, uint32_t rs1, uint32_t nf,
> TCGv_i32 desc;
>
> uint32_t data = FIELD_DP32(0, VDATA, NF, nf);
> + data = FIELD_DP32(data, VDATA, VM, 1);
> dest = tcg_temp_new_ptr();
> desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlenb,
> s->cfg_ptr->vlenb, data));
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v6 2/7] target/riscv: rvv: Replace VSTART_CHECK_EARLY_EXIT in vext_ldst_us
2024-09-18 17:14 ` [PATCH v6 2/7] target/riscv: rvv: Replace VSTART_CHECK_EARLY_EXIT in vext_ldst_us Max Chou
@ 2024-10-29 18:59 ` Daniel Henrique Barboza
0 siblings, 0 replies; 18+ messages in thread
From: Daniel Henrique Barboza @ 2024-10-29 18:59 UTC (permalink / raw)
To: Max Chou, qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li, Liu Zhiwei,
richard.henderson, negge
On 9/18/24 2:14 PM, Max Chou wrote:
> Because the effective vl (evl) used by vext_ldst_us may differ from
> env->vl (e.g. for vlm.v/vsm.v), the VSTART_CHECK_EARLY_EXIT check should
> be replaced by an early-exit check against evl in vext_ldst_us.
>
> Signed-off-by: Max Chou <max.chou@sifive.com>
> ---
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> target/riscv/vector_helper.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
> index 87d2399c7e3..967bb2687ae 100644
> --- a/target/riscv/vector_helper.c
> +++ b/target/riscv/vector_helper.c
> @@ -276,7 +276,10 @@ vext_ldst_us(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
> uint32_t max_elems = vext_max_elems(desc, log2_esz);
> uint32_t esz = 1 << log2_esz;
>
> - VSTART_CHECK_EARLY_EXIT(env);
> + if (env->vstart >= evl) {
> + env->vstart = 0;
> + return;
> + }
>
> /* load bytes from guest memory */
> for (i = env->vstart; i < evl; env->vstart = ++i) {
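To see why the early exit must compare against evl rather than vl: for
vlm.v the effective element count is ceil(vl / 8) mask bytes, which can be
much smaller than vl. A minimal worked sketch with made-up numbers (not
taken from the patch):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t vl = 16, vstart = 3;
        uint32_t evl = (vl + 7) >> 3;   /* 2 mask bytes when vl == 16 */

        printf("early exit on vl:  %d\n", vstart >= vl);   /* 0: would keep going */
        printf("early exit on evl: %d\n", vstart >= evl);  /* 1: exits correctly */
        return 0;
    }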
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v6 6/7] target/riscv: rvv: Provide group continuous ld/st flow for unit-stride ld/st instructions
2024-09-18 17:14 ` [PATCH v6 6/7] target/riscv: rvv: Provide group continuous ld/st flow for unit-stride ld/st instructions Max Chou
@ 2024-10-29 19:07 ` Daniel Henrique Barboza
0 siblings, 0 replies; 18+ messages in thread
From: Daniel Henrique Barboza @ 2024-10-29 19:07 UTC (permalink / raw)
To: Max Chou, qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li, Liu Zhiwei,
richard.henderson, negge
On 9/18/24 2:14 PM, Max Chou wrote:
> The vector unmasked unit-stride and whole register load/store
> instructions access contiguous memory. If the host and guest endianness
> match, the per-element loads/stores can be grouped so that more data is
> transferred at a time.
>
> Signed-off-by: Max Chou <max.chou@sifive.com>
> ---
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> target/riscv/vector_helper.c | 77 +++++++++++++++++++++++++++++-------
> 1 file changed, 63 insertions(+), 14 deletions(-)
>
> diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
> index 59009a940ff..654d5e111f3 100644
> --- a/target/riscv/vector_helper.c
> +++ b/target/riscv/vector_helper.c
> @@ -189,6 +189,45 @@ GEN_VEXT_ST_ELEM(ste_h, uint16_t, H2, stw)
> GEN_VEXT_ST_ELEM(ste_w, uint32_t, H4, stl)
> GEN_VEXT_ST_ELEM(ste_d, uint64_t, H8, stq)
>
> +static inline QEMU_ALWAYS_INLINE void
> +vext_continus_ldst_tlb(CPURISCVState *env, vext_ldst_elem_fn_tlb *ldst_tlb,
> + void *vd, uint32_t evl, target_ulong addr,
> + uint32_t reg_start, uintptr_t ra, uint32_t esz,
> + bool is_load)
> +{
> + uint32_t i;
> + for (i = env->vstart; i < evl; env->vstart = ++i, addr += esz) {
> + ldst_tlb(env, adjust_addr(env, addr), i, vd, ra);
> + }
> +}
> +
> +static inline QEMU_ALWAYS_INLINE void
> +vext_continus_ldst_host(CPURISCVState *env, vext_ldst_elem_fn_host *ldst_host,
> + void *vd, uint32_t evl, uint32_t reg_start, void *host,
> + uint32_t esz, bool is_load)
> +{
> +#if HOST_BIG_ENDIAN
> + for (; reg_start < evl; reg_start++, host += esz) {
> + ldst_host(vd, reg_start, host);
> + }
> +#else
> + if (esz == 1) {
> + uint32_t byte_offset = reg_start * esz;
> + uint32_t size = (evl - reg_start) * esz;
> +
> + if (is_load) {
> + memcpy(vd + byte_offset, host, size);
> + } else {
> + memcpy(host, vd + byte_offset, size);
> + }
> + } else {
> + for (; reg_start < evl; reg_start++, host += esz) {
> + ldst_host(vd, reg_start, host);
> + }
> + }
> +#endif
> +}
> +
> static void vext_set_tail_elems_1s(target_ulong vl, void *vd,
> uint32_t desc, uint32_t nf,
> uint32_t esz, uint32_t max_elems)
> @@ -297,24 +336,34 @@ vext_page_ldst_us(CPURISCVState *env, void *vd, target_ulong addr,
> mmu_index, true, &host, ra);
>
> if (flags == 0) {
> - for (i = env->vstart; i < evl; ++i) {
> - k = 0;
> - while (k < nf) {
> - ldst_host(vd, i + k * max_elems, host);
> - host += esz;
> - k++;
> + if (nf == 1) {
> + vext_continus_ldst_host(env, ldst_host, vd, evl, env->vstart, host,
> + esz, is_load);
> + } else {
> + for (i = env->vstart; i < evl; ++i) {
> + k = 0;
> + while (k < nf) {
> + ldst_host(vd, i + k * max_elems, host);
> + host += esz;
> + k++;
> + }
> }
> }
> env->vstart += elems;
> } else {
> - /* load bytes from guest memory */
> - for (i = env->vstart; i < evl; env->vstart = ++i) {
> - k = 0;
> - while (k < nf) {
> - ldst_tlb(env, adjust_addr(env, addr), i + k * max_elems, vd,
> - ra);
> - addr += esz;
> - k++;
> + if (nf == 1) {
> + vext_continus_ldst_tlb(env, ldst_tlb, vd, evl, addr, env->vstart,
> + ra, esz, is_load);
> + } else {
> + /* load bytes from guest memory */
> + for (i = env->vstart; i < evl; env->vstart = ++i) {
> + k = 0;
> + while (k < nf) {
> + ldst_tlb(env, adjust_addr(env, addr), i + k * max_elems,
> + vd, ra);
> + addr += esz;
> + k++;
> + }
> }
> }
> }
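A minimal standalone sketch of the grouped byte path above (assumptions: a
little-endian host, nf == 1 and esz == 1; the function and variable names
are illustrative, not the QEMU ones). For wider elements, or on big-endian
hosts, the per-element loop is kept:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    static void group_copy_bytes(uint8_t *vd, uint8_t *host,
                                 uint32_t reg_start, uint32_t evl, bool is_load)
    {
        uint32_t size = evl - reg_start;        /* contiguous bytes to move */

        if (is_load) {
            memcpy(vd + reg_start, host, size); /* guest memory -> vreg */
        } else {
            memcpy(host, vd + reg_start, size); /* vreg -> guest memory */
        }
    }

    int main(void)
    {
        uint8_t vreg[8] = {0}, guest[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        /* copy guest[0..5] into vreg[2..7] with a single call */
        group_copy_bytes(vreg, guest, 2, 8, true);
        return 0;
    }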
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v6 1/7] target/riscv: Set vdata.vm field for vector load/store whole register instructions
2024-10-29 18:58 ` Daniel Henrique Barboza
@ 2024-10-30 11:30 ` Richard Henderson
0 siblings, 0 replies; 18+ messages in thread
From: Richard Henderson @ 2024-10-30 11:30 UTC (permalink / raw)
To: Daniel Henrique Barboza, Max Chou, qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li, Liu Zhiwei,
negge
On 10/29/24 18:58, Daniel Henrique Barboza wrote:
>
>
> On 9/18/24 2:14 PM, Max Chou wrote:
>> The vm field in the encoding of the vector load/store whole register
>> instructions is 1.
>> The helper functions for the vector load/store whole register
>> instructions may need the vdata.vm field to enable some optimizations.
>>
>> Signed-off-by: Max Chou <max.chou@sifive.com>
>> ---
>
> I wonder if we should always encode 'vm' in vdata for all insns. It seems
> like we are already passing 'vm' around in the helpers ...
The intention there is that 'vm' is a constant within those helpers, so the compiler can
fold away the code blocks.
r~
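As an illustration of that point, a minimal sketch (a hypothetical helper,
not the QEMU code) of how a compile-time-constant vm at an always-inlined
call site lets the compiler discard the masked-off branch:

    #include <stdbool.h>
    #include <stdint.h>

    static inline __attribute__((always_inline))
    void elem_body(uint8_t *vd, const uint8_t *v0, uint32_t i, bool vm)
    {
        if (!vm && !(v0[i / 8] & (1u << (i % 8)))) {
            return;              /* masked-off element: nothing to do */
        }
        vd[i] = (uint8_t)i;      /* stand-in for the real element access */
    }

    void unmasked_loop(uint8_t *vd, const uint8_t *v0, uint32_t vl)
    {
        for (uint32_t i = 0; i < vl; i++) {
            elem_body(vd, v0, i, true);  /* vm == 1: the !vm branch is dead code */
        }
    }

    int main(void)
    {
        uint8_t vd[4] = {0}, v0[1] = {0};
        unmasked_loop(vd, v0, 4);        /* mask bits ignored since vm == 1 */
        return 0;
    }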
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v6 3/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unmasked unit-stride load/store
2024-09-18 17:14 ` [PATCH v6 3/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unmasked unit-stride load/store Max Chou
@ 2024-10-30 16:26 ` Daniel Henrique Barboza
0 siblings, 0 replies; 18+ messages in thread
From: Daniel Henrique Barboza @ 2024-10-30 16:26 UTC (permalink / raw)
To: Max Chou, qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li, Liu Zhiwei,
richard.henderson, negge
On 9/18/24 2:14 PM, Max Chou wrote:
> This commit is modeled on the sve_ldN_r/sve_stN_r helper functions in the
> Arm target to optimize the vector unmasked unit-stride load/store
> implementation with the following changes:
>
> * Get the page boundary
> * Probe pages/resolve the host memory address up front when possible
> * Provide a new interface for direct access to host memory
> * Fall back to the original slow TLB access for cross-page elements or
>   when page permission/pmp/watchpoint checks fail within the page
>
> The original element load/store interface is replaced by new element
> load/store functions with _tlb and _host suffixes, which perform the
> element load/store through the original softmmu flow and the direct host
> memory access flow respectively.
>
> Signed-off-by: Max Chou <max.chou@sifive.com>
> ---
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> target/riscv/vector_helper.c | 363 +++++++++++++++++++++--------------
> 1 file changed, 224 insertions(+), 139 deletions(-)
>
> diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
> index 967bb2687ae..c2fcf8b3a00 100644
> --- a/target/riscv/vector_helper.c
> +++ b/target/riscv/vector_helper.c
> @@ -147,34 +147,47 @@ static inline void vext_set_elem_mask(void *v0, int index,
> }
>
> /* elements operations for load and store */
> -typedef void vext_ldst_elem_fn(CPURISCVState *env, abi_ptr addr,
> - uint32_t idx, void *vd, uintptr_t retaddr);
> +typedef void vext_ldst_elem_fn_tlb(CPURISCVState *env, abi_ptr addr,
> + uint32_t idx, void *vd, uintptr_t retaddr);
> +typedef void vext_ldst_elem_fn_host(void *vd, uint32_t idx, void *host);
>
> -#define GEN_VEXT_LD_ELEM(NAME, ETYPE, H, LDSUF) \
> -static void NAME(CPURISCVState *env, abi_ptr addr, \
> - uint32_t idx, void *vd, uintptr_t retaddr)\
> -{ \
> - ETYPE *cur = ((ETYPE *)vd + H(idx)); \
> - *cur = cpu_##LDSUF##_data_ra(env, addr, retaddr); \
> -} \
> -
> -GEN_VEXT_LD_ELEM(lde_b, int8_t, H1, ldsb)
> -GEN_VEXT_LD_ELEM(lde_h, int16_t, H2, ldsw)
> -GEN_VEXT_LD_ELEM(lde_w, int32_t, H4, ldl)
> -GEN_VEXT_LD_ELEM(lde_d, int64_t, H8, ldq)
> -
> -#define GEN_VEXT_ST_ELEM(NAME, ETYPE, H, STSUF) \
> -static void NAME(CPURISCVState *env, abi_ptr addr, \
> - uint32_t idx, void *vd, uintptr_t retaddr)\
> -{ \
> - ETYPE data = *((ETYPE *)vd + H(idx)); \
> - cpu_##STSUF##_data_ra(env, addr, data, retaddr); \
> +#define GEN_VEXT_LD_ELEM(NAME, ETYPE, H, LDSUF) \
> +static void NAME##_tlb(CPURISCVState *env, abi_ptr addr, \
> + uint32_t idx, void *vd, uintptr_t retaddr) \
> +{ \
> + ETYPE *cur = ((ETYPE *)vd + H(idx)); \
> + *cur = cpu_##LDSUF##_data_ra(env, addr, retaddr); \
> +} \
> + \
> +static void NAME##_host(void *vd, uint32_t idx, void *host) \
> +{ \
> + ETYPE *cur = ((ETYPE *)vd + H(idx)); \
> + *cur = (ETYPE)LDSUF##_p(host); \
> +}
> +
> +GEN_VEXT_LD_ELEM(lde_b, uint8_t, H1, ldub)
> +GEN_VEXT_LD_ELEM(lde_h, uint16_t, H2, lduw)
> +GEN_VEXT_LD_ELEM(lde_w, uint32_t, H4, ldl)
> +GEN_VEXT_LD_ELEM(lde_d, uint64_t, H8, ldq)
> +
> +#define GEN_VEXT_ST_ELEM(NAME, ETYPE, H, STSUF) \
> +static void NAME##_tlb(CPURISCVState *env, abi_ptr addr, \
> + uint32_t idx, void *vd, uintptr_t retaddr) \
> +{ \
> + ETYPE data = *((ETYPE *)vd + H(idx)); \
> + cpu_##STSUF##_data_ra(env, addr, data, retaddr); \
> +} \
> + \
> +static void NAME##_host(void *vd, uint32_t idx, void *host) \
> +{ \
> + ETYPE data = *((ETYPE *)vd + H(idx)); \
> + STSUF##_p(host, data); \
> }
>
> -GEN_VEXT_ST_ELEM(ste_b, int8_t, H1, stb)
> -GEN_VEXT_ST_ELEM(ste_h, int16_t, H2, stw)
> -GEN_VEXT_ST_ELEM(ste_w, int32_t, H4, stl)
> -GEN_VEXT_ST_ELEM(ste_d, int64_t, H8, stq)
> +GEN_VEXT_ST_ELEM(ste_b, uint8_t, H1, stb)
> +GEN_VEXT_ST_ELEM(ste_h, uint16_t, H2, stw)
> +GEN_VEXT_ST_ELEM(ste_w, uint32_t, H4, stl)
> +GEN_VEXT_ST_ELEM(ste_d, uint64_t, H8, stq)
>
> static void vext_set_tail_elems_1s(target_ulong vl, void *vd,
> uint32_t desc, uint32_t nf,
> @@ -197,11 +210,10 @@ static void vext_set_tail_elems_1s(target_ulong vl, void *vd,
> * stride: access vector element from strided memory
> */
> static void
> -vext_ldst_stride(void *vd, void *v0, target_ulong base,
> - target_ulong stride, CPURISCVState *env,
> - uint32_t desc, uint32_t vm,
> - vext_ldst_elem_fn *ldst_elem,
> - uint32_t log2_esz, uintptr_t ra)
> +vext_ldst_stride(void *vd, void *v0, target_ulong base, target_ulong stride,
> + CPURISCVState *env, uint32_t desc, uint32_t vm,
> + vext_ldst_elem_fn_tlb *ldst_elem, uint32_t log2_esz,
> + uintptr_t ra)
> {
> uint32_t i, k;
> uint32_t nf = vext_nf(desc);
> @@ -241,10 +253,10 @@ void HELPER(NAME)(void *vd, void * v0, target_ulong base, \
> ctzl(sizeof(ETYPE)), GETPC()); \
> }
>
> -GEN_VEXT_LD_STRIDE(vlse8_v, int8_t, lde_b)
> -GEN_VEXT_LD_STRIDE(vlse16_v, int16_t, lde_h)
> -GEN_VEXT_LD_STRIDE(vlse32_v, int32_t, lde_w)
> -GEN_VEXT_LD_STRIDE(vlse64_v, int64_t, lde_d)
> +GEN_VEXT_LD_STRIDE(vlse8_v, int8_t, lde_b_tlb)
> +GEN_VEXT_LD_STRIDE(vlse16_v, int16_t, lde_h_tlb)
> +GEN_VEXT_LD_STRIDE(vlse32_v, int32_t, lde_w_tlb)
> +GEN_VEXT_LD_STRIDE(vlse64_v, int64_t, lde_d_tlb)
>
> #define GEN_VEXT_ST_STRIDE(NAME, ETYPE, STORE_FN) \
> void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
> @@ -256,42 +268,114 @@ void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
> ctzl(sizeof(ETYPE)), GETPC()); \
> }
>
> -GEN_VEXT_ST_STRIDE(vsse8_v, int8_t, ste_b)
> -GEN_VEXT_ST_STRIDE(vsse16_v, int16_t, ste_h)
> -GEN_VEXT_ST_STRIDE(vsse32_v, int32_t, ste_w)
> -GEN_VEXT_ST_STRIDE(vsse64_v, int64_t, ste_d)
> +GEN_VEXT_ST_STRIDE(vsse8_v, int8_t, ste_b_tlb)
> +GEN_VEXT_ST_STRIDE(vsse16_v, int16_t, ste_h_tlb)
> +GEN_VEXT_ST_STRIDE(vsse32_v, int32_t, ste_w_tlb)
> +GEN_VEXT_ST_STRIDE(vsse64_v, int64_t, ste_d_tlb)
>
> /*
> * unit-stride: access elements stored contiguously in memory
> */
>
> /* unmasked unit-stride load and store operation */
> +static void
> +vext_page_ldst_us(CPURISCVState *env, void *vd, target_ulong addr,
> + uint32_t elems, uint32_t nf, uint32_t max_elems,
> + uint32_t log2_esz, bool is_load, int mmu_index,
> + vext_ldst_elem_fn_tlb *ldst_tlb,
> + vext_ldst_elem_fn_host *ldst_host, uintptr_t ra)
> +{
> + void *host;
> + int i, k, flags;
> + uint32_t esz = 1 << log2_esz;
> + uint32_t size = (elems * nf) << log2_esz;
> + uint32_t evl = env->vstart + elems;
> + MMUAccessType access_type = is_load ? MMU_DATA_LOAD : MMU_DATA_STORE;
> +
> + /* Check page permission/pmp/watchpoint/etc. */
> + flags = probe_access_flags(env, adjust_addr(env, addr), size, access_type,
> + mmu_index, true, &host, ra);
> +
> + if (flags == 0) {
> + for (i = env->vstart; i < evl; ++i) {
> + k = 0;
> + while (k < nf) {
> + ldst_host(vd, i + k * max_elems, host);
> + host += esz;
> + k++;
> + }
> + }
> + env->vstart += elems;
> + } else {
> + /* load bytes from guest memory */
> + for (i = env->vstart; i < evl; env->vstart = ++i) {
> + k = 0;
> + while (k < nf) {
> + ldst_tlb(env, adjust_addr(env, addr), i + k * max_elems, vd,
> + ra);
> + addr += esz;
> + k++;
> + }
> + }
> + }
> +}
> +
> static void
> vext_ldst_us(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
> - vext_ldst_elem_fn *ldst_elem, uint32_t log2_esz, uint32_t evl,
> - uintptr_t ra)
> + vext_ldst_elem_fn_tlb *ldst_tlb,
> + vext_ldst_elem_fn_host *ldst_host, uint32_t log2_esz,
> + uint32_t evl, uintptr_t ra, bool is_load)
> {
> - uint32_t i, k;
> + uint32_t k;
> + target_ulong page_split, elems, addr;
> uint32_t nf = vext_nf(desc);
> uint32_t max_elems = vext_max_elems(desc, log2_esz);
> uint32_t esz = 1 << log2_esz;
> + uint32_t msize = nf * esz;
> + int mmu_index = riscv_env_mmu_index(env, false);
>
> if (env->vstart >= evl) {
> env->vstart = 0;
> return;
> }
>
> - /* load bytes from guest memory */
> - for (i = env->vstart; i < evl; env->vstart = ++i) {
> - k = 0;
> - while (k < nf) {
> - target_ulong addr = base + ((i * nf + k) << log2_esz);
> - ldst_elem(env, adjust_addr(env, addr), i + k * max_elems, vd, ra);
> - k++;
> + /* Calculate the page range of first page */
> + addr = base + ((env->vstart * nf) << log2_esz);
> + page_split = -(addr | TARGET_PAGE_MASK);
> + /* Get number of elements */
> + elems = page_split / msize;
> + if (unlikely(env->vstart + elems >= evl)) {
> + elems = evl - env->vstart;
> + }
> +
> + /* Load/store elements in the first page */
> + if (likely(elems)) {
> + vext_page_ldst_us(env, vd, addr, elems, nf, max_elems, log2_esz,
> + is_load, mmu_index, ldst_tlb, ldst_host, ra);
> + }
> +
> + /* Load/store elements in the second page */
> + if (unlikely(env->vstart < evl)) {
> + /* Cross page element */
> + if (unlikely(page_split % msize)) {
> + for (k = 0; k < nf; k++) {
> + addr = base + ((env->vstart * nf + k) << log2_esz);
> + ldst_tlb(env, adjust_addr(env, addr),
> + env->vstart + k * max_elems, vd, ra);
> + }
> + env->vstart++;
> }
> +
> + addr = base + ((env->vstart * nf) << log2_esz);
> + /* Get number of elements of second page */
> + elems = evl - env->vstart;
> +
> + /* Load/store elements in the second page */
> + vext_page_ldst_us(env, vd, addr, elems, nf, max_elems, log2_esz,
> + is_load, mmu_index, ldst_tlb, ldst_host, ra);
> }
> - env->vstart = 0;
>
> + env->vstart = 0;
> vext_set_tail_elems_1s(evl, vd, desc, nf, esz, max_elems);
> }
>
> @@ -300,47 +384,47 @@ vext_ldst_us(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
> * stride, stride = NF * sizeof (ETYPE)
> */
>
> -#define GEN_VEXT_LD_US(NAME, ETYPE, LOAD_FN) \
> -void HELPER(NAME##_mask)(void *vd, void *v0, target_ulong base, \
> - CPURISCVState *env, uint32_t desc) \
> -{ \
> - uint32_t stride = vext_nf(desc) << ctzl(sizeof(ETYPE)); \
> - vext_ldst_stride(vd, v0, base, stride, env, desc, false, LOAD_FN, \
> - ctzl(sizeof(ETYPE)), GETPC()); \
> -} \
> - \
> -void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
> - CPURISCVState *env, uint32_t desc) \
> -{ \
> - vext_ldst_us(vd, base, env, desc, LOAD_FN, \
> - ctzl(sizeof(ETYPE)), env->vl, GETPC()); \
> +#define GEN_VEXT_LD_US(NAME, ETYPE, LOAD_FN_TLB, LOAD_FN_HOST) \
> +void HELPER(NAME##_mask)(void *vd, void *v0, target_ulong base, \
> + CPURISCVState *env, uint32_t desc) \
> +{ \
> + uint32_t stride = vext_nf(desc) << ctzl(sizeof(ETYPE)); \
> + vext_ldst_stride(vd, v0, base, stride, env, desc, false, \
> + LOAD_FN_TLB, ctzl(sizeof(ETYPE)), GETPC()); \
> +} \
> + \
> +void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
> + CPURISCVState *env, uint32_t desc) \
> +{ \
> + vext_ldst_us(vd, base, env, desc, LOAD_FN_TLB, LOAD_FN_HOST, \
> + ctzl(sizeof(ETYPE)), env->vl, GETPC(), true); \
> }
>
> -GEN_VEXT_LD_US(vle8_v, int8_t, lde_b)
> -GEN_VEXT_LD_US(vle16_v, int16_t, lde_h)
> -GEN_VEXT_LD_US(vle32_v, int32_t, lde_w)
> -GEN_VEXT_LD_US(vle64_v, int64_t, lde_d)
> +GEN_VEXT_LD_US(vle8_v, int8_t, lde_b_tlb, lde_b_host)
> +GEN_VEXT_LD_US(vle16_v, int16_t, lde_h_tlb, lde_h_host)
> +GEN_VEXT_LD_US(vle32_v, int32_t, lde_w_tlb, lde_w_host)
> +GEN_VEXT_LD_US(vle64_v, int64_t, lde_d_tlb, lde_d_host)
>
> -#define GEN_VEXT_ST_US(NAME, ETYPE, STORE_FN) \
> +#define GEN_VEXT_ST_US(NAME, ETYPE, STORE_FN_TLB, STORE_FN_HOST) \
> void HELPER(NAME##_mask)(void *vd, void *v0, target_ulong base, \
> CPURISCVState *env, uint32_t desc) \
> { \
> uint32_t stride = vext_nf(desc) << ctzl(sizeof(ETYPE)); \
> - vext_ldst_stride(vd, v0, base, stride, env, desc, false, STORE_FN, \
> - ctzl(sizeof(ETYPE)), GETPC()); \
> + vext_ldst_stride(vd, v0, base, stride, env, desc, false, \
> + STORE_FN_TLB, ctzl(sizeof(ETYPE)), GETPC()); \
> } \
> \
> void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
> CPURISCVState *env, uint32_t desc) \
> { \
> - vext_ldst_us(vd, base, env, desc, STORE_FN, \
> - ctzl(sizeof(ETYPE)), env->vl, GETPC()); \
> + vext_ldst_us(vd, base, env, desc, STORE_FN_TLB, STORE_FN_HOST, \
> + ctzl(sizeof(ETYPE)), env->vl, GETPC(), false); \
> }
>
> -GEN_VEXT_ST_US(vse8_v, int8_t, ste_b)
> -GEN_VEXT_ST_US(vse16_v, int16_t, ste_h)
> -GEN_VEXT_ST_US(vse32_v, int32_t, ste_w)
> -GEN_VEXT_ST_US(vse64_v, int64_t, ste_d)
> +GEN_VEXT_ST_US(vse8_v, int8_t, ste_b_tlb, ste_b_host)
> +GEN_VEXT_ST_US(vse16_v, int16_t, ste_h_tlb, ste_h_host)
> +GEN_VEXT_ST_US(vse32_v, int32_t, ste_w_tlb, ste_w_host)
> +GEN_VEXT_ST_US(vse64_v, int64_t, ste_d_tlb, ste_d_host)
>
> /*
> * unit stride mask load and store, EEW = 1
> @@ -350,8 +434,8 @@ void HELPER(vlm_v)(void *vd, void *v0, target_ulong base,
> {
> /* evl = ceil(vl/8) */
> uint8_t evl = (env->vl + 7) >> 3;
> - vext_ldst_us(vd, base, env, desc, lde_b,
> - 0, evl, GETPC());
> + vext_ldst_us(vd, base, env, desc, lde_b_tlb, lde_b_host,
> + 0, evl, GETPC(), true);
> }
>
> void HELPER(vsm_v)(void *vd, void *v0, target_ulong base,
> @@ -359,8 +443,8 @@ void HELPER(vsm_v)(void *vd, void *v0, target_ulong base,
> {
> /* evl = ceil(vl/8) */
> uint8_t evl = (env->vl + 7) >> 3;
> - vext_ldst_us(vd, base, env, desc, ste_b,
> - 0, evl, GETPC());
> + vext_ldst_us(vd, base, env, desc, ste_b_tlb, ste_b_host,
> + 0, evl, GETPC(), false);
> }
>
> /*
> @@ -385,7 +469,7 @@ static inline void
> vext_ldst_index(void *vd, void *v0, target_ulong base,
> void *vs2, CPURISCVState *env, uint32_t desc,
> vext_get_index_addr get_index_addr,
> - vext_ldst_elem_fn *ldst_elem,
> + vext_ldst_elem_fn_tlb *ldst_elem,
> uint32_t log2_esz, uintptr_t ra)
> {
> uint32_t i, k;
> @@ -426,22 +510,22 @@ void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
> LOAD_FN, ctzl(sizeof(ETYPE)), GETPC()); \
> }
>
> -GEN_VEXT_LD_INDEX(vlxei8_8_v, int8_t, idx_b, lde_b)
> -GEN_VEXT_LD_INDEX(vlxei8_16_v, int16_t, idx_b, lde_h)
> -GEN_VEXT_LD_INDEX(vlxei8_32_v, int32_t, idx_b, lde_w)
> -GEN_VEXT_LD_INDEX(vlxei8_64_v, int64_t, idx_b, lde_d)
> -GEN_VEXT_LD_INDEX(vlxei16_8_v, int8_t, idx_h, lde_b)
> -GEN_VEXT_LD_INDEX(vlxei16_16_v, int16_t, idx_h, lde_h)
> -GEN_VEXT_LD_INDEX(vlxei16_32_v, int32_t, idx_h, lde_w)
> -GEN_VEXT_LD_INDEX(vlxei16_64_v, int64_t, idx_h, lde_d)
> -GEN_VEXT_LD_INDEX(vlxei32_8_v, int8_t, idx_w, lde_b)
> -GEN_VEXT_LD_INDEX(vlxei32_16_v, int16_t, idx_w, lde_h)
> -GEN_VEXT_LD_INDEX(vlxei32_32_v, int32_t, idx_w, lde_w)
> -GEN_VEXT_LD_INDEX(vlxei32_64_v, int64_t, idx_w, lde_d)
> -GEN_VEXT_LD_INDEX(vlxei64_8_v, int8_t, idx_d, lde_b)
> -GEN_VEXT_LD_INDEX(vlxei64_16_v, int16_t, idx_d, lde_h)
> -GEN_VEXT_LD_INDEX(vlxei64_32_v, int32_t, idx_d, lde_w)
> -GEN_VEXT_LD_INDEX(vlxei64_64_v, int64_t, idx_d, lde_d)
> +GEN_VEXT_LD_INDEX(vlxei8_8_v, int8_t, idx_b, lde_b_tlb)
> +GEN_VEXT_LD_INDEX(vlxei8_16_v, int16_t, idx_b, lde_h_tlb)
> +GEN_VEXT_LD_INDEX(vlxei8_32_v, int32_t, idx_b, lde_w_tlb)
> +GEN_VEXT_LD_INDEX(vlxei8_64_v, int64_t, idx_b, lde_d_tlb)
> +GEN_VEXT_LD_INDEX(vlxei16_8_v, int8_t, idx_h, lde_b_tlb)
> +GEN_VEXT_LD_INDEX(vlxei16_16_v, int16_t, idx_h, lde_h_tlb)
> +GEN_VEXT_LD_INDEX(vlxei16_32_v, int32_t, idx_h, lde_w_tlb)
> +GEN_VEXT_LD_INDEX(vlxei16_64_v, int64_t, idx_h, lde_d_tlb)
> +GEN_VEXT_LD_INDEX(vlxei32_8_v, int8_t, idx_w, lde_b_tlb)
> +GEN_VEXT_LD_INDEX(vlxei32_16_v, int16_t, idx_w, lde_h_tlb)
> +GEN_VEXT_LD_INDEX(vlxei32_32_v, int32_t, idx_w, lde_w_tlb)
> +GEN_VEXT_LD_INDEX(vlxei32_64_v, int64_t, idx_w, lde_d_tlb)
> +GEN_VEXT_LD_INDEX(vlxei64_8_v, int8_t, idx_d, lde_b_tlb)
> +GEN_VEXT_LD_INDEX(vlxei64_16_v, int16_t, idx_d, lde_h_tlb)
> +GEN_VEXT_LD_INDEX(vlxei64_32_v, int32_t, idx_d, lde_w_tlb)
> +GEN_VEXT_LD_INDEX(vlxei64_64_v, int64_t, idx_d, lde_d_tlb)
>
> #define GEN_VEXT_ST_INDEX(NAME, ETYPE, INDEX_FN, STORE_FN) \
> void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
> @@ -452,22 +536,22 @@ void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
> GETPC()); \
> }
>
> -GEN_VEXT_ST_INDEX(vsxei8_8_v, int8_t, idx_b, ste_b)
> -GEN_VEXT_ST_INDEX(vsxei8_16_v, int16_t, idx_b, ste_h)
> -GEN_VEXT_ST_INDEX(vsxei8_32_v, int32_t, idx_b, ste_w)
> -GEN_VEXT_ST_INDEX(vsxei8_64_v, int64_t, idx_b, ste_d)
> -GEN_VEXT_ST_INDEX(vsxei16_8_v, int8_t, idx_h, ste_b)
> -GEN_VEXT_ST_INDEX(vsxei16_16_v, int16_t, idx_h, ste_h)
> -GEN_VEXT_ST_INDEX(vsxei16_32_v, int32_t, idx_h, ste_w)
> -GEN_VEXT_ST_INDEX(vsxei16_64_v, int64_t, idx_h, ste_d)
> -GEN_VEXT_ST_INDEX(vsxei32_8_v, int8_t, idx_w, ste_b)
> -GEN_VEXT_ST_INDEX(vsxei32_16_v, int16_t, idx_w, ste_h)
> -GEN_VEXT_ST_INDEX(vsxei32_32_v, int32_t, idx_w, ste_w)
> -GEN_VEXT_ST_INDEX(vsxei32_64_v, int64_t, idx_w, ste_d)
> -GEN_VEXT_ST_INDEX(vsxei64_8_v, int8_t, idx_d, ste_b)
> -GEN_VEXT_ST_INDEX(vsxei64_16_v, int16_t, idx_d, ste_h)
> -GEN_VEXT_ST_INDEX(vsxei64_32_v, int32_t, idx_d, ste_w)
> -GEN_VEXT_ST_INDEX(vsxei64_64_v, int64_t, idx_d, ste_d)
> +GEN_VEXT_ST_INDEX(vsxei8_8_v, int8_t, idx_b, ste_b_tlb)
> +GEN_VEXT_ST_INDEX(vsxei8_16_v, int16_t, idx_b, ste_h_tlb)
> +GEN_VEXT_ST_INDEX(vsxei8_32_v, int32_t, idx_b, ste_w_tlb)
> +GEN_VEXT_ST_INDEX(vsxei8_64_v, int64_t, idx_b, ste_d_tlb)
> +GEN_VEXT_ST_INDEX(vsxei16_8_v, int8_t, idx_h, ste_b_tlb)
> +GEN_VEXT_ST_INDEX(vsxei16_16_v, int16_t, idx_h, ste_h_tlb)
> +GEN_VEXT_ST_INDEX(vsxei16_32_v, int32_t, idx_h, ste_w_tlb)
> +GEN_VEXT_ST_INDEX(vsxei16_64_v, int64_t, idx_h, ste_d_tlb)
> +GEN_VEXT_ST_INDEX(vsxei32_8_v, int8_t, idx_w, ste_b_tlb)
> +GEN_VEXT_ST_INDEX(vsxei32_16_v, int16_t, idx_w, ste_h_tlb)
> +GEN_VEXT_ST_INDEX(vsxei32_32_v, int32_t, idx_w, ste_w_tlb)
> +GEN_VEXT_ST_INDEX(vsxei32_64_v, int64_t, idx_w, ste_d_tlb)
> +GEN_VEXT_ST_INDEX(vsxei64_8_v, int8_t, idx_d, ste_b_tlb)
> +GEN_VEXT_ST_INDEX(vsxei64_16_v, int16_t, idx_d, ste_h_tlb)
> +GEN_VEXT_ST_INDEX(vsxei64_32_v, int32_t, idx_d, ste_w_tlb)
> +GEN_VEXT_ST_INDEX(vsxei64_64_v, int64_t, idx_d, ste_d_tlb)
>
> /*
> * unit-stride fault-only-fisrt load instructions
> @@ -475,7 +559,7 @@ GEN_VEXT_ST_INDEX(vsxei64_64_v, int64_t, idx_d, ste_d)
> static inline void
> vext_ldff(void *vd, void *v0, target_ulong base,
> CPURISCVState *env, uint32_t desc,
> - vext_ldst_elem_fn *ldst_elem,
> + vext_ldst_elem_fn_tlb *ldst_elem,
> uint32_t log2_esz, uintptr_t ra)
> {
> uint32_t i, k, vl = 0;
> @@ -561,10 +645,10 @@ void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
> ctzl(sizeof(ETYPE)), GETPC()); \
> }
>
> -GEN_VEXT_LDFF(vle8ff_v, int8_t, lde_b)
> -GEN_VEXT_LDFF(vle16ff_v, int16_t, lde_h)
> -GEN_VEXT_LDFF(vle32ff_v, int32_t, lde_w)
> -GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d)
> +GEN_VEXT_LDFF(vle8ff_v, int8_t, lde_b_tlb)
> +GEN_VEXT_LDFF(vle16ff_v, int16_t, lde_h_tlb)
> +GEN_VEXT_LDFF(vle32ff_v, int32_t, lde_w_tlb)
> +GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d_tlb)
>
> #define DO_SWAP(N, M) (M)
> #define DO_AND(N, M) (N & M)
> @@ -581,7 +665,8 @@ GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d)
> */
> static void
> vext_ldst_whole(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
> - vext_ldst_elem_fn *ldst_elem, uint32_t log2_esz, uintptr_t ra)
> + vext_ldst_elem_fn_tlb *ldst_elem, uint32_t log2_esz,
> + uintptr_t ra)
> {
> uint32_t i, k, off, pos;
> uint32_t nf = vext_nf(desc);
> @@ -625,22 +710,22 @@ void HELPER(NAME)(void *vd, target_ulong base, \
> ctzl(sizeof(ETYPE)), GETPC()); \
> }
>
> -GEN_VEXT_LD_WHOLE(vl1re8_v, int8_t, lde_b)
> -GEN_VEXT_LD_WHOLE(vl1re16_v, int16_t, lde_h)
> -GEN_VEXT_LD_WHOLE(vl1re32_v, int32_t, lde_w)
> -GEN_VEXT_LD_WHOLE(vl1re64_v, int64_t, lde_d)
> -GEN_VEXT_LD_WHOLE(vl2re8_v, int8_t, lde_b)
> -GEN_VEXT_LD_WHOLE(vl2re16_v, int16_t, lde_h)
> -GEN_VEXT_LD_WHOLE(vl2re32_v, int32_t, lde_w)
> -GEN_VEXT_LD_WHOLE(vl2re64_v, int64_t, lde_d)
> -GEN_VEXT_LD_WHOLE(vl4re8_v, int8_t, lde_b)
> -GEN_VEXT_LD_WHOLE(vl4re16_v, int16_t, lde_h)
> -GEN_VEXT_LD_WHOLE(vl4re32_v, int32_t, lde_w)
> -GEN_VEXT_LD_WHOLE(vl4re64_v, int64_t, lde_d)
> -GEN_VEXT_LD_WHOLE(vl8re8_v, int8_t, lde_b)
> -GEN_VEXT_LD_WHOLE(vl8re16_v, int16_t, lde_h)
> -GEN_VEXT_LD_WHOLE(vl8re32_v, int32_t, lde_w)
> -GEN_VEXT_LD_WHOLE(vl8re64_v, int64_t, lde_d)
> +GEN_VEXT_LD_WHOLE(vl1re8_v, int8_t, lde_b_tlb)
> +GEN_VEXT_LD_WHOLE(vl1re16_v, int16_t, lde_h_tlb)
> +GEN_VEXT_LD_WHOLE(vl1re32_v, int32_t, lde_w_tlb)
> +GEN_VEXT_LD_WHOLE(vl1re64_v, int64_t, lde_d_tlb)
> +GEN_VEXT_LD_WHOLE(vl2re8_v, int8_t, lde_b_tlb)
> +GEN_VEXT_LD_WHOLE(vl2re16_v, int16_t, lde_h_tlb)
> +GEN_VEXT_LD_WHOLE(vl2re32_v, int32_t, lde_w_tlb)
> +GEN_VEXT_LD_WHOLE(vl2re64_v, int64_t, lde_d_tlb)
> +GEN_VEXT_LD_WHOLE(vl4re8_v, int8_t, lde_b_tlb)
> +GEN_VEXT_LD_WHOLE(vl4re16_v, int16_t, lde_h_tlb)
> +GEN_VEXT_LD_WHOLE(vl4re32_v, int32_t, lde_w_tlb)
> +GEN_VEXT_LD_WHOLE(vl4re64_v, int64_t, lde_d_tlb)
> +GEN_VEXT_LD_WHOLE(vl8re8_v, int8_t, lde_b_tlb)
> +GEN_VEXT_LD_WHOLE(vl8re16_v, int16_t, lde_h_tlb)
> +GEN_VEXT_LD_WHOLE(vl8re32_v, int32_t, lde_w_tlb)
> +GEN_VEXT_LD_WHOLE(vl8re64_v, int64_t, lde_d_tlb)
>
> #define GEN_VEXT_ST_WHOLE(NAME, ETYPE, STORE_FN) \
> void HELPER(NAME)(void *vd, target_ulong base, \
> @@ -650,10 +735,10 @@ void HELPER(NAME)(void *vd, target_ulong base, \
> ctzl(sizeof(ETYPE)), GETPC()); \
> }
>
> -GEN_VEXT_ST_WHOLE(vs1r_v, int8_t, ste_b)
> -GEN_VEXT_ST_WHOLE(vs2r_v, int8_t, ste_b)
> -GEN_VEXT_ST_WHOLE(vs4r_v, int8_t, ste_b)
> -GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b)
> +GEN_VEXT_ST_WHOLE(vs1r_v, int8_t, ste_b_tlb)
> +GEN_VEXT_ST_WHOLE(vs2r_v, int8_t, ste_b_tlb)
> +GEN_VEXT_ST_WHOLE(vs4r_v, int8_t, ste_b_tlb)
> +GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b_tlb)
>
> /*
> * Vector Integer Arithmetic Instructions
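To illustrate the first-page split used by this fast path, a small
standalone sketch (assuming 4 KiB target pages; the address and element
size are made-up values):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_PAGE_SIZE 4096
    #define TARGET_PAGE_MASK (~(uint64_t)(TARGET_PAGE_SIZE - 1))

    int main(void)
    {
        uint64_t addr = 0x80002ff8;                        /* 8 bytes before page end */
        uint64_t page_split = -(addr | TARGET_PAGE_MASK);  /* bytes left in this page */
        uint64_t esz = 4;                                  /* 32-bit elements */

        printf("bytes to page end:   %" PRIu64 "\n", page_split);       /* 8 */
        printf("elems in first page: %" PRIu64 "\n", page_split / esz); /* 2 */
        printf("cross-page element:  %s\n", page_split % esz ? "yes" : "no");
        return 0;
    }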
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v6 4/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unit-stride whole register load/store
2024-09-18 17:14 ` [PATCH v6 4/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unit-stride whole register load/store Max Chou
@ 2024-10-30 16:31 ` Daniel Henrique Barboza
0 siblings, 0 replies; 18+ messages in thread
From: Daniel Henrique Barboza @ 2024-10-30 16:31 UTC (permalink / raw)
To: Max Chou, qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li, Liu Zhiwei,
richard.henderson, negge
On 9/18/24 2:14 PM, Max Chou wrote:
> The vector unit-stride whole register load/store instructions are
> similar to the unmasked unit-stride load/store instructions and are
> likewise suitable for the fast path that accesses host ram directly.
>
> Because the vector whole register load/store instructions do not need to
> handle tail-agnostic elements, the vstart early exit check is removed.
>
> Signed-off-by: Max Chou <max.chou@sifive.com>
> ---
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> target/riscv/vector_helper.c | 129 +++++++++++++++++++----------------
> 1 file changed, 70 insertions(+), 59 deletions(-)
>
> diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
> index c2fcf8b3a00..824e6401736 100644
> --- a/target/riscv/vector_helper.c
> +++ b/target/riscv/vector_helper.c
> @@ -665,80 +665,91 @@ GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d_tlb)
> */
> static void
> vext_ldst_whole(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
> - vext_ldst_elem_fn_tlb *ldst_elem, uint32_t log2_esz,
> - uintptr_t ra)
> + vext_ldst_elem_fn_tlb *ldst_tlb,
> + vext_ldst_elem_fn_host *ldst_host, uint32_t log2_esz,
> + uintptr_t ra, bool is_load)
> {
> - uint32_t i, k, off, pos;
> + target_ulong page_split, elems, addr;
> uint32_t nf = vext_nf(desc);
> uint32_t vlenb = riscv_cpu_cfg(env)->vlenb;
> uint32_t max_elems = vlenb >> log2_esz;
> + uint32_t evl = nf * max_elems;
> + uint32_t esz = 1 << log2_esz;
> + int mmu_index = riscv_env_mmu_index(env, false);
>
> - if (env->vstart >= ((vlenb * nf) >> log2_esz)) {
> - env->vstart = 0;
> - return;
> + /* Calculate the page range of first page */
> + addr = base + (env->vstart << log2_esz);
> + page_split = -(addr | TARGET_PAGE_MASK);
> + /* Get number of elements */
> + elems = page_split / esz;
> + if (unlikely(env->vstart + elems >= evl)) {
> + elems = evl - env->vstart;
> }
>
> - k = env->vstart / max_elems;
> - off = env->vstart % max_elems;
> -
> - if (off) {
> - /* load/store rest of elements of current segment pointed by vstart */
> - for (pos = off; pos < max_elems; pos++, env->vstart++) {
> - target_ulong addr = base + ((pos + k * max_elems) << log2_esz);
> - ldst_elem(env, adjust_addr(env, addr), pos + k * max_elems, vd,
> - ra);
> - }
> - k++;
> + /* Load/store elements in the first page */
> + if (likely(elems)) {
> + vext_page_ldst_us(env, vd, addr, elems, 1, max_elems, log2_esz,
> + is_load, mmu_index, ldst_tlb, ldst_host, ra);
> }
>
> - /* load/store elements for rest of segments */
> - for (; k < nf; k++) {
> - for (i = 0; i < max_elems; i++, env->vstart++) {
> - target_ulong addr = base + ((i + k * max_elems) << log2_esz);
> - ldst_elem(env, adjust_addr(env, addr), i + k * max_elems, vd, ra);
> + /* Load/store elements in the second page */
> + if (unlikely(env->vstart < evl)) {
> + /* Cross page element */
> + if (unlikely(page_split % esz)) {
> + addr = base + (env->vstart << log2_esz);
> + ldst_tlb(env, adjust_addr(env, addr), env->vstart, vd, ra);
> + env->vstart++;
> }
> +
> + addr = base + (env->vstart << log2_esz);
> + /* Get number of elements of second page */
> + elems = evl - env->vstart;
> +
> + /* Load/store elements in the second page */
> + vext_page_ldst_us(env, vd, addr, elems, 1, max_elems, log2_esz,
> + is_load, mmu_index, ldst_tlb, ldst_host, ra);
> }
>
> env->vstart = 0;
> }
>
> -#define GEN_VEXT_LD_WHOLE(NAME, ETYPE, LOAD_FN) \
> -void HELPER(NAME)(void *vd, target_ulong base, \
> - CPURISCVState *env, uint32_t desc) \
> -{ \
> - vext_ldst_whole(vd, base, env, desc, LOAD_FN, \
> - ctzl(sizeof(ETYPE)), GETPC()); \
> -}
> -
> -GEN_VEXT_LD_WHOLE(vl1re8_v, int8_t, lde_b_tlb)
> -GEN_VEXT_LD_WHOLE(vl1re16_v, int16_t, lde_h_tlb)
> -GEN_VEXT_LD_WHOLE(vl1re32_v, int32_t, lde_w_tlb)
> -GEN_VEXT_LD_WHOLE(vl1re64_v, int64_t, lde_d_tlb)
> -GEN_VEXT_LD_WHOLE(vl2re8_v, int8_t, lde_b_tlb)
> -GEN_VEXT_LD_WHOLE(vl2re16_v, int16_t, lde_h_tlb)
> -GEN_VEXT_LD_WHOLE(vl2re32_v, int32_t, lde_w_tlb)
> -GEN_VEXT_LD_WHOLE(vl2re64_v, int64_t, lde_d_tlb)
> -GEN_VEXT_LD_WHOLE(vl4re8_v, int8_t, lde_b_tlb)
> -GEN_VEXT_LD_WHOLE(vl4re16_v, int16_t, lde_h_tlb)
> -GEN_VEXT_LD_WHOLE(vl4re32_v, int32_t, lde_w_tlb)
> -GEN_VEXT_LD_WHOLE(vl4re64_v, int64_t, lde_d_tlb)
> -GEN_VEXT_LD_WHOLE(vl8re8_v, int8_t, lde_b_tlb)
> -GEN_VEXT_LD_WHOLE(vl8re16_v, int16_t, lde_h_tlb)
> -GEN_VEXT_LD_WHOLE(vl8re32_v, int32_t, lde_w_tlb)
> -GEN_VEXT_LD_WHOLE(vl8re64_v, int64_t, lde_d_tlb)
> -
> -#define GEN_VEXT_ST_WHOLE(NAME, ETYPE, STORE_FN) \
> -void HELPER(NAME)(void *vd, target_ulong base, \
> - CPURISCVState *env, uint32_t desc) \
> -{ \
> - vext_ldst_whole(vd, base, env, desc, STORE_FN, \
> - ctzl(sizeof(ETYPE)), GETPC()); \
> -}
> -
> -GEN_VEXT_ST_WHOLE(vs1r_v, int8_t, ste_b_tlb)
> -GEN_VEXT_ST_WHOLE(vs2r_v, int8_t, ste_b_tlb)
> -GEN_VEXT_ST_WHOLE(vs4r_v, int8_t, ste_b_tlb)
> -GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b_tlb)
> +#define GEN_VEXT_LD_WHOLE(NAME, ETYPE, LOAD_FN_TLB, LOAD_FN_HOST) \
> +void HELPER(NAME)(void *vd, target_ulong base, CPURISCVState *env, \
> + uint32_t desc) \
> +{ \
> + vext_ldst_whole(vd, base, env, desc, LOAD_FN_TLB, LOAD_FN_HOST, \
> + ctzl(sizeof(ETYPE)), GETPC(), true); \
> +}
> +
> +GEN_VEXT_LD_WHOLE(vl1re8_v, int8_t, lde_b_tlb, lde_b_host)
> +GEN_VEXT_LD_WHOLE(vl1re16_v, int16_t, lde_h_tlb, lde_h_host)
> +GEN_VEXT_LD_WHOLE(vl1re32_v, int32_t, lde_w_tlb, lde_w_host)
> +GEN_VEXT_LD_WHOLE(vl1re64_v, int64_t, lde_d_tlb, lde_d_host)
> +GEN_VEXT_LD_WHOLE(vl2re8_v, int8_t, lde_b_tlb, lde_b_host)
> +GEN_VEXT_LD_WHOLE(vl2re16_v, int16_t, lde_h_tlb, lde_h_host)
> +GEN_VEXT_LD_WHOLE(vl2re32_v, int32_t, lde_w_tlb, lde_w_host)
> +GEN_VEXT_LD_WHOLE(vl2re64_v, int64_t, lde_d_tlb, lde_d_host)
> +GEN_VEXT_LD_WHOLE(vl4re8_v, int8_t, lde_b_tlb, lde_b_host)
> +GEN_VEXT_LD_WHOLE(vl4re16_v, int16_t, lde_h_tlb, lde_h_host)
> +GEN_VEXT_LD_WHOLE(vl4re32_v, int32_t, lde_w_tlb, lde_w_host)
> +GEN_VEXT_LD_WHOLE(vl4re64_v, int64_t, lde_d_tlb, lde_d_host)
> +GEN_VEXT_LD_WHOLE(vl8re8_v, int8_t, lde_b_tlb, lde_b_host)
> +GEN_VEXT_LD_WHOLE(vl8re16_v, int16_t, lde_h_tlb, lde_h_host)
> +GEN_VEXT_LD_WHOLE(vl8re32_v, int32_t, lde_w_tlb, lde_w_host)
> +GEN_VEXT_LD_WHOLE(vl8re64_v, int64_t, lde_d_tlb, lde_d_host)
> +
> +#define GEN_VEXT_ST_WHOLE(NAME, ETYPE, STORE_FN_TLB, STORE_FN_HOST) \
> +void HELPER(NAME)(void *vd, target_ulong base, CPURISCVState *env, \
> + uint32_t desc) \
> +{ \
> + vext_ldst_whole(vd, base, env, desc, STORE_FN_TLB, STORE_FN_HOST, \
> + ctzl(sizeof(ETYPE)), GETPC(), false); \
> +}
> +
> +GEN_VEXT_ST_WHOLE(vs1r_v, int8_t, ste_b_tlb, ste_b_host)
> +GEN_VEXT_ST_WHOLE(vs2r_v, int8_t, ste_b_tlb, ste_b_host)
> +GEN_VEXT_ST_WHOLE(vs4r_v, int8_t, ste_b_tlb, ste_b_host)
> +GEN_VEXT_ST_WHOLE(vs8r_v, int8_t, ste_b_tlb, ste_b_host)
>
> /*
> * Vector Integer Arithmetic Instructions
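A small worked example of the element count used here (illustrative
numbers): vl2re32.v with VLEN = 128 bits, i.e. vlenb = 16:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t vlenb = 16;                    /* bytes per vector register */
        uint32_t log2_esz = 2;                  /* 32-bit elements */
        uint32_t nf = 2;                        /* vl2re32.v touches 2 registers */

        uint32_t max_elems = vlenb >> log2_esz; /* 4 elements per register */
        uint32_t evl = nf * max_elems;          /* 8 elements in total */

        printf("max_elems=%u evl=%u\n", max_elems, evl);
        return 0;
    }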
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v6 5/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unit-stride load-only-first load instructions
2024-09-18 17:14 ` [PATCH v6 5/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unit-stride load-only-first load instructions Max Chou
@ 2024-10-30 16:35 ` Daniel Henrique Barboza
0 siblings, 0 replies; 18+ messages in thread
From: Daniel Henrique Barboza @ 2024-10-30 16:35 UTC (permalink / raw)
To: Max Chou, qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li, Liu Zhiwei,
richard.henderson, negge
On 9/18/24 2:14 PM, Max Chou wrote:
> The unmasked unit-stride fault-only-first load instructions are similar
> to the unmasked unit-stride load/store instructions and are likewise
> suitable for the fast path that accesses host ram directly.
>
> Signed-off-by: Max Chou <max.chou@sifive.com>
> ---
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> target/riscv/vector_helper.c | 98 ++++++++++++++++++++++++++----------
> 1 file changed, 71 insertions(+), 27 deletions(-)
>
> diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
> index 824e6401736..59009a940ff 100644
> --- a/target/riscv/vector_helper.c
> +++ b/target/riscv/vector_helper.c
> @@ -557,18 +557,18 @@ GEN_VEXT_ST_INDEX(vsxei64_64_v, int64_t, idx_d, ste_d_tlb)
> * unit-stride fault-only-fisrt load instructions
> */
> static inline void
> -vext_ldff(void *vd, void *v0, target_ulong base,
> - CPURISCVState *env, uint32_t desc,
> - vext_ldst_elem_fn_tlb *ldst_elem,
> - uint32_t log2_esz, uintptr_t ra)
> +vext_ldff(void *vd, void *v0, target_ulong base, CPURISCVState *env,
> + uint32_t desc, vext_ldst_elem_fn_tlb *ldst_tlb,
> + vext_ldst_elem_fn_host *ldst_host, uint32_t log2_esz, uintptr_t ra)
> {
> uint32_t i, k, vl = 0;
> uint32_t nf = vext_nf(desc);
> uint32_t vm = vext_vm(desc);
> uint32_t max_elems = vext_max_elems(desc, log2_esz);
> uint32_t esz = 1 << log2_esz;
> + uint32_t msize = nf * esz;
> uint32_t vma = vext_vma(desc);
> - target_ulong addr, offset, remain;
> + target_ulong addr, offset, remain, page_split, elems;
> int mmu_index = riscv_env_mmu_index(env, false);
>
> VSTART_CHECK_EARLY_EXIT(env);
> @@ -617,19 +617,63 @@ ProbeSuccess:
> if (vl != 0) {
> env->vl = vl;
> }
> - for (i = env->vstart; i < env->vl; i++) {
> - k = 0;
> - while (k < nf) {
> - if (!vm && !vext_elem_mask(v0, i)) {
> - /* set masked-off elements to 1s */
> - vext_set_elems_1s(vd, vma, (i + k * max_elems) * esz,
> - (i + k * max_elems + 1) * esz);
> - k++;
> - continue;
> +
> + if (env->vstart < env->vl) {
> + if (vm) {
> + /* Calculate the page range of first page */
> + addr = base + ((env->vstart * nf) << log2_esz);
> + page_split = -(addr | TARGET_PAGE_MASK);
> + /* Get number of elements */
> + elems = page_split / msize;
> + if (unlikely(env->vstart + elems >= env->vl)) {
> + elems = env->vl - env->vstart;
> + }
> +
> + /* Load/store elements in the first page */
> + if (likely(elems)) {
> + vext_page_ldst_us(env, vd, addr, elems, nf, max_elems,
> + log2_esz, true, mmu_index, ldst_tlb,
> + ldst_host, ra);
> + }
> +
> + /* Load/store elements in the second page */
> + if (unlikely(env->vstart < env->vl)) {
> + /* Cross page element */
> + if (unlikely(page_split % msize)) {
> + for (k = 0; k < nf; k++) {
> + addr = base + ((env->vstart * nf + k) << log2_esz);
> + ldst_tlb(env, adjust_addr(env, addr),
> + env->vstart + k * max_elems, vd, ra);
> + }
> + env->vstart++;
> + }
> +
> + addr = base + ((env->vstart * nf) << log2_esz);
> + /* Get number of elements of second page */
> + elems = env->vl - env->vstart;
> +
> + /* Load/store elements in the second page */
> + vext_page_ldst_us(env, vd, addr, elems, nf, max_elems,
> + log2_esz, true, mmu_index, ldst_tlb,
> + ldst_host, ra);
> + }
> + } else {
> + for (i = env->vstart; i < env->vl; i++) {
> + k = 0;
> + while (k < nf) {
> + if (!vext_elem_mask(v0, i)) {
> + /* set masked-off elements to 1s */
> + vext_set_elems_1s(vd, vma, (i + k * max_elems) * esz,
> + (i + k * max_elems + 1) * esz);
> + k++;
> + continue;
> + }
> + addr = base + ((i * nf + k) << log2_esz);
> + ldst_tlb(env, adjust_addr(env, addr), i + k * max_elems,
> + vd, ra);
> + k++;
> + }
> }
> - addr = base + ((i * nf + k) << log2_esz);
> - ldst_elem(env, adjust_addr(env, addr), i + k * max_elems, vd, ra);
> - k++;
> }
> }
> env->vstart = 0;
> @@ -637,18 +681,18 @@ ProbeSuccess:
> vext_set_tail_elems_1s(env->vl, vd, desc, nf, esz, max_elems);
> }
>
> -#define GEN_VEXT_LDFF(NAME, ETYPE, LOAD_FN) \
> -void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
> - CPURISCVState *env, uint32_t desc) \
> -{ \
> - vext_ldff(vd, v0, base, env, desc, LOAD_FN, \
> - ctzl(sizeof(ETYPE)), GETPC()); \
> +#define GEN_VEXT_LDFF(NAME, ETYPE, LOAD_FN_TLB, LOAD_FN_HOST) \
> +void HELPER(NAME)(void *vd, void *v0, target_ulong base, \
> + CPURISCVState *env, uint32_t desc) \
> +{ \
> + vext_ldff(vd, v0, base, env, desc, LOAD_FN_TLB, \
> + LOAD_FN_HOST, ctzl(sizeof(ETYPE)), GETPC()); \
> }
>
> -GEN_VEXT_LDFF(vle8ff_v, int8_t, lde_b_tlb)
> -GEN_VEXT_LDFF(vle16ff_v, int16_t, lde_h_tlb)
> -GEN_VEXT_LDFF(vle32ff_v, int32_t, lde_w_tlb)
> -GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d_tlb)
> +GEN_VEXT_LDFF(vle8ff_v, int8_t, lde_b_tlb, lde_b_host)
> +GEN_VEXT_LDFF(vle16ff_v, int16_t, lde_h_tlb, lde_h_host)
> +GEN_VEXT_LDFF(vle32ff_v, int32_t, lde_w_tlb, lde_w_host)
> +GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d_tlb, lde_d_host)
>
> #define DO_SWAP(N, M) (M)
> #define DO_AND(N, M) (N & M)
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v6 7/7] target/riscv: Inline unit-stride ld/st and corresponding functions for performance
2024-09-18 17:14 ` [PATCH v6 7/7] target/riscv: Inline unit-stride ld/st and corresponding functions for performance Max Chou
@ 2024-10-30 16:37 ` Daniel Henrique Barboza
0 siblings, 0 replies; 18+ messages in thread
From: Daniel Henrique Barboza @ 2024-10-30 16:37 UTC (permalink / raw)
To: Max Chou, qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Bin Meng, Weiwei Li, Liu Zhiwei,
richard.henderson, negge
On 9/18/24 2:14 PM, Max Chou wrote:
> In the vector unit-stride load/store helper functions, the vext_ldst_us
> and vext_ldst_whole functions account for most of the execution time.
> Inlining them avoids the function call overhead and improves the helper
> performance.
>
> Signed-off-by: Max Chou <max.chou@sifive.com>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> ---
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> target/riscv/vector_helper.c | 18 +++++++++++-------
> 1 file changed, 11 insertions(+), 7 deletions(-)
>
> diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
> index 654d5e111f3..0d5ed950486 100644
> --- a/target/riscv/vector_helper.c
> +++ b/target/riscv/vector_helper.c
> @@ -152,14 +152,16 @@ typedef void vext_ldst_elem_fn_tlb(CPURISCVState *env, abi_ptr addr,
> typedef void vext_ldst_elem_fn_host(void *vd, uint32_t idx, void *host);
>
> #define GEN_VEXT_LD_ELEM(NAME, ETYPE, H, LDSUF) \
> -static void NAME##_tlb(CPURISCVState *env, abi_ptr addr, \
> +static inline QEMU_ALWAYS_INLINE \
> +void NAME##_tlb(CPURISCVState *env, abi_ptr addr, \
> uint32_t idx, void *vd, uintptr_t retaddr) \
> { \
> ETYPE *cur = ((ETYPE *)vd + H(idx)); \
> *cur = cpu_##LDSUF##_data_ra(env, addr, retaddr); \
> } \
> \
> -static void NAME##_host(void *vd, uint32_t idx, void *host) \
> +static inline QEMU_ALWAYS_INLINE \
> +void NAME##_host(void *vd, uint32_t idx, void *host) \
> { \
> ETYPE *cur = ((ETYPE *)vd + H(idx)); \
> *cur = (ETYPE)LDSUF##_p(host); \
> @@ -171,14 +173,16 @@ GEN_VEXT_LD_ELEM(lde_w, uint32_t, H4, ldl)
> GEN_VEXT_LD_ELEM(lde_d, uint64_t, H8, ldq)
>
> #define GEN_VEXT_ST_ELEM(NAME, ETYPE, H, STSUF) \
> -static void NAME##_tlb(CPURISCVState *env, abi_ptr addr, \
> +static inline QEMU_ALWAYS_INLINE \
> +void NAME##_tlb(CPURISCVState *env, abi_ptr addr, \
> uint32_t idx, void *vd, uintptr_t retaddr) \
> { \
> ETYPE data = *((ETYPE *)vd + H(idx)); \
> cpu_##STSUF##_data_ra(env, addr, data, retaddr); \
> } \
> \
> -static void NAME##_host(void *vd, uint32_t idx, void *host) \
> +static inline QEMU_ALWAYS_INLINE \
> +void NAME##_host(void *vd, uint32_t idx, void *host) \
> { \
> ETYPE data = *((ETYPE *)vd + H(idx)); \
> STSUF##_p(host, data); \
> @@ -317,7 +321,7 @@ GEN_VEXT_ST_STRIDE(vsse64_v, int64_t, ste_d_tlb)
> */
>
> /* unmasked unit-stride load and store operation */
> -static void
> +static inline QEMU_ALWAYS_INLINE void
> vext_page_ldst_us(CPURISCVState *env, void *vd, target_ulong addr,
> uint32_t elems, uint32_t nf, uint32_t max_elems,
> uint32_t log2_esz, bool is_load, int mmu_index,
> @@ -369,7 +373,7 @@ vext_page_ldst_us(CPURISCVState *env, void *vd, target_ulong addr,
> }
> }
>
> -static void
> +static inline QEMU_ALWAYS_INLINE void
> vext_ldst_us(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
> vext_ldst_elem_fn_tlb *ldst_tlb,
> vext_ldst_elem_fn_host *ldst_host, uint32_t log2_esz,
> @@ -756,7 +760,7 @@ GEN_VEXT_LDFF(vle64ff_v, int64_t, lde_d_tlb, lde_d_host)
> /*
> * load and store whole register instructions
> */
> -static void
> +static inline QEMU_ALWAYS_INLINE void
> vext_ldst_whole(void *vd, target_ulong base, CPURISCVState *env, uint32_t desc,
> vext_ldst_elem_fn_tlb *ldst_tlb,
> vext_ldst_elem_fn_host *ldst_host, uint32_t log2_esz,
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions
2024-09-18 17:14 [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions Max Chou
` (7 preceding siblings ...)
2024-10-15 9:15 ` [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions Max Chou
@ 2024-11-05 3:37 ` Alistair Francis
8 siblings, 0 replies; 18+ messages in thread
From: Alistair Francis @ 2024-11-05 3:37 UTC (permalink / raw)
To: Max Chou
Cc: qemu-devel, qemu-riscv, Palmer Dabbelt, Alistair Francis,
Bin Meng, Weiwei Li, Daniel Henrique Barboza, Liu Zhiwei,
richard.henderson, negge
On Thu, Sep 19, 2024 at 3:15 AM Max Chou <max.chou@sifive.com> wrote:
>
> Hi,
>
> This version fixes several issues in v5
> - The cross page bound checking issue
> - The mismatch vl comparison in the early exit checking of vext_ldst_us
> - The endian issue when host is big endian
>
> Thank for Richard Henderson's suggestions that this version unrolled the
> loop in helper functions of unmasked vector unit-stride load/store
> instructions, etc.
>
> And this version also extends the optimizations to the unmasked vector
> fault-only-first load instruction.
>
> Some performance result of this version
> 1. Test case provided in
> https://gitlab.com/qemu-project/qemu/-/issues/2137#note_1757501369
> - QEMU user mode (vlen=128):
> - Original: ~11.8 sec
> - v5: ~1.3 sec
> - v6: ~1.2 sec
> - QEMU system mode (vlen=128):
> - Original: ~29.4 sec
> - v5: ~1.6 sec
> - v6: ~1.6 sec
> 2. SPEC CPU2006 INT (test input)
> - QEMU user mode (vlen=128)
> - Original: ~459.1 sec
> - v5: ~300.0 sec
> - v6: ~280.6 sec
> 3. SPEC CPU2017 intspeed (test input)
> - QEMU user mode (vlen=128)
> - Original: ~2475.9 sec
> - v5: ~1702.6 sec
> - v6: ~1663.4 sec
>
>
> This version is based on the riscv-to-apply.next branch(commit 90d5d3c).
>
> Changes from v5:
> - patch 2
> - Replace the VSTART_CHECK_EARLY_EXIT function by checking the
> correct evl in vext_ldst_us.
> - patch 3
> - Unroll the memory load/store loop
> - Fix the bound checking issue in cross page elements
> - Fix the endian issue in GEN_VEXT_LD_ELEM/GEN_VEXT_ST_ELEM
> - Pass in mmu_index for vext_page_ldst_us
> - Reduce the flag & host checking
> - patch 4
> - Unroll the memory load/store loop
> - Fix the bound checking issue in cross page elements
> - patch 5
> - Extend optimizations to unmasked vector fault-only-first load
> instruction
> - patch 6
> - Switch to memcpy only when doing byte load/store
> - patch 7
> - Inline the vext_page_ldst_us function
>
> Previous versions:
> - v1: https://lore.kernel.org/all/20240215192823.729209-1-max.chou@sifive.com/
> - v2: https://lore.kernel.org/all/20240531174504.281461-1-max.chou@sifive.com/
> - v3: https://lore.kernel.org/all/20240613141906.1276105-1-max.chou@sifive.com/
> - v4: https://lore.kernel.org/all/20240613175122.1299212-1-max.chou@sifive.com/
> - v5: https://lore.kernel.org/all/20240717133936.713642-1-max.chou@sifive.com/
>
> Max Chou (7):
> target/riscv: Set vdata.vm field for vector load/store whole register
> instructions
> target/riscv: rvv: Replace VSTART_CHECK_EARLY_EXIT in vext_ldst_us
> target/riscv: rvv: Provide a fast path using direct access to host ram
> for unmasked unit-stride load/store
> target/riscv: rvv: Provide a fast path using direct access to host ram
> for unit-stride whole register load/store
> target/riscv: rvv: Provide a fast path using direct access to host ram
> for unit-stride load-only-first load instructions
> target/riscv: rvv: Provide group continuous ld/st flow for unit-stride
> ld/st instructions
> target/riscv: Inline unit-stride ld/st and corresponding functions for
> performance
Thanks!
Applied to riscv-to-apply.next
Alistair
>
> target/riscv/insn_trans/trans_rvv.c.inc | 3 +
> target/riscv/vector_helper.c | 598 ++++++++++++++++--------
> 2 files changed, 400 insertions(+), 201 deletions(-)
>
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 18+ messages in thread
Thread overview: 18+ messages
2024-09-18 17:14 [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions Max Chou
2024-09-18 17:14 ` [PATCH v6 1/7] target/riscv: Set vdata.vm field for vector load/store whole register instructions Max Chou
2024-10-29 18:58 ` Daniel Henrique Barboza
2024-10-30 11:30 ` Richard Henderson
2024-09-18 17:14 ` [PATCH v6 2/7] target/riscv: rvv: Replace VSTART_CHECK_EARLY_EXIT in vext_ldst_us Max Chou
2024-10-29 18:59 ` Daniel Henrique Barboza
2024-09-18 17:14 ` [PATCH v6 3/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unmasked unit-stride load/store Max Chou
2024-10-30 16:26 ` Daniel Henrique Barboza
2024-09-18 17:14 ` [PATCH v6 4/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unit-stride whole register load/store Max Chou
2024-10-30 16:31 ` Daniel Henrique Barboza
2024-09-18 17:14 ` [PATCH v6 5/7] target/riscv: rvv: Provide a fast path using direct access to host ram for unit-stride load-only-first load instructions Max Chou
2024-10-30 16:35 ` Daniel Henrique Barboza
2024-09-18 17:14 ` [PATCH v6 6/7] target/riscv: rvv: Provide group continuous ld/st flow for unit-stride ld/st instructions Max Chou
2024-10-29 19:07 ` Daniel Henrique Barboza
2024-09-18 17:14 ` [PATCH v6 7/7] target/riscv: Inline unit-stride ld/st and corresponding functions for performance Max Chou
2024-10-30 16:37 ` Daniel Henrique Barboza
2024-10-15 9:15 ` [PATCH v6 0/7] Improve the performance of RISC-V vector unit-stride/whole register ld/st instructions Max Chou
2024-11-05 3:37 ` Alistair Francis