From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
To: Richard Henderson <richard.henderson@linaro.org>,
Paolo Savini <paolo.savini@embecosm.com>,
qemu-devel@nongnu.org, qemu-riscv@nongnu.org
Cc: Palmer Dabbelt <palmer@dabbelt.com>,
Alistair Francis <alistair.francis@wdc.com>,
Bin Meng <bmeng.cn@gmail.com>, Weiwei Li <liwei1518@gmail.com>,
Liu Zhiwei <zhiwei_liu@linux.alibaba.com>,
Helene Chelin <helene.chelin@embecosm.com>,
Nathan Egge <negge@google.com>, Max Chou <max.chou@sifive.com>
Subject: Re: [RFC v4 2/2] target/riscv: rvv: improve performance of RISC-V vector loads and stores on large amounts of data.
Date: Thu, 7 Nov 2024 09:58:42 -0300 [thread overview]
Message-ID: <6b06b532-c53f-4b5b-b65d-d54d7c746ffc@ventanamicro.com> (raw)
In-Reply-To: <54c99505-21ef-422c-a7fe-a2d7dabc3d6c@linaro.org>
On 11/4/24 9:48 AM, Richard Henderson wrote:
> On 10/30/24 15:25, Paolo Savini wrote:
>> Thanks for the review Richard.
>>
>> On 10/30/24 11:40, Richard Henderson wrote:
>>> On 10/29/24 19:43, Paolo Savini wrote:
>>>> This patch optimizes the emulation of unit-stride load/store RVV instructions
>>>> when the data being loaded/stored per iteration amounts to 16 bytes or more.
>>>> The optimization consists of calling __builtin_memcpy on chunks of data of 16
>>>> bytes between the memory address of the simulated vector register and the
>>>> destination memory address and vice versa.
>>>> This is done only if we have direct access to the RAM of the host machine,
>>>> if the host is little endian and if it supports atomic 128-bit memory
>>>> operations.
>>>>
>>>> Signed-off-by: Paolo Savini <paolo.savini@embecosm.com>
>>>> ---
>>>> target/riscv/vector_helper.c | 17 ++++++++++++++++-
>>>> target/riscv/vector_internals.h | 12 ++++++++++++
>>>> 2 files changed, 28 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
>>>> index 75c24653f0..e1c100e907 100644
>>>> --- a/target/riscv/vector_helper.c
>>>> +++ b/target/riscv/vector_helper.c
>>>> @@ -488,7 +488,22 @@ vext_group_ldst_host(CPURISCVState *env, void *vd, uint32_t byte_end,
>>>> }
>>>> fn = fns[is_load][group_size];
>>>> - fn(vd, byte_offset, host + byte_offset);
>>>> +
>>>> + /* __builtin_memcpy uses host 16 bytes vector loads and stores if supported.
>>>> + * We need to make sure that these instructions have guarantees of atomicity.
>>>> + * E.g. x86 processors provide strong guarantees of atomicity for 16-byte
>>>> + * memory operations if the memory operands are 16-byte aligned */
>>>> + if (!HOST_BIG_ENDIAN && (byte_offset + 16 < byte_end) &&
>>>> + ((byte_offset % 16) == 0) && HOST_128_ATOMIC_MEM_OP) {
>>>> + group_size = MO_128;
>>>> + if (is_load) {
>>>> + __builtin_memcpy((uint8_t *)(vd + byte_offset), (uint8_t *)(host + byte_offset), 16);
>>>> + } else {
>>>> + __builtin_memcpy((uint8_t *)(host + byte_offset), (uint8_t *)(vd + byte_offset), 16);
>>>> + }
>>>
>>> I said this last time and I'll say it again:
>>>
>>> __builtin_memcpy DOES NOT equal VMOVDQA
>> I am aware of this. I took __builtin_memcpy as a generic enough way to emulate loads and stores that should allow several hosts to generate the widest load/store instructions they can. On x86 I see this generates vmovdqu/movdqu instructions, which are not always guaranteed to be atomic; x86 does guarantee them to be atomic, though, if the memory address is aligned to 16 bytes.
>
> No, AMD guarantees MOVDQU is atomic if aligned, Intel does not.
> See the comment in util/cpuinfo-i386.c, and the two CPUINFO_ATOMIC_VMOVDQ[AU] bits.
>
> See also host/include/*/host/atomic128-ldst.h, HAVE_ATOMIC128_RO, and atomic16_read_ro.
> Not that I think you should use that here; it's complicated, and I think you're better off relying on the code in accel/tcg/ when more than byte atomicity is required.
>
Not sure if that's what you meant, but I didn't find any clear example of
multi-byte atomicity using qatomic_read() and friends that would be close
to what memcpy() is doing here. The one example I found, in bdrv_graph_co_rdlock(),
seems to use a memory barrier via smp_mb() and qatomic_read() inside a
loop, but I don't understand that code well enough to say whether it applies here.
I'm also wondering whether a common pthread mutex wrapping these memcpy()
calls would suffice in this case. Even if we can't guarantee that __builtin_memcpy()
will use arch-specific vector insns on the host, it would already be a faster
path than falling back to fn(...).
On a quick detour, I'm not sure we really considered how ARM SVE implements these
helpers, e.g. gen_sve_str():

https://gitlab.com/qemu-project/qemu/-/blob/master/target/arm/tcg/translate-sve.c#L4182

I'm curious whether this form of front-end optimization, using TCG ops
instead of a for() loop with ldst_elem() like we do today, would yield more
performance with fewer complications around backend specifics, atomicity and so on.
In fact I have a feeling this is not the first time we've talked about borrowing
ideas from SVE.
Thanks,
Daniel
>
> r~
Thread overview: 11+ messages
2024-10-29 19:43 [RFC v4 0/2] target/riscv: add wrapper for target specific macros in atomicity check Paolo Savini
2024-10-29 19:43 ` [RFC v4 1/2] target/riscv: rvv: reduce the overhead for simple RISC-V vector unit-stride loads and stores Paolo Savini
2024-11-06 16:08 ` Daniel Henrique Barboza
2024-10-29 19:43 ` [RFC v4 2/2] target/riscv: rvv: improve performance of RISC-V vector loads and stores on large amounts of data Paolo Savini
2024-10-30 11:40 ` Richard Henderson
2024-10-30 15:25 ` Paolo Savini
2024-11-04 12:48 ` Richard Henderson
2024-11-07 12:58 ` Daniel Henrique Barboza [this message]
2024-11-08 9:11 ` Richard Henderson
2024-11-11 16:04 ` Paolo Savini
2024-11-14 16:09 ` Richard Henderson