From: Paolo Savini <paolo.savini@embecosm.com>
To: Richard Henderson <richard.henderson@linaro.org>,
	Daniel Henrique Barboza <dbarboza@ventanamicro.com>,
	qemu-devel@nongnu.org, qemu-riscv@nongnu.org
Cc: Palmer Dabbelt <palmer@dabbelt.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Bin Meng <bmeng.cn@gmail.com>, Weiwei Li <liwei1518@gmail.com>,
	Liu Zhiwei <zhiwei_liu@linux.alibaba.com>,
	Helene Chelin <helene.chelin@embecosm.com>,
	Nathan Egge <negge@google.com>, Max Chou <max.chou@sifive.com>
Subject: Re: [RFC v4 2/2] target/riscv: rvv: improve performance of RISC-V vector loads and stores on large amounts of data.
Date: Mon, 11 Nov 2024 16:04:35 +0000
Message-ID: <a9f51b76-1cd7-405f-b4a7-384c7447ff88@embecosm.com>
In-Reply-To: <230f448b-07f4-413c-9be6-e10a8e55be73@linaro.org>

Hi Richard, Daniel,

This might be a silly question, but why do we need to ensure atomicity
when emulating these guest instructions? I might be wrong, but I didn't
see an explicit requirement in the RISC-V V extension specification for
the vector instructions to be atomic.

Anyway, the patches from Max have landed, and since one of them already
uses memcpy() where this patch does and achieves a similar performance
improvement, we should probably drop this particular patch. I'm
wondering whether we should be concerned about atomicity there too?

https://github.com/qemu/qemu/blob/134b443512825bed401b6e141447b8cdc22d2efe/target/riscv/vector_helper.c#L224
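
For reference, the fast path in question has roughly the following
shape. This is only a sketch of the idea being discussed, not the
actual code behind that link: the helper name, the host pointer and
the evl/esz parameters below are made up for illustration.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /*
     * Sketch only: once the whole unit-stride access is known to be
     * contiguous and accessible in host memory, copy it in one call
     * instead of looping over the per-element load/store helper.
     * Note that memcpy() gives no atomicity guarantee beyond whatever
     * the host compiler and CPU happen to provide for the copy it emits.
     */
    static void vext_unit_stride_fastpath_sketch(void *vd, void *host,
                                                 uint32_t evl, uint32_t esz,
                                                 bool is_load)
    {
        size_t len = (size_t)evl * esz;   /* elements * element size */

        if (is_load) {
            memcpy(vd, host, len);        /* guest regs <- host memory */
        } else {
            memcpy(host, vd, len);        /* host memory <- guest regs */
        }
    }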

Thanks

Paolo

On 11/8/24 09:11, Richard Henderson wrote:
> On 11/7/24 12:58, Daniel Henrique Barboza wrote:
>> On 11/4/24 9:48 AM, Richard Henderson wrote:
>>> On 10/30/24 15:25, Paolo Savini wrote:
>>>> On 10/30/24 11:40, Richard Henderson wrote:
>>>>>     __builtin_memcpy DOES NOT equal VMOVDQA
>>>> I am aware of this. I took __builtin_memcpy as a generic enough way
>>>> to emulate loads and stores that should allow several hosts to
>>>> generate the widest load/store instructions they can, and on x86 I
>>>> see this generates vmovdqu/movdqu instructions, which are not always
>>>> guaranteed to be atomic. x86, though, guarantees them to be atomic
>>>> if the memory address is aligned to 16 bytes.
>>>
>>> No, AMD guarantees MOVDQU is atomic if aligned, Intel does not.
>>> See the comment in util/cpuinfo-i386.c, and the two 
>>> CPUINFO_ATOMIC_VMOVDQ[AU] bits.
>>>
>>> See also host/include/*/host/atomic128-ldst.h, HAVE_ATOMIC128_RO, 
>>> and atomic16_read_ro.
>>> Not that I think you should use that here; it's complicated, and I 
>>> think you're better off relying on the code in accel/tcg/ when more 
>>> than byte atomicity is required.
>>>
>>
>> Not sure if that's what you meant, but I didn't find any clear example
>> of multi-byte atomicity using qatomic_read() and friends that would be
>> closer to what memcpy() is doing here. I found one example in
>> bdrv_graph_co_rdlock() that seems to use a memory barrier via smp_mb()
>> and qatomic_read() inside a loop, but I don't understand that code
>> well enough to say.
>
> Memory barriers provide ordering between loads and stores, but they 
> cannot be used to address atomicity of individual loads and stores.
>
>
>> I'm also wondering if a common pthread_lock() wrapping up these
>> memcpy() calls would suffice in this case. Even if we can't guarantee
>> that __builtin_memcpy() will use arch-specific vector insns on the
>> host, it would already be a faster path than falling back to fn(...).
>
> Locks would certainly not be faster than calling the accel/tcg function.
>
>
>> As a quick detour, I'm not sure we really considered how ARM SVE
>> implements these helpers, e.g. gen_sve_str():
>>
>> https://gitlab.com/qemu-project/qemu/-/blob/master/target/arm/tcg/translate-sve.c#L4182
>>
>
> Note that ARM SVE defines these instructions to have byte atomicity.
>
>
> r~
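
As an aside, for anyone following the thread: the host-side pieces
Richard points at above fit together roughly like this. This is only a
sketch based on the headers he names (x86-64 host, util/cpuinfo-i386.c
and the atomic128 load/store helpers reached via qemu/atomic128.h); the
two functions are made up for illustration, and as he says the
accel/tcg code is where this logic should really come from in practice.

    #include "qemu/osdep.h"
    #include "qemu/int128.h"
    #include "qemu/atomic128.h"  /* HAVE_ATOMIC128_RO, atomic16_read_ro() */
    #include "host/cpuinfo.h"    /* x86 hosts: cpuinfo, CPUINFO_ATOMIC_* */

    /* Does this particular x86 host document 16-byte atomicity for
     * aligned vector loads? (Per Richard, AMD does for aligned MOVDQU,
     * Intel does not; see the comment in util/cpuinfo-i386.c.) */
    static bool host_aligned_16byte_load_is_atomic(void)
    {
        return cpuinfo & (CPUINFO_ATOMIC_VMOVDQA | CPUINFO_ATOMIC_VMOVDQU);
    }

    /* Read 16 bytes atomically when the host can do so on a read-only
     * mapping; otherwise more than byte atomicity has to come from
     * accel/tcg (or a lock) -- a plain memcpy() here could still tear. */
    static Int128 read16_atomic_if_possible(const Int128 *aligned_ptr)
    {
        if (HAVE_ATOMIC128_RO) {
            return atomic16_read_ro(aligned_ptr);
        }
        return *aligned_ptr;   /* not guaranteed atomic */
    }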

