From: Yonghong Song <yonghong.song@linux.dev>
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: "Jose E. Marchesi" <jose.marchesi@oracle.com>,
bpf <bpf@vger.kernel.org>, Alexei Starovoitov <ast@kernel.org>,
Andrii Nakryiko <andrii@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Kernel Team <kernel-team@fb.com>,
Martin KaFai Lau <martin.lau@kernel.org>
Subject: Re: [PATCH bpf-next v2] selftests/bpf: Fix arena_atomics selftest failure due to llvm change
Date: Mon, 12 Aug 2024 13:42:16 -0700 [thread overview]
Message-ID: <7b941f53-2a05-48ec-9032-8f106face3a3@linux.dev> (raw)
In-Reply-To: <CAADnVQ+gxrq5O2N168xYZa3UGWx_kNyPQihFPt=FLC56j9KOnA@mail.gmail.com>
On 8/8/24 9:26 AM, Alexei Starovoitov wrote:
> On Mon, Aug 5, 2024 at 10:26 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>>
>>>> Maybe we could specially process the above to generate
>>>> a locked insn if
>>>> - atomicrmw operator
>>>> - monotonic (related) consistency
>>>> - return value is not used
> This sounds like a good idea, but...
>
>>>> So this will not violate original program semantics.
>>>> Does this sound like a reasonable approach?
>>> Whether monotonic consistency is desired (ordered writes) can
>>> probably be deduced from the memory_order_* flag of the built-ins, but I
>>> don't know what atomicrmw is... what is it in non-llvm terms?
>> The llvm language reference for atomicrmw:
>>
>> https://llvm.org/docs/LangRef.html#atomicrmw-instruction
> I read it back and forth, but couldn't find whether it's ok
> for the backend to use stronger ordering insn when weaker ordering
> is specified in atomicrmw.
> It's probably ok.
> Otherwise atomicrmw with monotonic (memory_order_relaxed) ordering and
> a used return value cannot be mapped to any bpf insn.
> x86 doesn't have monotonic either, but I suspect the backend
> still generates the code without any warnings.
I did a small experiment for x86:
$ cat t.c
int foo;
void bar1(void) {
  __sync_fetch_and_add(&foo, 10);
}
int bar2(void) {
  return __sync_fetch_and_add(&foo, 10);
}
$ clang -O2 -I. -c t.c && llvm-objdump -d t.o
t.o: file format elf64-x86-64
Disassembly of section .text:
0000000000000000 <bar1>:
0: f0 lock
1: 83 05 00 00 00 00 0a addl $0xa, (%rip) # 0x8 <bar1+0x8>
8: c3 retq
9: 0f 1f 80 00 00 00 00 nopl (%rax)
0000000000000010 <bar2>:
10: b8 0a 00 00 00 movl $0xa, %eax
15: f0 lock
16: 0f c1 05 00 00 00 00 xaddl %eax, (%rip) # 0x1d <bar2+0xd>
1d: c3 retq
So without a return value, __sync_fetch_and_add generates a locked add insn.
With a return value, 'lock xaddl' is used, which is similar to the bpf
atomic_fetch_add insn.
Another example:
$ cat t1.c
#include <stdatomic.h>
void f1(_Atomic int *i) {
  __c11_atomic_fetch_and(i, 10, memory_order_relaxed);
}
void f2(_Atomic int *i) {
  __c11_atomic_fetch_and(i, 10, memory_order_seq_cst);
}
void f3(_Atomic int *i) {
  __c11_atomic_fetch_and(i, 10, memory_order_release);
}
_Atomic int f4(_Atomic int *i) {
  return __c11_atomic_fetch_and(i, 10, memory_order_release);
}
$ clang -O2 -I. -c t1.c && llvm-objdump -d t1.o
t1.o: file format elf64-x86-64
Disassembly of section .text:
0000000000000000 <f1>:
0: f0 lock
1: 83 27 0a andl $0xa, (%rdi)
4: c3 retq
5: 66 66 2e 0f 1f 84 00 00 00 00 00 nopw %cs:(%rax,%rax)
0000000000000010 <f2>:
10: f0 lock
11: 83 27 0a andl $0xa, (%rdi)
14: c3 retq
15: 66 66 2e 0f 1f 84 00 00 00 00 00 nopw %cs:(%rax,%rax)
0000000000000020 <f3>:
20: f0 lock
21: 83 27 0a andl $0xa, (%rdi)
24: c3 retq
25: 66 66 2e 0f 1f 84 00 00 00 00 00 nopw %cs:(%rax,%rax)
0000000000000030 <f4>:
30: 8b 07 movl (%rdi), %eax
32: 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 nopw %cs:(%rax,%rax)
40: 89 c1 movl %eax, %ecx
42: 83 e1 0a andl $0xa, %ecx
45: f0 lock
46: 0f b1 0f cmpxchgl %ecx, (%rdi)
49: 75 f5 jne 0x40 <f4+0x10>
4b: c3 retq
$
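The cmpxchg loop in f4 above is how a fetch-and with a used result gets lowered when no single x86 insn returns the old value. Roughly, in C (a sketch of the pattern, not the compiler's actual expansion):

```c
#include <stdatomic.h>

/* Rough C equivalent of the cmpxchg loop in f4 above: an atomic
 * fetch-and-AND built from compare-and-swap, returning the old value. */
int fetch_and_via_cas(_Atomic int *p, int mask) {
    int old = atomic_load_explicit(p, memory_order_relaxed);
    /* On failure, atomic_compare_exchange_weak reloads 'old' from *p,
     * so the loop retries with the freshly observed value. */
    while (!atomic_compare_exchange_weak(p, &old, old & mask))
        ;
    return old;
}
```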
In all cases without a return value, for x86 a 'lock andl' insn is generated
regardless of the memory order constraint. This is similar to
what we had before.
The previous 'lock' encoding matches x86 nicely, but for ARM64
the code generated for the 'lock ...' bpf insn won't have barrier
(acquire or release or both) semantics. So I guess that ARM64 wants
to preserve the current behavior:
- for the 'lock ...' insn, no barrier,
- for the 'atomic_fetch_add ...' insn, a proper barrier.
For x86, both 'lock ...' and 'atomic_fetch_add ...' will have a proper barrier.
To keep 'lock ...' barrier-free as required, the user needs to specify the
memory_order_relaxed constraint. This will permit the bpf backend
to generate the 'lock ...' insn.
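For example (a user-side sketch; it assumes the backend adopts the convention discussed above), the two forms would be written as:

```c
#include <stdatomic.h>

/* Relaxed ordering with an unused result: eligible for the no-barrier
 * 'lock ...' bpf insn under the rule discussed above. */
void add_relaxed(_Atomic int *p) {
    atomic_fetch_add_explicit(p, 10, memory_order_relaxed);
}

/* Stronger ordering (or a used result): must be lowered to
 * atomic_fetch_add with full barrier semantics. */
int add_seq_cst(_Atomic int *p) {
    return atomic_fetch_add_explicit(p, 10, memory_order_seq_cst);
}
```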
It looks like the x86 backend itself uses memory_order_relaxed for some
special processing:
X86CompressEVEX.cpp: if (!TableChecked.load(std::memory_order_relaxed)) {
X86CompressEVEX.cpp: TableChecked.store(true, std::memory_order_relaxed);
X86FloatingPoint.cpp: if (!TABLE##Checked.load(std::memory_order_relaxed)) { \
X86FloatingPoint.cpp: TABLE##Checked.store(true, std::memory_order_relaxed); \
X86InstrFMA3Info.cpp: if (!TableChecked.load(std::memory_order_relaxed)) {
X86InstrFMA3Info.cpp: TableChecked.store(true, std::memory_order_relaxed);
X86InstrFoldTables.cpp: if (!FoldTablesChecked.load(std::memory_order_relaxed)) {
X86InstrFoldTables.cpp: FoldTablesChecked.store(true, std::memory_order_relaxed);
So I guess the bpf backend can do a similar thing: check for memory_order_relaxed
in the IR and generate different (more efficient) insns as needed.
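The check itself would be simple. A hedged sketch of the selection rule (plain C with an enum standing in for LLVM's AtomicOrdering; not actual backend code):

```c
/* Stand-in for LLVM's AtomicOrdering enum; monotonic corresponds to
 * memory_order_relaxed at the C source level. */
enum ordering { ORD_MONOTONIC, ORD_ACQUIRE, ORD_RELEASE, ORD_ACQ_REL, ORD_SEQ_CST };

/* Emit the no-barrier 'lock ...' insn only when the atomicrmw has
 * monotonic ordering and its result is unused; otherwise fall back
 * to atomic_fetch_* with barrier semantics. */
static int use_locked_insn(enum ordering ord, int result_used)
{
    return ord == ORD_MONOTONIC && !result_used;
}
```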
> Would be good to clarify before we proceed with the above plan.
2024-08-03 2:59 [PATCH bpf-next v2] selftests/bpf: Fix arena_atomics selftest failure due to llvm change Yonghong Song
2024-08-05 17:53 ` Alexei Starovoitov
2024-08-05 22:36 ` Yonghong Song
2024-08-06 4:48 ` Jose E. Marchesi
2024-08-06 5:26 ` Yonghong Song
2024-08-08 16:26 ` Alexei Starovoitov
2024-08-12 20:42 ` Yonghong Song [this message]