From: John Fastabend <john.fastabend@gmail.com>
To: Hou Tao <houtao1@huawei.com>, Alexei Starovoitov <ast@kernel.org>
Cc: Martin KaFai Lau <kafai@fb.com>, Yonghong Song <yhs@fb.com>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Song Liu <songliubraving@fb.com>,
"David S . Miller" <davem@davemloft.net>,
John Fastabend <john.fastabend@gmail.com>,
netdev@vger.kernel.org, bpf@vger.kernel.org, houtao1@huawei.com,
Zi Shen Lim <zlim.lnx@gmail.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Julien Thierry <jthierry@redhat.com>,
Mark Rutland <mark.rutland@arm.com>,
Ard Biesheuvel <ardb@kernel.org>,
linux-arm-kernel@lists.infradead.org
Subject: RE: [PATCH bpf-next 2/2] arm64, bpf: support more atomic operations
Date: Wed, 26 Jan 2022 22:06:13 -0800
Message-ID: <61f23655411bc_57f032084@john.notmuch>
In-Reply-To: <20220121135632.136976-3-houtao1@huawei.com>
Hou Tao wrote:
> The "Atomics for eBPF" patch series added support for atomic[64]_fetch_add,
> atomic[64]_[fetch_]{and,or,xor} and atomic[64]_{xchg|cmpxchg}, but it
> only implemented them for x86-64, so add support for these atomic
> operations on arm64 as well.
>
> The implementation is essentially a mechanical translation of the code
> snippets in atomic_ll_sc.h, atomic_lse.h and cmpxchg.h under
> arch/arm64/include/asm. An extra temporary register is needed for
> (BPF_ADD | BPF_FETCH) to preserve the value of the src register;
> instead of adding TMP_REG_4, BPF_REG_AX is reused for that purpose.
>
> Both the cpus_have_cap(ARM64_HAS_LSE_ATOMICS) case and the
> no-LSE-atomics case were exercised with ./test_verifier and
> "./test_progs -t atomic", and both passed.
>
> Signed-off-by: Hou Tao <houtao1@huawei.com>
> ---
>
[...]
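
As an aside, for anyone mapping the JIT cases below back to BPF C: the
atomics selftest drives these opcodes through compiler builtins, roughly
along the lines of the sketch here. This is an illustrative snippet I put
together, not the selftest source, and whether clang emits the BPF_FETCH
variant of add/and/or/xor depends on whether the builtin's return value is
actually used.

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    __u64 val64 = 0;

    SEC("raw_tp/sys_enter")
    int atomics_example(const void *ctx)
    {
            __u64 old;

            /* BPF_ADD (BPF_ADD | BPF_FETCH if the result were used) */
            __sync_fetch_and_add(&val64, 1);
            /* BPF_AND / BPF_OR / BPF_XOR */
            __sync_fetch_and_and(&val64, 0xff);
            __sync_fetch_and_or(&val64, 0x10);
            __sync_fetch_and_xor(&val64, 0x01);
            /* BPF_XCHG */
            old = __sync_lock_test_and_set(&val64, 5);
            /* BPF_CMPXCHG: the old value lands in r0 */
            old = __sync_val_compare_and_swap(&val64, 5, 7);

            return old != 0;
    }

    char LICENSE[] SEC("license") = "GPL";
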
> +static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
> +{
> + const u8 code = insn->code;
> + const u8 dst = bpf2a64[insn->dst_reg];
> + const u8 src = bpf2a64[insn->src_reg];
> + const u8 tmp = bpf2a64[TMP_REG_1];
> + const u8 tmp2 = bpf2a64[TMP_REG_2];
> + const bool isdw = BPF_SIZE(code) == BPF_DW;
> + const s16 off = insn->off;
> + u8 reg;
> +
> + if (!off) {
> + reg = dst;
> + } else {
> + emit_a64_mov_i(1, tmp, off, ctx);
> + emit(A64_ADD(1, tmp, tmp, dst), ctx);
> + reg = tmp;
> + }
> +
> + switch (insn->imm) {
Diffing against the x86 implementation, which has a BPF_SUB case: how is
BPF_SUB avoided here?
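
If an atomic BPF_SUB ever did make it this far, one way to express it with
LSE would be to negate into the scratch register and reuse the add form,
e.g. (my own untested sketch, assuming the existing A64_NEG() helper; not
needed at all if the verifier already rejects atomic BPF_SUB):

	case BPF_SUB:
		/* no "store subtract" in LSE: add the negated value */
		emit(A64_NEG(isdw, tmp2, src), ctx);
		emit(A64_STADD(isdw, reg, tmp2), ctx);
		break;
	case BPF_SUB | BPF_FETCH:
		emit(A64_NEG(isdw, tmp2, src), ctx);
		emit(A64_LDADDAL(isdw, src, reg, tmp2), ctx);
		break;
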
> + /* lock *(u32/u64 *)(dst_reg + off) <op>= src_reg */
> + case BPF_ADD:
> + emit(A64_STADD(isdw, reg, src), ctx);
> + break;
> + case BPF_AND:
> + emit(A64_MVN(isdw, tmp2, src), ctx);
> + emit(A64_STCLR(isdw, reg, tmp2), ctx);
> + break;
> + case BPF_OR:
> + emit(A64_STSET(isdw, reg, src), ctx);
> + break;
> + case BPF_XOR:
> + emit(A64_STEOR(isdw, reg, src), ctx);
> + break;
> + /* src_reg = atomic_fetch_add(dst_reg + off, src_reg) */
> + case BPF_ADD | BPF_FETCH:
> + emit(A64_LDADDAL(isdw, src, reg, src), ctx);
> + break;
> + case BPF_AND | BPF_FETCH:
> + emit(A64_MVN(isdw, tmp2, src), ctx);
> + emit(A64_LDCLRAL(isdw, src, reg, tmp2), ctx);
> + break;
> + case BPF_OR | BPF_FETCH:
> + emit(A64_LDSETAL(isdw, src, reg, src), ctx);
> + break;
> + case BPF_XOR | BPF_FETCH:
> + emit(A64_LDEORAL(isdw, src, reg, src), ctx);
> + break;
> + /* src_reg = atomic_xchg(dst_reg + off, src_reg); */
> + case BPF_XCHG:
> + emit(A64_SWPAL(isdw, src, reg, src), ctx);
> + break;
> + /* r0 = atomic_cmpxchg(dst_reg + off, r0, src_reg); */
> + case BPF_CMPXCHG:
> + emit(A64_CASAL(isdw, src, reg, bpf2a64[BPF_REG_0]), ctx);
> + break;
> + default:
> + pr_err_once("unknown atomic op code %02x\n", insn->imm);
> + return -EINVAL;
I was about to suggest EFAULT to align with x86, but on second thought the
arm64 JIT uses EINVAL more consistently, so staying self-consistent is
best. Just an observation.
> + }
> +
> + return 0;
> +}
> +
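
For readers without the full diff: the pre-existing no-LSE XADD path in
this JIT uses an LL/SC retry loop, roughly of the shape below (quoted from
memory, so treat it as illustrative); presumably the new
emit_ll_sc_atomic() elided above generalizes this per operation, with
tmp3 = bpf2a64[TMP_REG_3] as the store-exclusive status register.

	/* fallback when ARM64_HAS_LSE_ATOMICS is not available:
	 * retry the load-exclusive/store-exclusive pair until the
	 * store succeeds (tmp3 == 0)
	 */
	emit(A64_LDXR(isdw, tmp2, reg), ctx);
	emit(A64_ADD(isdw, tmp2, tmp2, src), ctx);
	emit(A64_STXR(isdw, tmp2, reg, tmp3), ctx);
	jmp_offset = -3;
	check_imm19(jmp_offset);
	emit(A64_CBNZ(0, tmp3, jmp_offset), ctx);
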