Date: Wed, 26 Jan 2022 22:06:13 -0800
From: John Fastabend
To: Hou Tao, Alexei Starovoitov
Cc: Martin KaFai Lau, Yonghong Song, Daniel Borkmann, Andrii Nakryiko,
	Song Liu, "David S. Miller", John Fastabend, netdev@vger.kernel.org,
	bpf@vger.kernel.org, houtao1@huawei.com, Zi Shen Lim,
	Catalin Marinas, Will Deacon, Julien Thierry, Mark Rutland,
	Ard Biesheuvel, linux-arm-kernel@lists.infradead.org
Message-ID: <61f23655411bc_57f032084@john.notmuch>
In-Reply-To: <20220121135632.136976-3-houtao1@huawei.com>
References: <20220121135632.136976-1-houtao1@huawei.com>
	<20220121135632.136976-3-houtao1@huawei.com>
Subject: RE: [PATCH bpf-next 2/2] arm64, bpf: support more atomic operations

Hou Tao wrote:
> The "Atomics for eBPF" patch series added support for atomic[64]_fetch_add,
> atomic[64]_[fetch_]{and,or,xor} and atomic[64]_{xchg|cmpxchg}, but only
> for x86-64, so support these atomic operations for arm64 as well.
>
> The implementation is basically a mechanical translation of the code
> snippets in atomic_ll_sc.h, atomic_lse.h and cmpxchg.h under
> arch/arm64/include/asm. An extra temporary register is needed for
> (BPF_ADD | BPF_FETCH) to save the value of the src register; instead of
> adding TMP_REG_4, just reuse BPF_REG_AX.
>
> For both the cpus_have_cap(ARM64_HAS_LSE_ATOMICS) case and the
> no-LSE-atomics case, ./test_verifier and "./test_progs -t atomic" were
> exercised and passed.
>
> Signed-off-by: Hou Tao
> ---
> [...]
> +static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
> +{
> +	const u8 code = insn->code;
> +	const u8 dst = bpf2a64[insn->dst_reg];
> +	const u8 src = bpf2a64[insn->src_reg];
> +	const u8 tmp = bpf2a64[TMP_REG_1];
> +	const u8 tmp2 = bpf2a64[TMP_REG_2];
> +	const bool isdw = BPF_SIZE(code) == BPF_DW;
> +	const s16 off = insn->off;
> +	u8 reg;
> +
> +	if (!off) {
> +		reg = dst;
> +	} else {
> +		emit_a64_mov_i(1, tmp, off, ctx);
> +		emit(A64_ADD(1, tmp, tmp, dst), ctx);
> +		reg = tmp;
> +	}
> +
> +	switch (insn->imm) {

Diffing this against the x86 implementation, which has a BPF_SUB case:
how is it avoided here?
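As an untested sketch, not from this patch, for anyone wondering how a
subtract would get here at all: as far as I can tell the verifier only
passes BPF_ADD, BPF_AND, BPF_OR, BPF_XOR (each optionally | BPF_FETCH),
BPF_XCHG and BPF_CMPXCHG through as atomic ops, and a fetch-and-sub can
always be written as a fetch-add of the negated operand, e.g. ('counter'
and 'fetch_sub' are made-up names):

#include <stdint.h>

static int64_t counter;

static inline int64_t fetch_sub(int64_t delta)
{
	/* when built for a BPF target this reaches the JIT as
	 * BPF_ADD | BPF_FETCH, never as a BPF_SUB atomic
	 */
	return __sync_fetch_and_add(&counter, -delta);
}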
> +	/* lock *(u32/u64 *)(dst_reg + off) = src_reg */
> +	case BPF_ADD:
> +		emit(A64_STADD(isdw, reg, src), ctx);
> +		break;
> +	case BPF_AND:
> +		emit(A64_MVN(isdw, tmp2, src), ctx);
> +		emit(A64_STCLR(isdw, reg, tmp2), ctx);
> +		break;
> +	case BPF_OR:
> +		emit(A64_STSET(isdw, reg, src), ctx);
> +		break;
> +	case BPF_XOR:
> +		emit(A64_STEOR(isdw, reg, src), ctx);
> +		break;
> +	/* src_reg = atomic_fetch_add(dst_reg + off, src_reg) */
> +	case BPF_ADD | BPF_FETCH:
> +		emit(A64_LDADDAL(isdw, src, reg, src), ctx);
> +		break;
> +	case BPF_AND | BPF_FETCH:
> +		emit(A64_MVN(isdw, tmp2, src), ctx);
> +		emit(A64_LDCLRAL(isdw, src, reg, tmp2), ctx);
> +		break;
> +	case BPF_OR | BPF_FETCH:
> +		emit(A64_LDSETAL(isdw, src, reg, src), ctx);
> +		break;
> +	case BPF_XOR | BPF_FETCH:
> +		emit(A64_LDEORAL(isdw, src, reg, src), ctx);
> +		break;
> +	/* src_reg = atomic_xchg(dst_reg + off, src_reg); */
> +	case BPF_XCHG:
> +		emit(A64_SWPAL(isdw, src, reg, src), ctx);
> +		break;
> +	/* r0 = atomic_cmpxchg(dst_reg + off, r0, src_reg); */
> +	case BPF_CMPXCHG:
> +		emit(A64_CASAL(isdw, src, reg, bpf2a64[BPF_REG_0]), ctx);
> +		break;
> +	default:
> +		pr_err_once("unknown atomic op code %02x\n", insn->imm);
> +		return -EINVAL;

I was about to suggest EFAULT to align with x86, but on second thought the
arm64 JIT uses EINVAL more consistently, so best to stay self-consistent
here. Just an observation.

> +	}
> +
> +	return 0;
> +}
> +
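For anyone reading along without the rest of the series: the no-LSE
fallback mentioned in the commit message is the usual LDXR/STXR retry
loop, mirroring the pre-LSE XADD code that is already in the arm64 JIT.
Roughly, as an untested sketch covering only plain BPF_ADD with off == 0,
and with emit_ll_sc_add being a hypothetical name rather than the patch's
actual function:

static int emit_ll_sc_add(const struct bpf_insn *insn, struct jit_ctx *ctx)
{
	const u8 dst = bpf2a64[insn->dst_reg];
	const u8 src = bpf2a64[insn->src_reg];
	const u8 tmp2 = bpf2a64[TMP_REG_2];
	const u8 tmp3 = bpf2a64[TMP_REG_3];
	const bool isdw = BPF_SIZE(insn->code) == BPF_DW;

	emit(A64_LDXR(isdw, tmp2, dst), ctx);		/* tmp2 = *dst (exclusive) */
	emit(A64_ADD(isdw, tmp2, tmp2, src), ctx);	/* tmp2 += src */
	emit(A64_STXR(isdw, tmp2, dst, tmp3), ctx);	/* try store, tmp3 = status */
	emit(A64_CBNZ(0, tmp3, -3), ctx);		/* retry from the LDXR on failure */

	return 0;
}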