From: Jiong Wang <jiong.wang@netronome.com>
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com,
	Jiong Wang <jiong.wang@netronome.com>
Subject: [PATCH v6 bpf-next 16/17] riscv: bpf: eliminate zero extension code-gen
Date: Fri, 3 May 2019 11:42:43 +0100
Message-Id: <1556880164-10689-17-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>

Acked-by: Björn Töpel
Signed-off-by: Jiong Wang
---
 arch/riscv/net/bpf_jit_comp.c | 36 +++++++++++++++++++++++-------------
 1 file changed, 23 insertions(+), 13 deletions(-)

diff --git a/arch/riscv/net/bpf_jit_comp.c b/arch/riscv/net/bpf_jit_comp.c
index 80b12aa..3074c9b 100644
--- a/arch/riscv/net/bpf_jit_comp.c
+++ b/arch/riscv/net/bpf_jit_comp.c
@@ -731,6 +731,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 {
 	bool is64 = BPF_CLASS(insn->code) == BPF_ALU64 ||
 		    BPF_CLASS(insn->code) == BPF_JMP;
+	struct bpf_prog_aux *aux = ctx->prog->aux;
 	int rvoff, i = insn - ctx->prog->insnsi;
 	u8 rd = -1, rs = -1, code = insn->code;
 	s16 off = insn->off;
@@ -739,11 +740,15 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 	init_regs(&rd, &rs, insn, ctx);
 
 	switch (code) {
+	/* dst = (u32) dst */
+	case BPF_ALU | BPF_ZEXT:
+		emit_zext_32(rd, ctx);
+		break;
 	/* dst = src */
 	case BPF_ALU | BPF_MOV | BPF_X:
 	case BPF_ALU64 | BPF_MOV | BPF_X:
 		emit(is64 ? rv_addi(rd, rs, 0) : rv_addiw(rd, rs, 0), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 
@@ -771,19 +776,19 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 	case BPF_ALU | BPF_MUL | BPF_X:
 	case BPF_ALU64 | BPF_MUL | BPF_X:
 		emit(is64 ? rv_mul(rd, rd, rs) : rv_mulw(rd, rd, rs), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_DIV | BPF_X:
 	case BPF_ALU64 | BPF_DIV | BPF_X:
 		emit(is64 ? rv_divu(rd, rd, rs) : rv_divuw(rd, rd, rs), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_MOD | BPF_X:
 	case BPF_ALU64 | BPF_MOD | BPF_X:
 		emit(is64 ? rv_remu(rd, rd, rs) : rv_remuw(rd, rd, rs), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_LSH | BPF_X:
@@ -867,7 +872,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 	case BPF_ALU | BPF_MOV | BPF_K:
 	case BPF_ALU64 | BPF_MOV | BPF_K:
 		emit_imm(rd, imm, ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 
@@ -882,7 +887,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit(is64 ? rv_add(rd, rd, RV_REG_T1) :
 			     rv_addw(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_SUB | BPF_K:
@@ -895,7 +900,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit(is64 ? rv_sub(rd, rd, RV_REG_T1) :
			     rv_subw(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_AND | BPF_K:
@@ -906,7 +911,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit_imm(RV_REG_T1, imm, ctx);
 			emit(rv_and(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_OR | BPF_K:
@@ -917,7 +922,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit_imm(RV_REG_T1, imm, ctx);
 			emit(rv_or(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_XOR | BPF_K:
@@ -928,7 +933,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit_imm(RV_REG_T1, imm, ctx);
 			emit(rv_xor(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_MUL | BPF_K:
@@ -936,7 +941,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_imm(RV_REG_T1, imm, ctx);
 		emit(is64 ? rv_mul(rd, rd, RV_REG_T1) :
 		     rv_mulw(rd, rd, RV_REG_T1), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_DIV | BPF_K:
@@ -944,7 +949,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_imm(RV_REG_T1, imm, ctx);
 		emit(is64 ? rv_divu(rd, rd, RV_REG_T1) :
 		     rv_divuw(rd, rd, RV_REG_T1), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_MOD | BPF_K:
@@ -952,7 +957,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_imm(RV_REG_T1, imm, ctx);
 		emit(is64 ? rv_remu(rd, rd, RV_REG_T1) :
 		     rv_remuw(rd, rd, RV_REG_T1), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_LSH | BPF_K:
@@ -1503,6 +1508,11 @@ static void bpf_flush_icache(void *start, void *end)
 	flush_icache_range((unsigned long)start, (unsigned long)end);
 }
 
+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 {
 	bool tmp_blinded = false, extra_pass = false;
-- 
2.7.4
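
As I read this series, the change above has three cooperating parts: bpf_jit_hardware_zext() returning false tells the bpf core that this JIT cannot rely on the hardware zero-extending 32-bit sub-registers, so the verifier inserts explicit zero-extension instructions and sets prog->aux->verifier_zext; the new BPF_ALU | BPF_ZEXT case implements those inserted instructions; and every 32-bit ALU path then skips its own emit_zext_32() when aux->verifier_zext is set. The standalone C sketch below only illustrates that last decision; fake_ctx and needs_explicit_zext are made-up names for illustration, not kernel code.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the relevant bit of struct bpf_prog_aux. */
struct fake_ctx {
	bool verifier_zext;	/* verifier already inserted zext insns? */
};

/* Should the JIT append its own zero extension after a 32-bit ALU op? */
static bool needs_explicit_zext(bool is64, const struct fake_ctx *ctx)
{
	return !is64 && !ctx->verifier_zext;
}

int main(void)
{
	struct fake_ctx without = { .verifier_zext = false };
	struct fake_ctx with = { .verifier_zext = true };

	/* 32-bit op, verifier did not insert zext: JIT must emit it (prints 1). */
	printf("%d\n", needs_explicit_zext(false, &without));
	/* 32-bit op, verifier inserted zext: JIT skips it (prints 0). */
	printf("%d\n", needs_explicit_zext(false, &with));
	/* 64-bit op never needs a zero extension (prints 0). */
	printf("%d\n", needs_explicit_zext(true, &with));
	return 0;
}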