From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Naveen N. Rao"
To: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	netdev@vger.kernel.org, Michael Ellerman
Cc: Matt Evans, Denis Kirjanov, Paul Mackerras, Alexei Starovoitov,
	Daniel Borkmann, "David S. Miller", Ananth N Mavinakayanahalli,
	Thadeu Lima de Souza Cascardo
Subject: [PATCHv2 4/7] ppc: bpf/jit: Introduce rotate immediate instructions
Date: Wed, 22 Jun 2016 21:55:04 +0530
Message-Id: <8264e2692c8d0b2a019827e052cdda29c4ec2fbc.1466612260.git.naveen.n.rao@linux.vnet.ibm.com>
List-Id: Linux on PowerPC Developers Mail List

Since we will be using the rotate immediate instructions for the extended
BPF JIT, let's introduce macros for them. And since the shift immediate
operations are implemented using the rotate immediate instructions, let's
redo those macros to use the newly introduced ones.

Cc: Matt Evans
Cc: Denis Kirjanov
Cc: Michael Ellerman
Cc: Paul Mackerras
Cc: Alexei Starovoitov
Cc: Daniel Borkmann
Cc: "David S. Miller"
Cc: Ananth N Mavinakayanahalli
Cc: Thadeu Lima de Souza Cascardo
Acked-by: Alexei Starovoitov
Signed-off-by: Naveen N. Rao
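
A quick aside on why this works (illustrative, not part of the patch): the
shift-immediate forms are simply rotate-and-mask special cases: slwi Rx,Ry,n
is rlwinm Rx,Ry,n,0,31-n; srwi Rx,Ry,n is rlwinm Rx,Ry,32-n,n,31; and
sldi Rx,Ry,n is rldicr Rx,Ry,n,63-n. The small user-space program below (the
rotl32/rotl64/mask32 helpers are ad hoc, not kernel code) checks these
rotate-and-mask identities against plain C shifts; note that the mask bounds
use IBM bit numbering, where bit 0 is the most significant bit.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Rotate left; n must be 1..31 (32-bit) or 1..63 (64-bit) here. */
static uint32_t rotl32(uint32_t x, unsigned int n)
{
	return (x << n) | (x >> (32 - n));
}

static uint64_t rotl64(uint64_t x, unsigned int n)
{
	return (x << n) | (x >> (64 - n));
}

/* 32-bit mask with IBM-numbered bits mb..me set (bit 0 is the MSB). */
static uint32_t mask32(unsigned int mb, unsigned int me)
{
	uint32_t m = 0;
	unsigned int b;

	for (b = mb; b <= me; b++)
		m |= 1u << (31 - b);
	return m;
}

int main(void)
{
	uint32_t x = 0xdeadbeef;
	uint64_t y = 0xfeedfacecafef00dULL;
	unsigned int n;

	for (n = 1; n < 32; n++) {
		/* slwi: rotate left by n, keep bits 0..31-n */
		assert((rotl32(x, n) & mask32(0, 31 - n)) == x << n);
		/* srwi: rotate left by 32-n, keep bits n..31 */
		assert((rotl32(x, 32 - n) & mask32(n, 31)) == x >> n);
	}
	for (n = 1; n < 64; n++) {
		/* sldi: rotate left by n, clear the low n bits (me = 63-n) */
		assert((rotl64(y, n) & ~((1ULL << n) - 1)) == y << n);
	}

	printf("shift/rotate identities hold\n");
	return 0;
}
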
---
 arch/powerpc/include/asm/ppc-opcode.h |  2 ++
 arch/powerpc/net/bpf_jit.h            | 20 +++++++++++---------
 2 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 1d035c1..fd8d640 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -272,6 +272,8 @@
 #define __PPC_SH(s)	__PPC_WS(s)
 #define __PPC_MB(s)	(((s) & 0x1f) << 6)
 #define __PPC_ME(s)	(((s) & 0x1f) << 1)
+#define __PPC_MB64(s)	(__PPC_MB(s) | ((s) & 0x20))
+#define __PPC_ME64(s)	__PPC_MB64(s)
 #define __PPC_BI(s)	(((s) & 0x1f) << 16)
 #define __PPC_CT(t)	(((t) & 0x0f) << 21)
 
diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
index 4c1e055..95d0e38 100644
--- a/arch/powerpc/net/bpf_jit.h
+++ b/arch/powerpc/net/bpf_jit.h
@@ -210,18 +210,20 @@ DECLARE_LOAD_FUNC(sk_load_byte_msh);
 				     ___PPC_RS(a) | ___PPC_RB(s))
 #define PPC_SRW(d, a, s)	EMIT(PPC_INST_SRW | ___PPC_RA(d) |	      \
 				     ___PPC_RS(a) | ___PPC_RB(s))
+#define PPC_RLWINM(d, a, i, mb, me)	EMIT(PPC_INST_RLWINM | ___PPC_RA(d) | \
+					___PPC_RS(a) | __PPC_SH(i) |	      \
+					__PPC_MB(mb) | __PPC_ME(me))
+#define PPC_RLDICR(d, a, i, me)		EMIT(PPC_INST_RLDICR | ___PPC_RA(d) | \
+					___PPC_RS(a) | __PPC_SH(i) |	      \
+					__PPC_ME64(me) | (((i) & 0x20) >> 4))
+
 /* slwi = rlwinm Rx, Ry, n, 0, 31-n */
-#define PPC_SLWI(d, a, i)	EMIT(PPC_INST_RLWINM | ___PPC_RA(d) |	      \
-				     ___PPC_RS(a) | __PPC_SH(i) |	      \
-				     __PPC_MB(0) | __PPC_ME(31-(i)))
+#define PPC_SLWI(d, a, i)	PPC_RLWINM(d, a, i, 0, 31-(i))
 /* srwi = rlwinm Rx, Ry, 32-n, n, 31 */
-#define PPC_SRWI(d, a, i)	EMIT(PPC_INST_RLWINM | ___PPC_RA(d) |	      \
-				     ___PPC_RS(a) | __PPC_SH(32-(i)) |	      \
-				     __PPC_MB(i) | __PPC_ME(31))
+#define PPC_SRWI(d, a, i)	PPC_RLWINM(d, a, 32-(i), i, 31)
 /* sldi = rldicr Rx, Ry, n, 63-n */
-#define PPC_SLDI(d, a, i)	EMIT(PPC_INST_RLDICR | ___PPC_RA(d) |	      \
-				     ___PPC_RS(a) | __PPC_SH(i) |	      \
-				     __PPC_MB(63-(i)) | (((i) & 0x20) >> 4))
+#define PPC_SLDI(d, a, i)	PPC_RLDICR(d, a, i, 63-(i))
+
 #define PPC_NEG(d, a)		EMIT(PPC_INST_NEG | ___PPC_RT(d) | ___PPC_RA(a))
 
 /* Long jump; (unconditional 'branch') */
-- 
2.8.2
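
For anyone wanting to eyeball the encodings, here is a user-space sketch that
expands the reworked macros outside the kernel. Only the PPC_RLWINM,
PPC_RLDICR, PPC_SLWI, PPC_SRWI and PPC_SLDI definitions are taken from this
patch; EMIT() is reduced to the bare instruction word, and the opcode
constants and register/shift field positions are assumed values in the style
of ppc-opcode.h rather than anything shown in this diff.

/*
 * Illustrative user-space expansion of the reworked JIT macros; NOT kernel
 * code.  The opcode and field-position values below are assumptions, not
 * part of this patch.
 */
#include <stdio.h>

#define EMIT(i)			(i)		/* the kernel EMIT() plants the word into the JIT image */

#define PPC_INST_RLWINM		0x54000000	/* assumed: rlwinm, primary opcode 21 */
#define PPC_INST_RLDICR		0x78000004	/* assumed: rldicr, primary opcode 30, XO 1 */

#define ___PPC_RA(a)		(((a) & 0x1f) << 16)	/* assumed field positions */
#define ___PPC_RS(s)		(((s) & 0x1f) << 21)
#define __PPC_SH(s)		(((s) & 0x1f) << 11)
#define __PPC_MB(s)		(((s) & 0x1f) << 6)
#define __PPC_ME(s)		(((s) & 0x1f) << 1)
#define __PPC_MB64(s)		(__PPC_MB(s) | ((s) & 0x20))
#define __PPC_ME64(s)		__PPC_MB64(s)

/* The macros introduced/reworked by this patch, copied from the diff. */
#define PPC_RLWINM(d, a, i, mb, me)	EMIT(PPC_INST_RLWINM | ___PPC_RA(d) | \
					___PPC_RS(a) | __PPC_SH(i) |	      \
					__PPC_MB(mb) | __PPC_ME(me))
#define PPC_RLDICR(d, a, i, me)		EMIT(PPC_INST_RLDICR | ___PPC_RA(d) | \
					___PPC_RS(a) | __PPC_SH(i) |	      \
					__PPC_ME64(me) | (((i) & 0x20) >> 4))
#define PPC_SLWI(d, a, i)	PPC_RLWINM(d, a, i, 0, 31-(i))
#define PPC_SRWI(d, a, i)	PPC_RLWINM(d, a, 32-(i), i, 31)
#define PPC_SLDI(d, a, i)	PPC_RLDICR(d, a, i, 63-(i))

int main(void)
{
	/* Words a JIT built on these macros would emit. */
	printf("slwi r4,r5,3  -> 0x%08x\n", (unsigned int)PPC_SLWI(4, 5, 3));
	printf("srwi r4,r5,3  -> 0x%08x\n", (unsigned int)PPC_SRWI(4, 5, 3));
	printf("sldi r3,r3,32 -> 0x%08x\n", (unsigned int)PPC_SLDI(3, 3, 32));
	printf("sldi r3,r3,1  -> 0x%08x\n", (unsigned int)PPC_SLDI(3, 3, 1));
	return 0;
}

The printed words can be cross-checked against a disassembler; in particular,
sldi with a shift below 32 exercises the high mask bit that __PPC_ME64()
encodes, since the mask end there is 32 or larger.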