From: "Aneesh Kumar K.V"
To: benh@kernel.crashing.org, paulus@samba.org, mpe@ellerman.id.au
Cc: linuxppc-dev@lists.ozlabs.org, Kevin Hao, "Aneesh Kumar K . V"
Subject: [PATCH for-4.8 V2 08/10] powerpc: use the jump label for cpu_has_feature
Date: Sat, 23 Jul 2016 14:42:41 +0530
In-Reply-To: <1469265163-1491-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
References: <1469265163-1491-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
Message-Id: <1469265163-1491-9-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
List-Id: Linux on PowerPC Developers Mail List

From: Kevin Hao

The CPU features are fixed once CPU feature probing is done, yet
cpu_has_feature() is used in some hot paths. Re-checking the CPU
feature bitmask on every invocation of cpu_has_feature() is
suboptimal. Reduce that overhead by using a jump label.

The generated assembly for the following C code:

	if (cpu_has_feature(CPU_FTR_XXX))
		xxx()

Before:

	lis	r9,-16230
	lwz	r9,12324(r9)
	lwz	r9,12(r9)
	andi.	r10,r9,512
	beqlr-

After:

	nop	if CPU_FTR_XXX is enabled
	b xxx	if CPU_FTR_XXX is not enabled

Signed-off-by: Kevin Hao
Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/include/asm/cpufeatures.h | 21 +++++++++++++++++++++
 arch/powerpc/include/asm/cputable.h    |  8 ++++++++
 arch/powerpc/kernel/cputable.c         | 20 ++++++++++++++++++++
 arch/powerpc/lib/feature-fixups.c      |  1 +
 4 files changed, 50 insertions(+)

diff --git a/arch/powerpc/include/asm/cpufeatures.h b/arch/powerpc/include/asm/cpufeatures.h
index bfa6cb8f5629..4a4a0b898463 100644
--- a/arch/powerpc/include/asm/cpufeatures.h
+++ b/arch/powerpc/include/asm/cpufeatures.h
@@ -13,10 +13,31 @@ static inline bool __cpu_has_feature(unsigned long feature)
 	return !!(CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature);
 }
 
+#ifdef CONFIG_JUMP_LABEL
+#include <linux/jump_label.h>
+
+extern struct static_key_true cpu_feat_keys[MAX_CPU_FEATURES];
+
+static __always_inline bool cpu_has_feature(unsigned long feature)
+{
+	int i;
+
+	if (CPU_FTRS_ALWAYS & feature)
+		return true;
+
+	if (!(CPU_FTRS_POSSIBLE & feature))
+		return false;
+
+	i = __builtin_ctzl(feature);
+	return static_branch_likely(&cpu_feat_keys[i]);
+}
+#else
 static inline bool cpu_has_feature(unsigned long feature)
 {
 	return __cpu_has_feature(feature);
 }
+#endif
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ASM_POWERPC_CPUFEATURE_H */
diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
index a49ea95849f8..6c161e456759 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -122,6 +122,12 @@ extern void do_feature_fixups(unsigned long value, void *fixup_start,
 
 extern const char *powerpc_base_platform;
 
+#ifdef CONFIG_JUMP_LABEL
+extern void cpu_feat_keys_init(void);
+#else
+static inline void cpu_feat_keys_init(void) { }
+#endif
+
 /* TLB flush actions. Used as argument to cpu_spec.flush_tlb() hook */
 enum {
 	TLB_INVAL_SCOPE_GLOBAL = 0,	/* invalidate all TLBs */
@@ -132,6 +138,8 @@ enum {
 
 /* CPU kernel features */
 
+#define MAX_CPU_FEATURES	(8 * sizeof(((struct cpu_spec *)0)->cpu_features))
+
 /* Retain the 32b definitions all use bottom half of word */
 #define CPU_FTR_COHERENT_ICACHE		ASM_CONST(0x00000001)
 #define CPU_FTR_L2CR			ASM_CONST(0x00000002)
diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
index d81f826d1029..67ce4816998e 100644
--- a/arch/powerpc/kernel/cputable.c
+++ b/arch/powerpc/kernel/cputable.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include <linux/jump_label.h>
 
 #include
 #include
@@ -2224,3 +2225,22 @@ struct cpu_spec * __init identify_cpu(unsigned long offset, unsigned int pvr)
 
 	return NULL;
 }
+
+#ifdef CONFIG_JUMP_LABEL
+struct static_key_true cpu_feat_keys[MAX_CPU_FEATURES] = {
+	[0 ... MAX_CPU_FEATURES - 1] = STATIC_KEY_TRUE_INIT
+};
+EXPORT_SYMBOL_GPL(cpu_feat_keys);
+
+void __init cpu_feat_keys_init(void)
+{
+	int i;
+
+	for (i = 0; i < MAX_CPU_FEATURES; i++) {
+		unsigned long f = 1ul << i;
+
+		if (!(cur_cpu_spec->cpu_features & f))
+			static_branch_disable(&cpu_feat_keys[i]);
+	}
+}
+#endif
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index 8b0b0b51e8aa..ec698b9e6238 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -183,6 +183,7 @@ void apply_feature_fixups(void)
 	 * by now.
 	 */
 	jump_label_init();
+	cpu_feat_keys_init();
 }
 
 #ifdef CONFIG_FTR_FIXUP_SELFTEST
-- 
2.7.4
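
A note on the feature-bit to key-index mapping used above: every CPU_FTR_* value is a single bit in cpu_spec->cpu_features, so __builtin_ctzl() on the mask yields the index of that bit, which is also the slot in cpu_feat_keys[]. The stand-alone sketch below is not part of the patch; FTR_FOO and FTR_BAR are made-up feature masks used only for illustration (FTR_BAR uses the 0x200 bit that the "andi. r10,r9,512" in the commit message tests).

	/* Stand-alone illustration of mapping a one-hot feature mask to the
	 * index of its static key. Not part of the patch; the feature
	 * values are hypothetical.
	 */
	#include <stdio.h>

	#define MAX_CPU_FEATURES	(8 * sizeof(unsigned long))

	/* Hypothetical one-bit feature masks, in the style of CPU_FTR_* */
	#define FTR_FOO			(1UL << 0)
	#define FTR_BAR			(1UL << 9)	/* 0x200 */

	static int feature_to_key_index(unsigned long feature)
	{
		/* Exactly one bit is set, so the count of trailing zeros is
		 * the bit's position and hence the index into cpu_feat_keys[].
		 */
		return __builtin_ctzl(feature);
	}

	int main(void)
	{
		printf("MAX_CPU_FEATURES = %zu\n", MAX_CPU_FEATURES);
		printf("FTR_FOO -> key %d\n", feature_to_key_index(FTR_FOO));
		printf("FTR_BAR -> key %d\n", feature_to_key_index(FTR_BAR));
		return 0;
	}

In the patch itself, cpu_feat_keys_init() walks the same indices once after feature probing and disables the keys for features the CPU lacks, so later cpu_has_feature() calls compile down to the patched nop/branch shown in the commit message.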