From: Anju T Sudhakar <anju@linux.vnet.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: anju@linux.vnet.ibm.com, mpe@ellerman.id.au, maddy@linux.vnet.ibm.com, ppaidipe@linux.vnet.ibm.com
Subject: [PATCH] powerpc/perf: Fix memory allocation for core-imc based on num_possible_cpus()
Date: Fri, 11 May 2018 19:13:42 +0530
Message-Id: <1526046222-17842-1-git-send-email-anju@linux.vnet.ibm.com>

Currently, memory for core-imc is allocated based on cpu_present_mask, which
has bit 'cpu' set iff cpu is populated, and (cpu number / threads per core)
is used as the array index to access that memory. On a system with guarded
cores the allocation is therefore sized from the present CPUs while CPU
numbering is left unchanged, so the index derived from
(cpu number / threads per core) can exceed the allocated array and overflow
the memory.

The issue is exposed by a guard test, which makes some CPUs unavailable to
the system at boot time as well as at runtime. When CPUs are unavailable at
boot, memory is allocated according to the number of available CPUs, and a
later access using (cpu number / threads per core) as the index crashes the
system because it runs past the allocation.

Allocating memory for core-imc based on cpu_possible_mask, which has bit
'cpu' set iff cpu is populatable, fixes the issue.

Reported-by: Pridhiviraj Paidipeddi <ppaidipe@linux.vnet.ibm.com>
Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
---
 arch/powerpc/perf/imc-pmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
index d7532e7..75fb23c 100644
--- a/arch/powerpc/perf/imc-pmu.c
+++ b/arch/powerpc/perf/imc-pmu.c
@@ -1146,7 +1146,7 @@ static int init_nest_pmu_ref(void)
 
 static void cleanup_all_core_imc_memory(void)
 {
-	int i, nr_cores = DIV_ROUND_UP(num_present_cpus(), threads_per_core);
+	int i, nr_cores = DIV_ROUND_UP(num_possible_cpus(), threads_per_core);
 	struct imc_mem_info *ptr = core_imc_pmu->mem_info;
 	int size = core_imc_pmu->counter_mem_size;
 
@@ -1264,7 +1264,7 @@ static int imc_mem_init(struct imc_pmu *pmu_ptr, struct device_node *parent,
 		if (!pmu_ptr->pmu.name)
 			return -ENOMEM;
 
-		nr_cores = DIV_ROUND_UP(num_present_cpus(), threads_per_core);
+		nr_cores = DIV_ROUND_UP(num_possible_cpus(), threads_per_core);
 		pmu_ptr->mem_info = kcalloc(nr_cores, sizeof(struct imc_mem_info),
 								GFP_KERNEL);
 
-- 
2.7.4
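
For illustration, a minimal standalone sketch of the indexing problem the
patch fixes. The core counts, CPU numbers, and the struct and variable names
below are invented for demonstration and are not the kernel's; only
DIV_ROUND_UP mirrors the macro used in the patch. It shows how sizing an
array by the present-CPU count while indexing it with
(cpu number / threads per core) can run past the allocation once cores are
guarded out:

/*
 * Illustrative sketch only -- not kernel code. Assumes a machine with
 * 8 possible cores (threads_per_core = 8, 64 possible CPUs) where two
 * cores are guarded out, leaving 48 present CPUs.
 */
#include <stdio.h>
#include <stdlib.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

#define THREADS_PER_CORE	8
#define NUM_POSSIBLE_CPUS	64	/* 8 possible cores */
#define NUM_PRESENT_CPUS	48	/* two cores guarded out */

struct demo_mem_info { void *vbase; int id; };

int main(void)
{
	/* Sizing by present CPUs yields only 6 array slots... */
	int nr_cores = DIV_ROUND_UP(NUM_PRESENT_CPUS, THREADS_PER_CORE);
	struct demo_mem_info *mem_info = calloc(nr_cores, sizeof(*mem_info));

	/*
	 * ...but CPU numbering is not compacted when cores are guarded,
	 * so a still-present CPU such as CPU 56 (core 7) derives an
	 * index of 7, past the 6-entry allocation.
	 */
	int cpu = 56;
	int core_id = cpu / THREADS_PER_CORE;

	printf("allocated %d slots, index used = %d (%s)\n",
	       nr_cores, core_id,
	       core_id >= nr_cores ? "out of bounds" : "ok");

	free(mem_info);
	return 0;
}

Compiled and run, the sketch reports 6 allocated slots but a derived index
of 7: the same out-of-bounds access the patch avoids by sizing the
allocation with num_possible_cpus().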