From mboxrd@z Thu Jan 1 00:00:00 1970
From: Huang Rui <ray.huang@amd.com>
To: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski, Thomas Gleixner, Robert Richter, Jacob Shin, John Stultz, Frédéric Weisbecker
Cc: Guenter Roeck, Andreas Herrmann, Suravee Suthikulpanit, Aravind Gopalakrishnan, Borislav Petkov, Fengguang Wu, Aaron Lu, Huang Rui
Subject: [PATCH v4] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
Date: Thu, 28 Jan 2016 14:38:51 +0800
Message-ID: <1453963131-2013-1-git-send-email-ray.huang@amd.com>
X-Mailer: git-send-email 1.9.1
MIME-Version: 1.0
Content-Type: text/plain
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

Introduce an AMD accumulated power reporting mechanism for the Carrizo
(Family 15h, Model 60h) processor. It can be used to calculate the
average power consumed by a processor during a measurement interval.
The presence of the accumulated power mechanism is indicated by CPUID
Fn8000_0007_EDX[12].
---------------------------------------------------------------------
* Tsample: compute unit power accumulator sample period
* Tref: the PTSC counter period
* PTSC: performance timestamp counter
* N: the ratio of the compute unit power accumulator sample period to
  the PTSC period
* Jmax: maximum compute unit accumulated power, indicated by the
  MaxCpuSwPwrAcc MSR C001007b
* Jx/Jy: compute unit accumulated power, indicated by the CpuSwPwrAcc
  MSR C001007a
* Tx/Ty: the value of the performance timestamp counter, indicated by
  the CU_PTSC MSR C0010280
* PwrCPUave: CPU average power

i.   Determine the ratio of Tsample to Tref by executing CPUID
     Fn8000_0007.
     N = value of CPUID Fn8000_0007_ECX[CpuPwrSampleTimeRatio[15:0]].

ii.  Read the full range of the cumulative energy value from the new
     MSR MaxCpuSwPwrAcc.
     Jmax = value returned.

iii. At time x, software reads the CpuSwPwrAcc MSR and samples the
     PTSC.
     Jx = value read from CpuSwPwrAcc; Tx = value read from PTSC.

iv.  At time y, software reads the CpuSwPwrAcc MSR and samples the
     PTSC.
     Jy = value read from CpuSwPwrAcc; Ty = value read from PTSC.

v.   Calculate the average power consumption for a compute unit over
     the time period (y - x). The unit of the result is uWatt:

     if (Jy < Jx) /* Rollover has occurred */
             Jdelta = (Jy + Jmax) - Jx
     else
             Jdelta = Jy - Jx
     PwrCPUave = N * Jdelta * 1000 / (Ty - Tx)
----------------------------------------------------------------------

This feature will be implemented in both hwmon and perf, as was
discussed on the mailing list earlier. In the current design, it
provides one event to report the per-package/processor power
consumption by summing the power value of each compute unit.
Simple example:

root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' make -j4
  CHK     include/config/kernel.release
  CHK     include/generated/uapi/linux/version.h
  CHK     include/generated/utsrelease.h
  CHK     include/generated/timeconst.h
  CHK     include/generated/bounds.h
  CHK     include/generated/asm-offsets.h
  CALL    scripts/checksyscalls.sh
  CHK     include/generated/compile.h
  SKIPPED include/generated/compile.h
  Building modules, stage 2.
Kernel: arch/x86/boot/bzImage is ready  (#40)
  MODPOST 4225 modules

 Performance counter stats for 'system wide':

            183.44 mWatts power/power-pkg/

     341.837270111 seconds time elapsed

root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' sleep 10

 Performance counter stats for 'system wide':

              0.18 mWatts power/power-pkg/

      10.012551815 seconds time elapsed

Reference: http://lkml.kernel.org/r/20150831160622.GA29830@nazgul.tnic

Suggested-by: Peter Zijlstra
Suggested-by: Ingo Molnar
Suggested-by: Borislav Petkov
Signed-off-by: Huang Rui <ray.huang@amd.com>
Cc: Guenter Roeck
---

Hi,

This series of patches introduces the perf implementation of the
accumulated power reporting algorithm. It calculates the average power
consumption of the processor. The CPU feature flag is
CPUID.8000_0007H:EDX[12].

Changes from v1 -> v2:
- Add a patch to fix the build issue reported by the kbuild test
  robot.

Changes from v2 -> v3:
- Use raw_spinlock_t instead of spinlock_t, because it needs to work
  in the -rt mode use case.
- Use topology_sibling_cpumask to make the cpumask operations easier.

Changes from v3 -> v4:
- Remove active_list, because it is not iterated.
- Capitalize sentences consistently and fix some typos.
- Fix some code style issues.
- Initialize structures in a vertically aligned manner.
- Remove an unnecessary comment.
- Fix a runtime bug, and do some testing of the CPU-hotplug scenario.
Thanks,
Rui

---
 arch/x86/kernel/cpu/Makefile               |   1 +
 arch/x86/kernel/cpu/perf_event_amd_power.c | 498 +++++++++++++++++++++++++++++
 2 files changed, 499 insertions(+)
 create mode 100644 arch/x86/kernel/cpu/perf_event_amd_power.c

diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index 5803130..97f3413 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_PERF_EVENTS)		+= perf_event.o
 
 ifdef CONFIG_PERF_EVENTS
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd.o perf_event_amd_uncore.o
+obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_power.o
 ifdef CONFIG_AMD_IOMMU
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_iommu.o
 endif
diff --git a/arch/x86/kernel/cpu/perf_event_amd_power.c b/arch/x86/kernel/cpu/perf_event_amd_power.c
new file mode 100644
index 0000000..01630ec
--- /dev/null
+++ b/arch/x86/kernel/cpu/perf_event_amd_power.c
@@ -0,0 +1,498 @@
+/*
+ * Performance events - AMD Processor Power Reporting Mechanism
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Huang Rui <ray.huang@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/perf_event.h>
+#include <asm/cpu_device_id.h>
+#include "perf_event.h"
+
+#define MSR_F15H_CU_PWR_ACCUMULATOR	0xc001007a
+#define MSR_F15H_CU_MAX_PWR_ACCUMULATOR	0xc001007b
+#define MSR_F15H_PTSC			0xc0010280
+
+/*
+ * Event code: LSB 8 bits, passed in attr->config;
+ * any other bit is reserved.
+ */
+#define AMD_POWER_EVENT_MASK	0xFFULL
+
+#define MAX_CUS	8
+
+/*
+ * Accumulated power status counters.
+ */
+#define AMD_POWER_PKG_ID	0
+#define AMD_POWER_EVENTSEL_PKG	1
+
+/*
+ * The ratio of the compute unit power accumulator sample period to
+ * the PTSC period.
+ */
+static unsigned int cpu_pwr_sample_ratio;
+static unsigned int cores_per_cu;
+static unsigned int cu_num;
+
+/*
+ * Maximum accumulated power of a compute unit.
+ */
+static u64 max_cu_acc_power;
+
+struct power_pmu {
+	raw_spinlock_t		lock;
+	struct pmu		*pmu; /* pointer to power_pmu_class */
+	local64_t		cpu_sw_pwr_ptsc;
+	/*
+	 * These two cpumasks are used to avoid allocations in the
+	 * CPU_STARTING phase, because power_cpu_prepare() will be
+	 * called with IRQs disabled.
+	 */
+	cpumask_var_t		mask;
+	cpumask_var_t		tmp_mask;
+};
+
+static struct pmu pmu_class;
+
+/*
+ * Accumulated power measures the sum of each compute unit's power
+ * consumption, so only one core is picked from each compute unit to
+ * read the power with MSR_F15H_CU_PWR_ACCUMULATOR. cpu_mask is the
+ * CPU bitmap of all cores picked to measure the power of the compute
+ * units they belong to.
+ */
+static cpumask_t cpu_mask;
+
+static DEFINE_PER_CPU(struct power_pmu *, amd_power_pmu);
+
+static u64 event_update(struct perf_event *event, struct power_pmu *pmu)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 prev_raw_count, new_raw_count, prev_ptsc, new_ptsc;
+	u64 delta, tdelta;
+
+again:
+	prev_raw_count = local64_read(&hwc->prev_count);
+	prev_ptsc = local64_read(&pmu->cpu_sw_pwr_ptsc);
+	rdmsrl(event->hw.event_base, new_raw_count);
+	rdmsrl(MSR_F15H_PTSC, new_ptsc);
+
+	if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
+			    new_raw_count) != prev_raw_count) {
+		cpu_relax();
+		goto again;
+	}
+
+	/*
+	 * Calculate the power consumption for each compute unit over
+	 * a time period; the unit of the final value (delta) is
+	 * micro-Watts. Then add it to the event count.
+	 */
+	if (new_raw_count < prev_raw_count) {
+		delta = max_cu_acc_power + new_raw_count;
+		delta -= prev_raw_count;
+	} else {
+		delta = new_raw_count - prev_raw_count;
+	}
+
+	delta *= cpu_pwr_sample_ratio * 1000;
+	tdelta = new_ptsc - prev_ptsc;
+
+	do_div(delta, tdelta);
+	local64_add(delta, &event->count);
+
+	return new_raw_count;
+}
+
+static void
+__pmu_event_start(struct power_pmu *pmu, struct perf_event *event)
+{
+	u64 ptsc, counts;
+
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+		return;
+
+	event->hw.state = 0;
+
+	rdmsrl(MSR_F15H_PTSC, ptsc);
+	local64_set(&pmu->cpu_sw_pwr_ptsc, ptsc);
+	rdmsrl(event->hw.event_base, counts);
+	local64_set(&event->hw.prev_count, counts);
+}
+
+static void pmu_event_start(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+
+	raw_spin_lock(&pmu->lock);
+	__pmu_event_start(pmu, event);
+	raw_spin_unlock(&pmu->lock);
+}
+
+static void pmu_event_stop(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+	struct hw_perf_event *hwc = &event->hw;
+
+	raw_spin_lock(&pmu->lock);
+
+	/* Mark event as deactivated and stopped. */
+	if (!(hwc->state & PERF_HES_STOPPED))
+		hwc->state |= PERF_HES_STOPPED;
+
+	/*
+	 * Check if an update of the SW counter is necessary.
+	 */
+	if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
+		/*
+		 * Drain the remaining delta count out of an event
+		 * that we are disabling:
+		 */
+		event_update(event, pmu);
+		hwc->state |= PERF_HES_UPTODATE;
+	}
+
+	raw_spin_unlock(&pmu->lock);
+}
+
+static int pmu_event_add(struct perf_event *event, int mode)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+	struct hw_perf_event *hwc = &event->hw;
+
+	raw_spin_lock(&pmu->lock);
+
+	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+
+	if (mode & PERF_EF_START)
+		__pmu_event_start(pmu, event);
+
+	raw_spin_unlock(&pmu->lock);
+
+	return 0;
+}
+
+static void pmu_event_del(struct perf_event *event, int flags)
+{
+	pmu_event_stop(event, PERF_EF_UPDATE);
+}
+
+static int pmu_event_init(struct perf_event *event)
+{
+	u64 cfg = event->attr.config & AMD_POWER_EVENT_MASK;
+	int ret = 0;
+
+	/* Only look at AMD power events. */
+	if (event->attr.type != pmu_class.type)
+		return -ENOENT;
+
+	/*
+	 * Unsupported modes and filters.
+	 */
+	if (event->attr.exclude_user   ||
+	    event->attr.exclude_kernel ||
+	    event->attr.exclude_hv     ||
+	    event->attr.exclude_idle   ||
+	    event->attr.exclude_host   ||
+	    event->attr.exclude_guest  ||
+	    event->attr.sample_period) /* no sampling */
+		return -EINVAL;
+
+	if (cfg != AMD_POWER_EVENTSEL_PKG)
+		return -EINVAL;
+
+	event->hw.event_base = MSR_F15H_CU_PWR_ACCUMULATOR;
+	event->hw.config = cfg;
+	event->hw.idx = AMD_POWER_PKG_ID;
+
+	return ret;
+}
+
+static void pmu_event_read(struct perf_event *event)
+{
+	struct power_pmu *pmu = __this_cpu_read(amd_power_pmu);
+
+	event_update(event, pmu);
+}
+
+static ssize_t
+get_attr_cpumask(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpumap_print_to_pagebuf(true, buf, &cpu_mask);
+}
+
+static DEVICE_ATTR(cpumask, S_IRUGO, get_attr_cpumask, NULL);
+
+static struct attribute *pmu_attrs[] = {
+	&dev_attr_cpumask.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_attr_group = {
+	.attrs	= pmu_attrs,
+};
+
+/*
+ * Currently it only supports reporting the power of each
+ * processor/package.
+ */
+EVENT_ATTR_STR(power-pkg, power_pkg, "event=0x01");
+
+EVENT_ATTR_STR(power-pkg.unit, power_pkg_unit, "mWatts");
+
+/*
+ * Convert the count from micro-Watts to milli-Watts.
+ */
+EVENT_ATTR_STR(power-pkg.scale, power_pkg_scale, "1.000000e-3");
+
+static struct attribute *events_attr[] = {
+	EVENT_PTR(power_pkg),
+	EVENT_PTR(power_pkg_unit),
+	EVENT_PTR(power_pkg_scale),
+	NULL,
+};
+
+static struct attribute_group pmu_events_group = {
+	.name	= "events",
+	.attrs	= events_attr,
+};
+
+PMU_FORMAT_ATTR(event, "config:0-7");
+
+static struct attribute *formats_attr[] = {
+	&format_attr_event.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_format_group = {
+	.name	= "format",
+	.attrs	= formats_attr,
+};
+
+static const struct attribute_group *attr_groups[] = {
+	&pmu_attr_group,
+	&pmu_format_group,
+	&pmu_events_group,
+	NULL,
+};
+
+static struct pmu pmu_class = {
+	.attr_groups	= attr_groups,
+	.task_ctx_nr	= perf_invalid_context, /* system-wide only */
+	.event_init	= pmu_event_init,
+	.add		= pmu_event_add,
+	.del		= pmu_event_del,
+	.start		= pmu_event_start,
+	.stop		= pmu_event_stop,
+	.read		= pmu_event_read,
+};
+
+static int power_cpu_exit(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+	int ret = 0;
+	int target = nr_cpumask_bits;
+
+	cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu));
+
+	cpumask_clear_cpu(cpu, &cpu_mask);
+	cpumask_clear_cpu(cpu, pmu->mask);
+
+	if (!cpumask_and(pmu->tmp_mask, pmu->mask, cpu_online_mask))
+		goto out;
+
+	/*
+	 * Find a new CPU on the same compute unit, if this one was set
+	 * in the cpumask and there are still CPUs left on the compute
+	 * unit. Then move on to the new CPU.
+	 */
+	target = cpumask_any(pmu->tmp_mask);
+	if (target < nr_cpumask_bits && target != cpu)
+		cpumask_set_cpu(target, &cpu_mask);
+
+	WARN_ON(cpumask_empty(&cpu_mask));
+
+out:
+	/*
+	 * Migrate event and context to the new CPU.
+	 */
+	if (target < nr_cpumask_bits)
+		perf_pmu_migrate_context(pmu->pmu, cpu, target);
+
+	return ret;
+}
+
+static int power_cpu_init(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+
+	if (!pmu)
+		return 0;
+
+	if (!cpumask_and(pmu->mask, topology_sibling_cpumask(cpu), &cpu_mask))
+		cpumask_set_cpu(cpu, &cpu_mask);
+
+	return 0;
+}
+
+static int power_cpu_prepare(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+	int phys_id = topology_physical_package_id(cpu);
+	int ret = 0;
+
+	if (pmu)
+		return 0;
+
+	if (phys_id < 0)
+		return -EINVAL;
+
+	pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
+	if (!pmu)
+		return -ENOMEM;
+
+	if (!zalloc_cpumask_var(&pmu->mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	if (!zalloc_cpumask_var(&pmu->tmp_mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out1;
+	}
+
+	raw_spin_lock_init(&pmu->lock);
+
+	pmu->pmu = &pmu_class;
+
+	per_cpu(amd_power_pmu, cpu) = pmu;
+
+	return 0;
+
+out1:
+	free_cpumask_var(pmu->mask);
+out:
+	kfree(pmu);
+
+	return ret;
+}
+
+static void power_cpu_kfree(int cpu)
+{
+	struct power_pmu *pmu = per_cpu(amd_power_pmu, cpu);
+
+	if (!pmu)
+		return;
+
+	free_cpumask_var(pmu->mask);
+	free_cpumask_var(pmu->tmp_mask);
+	kfree(pmu);
+
+	per_cpu(amd_power_pmu, cpu) = NULL;
+}
+
+static int
+power_cpu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (long)hcpu;
+
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_UP_PREPARE:
+		if (power_cpu_prepare(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_STARTING:
+		if (power_cpu_init(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_DEAD:
+		power_cpu_kfree(cpu);
+		break;
+	case CPU_DOWN_PREPARE:
+		if (power_cpu_exit(cpu))
+			return NOTIFY_BAD;
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static const struct x86_cpu_id cpu_match[] = {
+	{ .vendor = X86_VENDOR_AMD, .family = 0x15 },
+	{},
+};
+
+static int __init amd_power_pmu_init(void)
+{
+	int i, ret;
+	u64 tmp;
+
+	if (!x86_match_cpu(cpu_match))
+		return 0;
+
+	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
+		return -ENODEV;
+
+	cores_per_cu = amd_get_cores_per_cu();
+	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
+
+	if (WARN_ON_ONCE(cu_num > MAX_CUS))
+		return -EINVAL;
+
+	cpu_pwr_sample_ratio = cpuid_ecx(0x80000007);
+
+	if (rdmsrl_safe(MSR_F15H_CU_MAX_PWR_ACCUMULATOR, &tmp)) {
+		pr_err("Failed to read max compute unit power accumulator MSR\n");
+		return -ENODEV;
+	}
+	max_cu_acc_power = tmp;
+
+	cpu_notifier_register_begin();
+
+	/*
+	 * Choose one online core of each compute unit.
+	 */
+	for (i = 0; i < boot_cpu_data.x86_max_cores; i += cores_per_cu) {
+		/* WARN_ON for empty CU masks */
+		WARN_ON(cpumask_empty(topology_sibling_cpumask(i)));
+		cpumask_set_cpu(cpumask_any(topology_sibling_cpumask(i)), &cpu_mask);
+	}
+
+	for_each_present_cpu(i) {
+		ret = power_cpu_prepare(i);
+		if (ret) {
+			/* Unwind on [0 ... i-1] CPUs. */
+			while (i--)
+				power_cpu_kfree(i);
+			goto out;
+		}
+		ret = power_cpu_init(i);
+		if (ret) {
+			/* Unwind on [0 ... i] CPUs. */
+			while (i >= 0)
+				power_cpu_kfree(i--);
+			goto out;
+		}
+	}
+
+	__perf_cpu_notifier(power_cpu_notifier);
+
+	ret = perf_pmu_register(&pmu_class, "power", -1);
+	if (WARN_ON(ret)) {
+		pr_warn("AMD Power PMU registration failed\n");
+		goto out;
+	}
+
+	pr_info("AMD Power PMU detected, %d compute units\n", cu_num);
+
+out:
+	cpu_notifier_register_done();
+
+	return ret;
+}
+device_initcall(amd_power_pmu_init);
-- 
1.9.1