From mboxrd@z Thu Jan 1 00:00:00 1970
From: Huang Rui <ray.huang@amd.com>
To: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Andy Lutomirski,
	Thomas Gleixner, Robert Richter, Jacob Shin,
	Arnaldo Carvalho de Melo, Kan Liang
Cc: Suravee Suthikulpanit, Aravind Gopalakrishnan, Borislav Petkov,
	Fengguang Wu, Huang Rui, Guenter Roeck
Subject: [PATCH v5] perf/x86/amd/power: Add AMD accumulated power reporting mechanism
Date: Fri, 26 Feb 2016 17:40:50 +0800
Message-ID: <1456479650-4942-1-git-send-email-ray.huang@amd.com>
X-Mailer: git-send-email 1.9.1
MIME-Version: 1.0
Content-Type: text/plain
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

Introduce an AMD accumulated power reporting mechanism for the Family
15h, Model 60h processor, which can be used to calculate the average
power consumed by a processor during a measurement interval. Support
for the feature is indicated by CPUID Fn8000_0007_EDX[12]. This
feature will be implemented both in hwmon and perf. The current design
provides one event to report per-package/processor power consumption,
by counting each compute unit's power value.
Here are the gory details of how the computation is done:
---------------------------------------------------------------------
* Tsample: compute unit power accumulator sample period
* Tref: the PTSC counter period (PTSC: performance timestamp counter)
* N: the ratio of compute unit power accumulator sample period to the
  PTSC period
* Jmax: max compute unit accumulated power, which is indicated by
  MSR_C001007b[MaxCpuSwPwrAcc]
* Jx/Jy: compute unit accumulated power, which is indicated by
  MSR_C001007a[CpuSwPwrAcc]
* Tx/Ty: the value of the performance timestamp counter, which is
  indicated by CU_PTSC MSR_C0010280[PTSC]
* PwrCPUave: CPU average power

i. Determine the ratio of Tsample to Tref by executing CPUID Fn8000_0007.
	N = value of CPUID Fn8000_0007_ECX[CpuPwrSampleTimeRatio[15:0]].

ii. Read the full range of the cumulative energy value from the new
    MSR MaxCpuSwPwrAcc.
	Jmax = value returned.

iii. At time x, software reads CpuSwPwrAcc and samples the PTSC.
	Jx = value read from CpuSwPwrAcc and Tx = value read from PTSC.

iv. At time y, software reads CpuSwPwrAcc and samples the PTSC.
	Jy = value read from CpuSwPwrAcc and Ty = value read from PTSC.

v. Calculate the average power consumption for a compute unit over the
   time period (y-x). The unit of the result is uWatt:

	if (Jy < Jx) // Rollover has occurred
		Jdelta = (Jy + Jmax) - Jx
	else
		Jdelta = Jy - Jx
	PwrCPUave = N * Jdelta * 1000 / (Ty - Tx)
----------------------------------------------------------------------

Simple example:

root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' make -j4
  CHK     include/config/kernel.release
  CHK     include/generated/uapi/linux/version.h
  CHK     include/generated/utsrelease.h
  CHK     include/generated/timeconst.h
  CHK     include/generated/bounds.h
  CHK     include/generated/asm-offsets.h
  CALL    scripts/checksyscalls.sh
  CHK     include/generated/compile.h
  SKIPPED include/generated/compile.h
  Building modules, stage 2.
Kernel: arch/x86/boot/bzImage is ready  (#40)
  MODPOST 4225 modules

 Performance counter stats for 'system wide':

            183.44 mWatts power/power-pkg/

     341.837270111 seconds time elapsed

root@hr-zp:/home/ray/tip# ./tools/perf/perf stat -a -e 'power/power-pkg/' sleep 10

 Performance counter stats for 'system wide':

              0.18 mWatts power/power-pkg/

      10.012551815 seconds time elapsed

Suggested-by: Peter Zijlstra
Suggested-by: Ingo Molnar
Suggested-by: Borislav Petkov
Signed-off-by: Huang Rui
Cc: Guenter Roeck
---
Hi,

This series of patches introduces the perf implementation of the
accumulated power reporting algorithm. It calculates the average power
consumption for the processor. The CPU feature flag is
CPUID.8000_0007H:EDX[12].

V5 is rebased on bp/tip-perf.

Changes from v1 -> v2:
- Add a patch to fix the build issue reported by the kbuild test robot.

Changes from v2 -> v3:
- Use raw_spinlock_t instead of spinlock_t, because it needs to meet
  the -rt mode use case.
- Use topology_sibling_cpumask to make the cpumask operations easier.

Changes from v3 -> v4:
- Remove active_list, because it is not iterated.
- Capitalize sentences consistently and fix some typos.
- Fix some code style issues.
- Initialize structures in a vertically aligned manner.
- Remove an unnecessary comment.
- Fix the runtime bug, and do some testing of the CPU-hotplug scenario.

Changes from v4 -> v5:
- Remove "struct pmu" and the lock from power_pmu, and rename it to
  power_pmu_masks.
- As per Peter's suggestion, add a new struct to hw_perf_event, and
  track these values per event.
Thanks,
Rui

---
 arch/x86/kernel/cpu/Makefile               |   1 +
 arch/x86/kernel/cpu/perf_event_amd_power.c | 445 +++++++++++++++++++++++++++++
 include/linux/perf_event.h                 |   4 +
 3 files changed, 450 insertions(+)
 create mode 100644 arch/x86/kernel/cpu/perf_event_amd_power.c

diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index faa7b52..ffc9650 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_PERF_EVENTS)		+= perf_event.o
 
 ifdef CONFIG_PERF_EVENTS
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd.o perf_event_amd_uncore.o
+obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_power.o
 ifdef CONFIG_AMD_IOMMU
 obj-$(CONFIG_CPU_SUP_AMD)		+= perf_event_amd_iommu.o
 endif
diff --git a/arch/x86/kernel/cpu/perf_event_amd_power.c b/arch/x86/kernel/cpu/perf_event_amd_power.c
new file mode 100644
index 0000000..fe2e5e0
--- /dev/null
+++ b/arch/x86/kernel/cpu/perf_event_amd_power.c
@@ -0,0 +1,445 @@
+/*
+ * Performance events - AMD Processor Power Reporting Mechanism
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Huang Rui <ray.huang@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/perf_event.h>
+#include <asm/cpu_device_id.h>
+#include "perf_event.h"
+
+#define MSR_F15H_CU_PWR_ACCUMULATOR	0xc001007a
+#define MSR_F15H_CU_MAX_PWR_ACCUMULATOR	0xc001007b
+#define MSR_F15H_PTSC			0xc0010280
+
+/* Event code: LSB 8 bits, passed in attr->config any other bit is reserved. */
+#define AMD_POWER_EVENT_MASK	0xFFULL
+
+#define MAX_CUS	8
+
+/*
+ * Accumulated power status counters.
+ */
+#define AMD_POWER_EVENTSEL_PKG	1
+
+/*
+ * The ratio of compute unit power accumulator sample period to the
+ * PTSC period.
+ */
+static unsigned int cpu_pwr_sample_ratio;
+static unsigned int cores_per_cu;
+static unsigned int cu_num;
+
+/* Maximum accumulated power of a compute unit. */
+static u64 max_cu_acc_power;
+
+struct power_pmu_masks {
+	/*
+	 * These two cpumasks are used for avoiding the allocations on the
+	 * CPU_STARTING phase because power_cpu_prepare() will be called with
+	 * IRQs disabled.
+	 */
+	cpumask_var_t mask;
+	cpumask_var_t tmp_mask;
+};
+
+static struct pmu pmu_class;
+
+/*
+ * Accumulated power represents the sum of each compute unit's (CU) power
+ * consumption. On any core of each CU we read the total accumulated power from
+ * MSR_F15H_CU_PWR_ACCUMULATOR. cpu_mask represents CPU bit map of all cores
+ * which are picked to measure the power for the CUs they belong to.
+ */
+static cpumask_t cpu_mask;
+
+static DEFINE_PER_CPU(struct power_pmu_masks *, amd_power_pmu);
+
+static void event_update(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 prev_pwr_acc, new_pwr_acc, prev_ptsc, new_ptsc;
+	u64 delta, tdelta;
+
+	prev_pwr_acc = hwc->pwr_acc;
+	prev_ptsc = hwc->ptsc;
+	rdmsrl(MSR_F15H_CU_PWR_ACCUMULATOR, new_pwr_acc);
+	rdmsrl(MSR_F15H_PTSC, new_ptsc);
+
+	/*
+	 * Calculate the CU power consumption over a time period, the unit of
+	 * final value (delta) is micro-Watts. Then add it to the event count.
+	 */
+	if (new_pwr_acc < prev_pwr_acc) {
+		delta = max_cu_acc_power + new_pwr_acc;
+		delta -= prev_pwr_acc;
+	} else
+		delta = new_pwr_acc - prev_pwr_acc;
+
+	delta *= cpu_pwr_sample_ratio * 1000;
+	tdelta = new_ptsc - prev_ptsc;
+
+	do_div(delta, tdelta);
+	local64_add(delta, &event->count);
+}
+
+static void __pmu_event_start(struct perf_event *event)
+{
+	if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+		return;
+
+	event->hw.state = 0;
+
+	rdmsrl(MSR_F15H_PTSC, event->hw.ptsc);
+	rdmsrl(MSR_F15H_CU_PWR_ACCUMULATOR, event->hw.pwr_acc);
+}
+
+static void pmu_event_start(struct perf_event *event, int mode)
+{
+	__pmu_event_start(event);
+}
+
+static void pmu_event_stop(struct perf_event *event, int mode)
+{
+	struct hw_perf_event *hwc = &event->hw;
+
+	/* Mark event as deactivated and stopped. */
+	if (!(hwc->state & PERF_HES_STOPPED))
+		hwc->state |= PERF_HES_STOPPED;
+
+	/* Check if software counter update is necessary. */
+	if ((mode & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
+		/*
+		 * Drain the remaining delta count out of an event
+		 * that we are disabling:
+		 */
+		event_update(event);
+		hwc->state |= PERF_HES_UPTODATE;
+	}
+}
+
+static int pmu_event_add(struct perf_event *event, int mode)
+{
+	struct hw_perf_event *hwc = &event->hw;
+
+	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+
+	if (mode & PERF_EF_START)
+		__pmu_event_start(event);
+
+	return 0;
+}
+
+static void pmu_event_del(struct perf_event *event, int flags)
+{
+	pmu_event_stop(event, PERF_EF_UPDATE);
+}
+
+static int pmu_event_init(struct perf_event *event)
+{
+	u64 cfg = event->attr.config & AMD_POWER_EVENT_MASK;
+
+	/* Only look at AMD power events. */
+	if (event->attr.type != pmu_class.type)
+		return -ENOENT;
+
+	/* Unsupported modes and filters. */
+	if (event->attr.exclude_user   ||
+	    event->attr.exclude_kernel ||
+	    event->attr.exclude_hv     ||
+	    event->attr.exclude_idle   ||
+	    event->attr.exclude_host   ||
+	    event->attr.exclude_guest  ||
+	    /* no sampling */
+	    event->attr.sample_period)
+		return -EINVAL;
+
+	if (cfg != AMD_POWER_EVENTSEL_PKG)
+		return -EINVAL;
+
+	return 0;
+}
+
+static void pmu_event_read(struct perf_event *event)
+{
+	event_update(event);
+}
+
+static ssize_t
+get_attr_cpumask(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpumap_print_to_pagebuf(true, buf, &cpu_mask);
+}
+
+static DEVICE_ATTR(cpumask, S_IRUGO, get_attr_cpumask, NULL);
+
+static struct attribute *pmu_attrs[] = {
+	&dev_attr_cpumask.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_attr_group = {
+	.attrs = pmu_attrs,
+};
+
+/*
+ * Currently it only supports to report the power of each
+ * processor/package.
+ */
+EVENT_ATTR_STR(power-pkg, power_pkg, "event=0x01");
+
+EVENT_ATTR_STR(power-pkg.unit, power_pkg_unit, "mWatts");
+
+/* Convert the count from micro-Watts to milli-Watts. */
+EVENT_ATTR_STR(power-pkg.scale, power_pkg_scale, "1.000000e-3");
+
+static struct attribute *events_attr[] = {
+	EVENT_PTR(power_pkg),
+	EVENT_PTR(power_pkg_unit),
+	EVENT_PTR(power_pkg_scale),
+	NULL,
+};
+
+static struct attribute_group pmu_events_group = {
+	.name = "events",
+	.attrs = events_attr,
+};
+
+PMU_FORMAT_ATTR(event, "config:0-7");
+
+static struct attribute *formats_attr[] = {
+	&format_attr_event.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_format_group = {
+	.name = "format",
+	.attrs = formats_attr,
+};
+
+static const struct attribute_group *attr_groups[] = {
+	&pmu_attr_group,
+	&pmu_format_group,
+	&pmu_events_group,
+	NULL,
+};
+
+static struct pmu pmu_class = {
+	.attr_groups	= attr_groups,
+	/* system-wide only */
+	.task_ctx_nr	= perf_invalid_context,
+	.event_init	= pmu_event_init,
+	.add		= pmu_event_add,
+	.del		= pmu_event_del,
+	.start		= pmu_event_start,
+	.stop		= pmu_event_stop,
+	.read		= pmu_event_read,
+};
+
+static int power_cpu_exit(int cpu)
+{
+	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
+	int target = nr_cpumask_bits;
+	int ret = 0;
+
+	cpumask_copy(pmu->mask, topology_sibling_cpumask(cpu));
+
+	cpumask_clear_cpu(cpu, &cpu_mask);
+	cpumask_clear_cpu(cpu, pmu->mask);
+
+	if (!cpumask_and(pmu->tmp_mask, pmu->mask, cpu_online_mask))
+		goto out;
+
+	/*
+	 * Find a new CPU on the same compute unit, if was set in cpumask
+	 * and still some CPUs on compute unit. Then move on to the new CPU.
+	 */
+	target = cpumask_any(pmu->tmp_mask);
+	if (target < nr_cpumask_bits && target != cpu)
+		cpumask_set_cpu(target, &cpu_mask);
+
+	WARN_ON(cpumask_empty(&cpu_mask));
+
+out:
+	/*
+	 * Migrate event and context to new CPU.
+	 */
+	if (target < nr_cpumask_bits)
+		perf_pmu_migrate_context(&pmu_class, cpu, target);
+
+	return ret;
+}
+
+static int power_cpu_init(int cpu)
+{
+	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
+
+	if (!pmu)
+		return 0;
+
+	if (!cpumask_and(pmu->mask, topology_sibling_cpumask(cpu), &cpu_mask))
+		cpumask_set_cpu(cpu, &cpu_mask);
+
+	return 0;
+}
+
+static int power_cpu_prepare(int cpu)
+{
+	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
+	int phys_id = topology_physical_package_id(cpu);
+	int ret = 0;
+
+	if (pmu)
+		return 0;
+
+	if (phys_id < 0)
+		return -EINVAL;
+
+	pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
+	if (!pmu)
+		return -ENOMEM;
+
+	if (!zalloc_cpumask_var(&pmu->mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	if (!zalloc_cpumask_var(&pmu->tmp_mask, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto out1;
+	}
+
+	per_cpu(amd_power_pmu, cpu) = pmu;
+
+	return 0;
+
+out1:
+	free_cpumask_var(pmu->mask);
+out:
+	kfree(pmu);
+
+	return ret;
+}
+
+static void power_cpu_kfree(int cpu)
+{
+	struct power_pmu_masks *pmu = per_cpu(amd_power_pmu, cpu);
+
+	if (!pmu)
+		return;
+
+	free_cpumask_var(pmu->mask);
+	free_cpumask_var(pmu->tmp_mask);
+	kfree(pmu);
+
+	per_cpu(amd_power_pmu, cpu) = NULL;
+}
+
+static int
+power_cpu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (long)hcpu;
+
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_UP_PREPARE:
+		if (power_cpu_prepare(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_STARTING:
+		if (power_cpu_init(cpu))
+			return NOTIFY_BAD;
+		break;
+	case CPU_DEAD:
+		power_cpu_kfree(cpu);
+		break;
+	case CPU_DOWN_PREPARE:
+		if (power_cpu_exit(cpu))
+			return NOTIFY_BAD;
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static const struct x86_cpu_id cpu_match[] = {
+	{ .vendor = X86_VENDOR_AMD, .family = 0x15 },
+	{},
+};
+
+static int __init amd_power_pmu_init(void)
+{
+	int i, ret;
+	u64 tmp;
+
+	if (!x86_match_cpu(cpu_match))
+		return 0;
+
+	if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
+		return -ENODEV;
+
+	cores_per_cu = amd_get_cores_per_cu();
+	cu_num = boot_cpu_data.x86_max_cores / cores_per_cu;
+
+	if (WARN_ON_ONCE(cu_num > MAX_CUS))
+		return -EINVAL;
+
+	cpu_pwr_sample_ratio = cpuid_ecx(0x80000007);
+
+	if (rdmsrl_safe(MSR_F15H_CU_MAX_PWR_ACCUMULATOR, &tmp)) {
+		pr_err("Failed to read max compute unit power accumulator MSR\n");
+		return -ENODEV;
+	}
+	max_cu_acc_power = tmp;
+
+	cpu_notifier_register_begin();
+
+	/* Choose one online core of each compute unit. */
+	for (i = 0; i < boot_cpu_data.x86_max_cores; i += cores_per_cu) {
+		WARN_ON(cpumask_empty(topology_sibling_cpumask(i)));
+		cpumask_set_cpu(cpumask_any(topology_sibling_cpumask(i)), &cpu_mask);
+	}
+
+	for_each_present_cpu(i) {
+		ret = power_cpu_prepare(i);
+		if (ret) {
+			/* Unwind on [0 ... i-1] CPUs. */
+			while (i--)
+				power_cpu_kfree(i);
+			goto out;
+		}
+		ret = power_cpu_init(i);
+		if (ret) {
+			/* Unwind on [0 ... i] CPUs. */
+			while (i >= 0)
+				power_cpu_kfree(i--);
+			goto out;
+		}
+	}
+
+	__perf_cpu_notifier(power_cpu_notifier);
+
+	ret = perf_pmu_register(&pmu_class, "power", -1);
+	if (WARN_ON(ret)) {
+		pr_warn("AMD Power PMU registration failed\n");
+		goto out;
+	}
+
+	pr_info("AMD Power PMU detected, %d compute units\n", cu_num);
+
+out:
+	cpu_notifier_register_done();
+
+	return ret;
+}
+device_initcall(amd_power_pmu_init);
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index f9828a4..01ea21c 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -128,6 +128,10 @@ struct hw_perf_event {
 		struct { /* itrace */
 			int			itrace_started;
 		};
+		struct { /* amd_power */
+			u64	pwr_acc;
+			u64	ptsc;
+		};
 #ifdef CONFIG_HAVE_HW_BREAKPOINT
 		struct { /* breakpoint */
 			/*
-- 
1.9.1