From: Rob Herring <robh@kernel.org>
To: Will Deacon, Catalin Marinas
Cc: Mark Rutland, Peter Zijlstra, linux-kernel@vger.kernel.org,
 Arnaldo Carvalho de Melo, Alexander Shishkin, Raphael Gault,
 Ingo Molnar, Jonathan Cameron, Namhyung Kim, Jiri Olsa,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH 4/5] arm64: perf: Enable pmu counter direct access for perf event on armv8
Date: Tue, 7 Jul 2020 14:53:32 -0600
Message-Id: <20200707205333.624938-5-robh@kernel.org>
In-Reply-To: <20200707205333.624938-1-robh@kernel.org>
References: <20200707205333.624938-1-robh@kernel.org>
X-Mailer: git-send-email 2.25.1

From: Raphael Gault

Keep track of events opened with direct access to the hardware
counters and modify permissions while they are open. The strategy used
here is the same one x86 uses: every time an event is mapped, the
permissions are set if required. The atomic field added to the
mm_context keeps track of the number of such events opened and lets us
deactivate the permissions once all of them are unmapped. We also need
to update the permissions in the context-switch code so that tasks keep
the right permissions.

Signed-off-by: Raphael Gault
Signed-off-by: Rob Herring
---
Changes:
 - Drop homogeneous check
 - Disable access for chained counters
 - Set pmc_width in user page
---
 arch/arm64/include/asm/mmu.h         |  6 +++++
 arch/arm64/include/asm/mmu_context.h |  2 ++
 arch/arm64/include/asm/perf_event.h  | 14 ++++++++++
 arch/arm64/kernel/perf_event.c       |  4 +++
 drivers/perf/arm_pmu.c               | 38 ++++++++++++++++++++++++++++
 5 files changed, 64 insertions(+)
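For context, a self-monitoring task consumes this through the perf mmap
page: when cap_user_rdpmc is set it reads the counter directly and folds
in the kernel-maintained offset under the page's seqlock, per the example
documented in include/uapi/linux/perf_event.h. A rough sketch of that
loop for arm64 follows; the index convention (index - 1 selects the event
counter) and the PMSELR_EL0/PMXEVCNTR_EL0 access sequence are assumptions
here, not something this patch defines, and cycle-counter handling is
omitted for brevity:

#include <linux/perf_event.h>
#include <stdint.h>

/* Illustrative only: read one programmable event counter at EL0. */
static uint64_t read_evcntr(uint32_t idx)	/* idx = userpg->index - 1 */
{
	uint64_t val;

	asm volatile("msr pmselr_el0, %0" : : "r" ((uint64_t)idx));
	asm volatile("isb");
	asm volatile("mrs %0, pmxevcntr_el0" : "=r" (val));
	return val;
}

static uint64_t read_event_count(struct perf_event_mmap_page *pg)
{
	uint64_t count, offset;
	uint32_t seq, idx;

	do {
		seq = pg->lock;
		asm volatile("" : : : "memory");	/* compiler barrier */
		idx = pg->index;
		offset = pg->offset;
		count = 0;
		if (pg->cap_user_rdpmc && idx) {
			count = read_evcntr(idx - 1);
			/* mask to the width advertised by the kernel */
			count &= (~0ULL) >> (64 - pg->pmc_width);
		}
		asm volatile("" : : : "memory");
	} while (pg->lock != seq);

	return offset + count;
}

When cap_user_rdpmc reads as 0, the caller must fall back to read() on
the event fd.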
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 68140fdd89d6..420938fe4982 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -19,6 +19,12 @@
 
 typedef struct {
 	atomic64_t	id;
+
+	/*
+	 * non-zero if userspace has direct access to the
+	 * hardware counters.
+	 */
+	atomic_t	pmu_direct_access;
 	void		*vdso;
 	unsigned long	flags;
 } mm_context_t;

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index b0bd9b55594c..b6c5a8df36ba 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -226,6 +227,7 @@ static inline void __switch_mm(struct mm_struct *next)
 	}
 
 	check_and_switch_context(next, cpu);
+	perf_switch_user_access(next);
 }
 
 static inline void

diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index e7765b62c712..65d47a7106db 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -8,6 +8,7 @@
 
 #include
 #include
+#include
 
 #define ARMV8_PMU_MAX_COUNTERS	32
 #define ARMV8_PMU_COUNTER_MASK	(ARMV8_PMU_MAX_COUNTERS - 1)
@@ -224,4 +225,17 @@ extern unsigned long perf_misc_flags(struct pt_regs *regs);
 	(regs)->pstate = PSR_MODE_EL1h;	\
 }
 
+static inline void perf_switch_user_access(struct mm_struct *mm)
+{
+	if (!IS_ENABLED(CONFIG_PERF_EVENTS))
+		return;
+
+	if (atomic_read(&mm->context.pmu_direct_access)) {
+		write_sysreg(ARMV8_PMU_USERENR_ER|ARMV8_PMU_USERENR_CR,
+			     pmuserenr_el0);
+	} else {
+		write_sysreg(0, pmuserenr_el0);
+	}
+}
+
 #endif
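For reference, the PMUSERENR_EL0 bits written above are already defined
earlier in asm/perf_event.h (values match the Arm ARM); ER and CR only
permit EL0 reads of the event and cycle counters, without granting EL0
any control over the PMU:

#define ARMV8_PMU_USERENR_MASK	0xf	 /* Mask for writable bits */
#define ARMV8_PMU_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
#define ARMV8_PMU_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
#define ARMV8_PMU_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
#define ARMV8_PMU_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */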
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 6c12a6ad36f5..93975ea0ec1a 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -1250,6 +1250,10 @@ void arch_perf_update_userpage(struct perf_event *event,
 	 */
 	freq = arch_timer_get_rate();
 	userpg->cap_user_time = 1;
+	userpg->cap_user_rdpmc = !!(event->hw.flags & ARMPMU_EL0_RD_CNTR);
+
+	if (userpg->cap_user_rdpmc)
+		userpg->pmc_width = armv8pmu_event_is_64bit(event) ? 64 : 32;
 
 	clocks_calc_mult_shift(&userpg->time_mult, &shift,
 			       freq, NSEC_PER_SEC, 0);

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index df352b334ea7..7a3263a09b34 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -25,6 +25,7 @@
 
 #include
 #include
+#include
 
 static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
 static DEFINE_PER_CPU(int, cpu_irq);
@@ -778,6 +779,41 @@ static void cpu_pmu_destroy(struct arm_pmu *cpu_pmu)
 					    &cpu_pmu->node);
 }
 
+static void refresh_pmuserenr(void *mm)
+{
+	perf_switch_user_access(mm);
+}
+
+static void armpmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
+{
+	if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR))
+		return;
+
+	/*
+	 * This function relies on not being called concurrently in two
+	 * tasks in the same mm. Otherwise one task could observe
+	 * pmu_direct_access > 1 and return all the way back to
+	 * userspace with user access disabled while another task is still
+	 * doing on_each_cpu_mask() to enable user access.
+	 *
+	 * For now, this can't happen because all callers hold mmap_lock
+	 * for write. If this changes, we'll need a different solution.
+	 */
+	lockdep_assert_held_write(&mm->mmap_lock);
+
+	if (atomic_inc_return(&mm->context.pmu_direct_access) == 1)
+		on_each_cpu(refresh_pmuserenr, mm, 1);
+}
+
+static void armpmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
+{
+	if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR))
+		return;
+
+	if (atomic_dec_and_test(&mm->context.pmu_direct_access))
+		on_each_cpu_mask(mm_cpumask(mm), refresh_pmuserenr, mm, 1);
+}
+
 static struct arm_pmu *__armpmu_alloc(gfp_t flags)
 {
 	struct arm_pmu *pmu;
@@ -799,6 +835,8 @@ static struct arm_pmu *__armpmu_alloc(gfp_t flags)
 		.pmu_enable	= armpmu_enable,
 		.pmu_disable	= armpmu_disable,
 		.event_init	= armpmu_event_init,
+		.event_mapped	= armpmu_event_mapped,
+		.event_unmapped	= armpmu_event_unmapped,
 		.add		= armpmu_add,
 		.del		= armpmu_del,
 		.start		= armpmu_start,
-- 
2.25.1
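To exercise the path end to end, a task opens an event on itself and
maps the control page, which is what invokes the new ->event_mapped()
callback. A hedged sketch (error handling trimmed; the attr fields are
the standard perf_event_open ones, nothing specific to this patch):

#include <linux/perf_event.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int open_self_counter(struct perf_event_mmap_page **pg)
{
	struct perf_event_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_INSTRUCTIONS;
	attr.exclude_kernel = 1;	/* count EL0 only */

	/* Measure the calling task on any CPU. */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0)
		return -1;

	/* Mapping the first page triggers armpmu_event_mapped(). */
	*pg = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);
	if (*pg == MAP_FAILED)
		return -1;

	/* (*pg)->cap_user_rdpmc now says whether direct reads are allowed. */
	return fd;
}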