Date: Tue, 1 Jun 2021 14:55:26 +0100
From: Mark Rutland
To: Rob Herring
Cc: Will Deacon, Catalin Marinas, Peter Zijlstra, Ingo Molnar,
 Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang, Ian Rogers,
 Alexander Shishkin, honnappa.nagarahalli@arm.com, Zachary.Leaf@arm.com,
 Raphael Gault, Jonathan Cameron, Namhyung Kim, Itaru Kitayama,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v8 3/5] arm64: perf: Enable PMU counter userspace access for perf event
Message-ID: <20210601135526.GA3326@C02TD0UTHF1T.local>
References: <20210517195405.3079458-1-robh@kernel.org> <20210517195405.3079458-4-robh@kernel.org>
In-Reply-To: <20210517195405.3079458-4-robh@kernel.org>
On Mon, May 17, 2021 at 02:54:03PM -0500, Rob Herring wrote:
> Arm PMUs can support direct userspace access of counters which allows for
> low overhead (i.e. no syscall) self-monitoring of tasks. The same feature
> exists on x86 called 'rdpmc'. Unlike x86, userspace access will only be
> enabled for thread bound events. This could be extended if needed, but
> simplifies the implementation and reduces the chances for any
> information leaks (which the x86 implementation suffers from).
>
> When an event is capable of userspace access and has been mmapped, userspace
> access is enabled when the event is scheduled on a CPU's PMU. There's some
> additional overhead clearing counters when disabled in order to prevent
> leaking disabled counter data from other tasks.
>
> Unlike x86, enabling of userspace access must be requested with a new
> attr bit: config1:1. If the user requests userspace access and 64-bit
> counters, then chaining will be disabled and the user will get the
> maximum size counter the underlying h/w can support. The modes for
> config1 are as follows:
>
> config1 = 0 : user access disabled and always 32-bit
> config1 = 1 : user access disabled and always 64-bit (using chaining if needed)
> config1 = 2 : user access enabled and always 32-bit
> config1 = 3 : user access enabled and counter size matches underlying counter.
>
> Based on work by Raphael Gault, but has been
> completely re-written.
>
> Signed-off-by: Rob Herring

[...]

> +static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
> +{
> +	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
> +
> +	if (!bitmap_empty(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS)) {
> +		int i;
> +		/* Don't need to clear assigned counters. */
> +		bitmap_xor(cpuc->dirty_mask, cpuc->dirty_mask, cpuc->used_mask, ARMPMU_MAX_HWEVENTS);
> +
> +		for_each_set_bit(i, cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS) {
> +			if (i == ARMV8_IDX_CYCLE_COUNTER)
> +				write_sysreg(0, pmccntr_el0);
> +			else
> +				armv8pmu_write_evcntr(i, 0);
> +		}
> +		bitmap_zero(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS);
> +	}
> +
> +	write_sysreg(ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_CR, pmuserenr_el0);
> +}

This still leaks the values of CPU-bound events, or task-bound events
owned by others, right?

[...]

> +static void armv8pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
> +{
> +	if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR) || (atomic_read(&event->mmap_count) != 1))
> +		return;
> +
> +	if (atomic_inc_return(&event->ctx->nr_user) == 1) {
> +		unsigned long flags;
> +		atomic_inc(&event->pmu->sched_cb_usage);
> +		local_irq_save(flags);
> +		armv8pmu_enable_user_access(to_arm_pmu(event->pmu));
> +		local_irq_restore(flags);
> +	}
> +}
> +
> +static void armv8pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
> +{
> +	if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR) || (atomic_read(&event->mmap_count) != 1))
> +		return;
> +
> +	if (atomic_dec_and_test(&event->ctx->nr_user)) {
> +		atomic_dec(&event->pmu->sched_cb_usage);
> +		armv8pmu_disable_user_access();
> +	}
> }

We can open an event for task A, but call mmap()/munmap() for that event
from task B, which will do the enable/disable on task B rather than task
A. The core doesn't enforce that the mmap is performed on the same core,
so I don't think this is quite right, unfortunately.

I reckon we need to do something with task_function_call() to make this
happen in the context of the expected task.

Thanks,
Mark.