Date: Tue, 1 Jun 2021 18:11:59 +0100
From: Mark Rutland
To: Rob Herring
Cc: Will Deacon, Catalin Marinas, Peter Zijlstra, Ingo Molnar,
 Arnaldo Carvalho de Melo, Jiri Olsa, Kan Liang, Ian Rogers,
 Alexander Shishkin, Honnappa Nagarahalli, Zachary.Leaf@arm.com,
 Raphael Gault, Jonathan Cameron, Namhyung Kim, Itaru Kitayama,
 linux-arm-kernel, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v8 3/5] arm64: perf: Enable PMU counter userspace access for perf event
Message-ID: <20210601171159.GD3326@C02TD0UTHF1T.local>
References: <20210517195405.3079458-1-robh@kernel.org>
 <20210517195405.3079458-4-robh@kernel.org>
 <20210601135526.GA3326@C02TD0UTHF1T.local>

On Tue, Jun 01,
2021 at 10:00:53AM -0500, Rob Herring wrote:
> On Tue, Jun 1, 2021 at 8:55 AM Mark Rutland wrote:
> > On Mon, May 17, 2021 at 02:54:03PM -0500, Rob Herring wrote:
> > > +static void armv8pmu_enable_user_access(struct arm_pmu *cpu_pmu)
> > > +{
> > > +	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
> > > +
> > > +	if (!bitmap_empty(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS)) {
> > > +		int i;
> > > +		/* Don't need to clear assigned counters. */
> > > +		bitmap_xor(cpuc->dirty_mask, cpuc->dirty_mask,
> > > +			   cpuc->used_mask, ARMPMU_MAX_HWEVENTS);
> > > +
> > > +		for_each_set_bit(i, cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS) {
> > > +			if (i == ARMV8_IDX_CYCLE_COUNTER)
> > > +				write_sysreg(0, pmccntr_el0);
> > > +			else
> > > +				armv8pmu_write_evcntr(i, 0);
> > > +		}
> > > +		bitmap_zero(cpuc->dirty_mask, ARMPMU_MAX_HWEVENTS);
> > > +	}
> > > +
> > > +	write_sysreg(ARMV8_PMU_USERENR_ER | ARMV8_PMU_USERENR_CR, pmuserenr_el0);
> > > +}
> >
> > This still leaks the values of CPU-bound events, or task-bound events
> > owned by others, right?
>
> For CPU-bound events, yes. There's not any way to prevent that without
> per counter access controls.
>
> It is clearing other's task-bound events.

Sorry, I misspoke. When I said "task-bound events owned by others", I had
meant events bound to this task, but owned by someone else (e.g. the
system administrator).

Thanks for confirming!
> > > +static void armv8pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
> > > +{
> > > +	if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR) ||
> > > +	    (atomic_read(&event->mmap_count) != 1))
> > > +		return;
> > > +
> > > +	if (atomic_inc_return(&event->ctx->nr_user) == 1) {
> > > +		unsigned long flags;
> > > +		atomic_inc(&event->pmu->sched_cb_usage);
> > > +		local_irq_save(flags);
> > > +		armv8pmu_enable_user_access(to_arm_pmu(event->pmu));
> > > +		local_irq_restore(flags);
> > > +	}
> > > +}
> > > +
> > > +static void armv8pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
> > > +{
> > > +	if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR) ||
> > > +	    (atomic_read(&event->mmap_count) != 1))
> > > +		return;
> > > +
> > > +	if (atomic_dec_and_test(&event->ctx->nr_user)) {
> > > +		atomic_dec(&event->pmu->sched_cb_usage);
> > > +		armv8pmu_disable_user_access();
> > > +	}
> > > }
> >
> > We can open an event for task A, but call mmap()/munmap() for that event
> > from task B, which will do the enable/disable on task B rather than task
> > A. The core doesn't enforce that the mmap is performed on the same core,
> > so I don't think this is quite right, unfortunately.
>
> Why do we care and who wants to do that? It all seems like a
> convoluted scenario that isn't really going to happen. I prefer to not
> support that until someone asks for it. Maybe we should check for the
> condition (event->ctx->task != current) though.

My reason for caring is that it means our accounting structures aren't
aligned with the actual CPU state, and it's going to be very easy for
this to go wrong as things get reworked in future.

If we're saying someone shouldn't do this, then we should enforce that
they can't. If they can, then I'd prefer to have deterministic behaviour
that isn't going to cause us issues in future.

I think that we should treat this like changing other event properties
(e.g.
the period, or enabling/disabling the event), with an
event_function_call(), since that will do the right thing regardless. I
also expect that people will want to do setup/teardown in a single
thread, and this would make that work too.

I reckon we can get the core logic for that into kernel/events/core.c.

> > I reckon we need to do something with task_function_call() to make this
> > happen in the context of the expected task.
>
> Following the x86 way, using the mm_context and an IPI handled that
> case, but Will didn't like either part of that implementation. His
> suggestion then was to enable access in an undef instr handler rather
> than an IPI. I think that would still work with this version.

IIUC the main objection there was that we'd have to signal all tasks
associated with the mm, or all CPUs, neither of which was attractive.

Now that we're saying this is strictly task-bound, I think this is
similar to enabling/disabling an event or changing its period, for which
we already do a (single) event_function_call() to ensure the CPU state
change happens on the correct CPU.

Thanks,
Mark.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel