From: Alexander Shishkin
To: kan.liang@intel.com, peterz@infradead.org, linux-kernel@vger.kernel.org
Cc: ak@linux.intel.com, eranian@google.com, vincent.weaver@maine.edu, tglx@linutronix.de, mingo@kernel.org, acme@redhat.com, jolsa@redhat.com, ying.huang@linux.intel.com, Kan Liang
Subject: Re: [PATCH 1/1] perf/core: find auxiliary events in running pmus list
In-Reply-To: <1456348836-6163-1-git-send-email-kan.liang@intel.com>
References: <1456348836-6163-1-git-send-email-kan.liang@intel.com>
Date: Thu, 25 Feb 2016 15:29:24 +0200
Message-ID: <87y4a82a17.fsf@ashishki-desk.ger.corp.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

kan.liang@intel.com writes:

> From: Kan Liang
>
> The perf_event_aux function goes through the pmus list to find the
> proper auxiliary events to output. The pmus list consists of all
> possible pmus in the system, which may or may not be running at the
> moment, while the auxiliary events must come from running pmus.
> Therefore searching non-running pmus is unnecessary and expensive,
> especially when there are many non-running pmus on the list.
>
> For example, the brk test case in lkp triggers many mmap operations
> while perf with cycles:pp is also running on the system. As a result,
> many perf_event_aux calls are made, and each one searches the whole
> pmus list.
> If we enable the uncore support (even when uncore events are not
> really used), dozens of uncore pmus are added to the pmus list, which
> can significantly decrease brk_test's ops_per_sec. Based on our test,
> the ops_per_sec without the uncore patch is 2647573, while the
> ops_per_sec with the uncore patch is only 1768444, which is a 33.2%
> reduction.

What does this ops_per_sec measure, exactly? Just curious.

You'll probably also observe the same effect if you simply create a
bunch of disabled events before you measure the time that it takes
perf_event_aux() to notify everybody. Even worse, because you can have
way more events than pmus. The question is: is this really a problem?

> This patch introduces a running_pmus list which only tracks the
> running pmus in the system. perf_event_aux uses the running_pmus list
> instead of the pmus list to find auxiliary events.

This patch also adds a global mutex that serializes *all* event
creation/freeing, including the fork and exit paths. I mean:

> @@ -7740,6 +7770,29 @@ static void account_event_cpu(struct perf_event *event, int cpu)
>  		atomic_inc(&per_cpu(perf_cgroup_events, cpu));
>  }
>
> +static void account_running_pmu(struct perf_event *event)
> +{
> +	struct running_pmu *pmu;
> +
> +	mutex_lock(&running_pmus_lock);
> +
> +	list_for_each_entry(pmu, &running_pmus, entry) {
> +		if (pmu->pmu == event->pmu) {
> +			pmu->nr_event++;
> +			goto out;
> +		}
> +	}
> +
> +	pmu = kzalloc(sizeof(struct running_pmu), GFP_KERNEL);
> +	if (pmu != NULL) {
> +		pmu->nr_event++;
> +		pmu->pmu = event->pmu;
> +		list_add_rcu(&pmu->entry, &running_pmus);
> +	}
> +out:
> +	mutex_unlock(&running_pmus_lock);
> +}
> +
>  static void account_event(struct perf_event *event)
>  {
>  	bool inc = false;
> @@ -7772,6 +7825,8 @@ static void account_event(struct perf_event *event)
>  		static_key_slow_inc(&perf_sched_events.key);
>
>  	account_event_cpu(event, event->cpu);
> +
> +	account_running_pmu(event);

doesn't look justified.

Regards,
--
Alex