From: David Carrillo-Cisneros <davidcc@google.com>
To: linux-kernel@vger.kernel.org
Cc: "x86@kernel.org" <x86@kernel.org>, Ingo Molnar <mingo@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Andi Kleen <ak@linux.intel.com>, Kan Liang <kan.liang@intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Vegard Nossum <vegard.nossum@gmail.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Nilay Vaish <nilayvaish@gmail.com>, Borislav Petkov <bp@suse.de>,
	Vikas Shivappa <vikas.shivappa@linux.intel.com>,
	Ravi V Shankar <ravi.v.shankar@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>, Paul Turner <pjt@google.com>,
	Stephane Eranian <eranian@google.com>,
	David Carrillo-Cisneros <davidcc@google.com>
Subject: [PATCH v3 16/46] perf/x86/intel/cmt: set sched rmid and complete pmu start/stop/add/del
Date: Sat, 29 Oct 2016 17:38:13 -0700	[thread overview]
Message-ID: <1477787923-61185-17-git-send-email-davidcc@google.com> (raw)
In-Reply-To: <1477787923-61185-1-git-send-email-davidcc@google.com>

Now that the pmonr state machine and pqr_common are in place, add
pmonr_update_sched_rmid to find the appropriate rmid to use. With it,
complete the bodies of the PMU functions that start/stop and add/del
events.

A pmonr in Unused state tries to allocate a free rmid the first time
one of its monitored threads is scheduled on a CPU in a package (lazy
allocation of rmids). If no rmid is available in that package, the
pmonr enters the Dep_Idle state and borrows the sched_rmid from its
Lowest Monitored Ancestor (lma) pmonr.
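
Roughly, the decision made with pkgd->lock held in
pmonr_update_sched_rmid() is the following (condensed from the patch
below, with comments added here for illustration):

  /* pmonr is still in Unused state; pkgd->lock is held */
  free_rmid = find_first_bit(pkgd->free_rmids, CMT_MAX_NR_RMIDS);
  if (free_rmid == CMT_MAX_NR_RMIDS)
          /* no free rmid in this package: borrow from the lma, enter Dep_Idle */
          pmonr_unused_to_dep_idle(pmonr);
  else
          /* claim the free rmid and enter Active */
          pmonr_unused_to_active(pmonr, free_rmid);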

When an event is stopped and no other event runs on that CPU, the PQR
MSR is loaded with the rmid of monr_hrchy_root's pmonr for that CPU's
package.
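
In short, the stop path falls back to the root's rmid (excerpt from
intel_cmt_event_stop() in the patch below):

  event->hw.state |= PERF_HES_STOPPED;
  /* reprogram this CPU's PQR MSR with the root pmonr's sched_rmid */
  rmids = monr_get_sched_in_rmids(monr_hrchy_root);
  pqr_cache_update_rmid(rmids.sched_rmid);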

See the pmonr state machine comments for details.

Signed-off-by: David Carrillo-Cisneros <davidcc@google.com>
---
 arch/x86/events/intel/cmt.c | 101 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 101 insertions(+)

diff --git a/arch/x86/events/intel/cmt.c b/arch/x86/events/intel/cmt.c
index ce5be74..9421a3e 100644
--- a/arch/x86/events/intel/cmt.c
+++ b/arch/x86/events/intel/cmt.c
@@ -650,6 +650,74 @@ static int monr_append_event(struct monr *monr, struct perf_event *event)
 	return err;
 }
 
+/**
+ * pmonr_update_sched_rmid() - Update sched_rmid for @pmonr in current package.
+ *
+ * Always finds valid rmids for non-Off pmonrs. Safe to call with IRQs disabled.
+ * A lock-free fast path reuses the rmid when the pmonr has been scheduled
+ * before in this package. Otherwise, tries to get a free rmid. On failure,
+ * enters Dep_Idle state and uses the rmid of its lender. There is always a
+ * pmonr to borrow from since monr_hrchy_root has all its pmonrs in Active
+ * state.
+ * Return: new pmonr_rmids for pmonr.
+ */
+static inline union pmonr_rmids pmonr_update_sched_rmid(struct pmonr *pmonr)
+{
+	struct pkg_data *pkgd = pmonr->pkgd;
+	union pmonr_rmids rmids;
+	u32 free_rmid;
+
+	/* Use atomic_rmids to check state in a lock-free fastpath. */
+	rmids.value = atomic64_read(&pmonr->atomic_rmids);
+	if (rmids.sched_rmid != INVALID_RMID)
+		return rmids;
+
+	/* No need to obtain RMID if in Off state. */
+	if (rmids.sched_rmid == rmids.read_rmid)
+		return rmids;
+
+	/*
+	 * Lock-free path failed. Now acquire lock and verify that state
+	 * and atomic_rmids haven't changed. If still Unused, try to
+	 * obtain a free RMID.
+	 */
+	raw_spin_lock(&pkgd->lock);
+
+	/* With lock acquired it is ok to read pmonr::state. */
+	if (pmonr->state != PMONR_UNUSED) {
+		/* Update rmids in case they changed before acquiring lock. */
+		rmids.value = atomic64_read(&pmonr->atomic_rmids);
+		raw_spin_unlock(&pkgd->lock);
+		return rmids;
+	}
+
+	free_rmid = find_first_bit(pkgd->free_rmids, CMT_MAX_NR_RMIDS);
+	if (free_rmid == CMT_MAX_NR_RMIDS)
+		pmonr_unused_to_dep_idle(pmonr);
+	else
+		pmonr_unused_to_active(pmonr, free_rmid);
+
+	raw_spin_unlock(&pkgd->lock);
+
+	rmids.value = atomic64_read(&pmonr->atomic_rmids);
+
+	return rmids;
+}
+
+static inline union pmonr_rmids monr_get_sched_in_rmids(struct monr *monr)
+{
+	struct pmonr *pmonr;
+	union pmonr_rmids rmids;
+	u16 pkgid = topology_logical_package_id(smp_processor_id());
+
+	rcu_read_lock();
+	pmonr = rcu_dereference(monr->pmonrs[pkgid]);
+	rmids = pmonr_update_sched_rmid(pmonr);
+	rcu_read_unlock();
+
+	return rmids;
+}
+
 static void monr_hrchy_insert_leaf(struct monr *monr, struct monr *parent)
 {
 	unsigned long flags;
@@ -865,16 +933,49 @@ static void intel_cmt_event_read(struct perf_event *event)
 {
 }
 
+static inline void __intel_cmt_event_start(struct perf_event *event,
+					   union pmonr_rmids rmids)
+{
+	if (!(event->hw.state & PERF_HES_STOPPED))
+		return;
+	event->hw.state &= ~PERF_HES_STOPPED;
+	pqr_cache_update_rmid(rmids.sched_rmid);
+}
+
 static void intel_cmt_event_start(struct perf_event *event, int mode)
 {
+	union pmonr_rmids rmids;
+
+	rmids = monr_get_sched_in_rmids(monr_from_event(event));
+	__intel_cmt_event_start(event, rmids);
 }
 
 static void intel_cmt_event_stop(struct perf_event *event, int mode)
 {
+	union pmonr_rmids rmids;
+
+	if (event->hw.state & PERF_HES_STOPPED)
+		return;
+	event->hw.state |= PERF_HES_STOPPED;
+	rmids = monr_get_sched_in_rmids(monr_hrchy_root);
+	/*
+	 * HW keeps tracking occupancy for the rmid even when the event is
+	 * not scheduled, and reads can occur while the event is Inactive,
+	 * so there is no need to read the counter when the event is stopped.
+	 */
+	pqr_cache_update_rmid(rmids.sched_rmid);
 }
 
 static int intel_cmt_event_add(struct perf_event *event, int mode)
 {
+	union pmonr_rmids rmids;
+
+	event->hw.state = PERF_HES_STOPPED;
+	rmids = monr_get_sched_in_rmids(monr_from_event(event));
+
+	if (mode & PERF_EF_START)
+		__intel_cmt_event_start(event, rmids);
+
 	return 0;
 }
 
-- 
2.8.0.rc3.226.g39d4020
