* [PATCH v2 0/6] perf/x86/amd: Add memory controller events
@ 2023-10-05 5:23 Sandipan Das
2023-10-05 5:23 ` [PATCH v2 1/6] perf/x86/amd/uncore: Refactor uncore management Sandipan Das
` (5 more replies)
0 siblings, 6 replies; 8+ messages in thread
From: Sandipan Das @ 2023-10-05 5:23 UTC (permalink / raw)
To: linux-kernel, linux-perf-users, x86
Cc: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
namhyung, irogers, adrian.hunter, tglx, bp, dave.hansen, hpa,
eranian, ananth.narayan, ravi.bangoria, santosh.shukla,
sandipan.das
Unified Memory Controller (UMC) events were introduced with Zen 4 as a
part of the Performance Monitoring Version 2 (PerfMonV2) enhancements.
Currently, Zen 4 supports up to 12 channels of DDR5 memory, each of which
is controlled by a dedicated UMC. Each UMC, in turn, has its own set of
performance monitoring counters. These counters provide information on
UMC command activity, from which utilization and bandwidth can be
derived. Using the perf tool, users can profile activity either on a
combined basis (across all active UMCs) or for individual UMCs.
E.g. measurement across all UMCs
$ sudo perf stat -e amd_umc/umc_cas_cmd.all/ -a -- sleep 1
Performance counter stats for 'system wide':
544,810 amd_umc/umc_cas_cmd.all/
1.002012663 seconds time elapsed
E.g. measurement specific to certain UMCs
$ sudo perf stat -e amd_umc_0/umc_cas_cmd.all/ -e amd_umc_4/umc_cas_cmd.all/ -a -- sleep 1
Performance counter stats for 'system wide':
27,096 amd_umc_0/umc_cas_cmd.all/
35,136 amd_umc_4/umc_cas_cmd.all/
1.001602807 seconds time elapsed
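As a rough illustration (not part of this series), the CAS command count
can be converted into an approximate bandwidth figure, assuming each CAS
command transfers one 64-byte burst. E.g. for the system-wide measurement
above:
bandwidth ~= (cas_cmds * 64 bytes) / elapsed_time
= (544810 * 64) / 1.002 s
~= 34.8 MB/s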
The available UMCs can be enumerated from sysfs, and the socket to which
each belongs can be derived from its cpumask.
E.g.
$ find /sys/bus/event_source/devices/ -maxdepth 1 -name "amd_umc_*" | sort
/sys/bus/event_source/devices/amd_umc_0
/sys/bus/event_source/devices/amd_umc_1
/sys/bus/event_source/devices/amd_umc_2
/sys/bus/event_source/devices/amd_umc_3
/sys/bus/event_source/devices/amd_umc_4
/sys/bus/event_source/devices/amd_umc_5
/sys/bus/event_source/devices/amd_umc_6
/sys/bus/event_source/devices/amd_umc_7
$ cat /sys/devices/amd_umc_0/cpumask
0
$ cat /sys/devices/amd_umc_4/cpumask
96
All of the output above comes from a dual-socket Genoa system with
96 cores and 4 populated memory channels per socket.
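For instance (illustrative output, assuming CPUs are enumerated linearly
across the two sockets), the owning socket can be cross-checked against
the standard CPU topology files:
$ cat /sys/devices/system/cpu/cpu0/topology/physical_package_id
0
$ cat /sys/devices/system/cpu/cpu96/topology/physical_package_id
1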
Previous versions can be found at:
v1: https://lore.kernel.org/all/cover.1689748843.git.sandipan.das@amd.com/
Changes in v2:
- Move collection of PMU CPUID info to startup of UNCORE_STARTING.
- Remove mechanism to read CPUID information using SMP callbacks.
- Defer PMU registration to startup of UNCORE_ONLINE since this can
only be done after collection of CPUID information.
- Remove mechanism to collect and free up unused uncore contexts as
this is no longer required.
- Rename some structures (amd_uncore is now called amd_uncore_ctx and
amd_uncore is instead a collection of amd_uncore_pmu instances).
- Add new uncore management handlers (scan, init, move, free) which are
called at different stages of CPU hotplug.
- Add Acked-by from Ian Rogers for the JSON events.
Sandipan Das (6):
perf/x86/amd/uncore: Refactor uncore management
perf/x86/amd/uncore: Move discovery and registration
perf/x86/amd/uncore: Use rdmsr if rdpmc is unavailable
perf/x86/amd/uncore: Add group exclusivity
perf/x86/amd/uncore: Add memory controller support
perf vendor events amd: Add Zen 4 memory controller events
arch/x86/events/amd/uncore.c | 1036 +++++++++++------
arch/x86/include/asm/msr-index.h | 4 +
arch/x86/include/asm/perf_event.h | 9 +
.../arch/x86/amdzen4/memory-controller.json | 101 ++
.../arch/x86/amdzen4/recommended.json | 84 ++
tools/perf/pmu-events/jevents.py | 2 +
6 files changed, 879 insertions(+), 357 deletions(-)
create mode 100644 tools/perf/pmu-events/arch/x86/amdzen4/memory-controller.json
--
2.34.1
* [PATCH v2 1/6] perf/x86/amd/uncore: Refactor uncore management
2023-10-05 5:23 [PATCH v2 0/6] perf/x86/amd: Add memory controller events Sandipan Das
@ 2023-10-05 5:23 ` Sandipan Das
2023-10-05 5:23 ` [PATCH v2 2/6] perf/x86/amd/uncore: Move discovery and registration Sandipan Das
` (4 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Sandipan Das @ 2023-10-05 5:23 UTC (permalink / raw)
To: linux-kernel, linux-perf-users, x86
Cc: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
namhyung, irogers, adrian.hunter, tglx, bp, dave.hansen, hpa,
eranian, ananth.narayan, ravi.bangoria, santosh.shukla,
sandipan.das
Since struct amd_uncore is used to manage per-cpu contexts, rename it to
amd_uncore_ctx in order to better reflect its purpose. Add a new struct
amd_uncore_pmu to encapsulate all attributes which are shared by per-cpu
contexts for a corresponding PMU. These include the number of counters,
active mask, MSR and RDPMC base addresses, etc. Since the struct pmu is
now embedded, the corresponding amd_uncore_pmu for a given event can be
found by simply using container_of().
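For reference, the lookup pattern enabled by embedding struct pmu looks
like this (fragment mirroring the patch below, shown only for
illustration):
static struct amd_uncore_pmu *event_to_amd_uncore_pmu(struct perf_event *event)
{
	/* event->pmu points at the embedded member, so container_of()
	 * recovers the enclosing amd_uncore_pmu without a lookup table */
	return container_of(event->pmu, struct amd_uncore_pmu, pmu);
}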
Finally, move all PMU-specific code to separate functions. While the
original event management functions continue to provide the base
functionality, all PMU-specific quirks and customizations are applied in
separate functions.
The motivation is to simplify the management of uncore PMUs.
Signed-off-by: Sandipan Das <sandipan.das@amd.com>
---
arch/x86/events/amd/uncore.c | 737 ++++++++++++++++++-----------------
1 file changed, 379 insertions(+), 358 deletions(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
index 83f15fe411b3..ffcecda13d65 100644
--- a/arch/x86/events/amd/uncore.c
+++ b/arch/x86/events/amd/uncore.c
@@ -27,56 +27,41 @@
#define COUNTER_SHIFT 16
+#define NUM_UNCORES_MAX 2 /* DF (or NB) and L3 (or L2) */
+#define UNCORE_NAME_LEN 16
+
#undef pr_fmt
#define pr_fmt(fmt) "amd_uncore: " fmt
static int pmu_version;
-static int num_counters_llc;
-static int num_counters_nb;
-static bool l3_mask;
static HLIST_HEAD(uncore_unused_list);
-struct amd_uncore {
+struct amd_uncore_ctx {
int id;
int refcnt;
int cpu;
- int num_counters;
- int rdpmc_base;
- u32 msr_base;
- cpumask_t *active_mask;
- struct pmu *pmu;
struct perf_event **events;
struct hlist_node node;
};
-static struct amd_uncore * __percpu *amd_uncore_nb;
-static struct amd_uncore * __percpu *amd_uncore_llc;
-
-static struct pmu amd_nb_pmu;
-static struct pmu amd_llc_pmu;
-
-static cpumask_t amd_nb_active_mask;
-static cpumask_t amd_llc_active_mask;
-
-static bool is_nb_event(struct perf_event *event)
-{
- return event->pmu->type == amd_nb_pmu.type;
-}
+struct amd_uncore_pmu {
+ char name[UNCORE_NAME_LEN];
+ int num_counters;
+ int rdpmc_base;
+ u32 msr_base;
+ cpumask_t active_mask;
+ struct pmu pmu;
+ struct amd_uncore_ctx * __percpu *ctx;
+ int (*id)(unsigned int cpu);
+};
-static bool is_llc_event(struct perf_event *event)
-{
- return event->pmu->type == amd_llc_pmu.type;
-}
+static struct amd_uncore_pmu pmus[NUM_UNCORES_MAX];
+static int num_pmus __read_mostly;
-static struct amd_uncore *event_to_amd_uncore(struct perf_event *event)
+static struct amd_uncore_pmu *event_to_amd_uncore_pmu(struct perf_event *event)
{
- if (is_nb_event(event) && amd_uncore_nb)
- return *per_cpu_ptr(amd_uncore_nb, event->cpu);
- else if (is_llc_event(event) && amd_uncore_llc)
- return *per_cpu_ptr(amd_uncore_llc, event->cpu);
-
- return NULL;
+ return container_of(event->pmu, struct amd_uncore_pmu, pmu);
}
static void amd_uncore_read(struct perf_event *event)
@@ -118,7 +103,7 @@ static void amd_uncore_stop(struct perf_event *event, int flags)
hwc->state |= PERF_HES_STOPPED;
if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
- amd_uncore_read(event);
+ event->pmu->read(event);
hwc->state |= PERF_HES_UPTODATE;
}
}
@@ -126,15 +111,16 @@ static void amd_uncore_stop(struct perf_event *event, int flags)
static int amd_uncore_add(struct perf_event *event, int flags)
{
int i;
- struct amd_uncore *uncore = event_to_amd_uncore(event);
+ struct amd_uncore_pmu *pmu = event_to_amd_uncore_pmu(event);
+ struct amd_uncore_ctx *ctx = *per_cpu_ptr(pmu->ctx, event->cpu);
struct hw_perf_event *hwc = &event->hw;
/* are we already assigned? */
- if (hwc->idx != -1 && uncore->events[hwc->idx] == event)
+ if (hwc->idx != -1 && ctx->events[hwc->idx] == event)
goto out;
- for (i = 0; i < uncore->num_counters; i++) {
- if (uncore->events[i] == event) {
+ for (i = 0; i < pmu->num_counters; i++) {
+ if (ctx->events[i] == event) {
hwc->idx = i;
goto out;
}
@@ -142,8 +128,8 @@ static int amd_uncore_add(struct perf_event *event, int flags)
/* if not, take the first available counter */
hwc->idx = -1;
- for (i = 0; i < uncore->num_counters; i++) {
- if (cmpxchg(&uncore->events[i], NULL, event) == NULL) {
+ for (i = 0; i < pmu->num_counters; i++) {
+ if (cmpxchg(&ctx->events[i], NULL, event) == NULL) {
hwc->idx = i;
break;
}
@@ -153,23 +139,13 @@ static int amd_uncore_add(struct perf_event *event, int flags)
if (hwc->idx == -1)
return -EBUSY;
- hwc->config_base = uncore->msr_base + (2 * hwc->idx);
- hwc->event_base = uncore->msr_base + 1 + (2 * hwc->idx);
- hwc->event_base_rdpmc = uncore->rdpmc_base + hwc->idx;
+ hwc->config_base = pmu->msr_base + (2 * hwc->idx);
+ hwc->event_base = pmu->msr_base + 1 + (2 * hwc->idx);
+ hwc->event_base_rdpmc = pmu->rdpmc_base + hwc->idx;
hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
- /*
- * The first four DF counters are accessible via RDPMC index 6 to 9
- * followed by the L3 counters from index 10 to 15. For processors
- * with more than four DF counters, the DF RDPMC assignments become
- * discontiguous as the additional counters are accessible starting
- * from index 16.
- */
- if (is_nb_event(event) && hwc->idx >= NUM_COUNTERS_NB)
- hwc->event_base_rdpmc += NUM_COUNTERS_L3;
-
if (flags & PERF_EF_START)
- amd_uncore_start(event, PERF_EF_RELOAD);
+ event->pmu->start(event, PERF_EF_RELOAD);
return 0;
}
@@ -177,55 +153,36 @@ static int amd_uncore_add(struct perf_event *event, int flags)
static void amd_uncore_del(struct perf_event *event, int flags)
{
int i;
- struct amd_uncore *uncore = event_to_amd_uncore(event);
+ struct amd_uncore_pmu *pmu = event_to_amd_uncore_pmu(event);
+ struct amd_uncore_ctx *ctx = *per_cpu_ptr(pmu->ctx, event->cpu);
struct hw_perf_event *hwc = &event->hw;
- amd_uncore_stop(event, PERF_EF_UPDATE);
+ event->pmu->stop(event, PERF_EF_UPDATE);
- for (i = 0; i < uncore->num_counters; i++) {
- if (cmpxchg(&uncore->events[i], event, NULL) == event)
+ for (i = 0; i < pmu->num_counters; i++) {
+ if (cmpxchg(&ctx->events[i], event, NULL) == event)
break;
}
hwc->idx = -1;
}
-/*
- * Return a full thread and slice mask unless user
- * has provided them
- */
-static u64 l3_thread_slice_mask(u64 config)
-{
- if (boot_cpu_data.x86 <= 0x18)
- return ((config & AMD64_L3_SLICE_MASK) ? : AMD64_L3_SLICE_MASK) |
- ((config & AMD64_L3_THREAD_MASK) ? : AMD64_L3_THREAD_MASK);
-
- /*
- * If the user doesn't specify a threadmask, they're not trying to
- * count core 0, so we enable all cores & threads.
- * We'll also assume that they want to count slice 0 if they specify
- * a threadmask and leave sliceid and enallslices unpopulated.
- */
- if (!(config & AMD64_L3_F19H_THREAD_MASK))
- return AMD64_L3_F19H_THREAD_MASK | AMD64_L3_EN_ALL_SLICES |
- AMD64_L3_EN_ALL_CORES;
-
- return config & (AMD64_L3_F19H_THREAD_MASK | AMD64_L3_SLICEID_MASK |
- AMD64_L3_EN_ALL_CORES | AMD64_L3_EN_ALL_SLICES |
- AMD64_L3_COREID_MASK);
-}
-
static int amd_uncore_event_init(struct perf_event *event)
{
- struct amd_uncore *uncore;
+ struct amd_uncore_pmu *pmu;
+ struct amd_uncore_ctx *ctx;
struct hw_perf_event *hwc = &event->hw;
- u64 event_mask = AMD64_RAW_EVENT_MASK_NB;
if (event->attr.type != event->pmu->type)
return -ENOENT;
- if (pmu_version >= 2 && is_nb_event(event))
- event_mask = AMD64_PERFMON_V2_RAW_EVENT_MASK_NB;
+ if (event->cpu < 0)
+ return -EINVAL;
+
+ pmu = event_to_amd_uncore_pmu(event);
+ ctx = *per_cpu_ptr(pmu->ctx, event->cpu);
+ if (!ctx)
+ return -ENODEV;
/*
* NB and Last level cache counters (MSRs) are shared across all cores
@@ -235,28 +192,14 @@ static int amd_uncore_event_init(struct perf_event *event)
* out. So we do not support sampling and per-thread events via
* CAP_NO_INTERRUPT, and we do not enable counter overflow interrupts:
*/
- hwc->config = event->attr.config & event_mask;
+ hwc->config = event->attr.config;
hwc->idx = -1;
- if (event->cpu < 0)
- return -EINVAL;
-
- /*
- * SliceMask and ThreadMask need to be set for certain L3 events.
- * For other events, the two fields do not affect the count.
- */
- if (l3_mask && is_llc_event(event))
- hwc->config |= l3_thread_slice_mask(event->attr.config);
-
- uncore = event_to_amd_uncore(event);
- if (!uncore)
- return -ENODEV;
-
/*
* since request can come in to any of the shared cores, we will remap
* to a single common cpu.
*/
- event->cpu = uncore->cpu;
+ event->cpu = ctx->cpu;
return 0;
}
@@ -278,17 +221,10 @@ static ssize_t amd_uncore_attr_show_cpumask(struct device *dev,
struct device_attribute *attr,
char *buf)
{
- cpumask_t *active_mask;
- struct pmu *pmu = dev_get_drvdata(dev);
-
- if (pmu->type == amd_nb_pmu.type)
- active_mask = &amd_nb_active_mask;
- else if (pmu->type == amd_llc_pmu.type)
- active_mask = &amd_llc_active_mask;
- else
- return 0;
+ struct pmu *ptr = dev_get_drvdata(dev);
+ struct amd_uncore_pmu *pmu = container_of(ptr, struct amd_uncore_pmu, pmu);
- return cpumap_print_to_pagebuf(true, buf, active_mask);
+ return cpumap_print_to_pagebuf(true, buf, &pmu->active_mask);
}
static DEVICE_ATTR(cpumask, S_IRUGO, amd_uncore_attr_show_cpumask, NULL);
@@ -396,113 +332,57 @@ static const struct attribute_group *amd_uncore_l3_attr_update[] = {
NULL,
};
-static struct pmu amd_nb_pmu = {
- .task_ctx_nr = perf_invalid_context,
- .attr_groups = amd_uncore_df_attr_groups,
- .name = "amd_nb",
- .event_init = amd_uncore_event_init,
- .add = amd_uncore_add,
- .del = amd_uncore_del,
- .start = amd_uncore_start,
- .stop = amd_uncore_stop,
- .read = amd_uncore_read,
- .capabilities = PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT,
- .module = THIS_MODULE,
-};
-
-static struct pmu amd_llc_pmu = {
- .task_ctx_nr = perf_invalid_context,
- .attr_groups = amd_uncore_l3_attr_groups,
- .attr_update = amd_uncore_l3_attr_update,
- .name = "amd_l2",
- .event_init = amd_uncore_event_init,
- .add = amd_uncore_add,
- .del = amd_uncore_del,
- .start = amd_uncore_start,
- .stop = amd_uncore_stop,
- .read = amd_uncore_read,
- .capabilities = PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT,
- .module = THIS_MODULE,
-};
-
-static struct amd_uncore *amd_uncore_alloc(unsigned int cpu)
-{
- return kzalloc_node(sizeof(struct amd_uncore), GFP_KERNEL,
- cpu_to_node(cpu));
-}
-
-static inline struct perf_event **
-amd_uncore_events_alloc(unsigned int num, unsigned int cpu)
-{
- return kzalloc_node(sizeof(struct perf_event *) * num, GFP_KERNEL,
- cpu_to_node(cpu));
-}
-
static int amd_uncore_cpu_up_prepare(unsigned int cpu)
{
- struct amd_uncore *uncore_nb = NULL, *uncore_llc = NULL;
-
- if (amd_uncore_nb) {
- *per_cpu_ptr(amd_uncore_nb, cpu) = NULL;
- uncore_nb = amd_uncore_alloc(cpu);
- if (!uncore_nb)
- goto fail;
- uncore_nb->cpu = cpu;
- uncore_nb->num_counters = num_counters_nb;
- uncore_nb->rdpmc_base = RDPMC_BASE_NB;
- uncore_nb->msr_base = MSR_F15H_NB_PERF_CTL;
- uncore_nb->active_mask = &amd_nb_active_mask;
- uncore_nb->pmu = &amd_nb_pmu;
- uncore_nb->events = amd_uncore_events_alloc(num_counters_nb, cpu);
- if (!uncore_nb->events)
+ struct amd_uncore_pmu *pmu;
+ struct amd_uncore_ctx *ctx;
+ int node = cpu_to_node(cpu), i;
+
+ for (i = 0; i < num_pmus; i++) {
+ pmu = &pmus[i];
+ *per_cpu_ptr(pmu->ctx, cpu) = NULL;
+ ctx = kzalloc_node(sizeof(struct amd_uncore_ctx), GFP_KERNEL,
+ node);
+ if (!ctx)
goto fail;
- uncore_nb->id = -1;
- *per_cpu_ptr(amd_uncore_nb, cpu) = uncore_nb;
- }
- if (amd_uncore_llc) {
- *per_cpu_ptr(amd_uncore_llc, cpu) = NULL;
- uncore_llc = amd_uncore_alloc(cpu);
- if (!uncore_llc)
- goto fail;
- uncore_llc->cpu = cpu;
- uncore_llc->num_counters = num_counters_llc;
- uncore_llc->rdpmc_base = RDPMC_BASE_LLC;
- uncore_llc->msr_base = MSR_F16H_L2I_PERF_CTL;
- uncore_llc->active_mask = &amd_llc_active_mask;
- uncore_llc->pmu = &amd_llc_pmu;
- uncore_llc->events = amd_uncore_events_alloc(num_counters_llc, cpu);
- if (!uncore_llc->events)
+ ctx->cpu = cpu;
+ ctx->events = kzalloc_node(sizeof(struct perf_event *) *
+ pmu->num_counters, GFP_KERNEL,
+ node);
+ if (!ctx->events)
goto fail;
- uncore_llc->id = -1;
- *per_cpu_ptr(amd_uncore_llc, cpu) = uncore_llc;
+
+ ctx->id = -1;
+ *per_cpu_ptr(pmu->ctx, cpu) = ctx;
}
return 0;
fail:
- if (uncore_nb) {
- kfree(uncore_nb->events);
- kfree(uncore_nb);
- }
+ /* Rollback */
+ for (; i >= 0; i--) {
+ pmu = &pmus[i];
+ ctx = *per_cpu_ptr(pmu->ctx, cpu);
+ if (!ctx)
+ continue;
- if (uncore_llc) {
- kfree(uncore_llc->events);
- kfree(uncore_llc);
+ kfree(ctx->events);
+ kfree(ctx);
}
return -ENOMEM;
}
-static struct amd_uncore *
-amd_uncore_find_online_sibling(struct amd_uncore *this,
- struct amd_uncore * __percpu *uncores)
+static struct amd_uncore_ctx *
+amd_uncore_find_online_sibling(struct amd_uncore_ctx *this,
+ struct amd_uncore_pmu *pmu)
{
unsigned int cpu;
- struct amd_uncore *that;
+ struct amd_uncore_ctx *that;
for_each_online_cpu(cpu) {
- that = *per_cpu_ptr(uncores, cpu);
+ that = *per_cpu_ptr(pmu->ctx, cpu);
if (!that)
continue;
@@ -523,24 +403,16 @@ amd_uncore_find_online_sibling(struct amd_uncore *this,
static int amd_uncore_cpu_starting(unsigned int cpu)
{
- unsigned int eax, ebx, ecx, edx;
- struct amd_uncore *uncore;
-
- if (amd_uncore_nb) {
- uncore = *per_cpu_ptr(amd_uncore_nb, cpu);
- cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
- uncore->id = ecx & 0xff;
-
- uncore = amd_uncore_find_online_sibling(uncore, amd_uncore_nb);
- *per_cpu_ptr(amd_uncore_nb, cpu) = uncore;
- }
-
- if (amd_uncore_llc) {
- uncore = *per_cpu_ptr(amd_uncore_llc, cpu);
- uncore->id = get_llc_id(cpu);
+ struct amd_uncore_pmu *pmu;
+ struct amd_uncore_ctx *ctx;
+ int i;
- uncore = amd_uncore_find_online_sibling(uncore, amd_uncore_llc);
- *per_cpu_ptr(amd_uncore_llc, cpu) = uncore;
+ for (i = 0; i < num_pmus; i++) {
+ pmu = &pmus[i];
+ ctx = *per_cpu_ptr(pmu->ctx, cpu);
+ ctx->id = pmu->id(cpu);
+ ctx = amd_uncore_find_online_sibling(ctx, pmu);
+ *per_cpu_ptr(pmu->ctx, cpu) = ctx;
}
return 0;
@@ -548,195 +420,359 @@ static int amd_uncore_cpu_starting(unsigned int cpu)
static void uncore_clean_online(void)
{
- struct amd_uncore *uncore;
+ struct amd_uncore_ctx *ctx;
struct hlist_node *n;
- hlist_for_each_entry_safe(uncore, n, &uncore_unused_list, node) {
- hlist_del(&uncore->node);
- kfree(uncore->events);
- kfree(uncore);
+ hlist_for_each_entry_safe(ctx, n, &uncore_unused_list, node) {
+ hlist_del(&ctx->node);
+ kfree(ctx->events);
+ kfree(ctx);
}
}
-static void uncore_online(unsigned int cpu,
- struct amd_uncore * __percpu *uncores)
+static int amd_uncore_cpu_online(unsigned int cpu)
{
- struct amd_uncore *uncore = *per_cpu_ptr(uncores, cpu);
+ struct amd_uncore_pmu *pmu;
+ struct amd_uncore_ctx *ctx;
+ int i;
uncore_clean_online();
- if (cpu == uncore->cpu)
- cpumask_set_cpu(cpu, uncore->active_mask);
+ for (i = 0; i < num_pmus; i++) {
+ pmu = &pmus[i];
+ ctx = *per_cpu_ptr(pmu->ctx, cpu);
+ if (cpu == ctx->cpu)
+ cpumask_set_cpu(cpu, &pmu->active_mask);
+ }
+
+ return 0;
}
-static int amd_uncore_cpu_online(unsigned int cpu)
+static int amd_uncore_cpu_down_prepare(unsigned int cpu)
{
- if (amd_uncore_nb)
- uncore_online(cpu, amd_uncore_nb);
+ struct amd_uncore_ctx *this, *that;
+ struct amd_uncore_pmu *pmu;
+ int i, j;
+
+ for (i = 0; i < num_pmus; i++) {
+ pmu = &pmus[i];
+ this = *per_cpu_ptr(pmu->ctx, cpu);
+
+ /* this cpu is going down, migrate to a shared sibling if possible */
+ for_each_online_cpu(j) {
+ that = *per_cpu_ptr(pmu->ctx, j);
+
+ if (cpu == j)
+ continue;
+
+ if (this == that) {
+ perf_pmu_migrate_context(&pmu->pmu, cpu, j);
+ cpumask_clear_cpu(cpu, &pmu->active_mask);
+ cpumask_set_cpu(j, &pmu->active_mask);
+ that->cpu = j;
+ break;
+ }
+ }
+ }
- if (amd_uncore_llc)
- uncore_online(cpu, amd_uncore_llc);
+ return 0;
+}
+
+static int amd_uncore_cpu_dead(unsigned int cpu)
+{
+ struct amd_uncore_ctx *ctx;
+ struct amd_uncore_pmu *pmu;
+ int i;
+
+ for (i = 0; i < num_pmus; i++) {
+ pmu = &pmus[i];
+ ctx = *per_cpu_ptr(pmu->ctx, cpu);
+ if (cpu == ctx->cpu)
+ cpumask_clear_cpu(cpu, &pmu->active_mask);
+
+ if (!--ctx->refcnt) {
+ kfree(ctx->events);
+ kfree(ctx);
+ }
+
+ *per_cpu_ptr(pmu->ctx, cpu) = NULL;
+ }
return 0;
}
-static void uncore_down_prepare(unsigned int cpu,
- struct amd_uncore * __percpu *uncores)
+static int amd_uncore_df_id(unsigned int cpu)
{
- unsigned int i;
- struct amd_uncore *this = *per_cpu_ptr(uncores, cpu);
+ unsigned int eax, ebx, ecx, edx;
- if (this->cpu != cpu)
- return;
+ cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
- /* this cpu is going down, migrate to a shared sibling if possible */
- for_each_online_cpu(i) {
- struct amd_uncore *that = *per_cpu_ptr(uncores, i);
+ return ecx & 0xff;
+}
- if (cpu == i)
- continue;
+static int amd_uncore_df_event_init(struct perf_event *event)
+{
+ struct hw_perf_event *hwc = &event->hw;
+ int ret = amd_uncore_event_init(event);
- if (this == that) {
- perf_pmu_migrate_context(this->pmu, cpu, i);
- cpumask_clear_cpu(cpu, that->active_mask);
- cpumask_set_cpu(i, that->active_mask);
- that->cpu = i;
- break;
- }
- }
+ if (ret || pmu_version < 2)
+ return ret;
+
+ hwc->config = event->attr.config &
+ (pmu_version >= 2 ? AMD64_PERFMON_V2_RAW_EVENT_MASK_NB :
+ AMD64_RAW_EVENT_MASK_NB);
+
+ return 0;
}
-static int amd_uncore_cpu_down_prepare(unsigned int cpu)
+static int amd_uncore_df_add(struct perf_event *event, int flags)
{
- if (amd_uncore_nb)
- uncore_down_prepare(cpu, amd_uncore_nb);
+ int ret = amd_uncore_add(event, flags & ~PERF_EF_START);
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (ret)
+ return ret;
+
+ /*
+ * The first four DF counters are accessible via RDPMC index 6 to 9
+ * followed by the L3 counters from index 10 to 15. For processors
+ * with more than four DF counters, the DF RDPMC assignments become
+ * discontiguous as the additional counters are accessible starting
+ * from index 16.
+ */
+ if (hwc->idx >= NUM_COUNTERS_NB)
+ hwc->event_base_rdpmc += NUM_COUNTERS_L3;
- if (amd_uncore_llc)
- uncore_down_prepare(cpu, amd_uncore_llc);
+ /* Delayed start after rdpmc base update */
+ if (flags & PERF_EF_START)
+ amd_uncore_start(event, PERF_EF_RELOAD);
return 0;
}
-static void uncore_dead(unsigned int cpu, struct amd_uncore * __percpu *uncores)
+static int amd_uncore_df_init(void)
{
- struct amd_uncore *uncore = *per_cpu_ptr(uncores, cpu);
+ struct attribute **df_attr = amd_uncore_df_format_attr;
+ struct amd_uncore_pmu *pmu = &pmus[num_pmus];
+ union cpuid_0x80000022_ebx ebx;
+ int ret;
- if (cpu == uncore->cpu)
- cpumask_clear_cpu(cpu, uncore->active_mask);
+ if (!boot_cpu_has(X86_FEATURE_PERFCTR_NB))
+ return 0;
- if (!--uncore->refcnt) {
- kfree(uncore->events);
- kfree(uncore);
+ /*
+ * For Family 17h and above, the Northbridge counters are repurposed
+ * as Data Fabric counters. The PMUs are exported based on family as
+ * either NB or DF.
+ */
+ strscpy(pmu->name, boot_cpu_data.x86 >= 0x17 ? "amd_df" : "amd_nb",
+ sizeof(pmu->name));
+
+ pmu->num_counters = NUM_COUNTERS_NB;
+ pmu->msr_base = MSR_F15H_NB_PERF_CTL;
+ pmu->rdpmc_base = RDPMC_BASE_NB;
+ pmu->id = amd_uncore_df_id;
+
+ if (pmu_version >= 2) {
+ *df_attr++ = &format_attr_event14v2.attr;
+ *df_attr++ = &format_attr_umask12.attr;
+ ebx.full = cpuid_ebx(EXT_PERFMON_DEBUG_FEATURES);
+ pmu->num_counters = ebx.split.num_df_pmc;
+ } else if (boot_cpu_data.x86 >= 0x17) {
+ *df_attr = &format_attr_event14.attr;
+ }
+
+ pmu->ctx = alloc_percpu(struct amd_uncore_ctx *);
+ if (!pmu->ctx)
+ return -ENOMEM;
+
+ pmu->pmu = (struct pmu) {
+ .task_ctx_nr = perf_invalid_context,
+ .attr_groups = amd_uncore_df_attr_groups,
+ .name = pmu->name,
+ .event_init = amd_uncore_df_event_init,
+ .add = amd_uncore_df_add,
+ .del = amd_uncore_del,
+ .start = amd_uncore_start,
+ .stop = amd_uncore_stop,
+ .read = amd_uncore_read,
+ .capabilities = PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT,
+ .module = THIS_MODULE,
+ };
+
+ ret = perf_pmu_register(&pmu->pmu, pmu->pmu.name, -1);
+ if (ret) {
+ free_percpu(pmu->ctx);
+ pmu->ctx = NULL;
+ return ret;
}
- *per_cpu_ptr(uncores, cpu) = NULL;
+ pr_info("%d %s %s counters detected\n", pmu->num_counters,
+ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON ? "HYGON" : "",
+ pmu->pmu.name);
+
+ num_pmus++;
+
+ return 0;
}
-static int amd_uncore_cpu_dead(unsigned int cpu)
+static int amd_uncore_l3_id(unsigned int cpu)
{
- if (amd_uncore_nb)
- uncore_dead(cpu, amd_uncore_nb);
+ return get_llc_id(cpu);
+}
+
+static int amd_uncore_l3_event_init(struct perf_event *event)
+{
+ int ret = amd_uncore_event_init(event);
+ struct hw_perf_event *hwc = &event->hw;
+ u64 config = event->attr.config;
+ u64 mask;
+
+ hwc->config = config & AMD64_RAW_EVENT_MASK_NB;
+
+ /*
+ * SliceMask and ThreadMask need to be set for certain L3 events.
+ * For other events, the two fields do not affect the count.
+ */
+ if (ret || boot_cpu_data.x86 < 0x17)
+ return ret;
+
+ mask = config & (AMD64_L3_F19H_THREAD_MASK | AMD64_L3_SLICEID_MASK |
+ AMD64_L3_EN_ALL_CORES | AMD64_L3_EN_ALL_SLICES |
+ AMD64_L3_COREID_MASK);
+
+ if (boot_cpu_data.x86 <= 0x18)
+ mask = ((config & AMD64_L3_SLICE_MASK) ? : AMD64_L3_SLICE_MASK) |
+ ((config & AMD64_L3_THREAD_MASK) ? : AMD64_L3_THREAD_MASK);
+
+ /*
+ * If the user doesn't specify a ThreadMask, they're not trying to
+ * count core 0, so we enable all cores & threads.
+ * We'll also assume that they want to count slice 0 if they specify
+ * a ThreadMask and leave SliceId and EnAllSlices unpopulated.
+ */
+ else if (!(config & AMD64_L3_F19H_THREAD_MASK))
+ mask = AMD64_L3_F19H_THREAD_MASK | AMD64_L3_EN_ALL_SLICES |
+ AMD64_L3_EN_ALL_CORES;
- if (amd_uncore_llc)
- uncore_dead(cpu, amd_uncore_llc);
+ hwc->config |= mask;
return 0;
}
-static int __init amd_uncore_init(void)
+static int amd_uncore_l3_init(void)
{
- struct attribute **df_attr = amd_uncore_df_format_attr;
struct attribute **l3_attr = amd_uncore_l3_format_attr;
- union cpuid_0x80000022_ebx ebx;
- int ret = -ENODEV;
+ struct amd_uncore_pmu *pmu = &pmus[num_pmus];
+ int ret;
- if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
- boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
- return -ENODEV;
+ if (!boot_cpu_has(X86_FEATURE_PERFCTR_LLC))
+ return 0;
- if (!boot_cpu_has(X86_FEATURE_TOPOEXT))
- return -ENODEV;
+ /*
+ * For Family 17h and above, L3 cache counters are available instead
+ * of L2 cache counters. The PMUs are exported based on family as
+ * either L2 or L3.
+ */
+ strscpy(pmu->name, boot_cpu_data.x86 >= 0x17 ? "amd_l3" : "amd_l2",
+ sizeof(pmu->name));
- if (boot_cpu_has(X86_FEATURE_PERFMON_V2))
- pmu_version = 2;
+ pmu->num_counters = NUM_COUNTERS_L2;
+ pmu->msr_base = MSR_F16H_L2I_PERF_CTL;
+ pmu->rdpmc_base = RDPMC_BASE_LLC;
+ pmu->id = amd_uncore_l3_id;
- num_counters_nb = NUM_COUNTERS_NB;
- num_counters_llc = NUM_COUNTERS_L2;
if (boot_cpu_data.x86 >= 0x17) {
- /*
- * For F17h and above, the Northbridge counters are
- * repurposed as Data Fabric counters. Also, L3
- * counters are supported too. The PMUs are exported
- * based on family as either L2 or L3 and NB or DF.
- */
- num_counters_llc = NUM_COUNTERS_L3;
- amd_nb_pmu.name = "amd_df";
- amd_llc_pmu.name = "amd_l3";
- l3_mask = true;
+ *l3_attr++ = &format_attr_event8.attr;
+ *l3_attr++ = &format_attr_umask8.attr;
+ *l3_attr++ = boot_cpu_data.x86 >= 0x19 ?
+ &format_attr_threadmask2.attr :
+ &format_attr_threadmask8.attr;
+ pmu->num_counters = NUM_COUNTERS_L3;
}
- if (boot_cpu_has(X86_FEATURE_PERFCTR_NB)) {
- if (pmu_version >= 2) {
- *df_attr++ = &format_attr_event14v2.attr;
- *df_attr++ = &format_attr_umask12.attr;
- } else if (boot_cpu_data.x86 >= 0x17) {
- *df_attr = &format_attr_event14.attr;
- }
+ pmu->ctx = alloc_percpu(struct amd_uncore_ctx *);
+ if (!pmu->ctx)
+ return -ENOMEM;
+
+ pmu->pmu = (struct pmu) {
+ .task_ctx_nr = perf_invalid_context,
+ .attr_groups = amd_uncore_l3_attr_groups,
+ .attr_update = amd_uncore_l3_attr_update,
+ .name = pmu->name,
+ .event_init = amd_uncore_l3_event_init,
+ .add = amd_uncore_add,
+ .del = amd_uncore_del,
+ .start = amd_uncore_start,
+ .stop = amd_uncore_stop,
+ .read = amd_uncore_read,
+ .capabilities = PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT,
+ .module = THIS_MODULE,
+ };
+
+ ret = perf_pmu_register(&pmu->pmu, pmu->pmu.name, -1);
+ if (ret) {
+ free_percpu(pmu->ctx);
+ pmu->ctx = NULL;
+ return ret;
+ }
- amd_uncore_nb = alloc_percpu(struct amd_uncore *);
- if (!amd_uncore_nb) {
- ret = -ENOMEM;
- goto fail_nb;
- }
- ret = perf_pmu_register(&amd_nb_pmu, amd_nb_pmu.name, -1);
- if (ret)
- goto fail_nb;
+ pr_info("%d %s %s counters detected\n", pmu->num_counters,
+ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON ? "HYGON" : "",
+ pmu->pmu.name);
- if (pmu_version >= 2) {
- ebx.full = cpuid_ebx(EXT_PERFMON_DEBUG_FEATURES);
- num_counters_nb = ebx.split.num_df_pmc;
- }
+ num_pmus++;
- pr_info("%d %s %s counters detected\n", num_counters_nb,
- boot_cpu_data.x86_vendor == X86_VENDOR_HYGON ? "HYGON" : "",
- amd_nb_pmu.name);
+ return 0;
+}
- ret = 0;
- }
+static void uncore_free(void)
+{
+ struct amd_uncore_pmu *pmu;
+ int i;
- if (boot_cpu_has(X86_FEATURE_PERFCTR_LLC)) {
- if (boot_cpu_data.x86 >= 0x19) {
- *l3_attr++ = &format_attr_event8.attr;
- *l3_attr++ = &format_attr_umask8.attr;
- *l3_attr++ = &format_attr_threadmask2.attr;
- } else if (boot_cpu_data.x86 >= 0x17) {
- *l3_attr++ = &format_attr_event8.attr;
- *l3_attr++ = &format_attr_umask8.attr;
- *l3_attr++ = &format_attr_threadmask8.attr;
- }
+ for (i = 0; i < num_pmus; i++) {
+ pmu = &pmus[i];
+ if (!pmu->ctx)
+ continue;
- amd_uncore_llc = alloc_percpu(struct amd_uncore *);
- if (!amd_uncore_llc) {
- ret = -ENOMEM;
- goto fail_llc;
- }
- ret = perf_pmu_register(&amd_llc_pmu, amd_llc_pmu.name, -1);
- if (ret)
- goto fail_llc;
-
- pr_info("%d %s %s counters detected\n", num_counters_llc,
- boot_cpu_data.x86_vendor == X86_VENDOR_HYGON ? "HYGON" : "",
- amd_llc_pmu.name);
- ret = 0;
+ perf_pmu_unregister(&pmu->pmu);
+ free_percpu(pmu->ctx);
+ pmu->ctx = NULL;
}
+ num_pmus = 0;
+}
+
+static int __init amd_uncore_init(void)
+{
+ int ret;
+
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
+ boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
+ return -ENODEV;
+
+ if (!boot_cpu_has(X86_FEATURE_TOPOEXT))
+ return -ENODEV;
+
+ if (boot_cpu_has(X86_FEATURE_PERFMON_V2))
+ pmu_version = 2;
+
+ ret = amd_uncore_df_init();
+ if (ret)
+ goto fail;
+
+ ret = amd_uncore_l3_init();
+ if (ret)
+ goto fail;
+
/*
* Install callbacks. Core will call them for each online cpu.
*/
if (cpuhp_setup_state(CPUHP_PERF_X86_AMD_UNCORE_PREP,
"perf/x86/amd/uncore:prepare",
amd_uncore_cpu_up_prepare, amd_uncore_cpu_dead))
- goto fail_llc;
+ goto fail;
if (cpuhp_setup_state(CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING,
"perf/x86/amd/uncore:starting",
@@ -753,12 +789,8 @@ static int __init amd_uncore_init(void)
cpuhp_remove_state(CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING);
fail_prep:
cpuhp_remove_state(CPUHP_PERF_X86_AMD_UNCORE_PREP);
-fail_llc:
- if (boot_cpu_has(X86_FEATURE_PERFCTR_NB))
- perf_pmu_unregister(&amd_nb_pmu);
- free_percpu(amd_uncore_llc);
-fail_nb:
- free_percpu(amd_uncore_nb);
+fail:
+ uncore_free();
return ret;
}
@@ -768,18 +800,7 @@ static void __exit amd_uncore_exit(void)
cpuhp_remove_state(CPUHP_AP_PERF_X86_AMD_UNCORE_ONLINE);
cpuhp_remove_state(CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING);
cpuhp_remove_state(CPUHP_PERF_X86_AMD_UNCORE_PREP);
-
- if (boot_cpu_has(X86_FEATURE_PERFCTR_LLC)) {
- perf_pmu_unregister(&amd_llc_pmu);
- free_percpu(amd_uncore_llc);
- amd_uncore_llc = NULL;
- }
-
- if (boot_cpu_has(X86_FEATURE_PERFCTR_NB)) {
- perf_pmu_unregister(&amd_nb_pmu);
- free_percpu(amd_uncore_nb);
- amd_uncore_nb = NULL;
- }
+ uncore_free();
}
module_init(amd_uncore_init);
--
2.34.1
* [PATCH v2 2/6] perf/x86/amd/uncore: Move discovery and registration
2023-10-05 5:23 [PATCH v2 0/6] perf/x86/amd: Add memory controller events Sandipan Das
2023-10-05 5:23 ` [PATCH v2 1/6] perf/x86/amd/uncore: Refactor uncore management Sandipan Das
@ 2023-10-05 5:23 ` Sandipan Das
2023-10-09 12:36 ` kernel test robot
2023-10-05 5:23 ` [PATCH v2 3/6] perf/x86/amd/uncore: Use rdmsr if rdpmc is unavailable Sandipan Das
` (3 subsequent siblings)
5 siblings, 1 reply; 8+ messages in thread
From: Sandipan Das @ 2023-10-05 5:23 UTC (permalink / raw)
To: linux-kernel, linux-perf-users, x86
Cc: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
namhyung, irogers, adrian.hunter, tglx, bp, dave.hansen, hpa,
eranian, ananth.narayan, ravi.bangoria, santosh.shukla,
sandipan.das
Uncore PMUs have traditionally been registered in the module init path.
This is fine for the existing DF and L3 PMUs since their CPUID information
does not vary across CPUs. It does not work for the memory controller
(UMC) PMUs, however, since information such as the number of active memory
channels can vary from socket to socket depending on how the DIMMs have
been physically populated.
To overcome this, the discovery of PMU information using CPUID is moved
to the startup of UNCORE_STARTING. This cannot be done in the startup of
UNCORE_PREP since the hotplug callback does not run on the CPU that is
being brought online.
Previously, the startup of UNCORE_PREP was used for allocating uncore
contexts following which, the startup of UNCORE_STARTING was used to
find and reuse an existing sibling context, if possible. Any unused
contexts were added to a list for reclamation later during the startup
of UNCORE_ONLINE.
Since all required CPUID info is now available only after the startup of
UNCORE_STARTING has completed, context allocation has been moved to the
startup of UNCORE_ONLINE. Before allocating contexts, the first CPU that
comes online has to take up the additional responsibility of registering
the PMUs. This is a one-time process though. Since sibling discovery now
happens prior to deciding whether a new context is required, there is no
longer a need to track and free up unused contexts.
The teardown handlers of UNCORE_ONLINE and UNCORE_PREP remain
functionally the same.
Overall, the flow of control described above is achieved using the
following handlers for managing uncore PMUs. It is mandatory to define
them for each type of uncore PMU.
* scan() runs during startup of UNCORE_STARTING and collects PMU info
using CPUID.
* init() runs during startup of UNCORE_ONLINE, registers PMUs and sets
up uncore contexts.
* move() runs during teardown of UNCORE_ONLINE and migrates uncore
contexts to a shared sibling, if possible.
* free() runs during teardown of UNCORE_PREP and frees up uncore
contexts.
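In terms of CPU hotplug states, this maps to the following (a summary of
the code below, not additional behaviour):
CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING startup -> scan()
CPUHP_AP_PERF_X86_AMD_UNCORE_ONLINE startup -> init()
CPUHP_AP_PERF_X86_AMD_UNCORE_ONLINE teardown -> move()
CPUHP_PERF_X86_AMD_UNCORE_PREP teardown -> free()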
Signed-off-by: Sandipan Das <sandipan.das@amd.com>
---
arch/x86/events/amd/uncore.c | 494 +++++++++++++++++++++--------------
1 file changed, 305 insertions(+), 189 deletions(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
index ffcecda13d65..ff1d09cc07ad 100644
--- a/arch/x86/events/amd/uncore.c
+++ b/arch/x86/events/amd/uncore.c
@@ -26,8 +26,6 @@
#define RDPMC_BASE_LLC 10
#define COUNTER_SHIFT 16
-
-#define NUM_UNCORES_MAX 2 /* DF (or NB) and L3 (or L2) */
#define UNCORE_NAME_LEN 16
#undef pr_fmt
@@ -35,10 +33,7 @@
static int pmu_version;
-static HLIST_HEAD(uncore_unused_list);
-
struct amd_uncore_ctx {
- int id;
int refcnt;
int cpu;
struct perf_event **events;
@@ -53,11 +48,36 @@ struct amd_uncore_pmu {
cpumask_t active_mask;
struct pmu pmu;
struct amd_uncore_ctx * __percpu *ctx;
- int (*id)(unsigned int cpu);
};
-static struct amd_uncore_pmu pmus[NUM_UNCORES_MAX];
-static int num_pmus __read_mostly;
+enum {
+ UNCORE_TYPE_DF,
+ UNCORE_TYPE_L3,
+
+ UNCORE_TYPE_MAX
+};
+
+union amd_uncore_info {
+ struct {
+ u64 aux_data:32; /* auxiliary data */
+ u64 num_pmcs:8; /* number of counters */
+ u64 cid:8; /* context id */
+ } split;
+ u64 full;
+};
+
+struct amd_uncore {
+ union amd_uncore_info * __percpu info;
+ struct amd_uncore_pmu *pmus;
+ unsigned int num_pmus;
+ bool init_done;
+ void (*scan)(struct amd_uncore *uncore, unsigned int cpu);
+ int (*init)(struct amd_uncore *uncore, unsigned int cpu);
+ void (*move)(struct amd_uncore *uncore, unsigned int cpu);
+ void (*free)(struct amd_uncore *uncore, unsigned int cpu);
+};
+
+static struct amd_uncore uncores[UNCORE_TYPE_MAX];
static struct amd_uncore_pmu *event_to_amd_uncore_pmu(struct perf_event *event)
{
@@ -332,182 +352,192 @@ static const struct attribute_group *amd_uncore_l3_attr_update[] = {
NULL,
};
-static int amd_uncore_cpu_up_prepare(unsigned int cpu)
+static __always_inline
+int amd_uncore_ctx_cid(struct amd_uncore *uncore, unsigned int cpu)
{
- struct amd_uncore_pmu *pmu;
- struct amd_uncore_ctx *ctx;
- int node = cpu_to_node(cpu), i;
-
- for (i = 0; i < num_pmus; i++) {
- pmu = &pmus[i];
- *per_cpu_ptr(pmu->ctx, cpu) = NULL;
- ctx = kzalloc_node(sizeof(struct amd_uncore_ctx), GFP_KERNEL,
- node);
- if (!ctx)
- goto fail;
-
- ctx->cpu = cpu;
- ctx->events = kzalloc_node(sizeof(struct perf_event *) *
- pmu->num_counters, GFP_KERNEL,
- node);
- if (!ctx->events)
- goto fail;
-
- ctx->id = -1;
- *per_cpu_ptr(pmu->ctx, cpu) = ctx;
- }
-
- return 0;
-
-fail:
- /* Rollback */
- for (; i >= 0; i--) {
- pmu = &pmus[i];
- ctx = *per_cpu_ptr(pmu->ctx, cpu);
- if (!ctx)
- continue;
-
- kfree(ctx->events);
- kfree(ctx);
- }
+ union amd_uncore_info *info = per_cpu_ptr(uncore->info, cpu);
+ return info->split.cid;
+}
- return -ENOMEM;
+static __always_inline
+int amd_uncore_ctx_num_pmcs(struct amd_uncore *uncore, unsigned int cpu)
+{
+ union amd_uncore_info *info = per_cpu_ptr(uncore->info, cpu);
+ return info->split.num_pmcs;
}
-static struct amd_uncore_ctx *
-amd_uncore_find_online_sibling(struct amd_uncore_ctx *this,
- struct amd_uncore_pmu *pmu)
+static void amd_uncore_ctx_free(struct amd_uncore *uncore, unsigned int cpu)
{
- unsigned int cpu;
- struct amd_uncore_ctx *that;
+ struct amd_uncore_pmu *pmu;
+ struct amd_uncore_ctx *ctx;
+ int i;
- for_each_online_cpu(cpu) {
- that = *per_cpu_ptr(pmu->ctx, cpu);
+ if (!uncore->init_done)
+ return;
- if (!that)
+ for (i = 0; i < uncore->num_pmus; i++) {
+ pmu = &uncore->pmus[i];
+ ctx = *per_cpu_ptr(pmu->ctx, cpu);
+ if (!ctx)
continue;
- if (this == that)
- continue;
+ if (cpu == ctx->cpu)
+ cpumask_clear_cpu(cpu, &pmu->active_mask);
- if (this->id == that->id) {
- hlist_add_head(&this->node, &uncore_unused_list);
- this = that;
- break;
+ if (!--ctx->refcnt) {
+ kfree(ctx->events);
+ kfree(ctx);
}
- }
- this->refcnt++;
- return this;
+ *per_cpu_ptr(pmu->ctx, cpu) = NULL;
+ }
}
-static int amd_uncore_cpu_starting(unsigned int cpu)
+static int amd_uncore_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
{
+ struct amd_uncore_ctx *curr, *prev;
struct amd_uncore_pmu *pmu;
- struct amd_uncore_ctx *ctx;
- int i;
+ int node, cid, i, j;
- for (i = 0; i < num_pmus; i++) {
- pmu = &pmus[i];
- ctx = *per_cpu_ptr(pmu->ctx, cpu);
- ctx->id = pmu->id(cpu);
- ctx = amd_uncore_find_online_sibling(ctx, pmu);
- *per_cpu_ptr(pmu->ctx, cpu) = ctx;
- }
+ if (!uncore->init_done || !uncore->num_pmus)
+ return 0;
- return 0;
-}
+ cid = amd_uncore_ctx_cid(uncore, cpu);
-static void uncore_clean_online(void)
-{
- struct amd_uncore_ctx *ctx;
- struct hlist_node *n;
+ for (i = 0; i < uncore->num_pmus; i++) {
+ pmu = &uncore->pmus[i];
+ *per_cpu_ptr(pmu->ctx, cpu) = NULL;
+ curr = NULL;
- hlist_for_each_entry_safe(ctx, n, &uncore_unused_list, node) {
- hlist_del(&ctx->node);
- kfree(ctx->events);
- kfree(ctx);
- }
-}
+ /* Find a sibling context */
+ for_each_online_cpu(j) {
+ if (cpu == j)
+ continue;
-static int amd_uncore_cpu_online(unsigned int cpu)
-{
- struct amd_uncore_pmu *pmu;
- struct amd_uncore_ctx *ctx;
- int i;
+ prev = *per_cpu_ptr(pmu->ctx, j);
+ if (!prev)
+ continue;
- uncore_clean_online();
+ if (cid == amd_uncore_ctx_cid(uncore, j)) {
+ curr = prev;
+ break;
+ }
+ }
+
+ /* Allocate context if sibling does not exist */
+ if (!curr) {
+ node = cpu_to_node(cpu);
+ curr = kzalloc_node(sizeof(*curr), GFP_KERNEL, node);
+ if (!curr)
+ goto fail;
+
+ curr->cpu = cpu;
+ curr->events = kzalloc_node(sizeof(*curr->events) *
+ pmu->num_counters,
+ GFP_KERNEL, node);
+ if (!curr->events) {
+ kfree(curr);
+ goto fail;
+ }
- for (i = 0; i < num_pmus; i++) {
- pmu = &pmus[i];
- ctx = *per_cpu_ptr(pmu->ctx, cpu);
- if (cpu == ctx->cpu)
cpumask_set_cpu(cpu, &pmu->active_mask);
+ }
+
+ curr->refcnt++;
+ *per_cpu_ptr(pmu->ctx, cpu) = curr;
}
return 0;
+
+fail:
+ amd_uncore_ctx_free(uncore, cpu);
+
+ return -ENOMEM;
}
-static int amd_uncore_cpu_down_prepare(unsigned int cpu)
+static void amd_uncore_ctx_move(struct amd_uncore *uncore, unsigned int cpu)
{
- struct amd_uncore_ctx *this, *that;
+ struct amd_uncore_ctx *curr, *next;
struct amd_uncore_pmu *pmu;
int i, j;
- for (i = 0; i < num_pmus; i++) {
- pmu = &pmus[i];
- this = *per_cpu_ptr(pmu->ctx, cpu);
+ if (!uncore->init_done)
+ return;
- /* this cpu is going down, migrate to a shared sibling if possible */
- for_each_online_cpu(j) {
- that = *per_cpu_ptr(pmu->ctx, j);
+ for (i = 0; i < uncore->num_pmus; i++) {
+ pmu = &uncore->pmus[i];
+ curr = *per_cpu_ptr(pmu->ctx, cpu);
+ if (!curr)
+ continue;
- if (cpu == j)
+ /* Migrate to a shared sibling if possible */
+ for_each_online_cpu(j) {
+ next = *per_cpu_ptr(pmu->ctx, j);
+ if (!next || cpu == j)
continue;
- if (this == that) {
+ if (curr == next) {
perf_pmu_migrate_context(&pmu->pmu, cpu, j);
cpumask_clear_cpu(cpu, &pmu->active_mask);
cpumask_set_cpu(j, &pmu->active_mask);
- that->cpu = j;
+ next->cpu = j;
break;
}
}
}
+}
+
+static int amd_uncore_cpu_starting(unsigned int cpu)
+{
+ struct amd_uncore *uncore;
+ int i;
+
+ for (i = 0; i < UNCORE_TYPE_MAX; i++) {
+ uncore = &uncores[i];
+ uncore->scan(uncore, cpu);
+ }
return 0;
}
-static int amd_uncore_cpu_dead(unsigned int cpu)
+static int amd_uncore_cpu_online(unsigned int cpu)
{
- struct amd_uncore_ctx *ctx;
- struct amd_uncore_pmu *pmu;
+ struct amd_uncore *uncore;
int i;
- for (i = 0; i < num_pmus; i++) {
- pmu = &pmus[i];
- ctx = *per_cpu_ptr(pmu->ctx, cpu);
- if (cpu == ctx->cpu)
- cpumask_clear_cpu(cpu, &pmu->active_mask);
+ for (i = 0; i < UNCORE_TYPE_MAX; i++) {
+ uncore = &uncores[i];
+ if (uncore->init(uncore, cpu))
+ break;
+ }
- if (!--ctx->refcnt) {
- kfree(ctx->events);
- kfree(ctx);
- }
+ return 0;
+}
- *per_cpu_ptr(pmu->ctx, cpu) = NULL;
+static int amd_uncore_cpu_down_prepare(unsigned int cpu)
+{
+ struct amd_uncore *uncore;
+ int i;
+
+ for (i = 0; i < UNCORE_TYPE_MAX; i++) {
+ uncore = &uncores[i];
+ uncore->move(uncore, cpu);
}
return 0;
}
-static int amd_uncore_df_id(unsigned int cpu)
+static int amd_uncore_cpu_dead(unsigned int cpu)
{
- unsigned int eax, ebx, ecx, edx;
+ struct amd_uncore *uncore;
+ int i;
- cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
+ for (i = 0; i < UNCORE_TYPE_MAX; i++) {
+ uncore = &uncores[i];
+ uncore->free(uncore, cpu);
+ }
- return ecx & 0xff;
+ return 0;
}
static int amd_uncore_df_event_init(struct perf_event *event)
@@ -550,41 +580,66 @@ static int amd_uncore_df_add(struct perf_event *event, int flags)
return 0;
}
-static int amd_uncore_df_init(void)
+static
+void amd_uncore_df_ctx_scan(struct amd_uncore *uncore, unsigned int cpu)
{
- struct attribute **df_attr = amd_uncore_df_format_attr;
- struct amd_uncore_pmu *pmu = &pmus[num_pmus];
union cpuid_0x80000022_ebx ebx;
- int ret;
+ union amd_uncore_info info;
if (!boot_cpu_has(X86_FEATURE_PERFCTR_NB))
- return 0;
+ return;
+
+ info.split.aux_data = 0;
+ info.split.num_pmcs = NUM_COUNTERS_NB;
+ info.split.cid = topology_die_id(cpu);
+
+ if (pmu_version >= 2) {
+ ebx.full = cpuid_ebx(EXT_PERFMON_DEBUG_FEATURES);
+ info.split.num_pmcs = ebx.split.num_df_pmc;
+ }
+
+ *per_cpu_ptr(uncore->info, cpu) = info;
+}
+
+static
+int amd_uncore_df_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
+{
+ struct attribute **df_attr = amd_uncore_df_format_attr;
+ struct amd_uncore_pmu *pmu;
+
+ /* Run just once */
+ if (uncore->init_done)
+ return amd_uncore_ctx_init(uncore, cpu);
+
+ /* No grouping, single instance for a system */
+ uncore->pmus = kzalloc(sizeof(*uncore->pmus), GFP_KERNEL);
+ if (!uncore->pmus) {
+ uncore->num_pmus = 0;
+ goto done;
+ }
/*
* For Family 17h and above, the Northbridge counters are repurposed
* as Data Fabric counters. The PMUs are exported based on family as
* either NB or DF.
*/
+ pmu = &uncore->pmus[0];
strscpy(pmu->name, boot_cpu_data.x86 >= 0x17 ? "amd_df" : "amd_nb",
sizeof(pmu->name));
-
- pmu->num_counters = NUM_COUNTERS_NB;
+ pmu->num_counters = amd_uncore_ctx_num_pmcs(uncore, cpu);
pmu->msr_base = MSR_F15H_NB_PERF_CTL;
pmu->rdpmc_base = RDPMC_BASE_NB;
- pmu->id = amd_uncore_df_id;
if (pmu_version >= 2) {
*df_attr++ = &format_attr_event14v2.attr;
*df_attr++ = &format_attr_umask12.attr;
- ebx.full = cpuid_ebx(EXT_PERFMON_DEBUG_FEATURES);
- pmu->num_counters = ebx.split.num_df_pmc;
} else if (boot_cpu_data.x86 >= 0x17) {
*df_attr = &format_attr_event14.attr;
}
pmu->ctx = alloc_percpu(struct amd_uncore_ctx *);
if (!pmu->ctx)
- return -ENOMEM;
+ goto done;
pmu->pmu = (struct pmu) {
.task_ctx_nr = perf_invalid_context,
@@ -600,25 +655,22 @@ static int amd_uncore_df_init(void)
.module = THIS_MODULE,
};
- ret = perf_pmu_register(&pmu->pmu, pmu->pmu.name, -1);
- if (ret) {
+ if (perf_pmu_register(&pmu->pmu, pmu->pmu.name, -1)) {
free_percpu(pmu->ctx);
pmu->ctx = NULL;
- return ret;
+ goto done;
}
- pr_info("%d %s %s counters detected\n", pmu->num_counters,
- boot_cpu_data.x86_vendor == X86_VENDOR_HYGON ? "HYGON" : "",
+ pr_info("%d %s%s counters detected\n", pmu->num_counters,
+ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON ? "HYGON " : "",
pmu->pmu.name);
- num_pmus++;
+ uncore->num_pmus = 1;
- return 0;
-}
+done:
+ uncore->init_done = true;
-static int amd_uncore_l3_id(unsigned int cpu)
-{
- return get_llc_id(cpu);
+ return amd_uncore_ctx_init(uncore, cpu);
}
static int amd_uncore_l3_event_init(struct perf_event *event)
@@ -660,27 +712,52 @@ static int amd_uncore_l3_event_init(struct perf_event *event)
return 0;
}
-static int amd_uncore_l3_init(void)
+static
+void amd_uncore_l3_ctx_scan(struct amd_uncore *uncore, unsigned int cpu)
{
- struct attribute **l3_attr = amd_uncore_l3_format_attr;
- struct amd_uncore_pmu *pmu = &pmus[num_pmus];
- int ret;
+ union amd_uncore_info info;
if (!boot_cpu_has(X86_FEATURE_PERFCTR_LLC))
- return 0;
+ return;
+
+ info.split.aux_data = 0;
+ info.split.num_pmcs = NUM_COUNTERS_L2;
+ info.split.cid = get_llc_id(cpu);
+
+ if (boot_cpu_data.x86 >= 0x17)
+ info.split.num_pmcs = NUM_COUNTERS_L3;
+
+ *per_cpu_ptr(uncore->info, cpu) = info;
+}
+
+static
+int amd_uncore_l3_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
+{
+ struct attribute **l3_attr = amd_uncore_l3_format_attr;
+ struct amd_uncore_pmu *pmu;
+
+ /* Run just once */
+ if (uncore->init_done)
+ return amd_uncore_ctx_init(uncore, cpu);
+
+ /* No grouping, single instance for a system */
+ uncore->pmus = kzalloc(sizeof(*uncore->pmus), GFP_KERNEL);
+ if (!uncore->pmus) {
+ uncore->num_pmus = 0;
+ goto done;
+ }
/*
* For Family 17h and above, L3 cache counters are available instead
* of L2 cache counters. The PMUs are exported based on family as
* either L2 or L3.
*/
+ pmu = &uncore->pmus[0];
strscpy(pmu->name, boot_cpu_data.x86 >= 0x17 ? "amd_l3" : "amd_l2",
sizeof(pmu->name));
-
- pmu->num_counters = NUM_COUNTERS_L2;
+ pmu->num_counters = amd_uncore_ctx_num_pmcs(uncore, cpu);
pmu->msr_base = MSR_F16H_L2I_PERF_CTL;
pmu->rdpmc_base = RDPMC_BASE_LLC;
- pmu->id = amd_uncore_l3_id;
if (boot_cpu_data.x86 >= 0x17) {
*l3_attr++ = &format_attr_event8.attr;
@@ -688,12 +765,11 @@ static int amd_uncore_l3_init(void)
*l3_attr++ = boot_cpu_data.x86 >= 0x19 ?
&format_attr_threadmask2.attr :
&format_attr_threadmask8.attr;
- pmu->num_counters = NUM_COUNTERS_L3;
}
pmu->ctx = alloc_percpu(struct amd_uncore_ctx *);
if (!pmu->ctx)
- return -ENOMEM;
+ goto done;
pmu->pmu = (struct pmu) {
.task_ctx_nr = perf_invalid_context,
@@ -710,43 +786,45 @@ static int amd_uncore_l3_init(void)
.module = THIS_MODULE,
};
- ret = perf_pmu_register(&pmu->pmu, pmu->pmu.name, -1);
- if (ret) {
+ if (perf_pmu_register(&pmu->pmu, pmu->pmu.name, -1)) {
free_percpu(pmu->ctx);
pmu->ctx = NULL;
- return ret;
+ goto done;
}
- pr_info("%d %s %s counters detected\n", pmu->num_counters,
- boot_cpu_data.x86_vendor == X86_VENDOR_HYGON ? "HYGON" : "",
+ pr_info("%d %s%s counters detected\n", pmu->num_counters,
+ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON ? "HYGON " : "",
pmu->pmu.name);
- num_pmus++;
-
- return 0;
-}
-
-static void uncore_free(void)
-{
- struct amd_uncore_pmu *pmu;
- int i;
+ uncore->num_pmus = 1;
- for (i = 0; i < num_pmus; i++) {
- pmu = &pmus[i];
- if (!pmu->ctx)
- continue;
-
- perf_pmu_unregister(&pmu->pmu);
- free_percpu(pmu->ctx);
- pmu->ctx = NULL;
- }
+done:
+ uncore->init_done = true;
- num_pmus = 0;
+ return amd_uncore_ctx_init(uncore, cpu);
}
+static struct amd_uncore uncores[UNCORE_TYPE_MAX] = {
+ /* UNCORE_TYPE_DF */
+ {
+ .scan = amd_uncore_df_ctx_scan,
+ .init = amd_uncore_df_ctx_init,
+ .move = amd_uncore_ctx_move,
+ .free = amd_uncore_ctx_free,
+ },
+ /* UNCORE_TYPE_L3 */
+ {
+ .scan = amd_uncore_l3_ctx_scan,
+ .init = amd_uncore_l3_ctx_init,
+ .move = amd_uncore_ctx_move,
+ .free = amd_uncore_ctx_free,
+ },
+};
+
static int __init amd_uncore_init(void)
{
- int ret;
+ struct amd_uncore *uncore;
+ int ret, i;
if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
@@ -758,20 +836,27 @@ static int __init amd_uncore_init(void)
if (boot_cpu_has(X86_FEATURE_PERFMON_V2))
pmu_version = 2;
- ret = amd_uncore_df_init();
- if (ret)
- goto fail;
+ for (i = 0; i < UNCORE_TYPE_MAX; i++) {
+ uncore = &uncores[i];
- ret = amd_uncore_l3_init();
- if (ret)
- goto fail;
+ BUG_ON(!uncore->scan);
+ BUG_ON(!uncore->init);
+ BUG_ON(!uncore->move);
+ BUG_ON(!uncore->free);
+
+ uncore->info = alloc_percpu(union amd_uncore_info);
+ if (!uncore->info) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+ };
/*
* Install callbacks. Core will call them for each online cpu.
*/
if (cpuhp_setup_state(CPUHP_PERF_X86_AMD_UNCORE_PREP,
"perf/x86/amd/uncore:prepare",
- amd_uncore_cpu_up_prepare, amd_uncore_cpu_dead))
+ NULL, amd_uncore_cpu_dead))
goto fail;
if (cpuhp_setup_state(CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING,
@@ -790,17 +875,48 @@ static int __init amd_uncore_init(void)
fail_prep:
cpuhp_remove_state(CPUHP_PERF_X86_AMD_UNCORE_PREP);
fail:
- uncore_free();
+ for (i = 0; i < UNCORE_TYPE_MAX; i++) {
+ uncore = &uncores[i];
+ if (uncore->info) {
+ free_percpu(uncore->info);
+ uncore->info = NULL;
+ }
+ }
return ret;
}
static void __exit amd_uncore_exit(void)
{
+ struct amd_uncore *uncore;
+ struct amd_uncore_pmu *pmu;
+ int i, j;
+
cpuhp_remove_state(CPUHP_AP_PERF_X86_AMD_UNCORE_ONLINE);
cpuhp_remove_state(CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING);
cpuhp_remove_state(CPUHP_PERF_X86_AMD_UNCORE_PREP);
- uncore_free();
+
+ for (i = 0; i < UNCORE_TYPE_MAX; i++) {
+ uncore = &uncores[i];
+ if (!uncore->info)
+ continue;
+
+ free_percpu(uncore->info);
+ uncore->info = NULL;
+
+ for (j = 0; j < uncore->num_pmus; j++) {
+ pmu = &uncore->pmus[j];
+ if (!pmu->ctx)
+ continue;
+
+ perf_pmu_unregister(&pmu->pmu);
+ free_percpu(pmu->ctx);
+ pmu->ctx = NULL;
+ }
+
+ kfree(uncore->pmus);
+ uncore->pmus = NULL;
+ }
}
module_init(amd_uncore_init);
--
2.34.1
* [PATCH v2 3/6] perf/x86/amd/uncore: Use rdmsr if rdpmc is unavailable
2023-10-05 5:23 [PATCH v2 0/6] perf/x86/amd: Add memory controller events Sandipan Das
2023-10-05 5:23 ` [PATCH v2 1/6] perf/x86/amd/uncore: Refactor uncore management Sandipan Das
2023-10-05 5:23 ` [PATCH v2 2/6] perf/x86/amd/uncore: Move discovery and registration Sandipan Das
@ 2023-10-05 5:23 ` Sandipan Das
2023-10-05 5:23 ` [PATCH v2 4/6] perf/x86/amd/uncore: Add group exclusivity Sandipan Das
` (2 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Sandipan Das @ 2023-10-05 5:23 UTC (permalink / raw)
To: linux-kernel, linux-perf-users, x86
Cc: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
namhyung, irogers, adrian.hunter, tglx, bp, dave.hansen, hpa,
eranian, ananth.narayan, ravi.bangoria, santosh.shukla,
sandipan.das
Not all uncore PMUs may support the use of the RDPMC instruction for
reading counters. In such cases, read the count from the corresponding
PERF_CTR register using the RDMSR instruction.
Signed-off-by: Sandipan Das <sandipan.das@amd.com>
---
arch/x86/events/amd/uncore.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
index ff1d09cc07ad..2fe623923034 100644
--- a/arch/x86/events/amd/uncore.c
+++ b/arch/x86/events/amd/uncore.c
@@ -96,7 +96,16 @@ static void amd_uncore_read(struct perf_event *event)
*/
prev = local64_read(&hwc->prev_count);
- rdpmcl(hwc->event_base_rdpmc, new);
+
+ /*
+ * Some uncore PMUs do not have RDPMC assignments. In such cases,
+ * read counts directly from the corresponding PERF_CTR.
+ */
+ if (hwc->event_base_rdpmc < 0)
+ rdmsrl(hwc->event_base, new);
+ else
+ rdpmcl(hwc->event_base_rdpmc, new);
+
local64_set(&hwc->prev_count, new);
delta = (new << COUNTER_SHIFT) - (prev << COUNTER_SHIFT);
delta >>= COUNTER_SHIFT;
@@ -164,6 +173,9 @@ static int amd_uncore_add(struct perf_event *event, int flags)
hwc->event_base_rdpmc = pmu->rdpmc_base + hwc->idx;
hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+ if (pmu->rdpmc_base < 0)
+ hwc->event_base_rdpmc = -1;
+
if (flags & PERF_EF_START)
event->pmu->start(event, PERF_EF_RELOAD);
--
2.34.1
* [PATCH v2 4/6] perf/x86/amd/uncore: Add group exclusivity
2023-10-05 5:23 [PATCH v2 0/6] perf/x86/amd: Add memory controller events Sandipan Das
` (2 preceding siblings ...)
2023-10-05 5:23 ` [PATCH v2 3/6] perf/x86/amd/uncore: Use rdmsr if rdpmc is unavailable Sandipan Das
@ 2023-10-05 5:23 ` Sandipan Das
2023-10-05 5:23 ` [PATCH v2 5/6] perf/x86/amd/uncore: Add memory controller support Sandipan Das
2023-10-05 5:23 ` [PATCH v2 6/6] perf vendor events amd: Add Zen 4 memory controller events Sandipan Das
5 siblings, 0 replies; 8+ messages in thread
From: Sandipan Das @ 2023-10-05 5:23 UTC (permalink / raw)
To: linux-kernel, linux-perf-users, x86
Cc: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
namhyung, irogers, adrian.hunter, tglx, bp, dave.hansen, hpa,
eranian, ananth.narayan, ravi.bangoria, santosh.shukla,
sandipan.das
In some cases, it may be necessary to restrict opening PMU events to a
subset of CPUs. E.g. Unified Memory Controller (UMC) PMUs are specific
to each active memory channel and the MSR address space for the PERF_CTL
and PERF_CTR registers is reused on each socket. Thus, opening events
for a specific UMC PMU should be restricted to CPUs belonging to the
same socket as that of the UMC. The "cpumask" of the PMU should also
reflect this accordingly.
Uncore PMUs which require this can use the new group attribute in struct
amd_uncore_pmu to set a valid group ID during the scan() phase. Later,
during init(), an uncore context for a CPU will be unavailable if the
group ID does not match.
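Concretely, the enforcement falls out of context availability: opening an
event on a CPU outside the PMU's group fails at event_init() because no
context was ever set up for that CPU. A sketch of the flow, using the
code paths from the earlier refactoring:
	pmu = event_to_amd_uncore_pmu(event);
	ctx = *per_cpu_ptr(pmu->ctx, event->cpu);
	if (!ctx)		/* NULL on CPUs whose group ID did not match */
		return -ENODEV;	/* the open is rejected */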
Signed-off-by: Sandipan Das <sandipan.das@amd.com>
---
arch/x86/events/amd/uncore.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
index 2fe623923034..318982951dd2 100644
--- a/arch/x86/events/amd/uncore.c
+++ b/arch/x86/events/amd/uncore.c
@@ -27,6 +27,7 @@
#define COUNTER_SHIFT 16
#define UNCORE_NAME_LEN 16
+#define UNCORE_GROUP_MAX 256
#undef pr_fmt
#define pr_fmt(fmt) "amd_uncore: " fmt
@@ -45,6 +46,7 @@ struct amd_uncore_pmu {
int num_counters;
int rdpmc_base;
u32 msr_base;
+ int group;
cpumask_t active_mask;
struct pmu pmu;
struct amd_uncore_ctx * __percpu *ctx;
@@ -61,6 +63,7 @@ union amd_uncore_info {
struct {
u64 aux_data:32; /* auxiliary data */
u64 num_pmcs:8; /* number of counters */
+ u64 gid:8; /* group id */
u64 cid:8; /* context id */
} split;
u64 full;
@@ -371,6 +374,13 @@ int amd_uncore_ctx_cid(struct amd_uncore *uncore, unsigned int cpu)
return info->split.cid;
}
+static __always_inline
+int amd_uncore_ctx_gid(struct amd_uncore *uncore, unsigned int cpu)
+{
+ union amd_uncore_info *info = per_cpu_ptr(uncore->info, cpu);
+ return info->split.gid;
+}
+
static __always_inline
int amd_uncore_ctx_num_pmcs(struct amd_uncore *uncore, unsigned int cpu)
{
@@ -409,18 +419,23 @@ static int amd_uncore_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
{
struct amd_uncore_ctx *curr, *prev;
struct amd_uncore_pmu *pmu;
- int node, cid, i, j;
+ int node, cid, gid, i, j;
if (!uncore->init_done || !uncore->num_pmus)
return 0;
cid = amd_uncore_ctx_cid(uncore, cpu);
+ gid = amd_uncore_ctx_gid(uncore, cpu);
for (i = 0; i < uncore->num_pmus; i++) {
pmu = &uncore->pmus[i];
*per_cpu_ptr(pmu->ctx, cpu) = NULL;
curr = NULL;
+ /* Check for group exclusivity */
+ if (gid != pmu->group)
+ continue;
+
/* Find a sibling context */
for_each_online_cpu(j) {
if (cpu == j)
@@ -603,6 +618,7 @@ void amd_uncore_df_ctx_scan(struct amd_uncore *uncore, unsigned int cpu)
info.split.aux_data = 0;
info.split.num_pmcs = NUM_COUNTERS_NB;
+ info.split.gid = 0;
info.split.cid = topology_die_id(cpu);
if (pmu_version >= 2) {
@@ -641,6 +657,7 @@ int amd_uncore_df_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
pmu->num_counters = amd_uncore_ctx_num_pmcs(uncore, cpu);
pmu->msr_base = MSR_F15H_NB_PERF_CTL;
pmu->rdpmc_base = RDPMC_BASE_NB;
+ pmu->group = amd_uncore_ctx_gid(uncore, cpu);
if (pmu_version >= 2) {
*df_attr++ = &format_attr_event14v2.attr;
@@ -734,6 +751,7 @@ void amd_uncore_l3_ctx_scan(struct amd_uncore *uncore, unsigned int cpu)
info.split.aux_data = 0;
info.split.num_pmcs = NUM_COUNTERS_L2;
+ info.split.gid = 0;
info.split.cid = get_llc_id(cpu);
if (boot_cpu_data.x86 >= 0x17)
@@ -770,6 +788,7 @@ int amd_uncore_l3_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
pmu->num_counters = amd_uncore_ctx_num_pmcs(uncore, cpu);
pmu->msr_base = MSR_F16H_L2I_PERF_CTL;
pmu->rdpmc_base = RDPMC_BASE_LLC;
+ pmu->group = amd_uncore_ctx_gid(uncore, cpu);
if (boot_cpu_data.x86 >= 0x17) {
*l3_attr++ = &format_attr_event8.attr;
--
2.34.1
* [PATCH v2 5/6] perf/x86/amd/uncore: Add memory controller support
2023-10-05 5:23 [PATCH v2 0/6] perf/x86/amd: Add memory controller events Sandipan Das
` (3 preceding siblings ...)
2023-10-05 5:23 ` [PATCH v2 4/6] perf/x86/amd/uncore: Add group exclusivity Sandipan Das
@ 2023-10-05 5:23 ` Sandipan Das
2023-10-05 5:23 ` [PATCH v2 6/6] perf vendor events amd: Add Zen 4 memory controller events Sandipan Das
5 siblings, 0 replies; 8+ messages in thread
From: Sandipan Das @ 2023-10-05 5:23 UTC (permalink / raw)
To: linux-kernel, linux-perf-users, x86
Cc: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
namhyung, irogers, adrian.hunter, tglx, bp, dave.hansen, hpa,
eranian, ananth.narayan, ravi.bangoria, santosh.shukla,
sandipan.das
Unified Memory Controller (UMC) events were introduced with Zen 4 as a
part of the Performance Monitoring Version 2 (PerfMonV2) enhancements.
An event is specified using the EventSelect bits, and the RdWrMask bits
can be used for additional filtering of read and write requests.
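For illustration, the raw event encoding implied by the masks added to
arch/x86/include/asm/perf_event.h in this patch could be sketched as
follows; the helper name is made up for the example:

	/* config[7:0] = EventSelect, config[9:8] = RdWrMask */
	static inline u64 umc_raw_config(u8 event, u8 rdwrmask)
	{
		return (event & 0xffULL) | (((u64)rdwrmask & 0x3) << 8);
	}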
As of now, a maximum of 12 channels of DDR5 are available on each socket
and each channel is controlled by a dedicated UMC. Each UMC, in turn,
has its own set of performance monitoring counters.
Since the MSR address space for the UMC PERF_CTL and PERF_CTR registers
is reused across sockets, uncore groups are created on the basis of
socket IDs. Hence, group exclusivity is mandatory while opening events
so that events for a UMC can only be opened on CPUs which are on the
same socket as the corresponding memory channel.
For each socket, the total number of available UMC counters and the
number of active memory channels are determined from CPUID leaf
0x80000022 EBX and ECX respectively. Usually, on Zen 4, each UMC has
four counters.
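Going by the cpuid_0x80000022_ebx bitfield layout added in this patch
(num_umc_pmc occupies EBX bits 21:16), the per-UMC counter count could
be derived roughly as below; the variable names are illustrative:

	unsigned int eax, ebx, ecx, edx, num_pmcs, num_umcs;

	cpuid(0x80000022, &eax, &ebx, &ecx, &edx);
	num_pmcs = (ebx >> 16) & 0x3f; /* total UMC counters on the socket */
	num_umcs = hweight32(ecx);     /* ECX is a mask of active UMCs */
	pr_debug("%u counters per UMC\n", num_pmcs / num_umcs);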
MSR assignments are determined on the basis of active UMCs (see the
sketch after the list below). E.g. if UMCs 1, 4 and 9 are active for a
given socket, then
* UMC 1 gets MSRs 0xc0010800 to 0xc0010807 as PERF_CTLs and PERF_CTRs
* UMC 4 gets MSRs 0xc0010808 to 0xc001080f as PERF_CTLs and PERF_CTRs
* UMC 9 gets MSRs 0xc0010810 to 0xc0010817 as PERF_CTLs and PERF_CTRs
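A minimal sketch of this scheme, assuming four counters per UMC and
numbering the active UMCs in enumeration order (n = 0 for UMC 1, n = 1
for UMC 4, n = 2 for UMC 9 in the example above); the helpers are
illustrative, not part of the patch:

	#define UMC_BASE	0xc0010800ULL	/* MSR_F19H_UMC_PERF_CTL */

	/* PERF_CTL MSRs sit at even offsets from the base */
	static inline u64 umc_perf_ctl(unsigned int n, unsigned int ctr)
	{
		return UMC_BASE + (n * 8) + (ctr * 2);
	}

	/* PERF_CTR MSRs sit at the odd offsets after each PERF_CTL */
	static inline u64 umc_perf_ctr(unsigned int n, unsigned int ctr)
	{
		return UMC_BASE + 1 + (n * 8) + (ctr * 2);
	}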
If there are sockets without any online CPUs when the amd_uncore driver
is loaded, UMCs for such sockets will not be discoverable since the
mechanism relies on executing the CPUID instruction on an online CPU
from the socket.
Signed-off-by: Sandipan Das <sandipan.das@amd.com>
---
arch/x86/events/amd/uncore.c | 156 +++++++++++++++++++++++++++++-
arch/x86/include/asm/msr-index.h | 4 +
arch/x86/include/asm/perf_event.h | 9 ++
3 files changed, 168 insertions(+), 1 deletion(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
index 318982951dd2..9b444ce24108 100644
--- a/arch/x86/events/amd/uncore.c
+++ b/arch/x86/events/amd/uncore.c
@@ -55,6 +55,7 @@ struct amd_uncore_pmu {
enum {
UNCORE_TYPE_DF,
UNCORE_TYPE_L3,
+ UNCORE_TYPE_UMC,
UNCORE_TYPE_MAX
};
@@ -286,7 +287,7 @@ static struct device_attribute format_attr_##_var = \
DEFINE_UNCORE_FORMAT_ATTR(event12, event, "config:0-7,32-35");
DEFINE_UNCORE_FORMAT_ATTR(event14, event, "config:0-7,32-35,59-60"); /* F17h+ DF */
DEFINE_UNCORE_FORMAT_ATTR(event14v2, event, "config:0-7,32-37"); /* PerfMonV2 DF */
-DEFINE_UNCORE_FORMAT_ATTR(event8, event, "config:0-7"); /* F17h+ L3 */
+DEFINE_UNCORE_FORMAT_ATTR(event8, event, "config:0-7"); /* F17h+ L3, PerfMonV2 UMC */
DEFINE_UNCORE_FORMAT_ATTR(umask8, umask, "config:8-15");
DEFINE_UNCORE_FORMAT_ATTR(umask12, umask, "config:8-15,24-27"); /* PerfMonV2 DF */
DEFINE_UNCORE_FORMAT_ATTR(coreid, coreid, "config:42-44"); /* F19h L3 */
@@ -296,6 +297,7 @@ DEFINE_UNCORE_FORMAT_ATTR(threadmask2, threadmask, "config:56-57"); /* F19h L
DEFINE_UNCORE_FORMAT_ATTR(enallslices, enallslices, "config:46"); /* F19h L3 */
DEFINE_UNCORE_FORMAT_ATTR(enallcores, enallcores, "config:47"); /* F19h L3 */
DEFINE_UNCORE_FORMAT_ATTR(sliceid, sliceid, "config:48-50"); /* F19h L3 */
+DEFINE_UNCORE_FORMAT_ATTR(rdwrmask, rdwrmask, "config:8-9"); /* PerfMonV2 UMC */
/* Common DF and NB attributes */
static struct attribute *amd_uncore_df_format_attr[] = {
@@ -312,6 +314,13 @@ static struct attribute *amd_uncore_l3_format_attr[] = {
NULL,
};
+/* Common UMC attributes */
+static struct attribute *amd_uncore_umc_format_attr[] = {
+ &format_attr_event8.attr, /* event */
+ &format_attr_rdwrmask.attr, /* rdwrmask */
+ NULL,
+};
+
/* F17h unique L3 attributes */
static struct attribute *amd_f17h_uncore_l3_format_attr[] = {
&format_attr_slicemask.attr, /* slicemask */
@@ -349,6 +358,11 @@ static struct attribute_group amd_f19h_uncore_l3_format_group = {
.is_visible = amd_f19h_uncore_is_visible,
};
+static struct attribute_group amd_uncore_umc_format_group = {
+ .name = "format",
+ .attrs = amd_uncore_umc_format_attr,
+};
+
static const struct attribute_group *amd_uncore_df_attr_groups[] = {
&amd_uncore_attr_group,
&amd_uncore_df_format_group,
@@ -367,6 +381,12 @@ static const struct attribute_group *amd_uncore_l3_attr_update[] = {
NULL,
};
+static const struct attribute_group *amd_uncore_umc_attr_groups[] = {
+ &amd_uncore_attr_group,
+ &amd_uncore_umc_format_group,
+ NULL,
+};
+
static __always_inline
int amd_uncore_ctx_cid(struct amd_uncore *uncore, unsigned int cpu)
{
@@ -835,6 +855,133 @@ int amd_uncore_l3_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
return amd_uncore_ctx_init(uncore, cpu);
}
+static int amd_uncore_umc_event_init(struct perf_event *event)
+{
+ struct hw_perf_event *hwc = &event->hw;
+ int ret = amd_uncore_event_init(event);
+
+ if (ret)
+ return ret;
+
+ hwc->config = event->attr.config & AMD64_PERFMON_V2_RAW_EVENT_MASK_UMC;
+
+ return 0;
+}
+
+static void amd_uncore_umc_start(struct perf_event *event, int flags)
+{
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (flags & PERF_EF_RELOAD)
+ wrmsrl(hwc->event_base, (u64)local64_read(&hwc->prev_count));
+
+ hwc->state = 0;
+ wrmsrl(hwc->config_base, (hwc->config | AMD64_PERFMON_V2_ENABLE_UMC));
+ perf_event_update_userpage(event);
+}
+
+static
+void amd_uncore_umc_ctx_scan(struct amd_uncore *uncore, unsigned int cpu)
+{
+ union cpuid_0x80000022_ebx ebx;
+ union amd_uncore_info info;
+ unsigned int eax, ecx, edx;
+
+ if (pmu_version < 2)
+ return;
+
+ cpuid(EXT_PERFMON_DEBUG_FEATURES, &eax, &ebx.full, &ecx, &edx);
+ info.split.aux_data = ecx; /* stash active mask */
+ info.split.num_pmcs = ebx.split.num_umc_pmc;
+ info.split.gid = topology_die_id(cpu);
+ info.split.cid = topology_die_id(cpu);
+ *per_cpu_ptr(uncore->info, cpu) = info;
+}
+
+static
+int amd_uncore_umc_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
+{
+ DECLARE_BITMAP(gmask, UNCORE_GROUP_MAX) = { 0 };
+ u8 group_num_pmus[UNCORE_GROUP_MAX] = { 0 };
+ u8 group_num_pmcs[UNCORE_GROUP_MAX] = { 0 };
+ union amd_uncore_info info;
+ struct amd_uncore_pmu *pmu;
+ int index = 0, gid, i;
+
+ if (pmu_version < 2)
+ return 0;
+
+ /* Run just once */
+ if (uncore->init_done)
+ return amd_uncore_ctx_init(uncore, cpu);
+
+ /* Find unique groups */
+ for_each_online_cpu(i) {
+ info = *per_cpu_ptr(uncore->info, i);
+ gid = info.split.gid;
+ if (test_bit(gid, gmask))
+ continue;
+
+ __set_bit(gid, gmask);
+ group_num_pmus[gid] = hweight32(info.split.aux_data);
+ group_num_pmcs[gid] = info.split.num_pmcs;
+ uncore->num_pmus += group_num_pmus[gid];
+ }
+
+ uncore->pmus = kzalloc(sizeof(*uncore->pmus) * uncore->num_pmus,
+ GFP_KERNEL);
+ if (!uncore->pmus) {
+ uncore->num_pmus = 0;
+ goto done;
+ }
+
+ for_each_set_bit(gid, gmask, UNCORE_GROUP_MAX) {
+ for (i = 0; i < group_num_pmus[gid]; i++) {
+ pmu = &uncore->pmus[index];
+ snprintf(pmu->name, sizeof(pmu->name), "amd_umc_%d", index);
+ pmu->num_counters = group_num_pmcs[gid] / group_num_pmus[gid];
+ pmu->msr_base = MSR_F19H_UMC_PERF_CTL + i * pmu->num_counters * 2;
+ pmu->rdpmc_base = -1;
+ pmu->group = gid;
+
+ pmu->ctx = alloc_percpu(struct amd_uncore_ctx *);
+ if (!pmu->ctx)
+ goto done;
+
+ pmu->pmu = (struct pmu) {
+ .task_ctx_nr = perf_invalid_context,
+ .attr_groups = amd_uncore_umc_attr_groups,
+ .name = pmu->name,
+ .event_init = amd_uncore_umc_event_init,
+ .add = amd_uncore_add,
+ .del = amd_uncore_del,
+ .start = amd_uncore_umc_start,
+ .stop = amd_uncore_stop,
+ .read = amd_uncore_read,
+ .capabilities = PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT,
+ .module = THIS_MODULE,
+ };
+
+ if (perf_pmu_register(&pmu->pmu, pmu->pmu.name, -1)) {
+ free_percpu(pmu->ctx);
+ pmu->ctx = NULL;
+ goto done;
+ }
+
+ pr_info("%d %s counters detected\n", pmu->num_counters,
+ pmu->pmu.name);
+
+ index++;
+ }
+ }
+
+done:
+ uncore->num_pmus = index;
+ uncore->init_done = true;
+
+ return amd_uncore_ctx_init(uncore, cpu);
+}
+
static struct amd_uncore uncores[UNCORE_TYPE_MAX] = {
/* UNCORE_TYPE_DF */
{
@@ -850,6 +997,13 @@ static struct amd_uncore uncores[UNCORE_TYPE_MAX] = {
.move = amd_uncore_ctx_move,
.free = amd_uncore_ctx_free,
},
+ /* UNCORE_TYPE_UMC */
+ {
+ .scan = amd_uncore_umc_ctx_scan,
+ .init = amd_uncore_umc_ctx_init,
+ .move = amd_uncore_ctx_move,
+ .free = amd_uncore_ctx_free,
+ },
};
static int __init amd_uncore_init(void)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 1d111350197f..dc159acb350a 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -637,6 +637,10 @@
/* AMD Last Branch Record MSRs */
#define MSR_AMD64_LBR_SELECT 0xc000010e
+/* Fam 19h MSRs */
+#define MSR_F19H_UMC_PERF_CTL 0xc0010800
+#define MSR_F19H_UMC_PERF_CTR 0xc0010801
+
/* Fam 17h MSRs */
#define MSR_F17H_IRPERF 0xc00000e9
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 85a9fd5a3ec3..2618ec7c3d1d 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -112,6 +112,13 @@
(AMD64_PERFMON_V2_EVENTSEL_EVENT_NB | \
AMD64_PERFMON_V2_EVENTSEL_UMASK_NB)
+#define AMD64_PERFMON_V2_ENABLE_UMC BIT_ULL(31)
+#define AMD64_PERFMON_V2_EVENTSEL_EVENT_UMC GENMASK_ULL(7, 0)
+#define AMD64_PERFMON_V2_EVENTSEL_RDWRMASK_UMC GENMASK_ULL(9, 8)
+#define AMD64_PERFMON_V2_RAW_EVENT_MASK_UMC \
+ (AMD64_PERFMON_V2_EVENTSEL_EVENT_UMC | \
+ AMD64_PERFMON_V2_EVENTSEL_RDWRMASK_UMC)
+
#define AMD64_NUM_COUNTERS 4
#define AMD64_NUM_COUNTERS_CORE 6
#define AMD64_NUM_COUNTERS_NB 4
@@ -232,6 +239,8 @@ union cpuid_0x80000022_ebx {
unsigned int lbr_v2_stack_sz:6;
/* Number of Data Fabric Counters */
unsigned int num_df_pmc:6;
+ /* Number of Unified Memory Controller Counters */
+ unsigned int num_umc_pmc:6;
} split;
unsigned int full;
};
--
2.34.1
* [PATCH v2 6/6] perf vendor events amd: Add Zen 4 memory controller events
2023-10-05 5:23 [PATCH v2 0/6] perf/x86/amd: Add memory controller events Sandipan Das
` (4 preceding siblings ...)
2023-10-05 5:23 ` [PATCH v2 5/6] perf/x86/amd/uncore: Add memory controller support Sandipan Das
@ 2023-10-05 5:23 ` Sandipan Das
5 siblings, 0 replies; 8+ messages in thread
From: Sandipan Das @ 2023-10-05 5:23 UTC (permalink / raw)
To: linux-kernel, linux-perf-users, x86
Cc: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
namhyung, irogers, adrian.hunter, tglx, bp, dave.hansen, hpa,
eranian, ananth.narayan, ravi.bangoria, santosh.shukla,
sandipan.das
Make the jevents parser aware of the Unified Memory Controller (UMC) PMU
and add events taken from Section 8.2.1 "UMC Performance Monitor Events"
of the Processor Programming Reference (PPR) for AMD Family 19h Model
11h processors. The events capture UMC command activity such as CAS,
ACTIVATE and PRECHARGE, while the metrics derive data bus utilization
and memory bandwidth from these events.
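With the parser changes below, a "Unit" of "UMCPMC" resolves to the
amd_umc PMU and the "RdWrMask" field becomes a rdwrmask= term, so an
event such as umc_cas_cmd.rd should expand to roughly the following
(illustrative encoding):

	amd_umc/event=0xa,rdwrmask=0x1/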
Signed-off-by: Sandipan Das <sandipan.das@amd.com>
Acked-by: Ian Rogers <irogers@google.com>
---
.../arch/x86/amdzen4/memory-controller.json | 101 ++++++++++++++++++
.../arch/x86/amdzen4/recommended.json | 69 ++++++++++++++
tools/perf/pmu-events/jevents.py | 2 +
3 files changed, 172 insertions(+)
create mode 100644 tools/perf/pmu-events/arch/x86/amdzen4/memory-controller.json
diff --git a/tools/perf/pmu-events/arch/x86/amdzen4/memory-controller.json b/tools/perf/pmu-events/arch/x86/amdzen4/memory-controller.json
new file mode 100644
index 000000000000..55263e5e4f69
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/amdzen4/memory-controller.json
@@ -0,0 +1,101 @@
+[
+ {
+ "EventName": "umc_mem_clk",
+ "PublicDescription": "Number of memory clock cycles.",
+ "EventCode": "0x00",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_act_cmd.all",
+ "PublicDescription": "Number of ACTIVATE commands sent.",
+ "EventCode": "0x05",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_act_cmd.rd",
+ "PublicDescription": "Number of ACTIVATE commands sent for reads.",
+ "EventCode": "0x05",
+ "RdWrMask": "0x1",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_act_cmd.wr",
+ "PublicDescription": "Number of ACTIVATE commands sent for writes.",
+ "EventCode": "0x05",
+ "RdWrMask": "0x2",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_pchg_cmd.all",
+ "PublicDescription": "Number of PRECHARGE commands sent.",
+ "EventCode": "0x06",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_pchg_cmd.rd",
+ "PublicDescription": "Number of PRECHARGE commands sent for reads.",
+ "EventCode": "0x06",
+ "RdWrMask": "0x1",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_pchg_cmd.wr",
+ "PublicDescription": "Number of PRECHARGE commands sent for writes.",
+ "EventCode": "0x06",
+ "RdWrMask": "0x2",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_cas_cmd.all",
+ "PublicDescription": "Number of CAS commands sent.",
+ "EventCode": "0x0a",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_cas_cmd.rd",
+ "PublicDescription": "Number of CAS commands sent for reads.",
+ "EventCode": "0x0a",
+ "RdWrMask": "0x1",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_cas_cmd.wr",
+ "PublicDescription": "Number of CAS commands sent for writes.",
+ "EventCode": "0x0a",
+ "RdWrMask": "0x2",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_data_slot_clks.all",
+ "PublicDescription": "Number of clocks used by the data bus.",
+ "EventCode": "0x14",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_data_slot_clks.rd",
+ "PublicDescription": "Number of clocks used by the data bus for reads.",
+ "EventCode": "0x14",
+ "RdWrMask": "0x1",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_data_slot_clks.wr",
+ "PublicDescription": "Number of clocks used by the data bus for writes.",
+ "EventCode": "0x14",
+ "RdWrMask": "0x2",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/amdzen4/recommended.json b/tools/perf/pmu-events/arch/x86/amdzen4/recommended.json
index 5e6a793acf7b..96e06401c6cb 100644
--- a/tools/perf/pmu-events/arch/x86/amdzen4/recommended.json
+++ b/tools/perf/pmu-events/arch/x86/amdzen4/recommended.json
@@ -330,5 +330,74 @@
"MetricGroup": "data_fabric",
"PerPkg": "1",
"ScaleUnit": "6.103515625e-5MiB"
+ },
+ {
+ "MetricName": "umc_data_bus_utilization",
+ "BriefDescription": "Memory controller data bus utilization.",
+ "MetricExpr": "d_ratio(umc_data_slot_clks.all / 2, umc_mem_clk)",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "MetricName": "umc_cas_cmd_rate",
+ "BriefDescription": "Memory controller CAS command rate.",
+ "MetricExpr": "d_ratio(umc_cas_cmd.all * 1000, umc_mem_clk)",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1"
+ },
+ {
+ "MetricName": "umc_cas_cmd_read_ratio",
+ "BriefDescription": "Ratio of memory controller CAS commands for reads.",
+ "MetricExpr": "d_ratio(umc_cas_cmd.rd, umc_cas_cmd.all)",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "MetricName": "umc_cas_cmd_write_ratio",
+ "BriefDescription": "Ratio of memory controller CAS commands for writes.",
+ "MetricExpr": "d_ratio(umc_cas_cmd.wr, umc_cas_cmd.all)",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "MetricName": "umc_mem_read_bandwidth",
+ "BriefDescription": "Estimated memory read bandwidth.",
+ "MetricExpr": "(umc_cas_cmd.rd * 64) / 1e6 / duration_time",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "MetricName": "umc_mem_write_bandwidth",
+ "BriefDescription": "Estimated memory write bandwidth.",
+ "MetricExpr": "(umc_cas_cmd.wr * 64) / 1e6 / duration_time",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "MetricName": "umc_mem_bandwidth",
+ "BriefDescription": "Estimated combined memory bandwidth.",
+ "MetricExpr": "(umc_cas_cmd.all * 64) / 1e6 / duration_time",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "MetricName": "umc_activate_cmd_rate",
+ "BriefDescription": "Memory controller ACTIVATE command rate.",
+ "MetricExpr": "d_ratio(umc_act_cmd.all * 1000, umc_mem_clk)",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1"
+ },
+ {
+ "MetricName": "umc_precharge_cmd_rate",
+ "BriefDescription": "Memory controller PRECHARGE command rate.",
+ "MetricExpr": "d_ratio(umc_pchg_cmd.all * 1000, umc_mem_clk)",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1"
}
]
diff --git a/tools/perf/pmu-events/jevents.py b/tools/perf/pmu-events/jevents.py
index 72ba4a9239c6..ff202e0b7b0a 100755
--- a/tools/perf/pmu-events/jevents.py
+++ b/tools/perf/pmu-events/jevents.py
@@ -286,6 +286,7 @@ class JsonEvent:
'imx8_ddr': 'imx8_ddr',
'L3PMC': 'amd_l3',
'DFPMC': 'amd_df',
+ 'UMCPMC': 'amd_umc',
'cpu_core': 'cpu_core',
'cpu_atom': 'cpu_atom',
'ali_drw': 'ali_drw',
@@ -345,6 +346,7 @@ class JsonEvent:
('Invert', 'inv='),
('SampleAfterValue', 'period='),
('UMask', 'umask='),
+ ('RdWrMask', 'rdwrmask='),
]
for key, value in event_fields:
if key in jd and jd[key] != '0':
--
2.34.1
* Re: [PATCH v2 2/6] perf/x86/amd/uncore: Move discovery and registration
2023-10-05 5:23 ` [PATCH v2 2/6] perf/x86/amd/uncore: Move discovery and registration Sandipan Das
@ 2023-10-09 12:36 ` kernel test robot
0 siblings, 0 replies; 8+ messages in thread
From: kernel test robot @ 2023-10-09 12:36 UTC (permalink / raw)
To: Sandipan Das, linux-kernel, linux-perf-users, x86
Cc: oe-kbuild-all, peterz, mingo, acme, mark.rutland,
alexander.shishkin, jolsa, namhyung, irogers, adrian.hunter, tglx,
bp, dave.hansen, hpa, eranian, ananth.narayan, ravi.bangoria,
santosh.shukla, sandipan.das
Hi Sandipan,
kernel test robot noticed the following build warnings:
[auto build test WARNING on tip/perf/core]
[also build test WARNING on perf-tools/perf-tools acme/perf/core linus/master v6.6-rc5 next-20231009]
[cannot apply to perf-tools-next/perf-tools-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Sandipan-Das/perf-x86-amd-uncore-Move-discovery-and-registration/20231006-005053
base: tip/perf/core
patch link: https://lore.kernel.org/r/e6c447e48872fcab8452e0dd81b1c9cb09f39eb4.1696425185.git.sandipan.das%40amd.com
patch subject: [PATCH v2 2/6] perf/x86/amd/uncore: Move discovery and registration
config: i386-randconfig-061-20231009 (https://download.01.org/0day-ci/archive/20231009/202310092019.yMh3nlyD-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-12) 11.3.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231009/202310092019.yMh3nlyD-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202310092019.yMh3nlyD-lkp@intel.com/
sparse warnings: (new ones prefixed by >>)
>> arch/x86/events/amd/uncore.c:601:10: sparse: sparse: incorrect type in initializer (different address spaces) @@ expected void const [noderef] __percpu *__vpp_verify @@ got union amd_uncore_info * @@
arch/x86/events/amd/uncore.c:601:10: sparse: expected void const [noderef] __percpu *__vpp_verify
arch/x86/events/amd/uncore.c:601:10: sparse: got union amd_uncore_info *
arch/x86/events/amd/uncore.c:730:10: sparse: sparse: incorrect type in initializer (different address spaces) @@ expected void const [noderef] __percpu *__vpp_verify @@ got union amd_uncore_info * @@
arch/x86/events/amd/uncore.c:730:10: sparse: expected void const [noderef] __percpu *__vpp_verify
arch/x86/events/amd/uncore.c:730:10: sparse: got union amd_uncore_info *
>> arch/x86/events/amd/uncore.c:847:30: sparse: sparse: incorrect type in assignment (different address spaces) @@ expected union amd_uncore_info *[noderef] info @@ got union amd_uncore_info [noderef] __percpu * @@
arch/x86/events/amd/uncore.c:847:30: sparse: expected union amd_uncore_info *[noderef] info
arch/x86/events/amd/uncore.c:847:30: sparse: got union amd_uncore_info [noderef] __percpu *
>> arch/x86/events/amd/uncore.c:881:43: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected void [noderef] __percpu *__pdata @@ got union amd_uncore_info *[noderef] info @@
arch/x86/events/amd/uncore.c:881:43: sparse: expected void [noderef] __percpu *__pdata
arch/x86/events/amd/uncore.c:881:43: sparse: got union amd_uncore_info *[noderef] info
arch/x86/events/amd/uncore.c:904:35: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected void [noderef] __percpu *__pdata @@ got union amd_uncore_info *[noderef] info @@
arch/x86/events/amd/uncore.c:904:35: sparse: expected void [noderef] __percpu *__pdata
arch/x86/events/amd/uncore.c:904:35: sparse: got union amd_uncore_info *[noderef] info
arch/x86/events/amd/uncore.c:358:39: sparse: sparse: incorrect type in initializer (different address spaces) @@ expected void const [noderef] __percpu *__vpp_verify @@ got union amd_uncore_info * @@
arch/x86/events/amd/uncore.c:358:39: sparse: expected void const [noderef] __percpu *__vpp_verify
arch/x86/events/amd/uncore.c:358:39: sparse: got union amd_uncore_info *
>> arch/x86/events/amd/uncore.c:358:39: sparse: sparse: dereference of noderef expression
arch/x86/events/amd/uncore.c:358:39: sparse: sparse: incorrect type in initializer (different address spaces) @@ expected void const [noderef] __percpu *__vpp_verify @@ got union amd_uncore_info * @@
arch/x86/events/amd/uncore.c:358:39: sparse: expected void const [noderef] __percpu *__vpp_verify
arch/x86/events/amd/uncore.c:358:39: sparse: got union amd_uncore_info *
>> arch/x86/events/amd/uncore.c:358:39: sparse: sparse: dereference of noderef expression
arch/x86/events/amd/uncore.c:601:10: sparse: sparse: dereference of noderef expression
>> arch/x86/events/amd/uncore.c:593:31: sparse: sparse: invalid access past the end of 'info' (4 8)
arch/x86/events/amd/uncore.c:598:48: sparse: sparse: invalid access past the end of 'info' (4 8)
arch/x86/events/amd/uncore.c:365:39: sparse: sparse: incorrect type in initializer (different address spaces) @@ expected void const [noderef] __percpu *__vpp_verify @@ got union amd_uncore_info * @@
arch/x86/events/amd/uncore.c:365:39: sparse: expected void const [noderef] __percpu *__vpp_verify
arch/x86/events/amd/uncore.c:365:39: sparse: got union amd_uncore_info *
arch/x86/events/amd/uncore.c:365:39: sparse: sparse: dereference of noderef expression
arch/x86/events/amd/uncore.c:730:10: sparse: sparse: dereference of noderef expression
arch/x86/events/amd/uncore.c:724:31: sparse: sparse: invalid access past the end of 'info' (4 8)
arch/x86/events/amd/uncore.c:365:39: sparse: sparse: incorrect type in initializer (different address spaces) @@ expected void const [noderef] __percpu *__vpp_verify @@ got union amd_uncore_info * @@
arch/x86/events/amd/uncore.c:365:39: sparse: expected void const [noderef] __percpu *__vpp_verify
arch/x86/events/amd/uncore.c:365:39: sparse: got union amd_uncore_info *
arch/x86/events/amd/uncore.c:365:39: sparse: sparse: dereference of noderef expression
arch/x86/events/amd/uncore.c:848:22: sparse: sparse: dereference of noderef expression
arch/x86/events/amd/uncore.c:880:21: sparse: sparse: dereference of noderef expression
arch/x86/events/amd/uncore.c:881:37: sparse: sparse: dereference of noderef expression
arch/x86/events/amd/uncore.c:882:25: sparse: sparse: dereference of noderef expression
arch/x86/events/amd/uncore.c:901:22: sparse: sparse: dereference of noderef expression
arch/x86/events/amd/uncore.c:904:29: sparse: sparse: dereference of noderef expression
arch/x86/events/amd/uncore.c:905:17: sparse: sparse: dereference of noderef expression
vim +601 arch/x86/events/amd/uncore.c
582
583 static
584 void amd_uncore_df_ctx_scan(struct amd_uncore *uncore, unsigned int cpu)
585 {
586 union cpuid_0x80000022_ebx ebx;
587 union amd_uncore_info info;
588
589 if (!boot_cpu_has(X86_FEATURE_PERFCTR_NB))
590 return;
591
592 info.split.aux_data = 0;
> 593 info.split.num_pmcs = NUM_COUNTERS_NB;
594 info.split.cid = topology_die_id(cpu);
595
596 if (pmu_version >= 2) {
597 ebx.full = cpuid_ebx(EXT_PERFMON_DEBUG_FEATURES);
598 info.split.num_pmcs = ebx.split.num_df_pmc;
599 }
600
> 601 *per_cpu_ptr(uncore->info, cpu) = info;
602 }
603
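The warnings above all stem from per-CPU pointers that are declared
without the __percpu address-space annotation that sparse checks for.
A minimal sketch of the kind of annotation sparse expects (illustrative,
not the actual follow-up fix):

	struct amd_uncore {
		...
		/* not: union amd_uncore_info *info; */
		union amd_uncore_info * __percpu info;
	};

	/* per_cpu_ptr() then type-checks cleanly */
	union amd_uncore_info *info = per_cpu_ptr(uncore->info, cpu);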
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
Thread overview: 8+ messages
2023-10-05 5:23 [PATCH v2 0/6] perf/x86/amd: Add memory controller events Sandipan Das
2023-10-05 5:23 ` [PATCH v2 1/6] perf/x86/amd/uncore: Refactor uncore management Sandipan Das
2023-10-05 5:23 ` [PATCH v2 2/6] perf/x86/amd/uncore: Move discovery and registration Sandipan Das
2023-10-09 12:36 ` kernel test robot
2023-10-05 5:23 ` [PATCH v2 3/6] perf/x86/amd/uncore: Use rdmsr if rdpmc is unavailable Sandipan Das
2023-10-05 5:23 ` [PATCH v2 4/6] perf/x86/amd/uncore: Add group exclusivity Sandipan Das
2023-10-05 5:23 ` [PATCH v2 5/6] perf/x86/amd/uncore: Add memory controller support Sandipan Das
2023-10-05 5:23 ` [PATCH v2 6/6] perf vendor events amd: Add Zen 4 memory controller events Sandipan Das