* [-v3 PATCH 0/6] powernow-k8: Core Performance Boost and effective frequency support
@ 2010-03-31 19:56 Borislav Petkov
2010-03-31 19:56 ` [PATCH 1/6] x86, cpu: Add AMD core boosting feature flag to /proc/cpuinfo Borislav Petkov
` (5 more replies)
From: Borislav Petkov @ 2010-03-31 19:56 UTC (permalink / raw)
To: hpa, mingo, tglx, akpm; +Cc: davej, trenn, linux, cpufreq, linux-kernel, x86
From: Borislav Petkov <borislav.petkov@amd.com>
Hi,
ok, this should be the final version of the patchset. It adds the
suggested changes to patch 2/6 and also a CPU hotplug notifier which
takes care of the boosting flag when off-/onlining CPUs so that the
remaining ones can still boost.
Changelog:
v2: add only the move of APERF/MPERF cpuid fragment to vendor-agnostic
x86 init path, as Thomas suggested.
v1: add a /proc/cpuinfo flag and remove the scaling_cur_freq interface
overload in favor of cpufreq-aperf userspace tool.
v0: initial submission
--
Regards/Gruss,
Boris.
* [PATCH 1/6] x86, cpu: Add AMD core boosting feature flag to /proc/cpuinfo
2010-03-31 19:56 [-v3 PATCH 0/6] powernow-k8: Core Performance Boost and effective frequency support Borislav Petkov
@ 2010-03-31 19:56 ` Borislav Petkov
2010-04-09 22:18 ` [tip:x86/cpu] " tip-bot for Borislav Petkov
2010-03-31 19:56 ` [PATCH 2/6] powernow-k8: Add core performance boost support Borislav Petkov
` (4 subsequent siblings)
From: Borislav Petkov @ 2010-03-31 19:56 UTC (permalink / raw)
To: hpa, mingo, tglx, akpm; +Cc: davej, trenn, linux, cpufreq, linux-kernel, x86
From: Borislav Petkov <borislav.petkov@amd.com>
By semi-popular demand, this adds the Core Performance Boost feature
flag to /proc/cpuinfo. A possible use case for this is userspace tools
like cpufreq-aperf, so that they don't have to jump through the hoops
of accessing "/dev/cpu/%d/cpuid" in order to check for CPB hardware
support, or call cpuid from userspace.
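For illustration only, and not part of the patch itself: with the flag
exported, such a tool could get by with a trivial scan of the flags
line. A minimal userspace sketch, with deliberately dumb parsing:

#include <stdio.h>
#include <string.h>

/* illustration only: return 1 if "cpb" shows up in /proc/cpuinfo */
static int cpu_has_cpb(void)
{
        FILE *f = fopen("/proc/cpuinfo", "r");
        char line[4096];
        int found = 0;

        if (!f)
                return 0;

        while (fgets(line, sizeof(line), f)) {
                if (!strncmp(line, "flags", 5) &&
                    (strstr(line, " cpb ") || strstr(line, " cpb\n"))) {
                        found = 1;
                        break;
                }
        }

        fclose(f);
        return found;
}

int main(void)
{
        printf("CPB: %s\n", cpu_has_cpb() ? "supported" : "not supported");
        return 0;
}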
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Reviewed-by: Thomas Renninger <trenn@suse.de>
---
arch/x86/include/asm/cpufeature.h | 1 +
arch/x86/kernel/cpu/addon_cpuid_features.c | 5 +++--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 0cd82d0..630e623 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -161,6 +161,7 @@
*/
#define X86_FEATURE_IDA (7*32+ 0) /* Intel Dynamic Acceleration */
#define X86_FEATURE_ARAT (7*32+ 1) /* Always Running APIC Timer */
+#define X86_FEATURE_CPB (7*32+ 2) /* AMD Core Performance Boost */
/* Virtualization flags: Linux defined */
#define X86_FEATURE_TPR_SHADOW (8*32+ 0) /* Intel TPR Shadow */
diff --git a/arch/x86/kernel/cpu/addon_cpuid_features.c b/arch/x86/kernel/cpu/addon_cpuid_features.c
index 97ad79c..ead2a1c 100644
--- a/arch/x86/kernel/cpu/addon_cpuid_features.c
+++ b/arch/x86/kernel/cpu/addon_cpuid_features.c
@@ -30,8 +30,9 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c)
const struct cpuid_bit *cb;
static const struct cpuid_bit __cpuinitconst cpuid_bits[] = {
- { X86_FEATURE_IDA, CR_EAX, 1, 0x00000006 },
- { X86_FEATURE_ARAT, CR_EAX, 2, 0x00000006 },
+ { X86_FEATURE_IDA, CR_EAX, 1, 0x00000006 },
+ { X86_FEATURE_ARAT, CR_EAX, 2, 0x00000006 },
+ { X86_FEATURE_CPB, CR_EDX, 9, 0x80000007 },
{ X86_FEATURE_NPT, CR_EDX, 0, 0x8000000a },
{ X86_FEATURE_LBRV, CR_EDX, 1, 0x8000000a },
{ X86_FEATURE_SVML, CR_EDX, 2, 0x8000000a },
--
1.7.0
* [PATCH 2/6] powernow-k8: Add core performance boost support
2010-03-31 19:56 [-v3 PATCH 0/6] powernow-k8: Core Performance Boost and effective frequency support Borislav Petkov
2010-03-31 19:56 ` [PATCH 1/6] x86, cpu: Add AMD core boosting feature flag to /proc/cpuinfo Borislav Petkov
@ 2010-03-31 19:56 ` Borislav Petkov
2010-04-09 22:19 ` [tip:x86/cpu] " tip-bot for Borislav Petkov
2010-03-31 19:56 ` [PATCH 3/6] x86: Unify APERF/MPERF support Borislav Petkov
` (3 subsequent siblings)
From: Borislav Petkov @ 2010-03-31 19:56 UTC (permalink / raw)
To: hpa, mingo, tglx, akpm; +Cc: davej, trenn, linux, cpufreq, linux-kernel, x86
From: Borislav Petkov <borislav.petkov@amd.com>
Starting with F10h, revE, AMD processors add support for a dynamic
core boosting feature called Core Performance Boost. When a specific
condition is present, a subset of the cores on a system are boosted
beyond their P0 operating frequency to speed up the performance of
single-threaded applications.
In the normal case, the system comes out of reset with core boosting
enabled. This patch adds a sysfs knob with which core boosting can be
switched on or off for benchmarking purposes.
While at it, make the CPB code hotplug-aware so that taking cores
offline doesn't interfere with boosting the remaining online cores.
Furthermore, add cpu_online_mask hotplug protection as suggested by
Andrew.
Finally, clean up the driver init codepath and update copyrights.
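Purely as an illustration and not part of the patch: from a benchmark
harness, flipping the knob could look like the sketch below. It assumes
the attribute ends up per policy under
/sys/devices/system/cpu/cpuN/cpufreq/cpb and that the caller has root
privileges; set_cpb() is a made-up helper name.

#include <stdio.h>

/* illustration only: write 0 or 1 to the (assumed) per-policy cpb file */
static int set_cpb(int cpu, int enable)
{
        char path[128];
        FILE *f;
        int ret;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/cpufreq/cpb", cpu);

        f = fopen(path, "w");
        if (!f)
                return -1;

        ret = (fprintf(f, "%d\n", enable ? 1 : 0) > 0) ? 0 : -1;
        fclose(f);

        return ret;
}

int main(void)
{
        /* switch boosting off before a benchmark run */
        return set_cpb(0, 0) ? 1 : 0;
}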
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Reviewed-by: Thomas Renninger <trenn@suse.de>
---
arch/x86/kernel/cpu/cpufreq/powernow-k8.c | 161 +++++++++++++++++++++++++++--
arch/x86/kernel/cpu/cpufreq/powernow-k8.h | 2 -
2 files changed, 151 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
index d360b56..74ca34b 100644
--- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -1,6 +1,5 @@
-
/*
- * (c) 2003-2006 Advanced Micro Devices, Inc.
+ * (c) 2003-2010 Advanced Micro Devices, Inc.
* Your use of this code is subject to the terms and conditions of the
* GNU general public license version 2. See "COPYING" or
* http://www.gnu.org/licenses/gpl.html
@@ -54,6 +53,10 @@ static DEFINE_PER_CPU(struct powernow_k8_data *, powernow_data);
static int cpu_family = CPU_OPTERON;
+/* core performance boost */
+static bool cpb_capable, cpb_enabled;
+static struct msr __percpu *msrs;
+
#ifndef CONFIG_SMP
static inline const struct cpumask *cpu_core_mask(int cpu)
{
@@ -1393,8 +1396,77 @@ out:
return khz;
}
+static void _cpb_toggle_msrs(bool t)
+{
+ int cpu;
+
+ get_online_cpus();
+
+ rdmsr_on_cpus(cpu_online_mask, MSR_K7_HWCR, msrs);
+
+ for_each_cpu(cpu, cpu_online_mask) {
+ struct msr *reg = per_cpu_ptr(msrs, cpu);
+ if (t)
+ reg->l &= ~BIT(25);
+ else
+ reg->l |= BIT(25);
+ }
+ wrmsr_on_cpus(cpu_online_mask, MSR_K7_HWCR, msrs);
+
+ put_online_cpus();
+}
+
+/*
+ * Switch on/off core performance boosting.
+ *
+ * 0=disable
+ * 1=enable.
+ */
+static void cpb_toggle(bool t)
+{
+ if (!cpb_capable)
+ return;
+
+ if (t && !cpb_enabled) {
+ cpb_enabled = true;
+ _cpb_toggle_msrs(t);
+ printk(KERN_INFO PFX "Core Boosting enabled.\n");
+ } else if (!t && cpb_enabled) {
+ cpb_enabled = false;
+ _cpb_toggle_msrs(t);
+ printk(KERN_INFO PFX "Core Boosting disabled.\n");
+ }
+}
+
+static ssize_t store_cpb(struct cpufreq_policy *policy, const char *buf,
+ size_t count)
+{
+ int ret = -EINVAL;
+ unsigned long val = 0;
+
+ ret = strict_strtoul(buf, 10, &val);
+ if (!ret && (val == 0 || val == 1) && cpb_capable)
+ cpb_toggle(val);
+ else
+ return -EINVAL;
+
+ return count;
+}
+
+static ssize_t show_cpb(struct cpufreq_policy *policy, char *buf)
+{
+ return sprintf(buf, "%u\n", cpb_enabled);
+}
+
+#define define_one_rw(_name) \
+static struct freq_attr _name = \
+__ATTR(_name, 0644, show_##_name, store_##_name)
+
+define_one_rw(cpb);
+
static struct freq_attr *powernow_k8_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
+ &cpb,
NULL,
};
@@ -1410,10 +1482,51 @@ static struct cpufreq_driver cpufreq_amd64_driver = {
.attr = powernow_k8_attr,
};
+/*
+ * Clear the boost-disable flag on the CPU_DOWN path so that this cpu
+ * cannot block the remaining ones from boosting. On the CPU_UP path we
+ * simply keep the boost-disable flag in sync with the current global
+ * state.
+ */
+static int __cpuinit cpb_notify(struct notifier_block *nb, unsigned long action,
+ void *hcpu)
+{
+ unsigned cpu = (long)hcpu;
+ u32 lo, hi;
+
+ switch (action) {
+ case CPU_UP_PREPARE:
+ case CPU_UP_PREPARE_FROZEN:
+
+ if (!cpb_enabled) {
+ rdmsr_on_cpu(cpu, MSR_K7_HWCR, &lo, &hi);
+ lo |= BIT(25);
+ wrmsr_on_cpu(cpu, MSR_K7_HWCR, lo, hi);
+ }
+ break;
+
+ case CPU_DOWN_PREPARE:
+ case CPU_DOWN_PREPARE_FROZEN:
+ rdmsr_on_cpu(cpu, MSR_K7_HWCR, &lo, &hi);
+ lo &= ~BIT(25);
+ wrmsr_on_cpu(cpu, MSR_K7_HWCR, lo, hi);
+ break;
+
+ default:
+ break;
+ }
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block __cpuinitdata cpb_nb = {
+ .notifier_call = cpb_notify,
+};
+
/* driver entry point for init */
static int __cpuinit powernowk8_init(void)
{
- unsigned int i, supported_cpus = 0;
+ unsigned int i, supported_cpus = 0, cpu;
for_each_online_cpu(i) {
int rc;
@@ -1422,15 +1535,36 @@ static int __cpuinit powernowk8_init(void)
supported_cpus++;
}
- if (supported_cpus == num_online_cpus()) {
- printk(KERN_INFO PFX "Found %d %s "
- "processors (%d cpu cores) (" VERSION ")\n",
- num_online_nodes(),
- boot_cpu_data.x86_model_id, supported_cpus);
- return cpufreq_register_driver(&cpufreq_amd64_driver);
+ if (supported_cpus != num_online_cpus())
+ return -ENODEV;
+
+ printk(KERN_INFO PFX "Found %d %s (%d cpu cores) (" VERSION ")\n",
+ num_online_nodes(), boot_cpu_data.x86_model_id, supported_cpus);
+
+ if (boot_cpu_has(X86_FEATURE_CPB)) {
+
+ cpb_capable = true;
+
+ register_cpu_notifier(&cpb_nb);
+
+ msrs = msrs_alloc();
+ if (!msrs) {
+ printk(KERN_ERR "%s: Error allocating msrs!\n", __func__);
+ return -ENOMEM;
+ }
+
+ rdmsr_on_cpus(cpu_online_mask, MSR_K7_HWCR, msrs);
+
+ for_each_cpu(cpu, cpu_online_mask) {
+ struct msr *reg = per_cpu_ptr(msrs, cpu);
+ cpb_enabled |= !(!!(reg->l & BIT(25)));
+ }
+
+ printk(KERN_INFO PFX "Core Performance Boosting: %s.\n",
+ (cpb_enabled ? "on" : "off"));
}
- return -ENODEV;
+ return cpufreq_register_driver(&cpufreq_amd64_driver);
}
/* driver entry point for term */
@@ -1438,6 +1572,13 @@ static void __exit powernowk8_exit(void)
{
dprintk("exit\n");
+ if (boot_cpu_has(X86_FEATURE_CPB)) {
+ msrs_free(msrs);
+ msrs = NULL;
+
+ unregister_cpu_notifier(&cpb_nb);
+ }
+
cpufreq_unregister_driver(&cpufreq_amd64_driver);
}
diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.h b/arch/x86/kernel/cpu/cpufreq/powernow-k8.h
index 02ce824..df3529b 100644
--- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.h
+++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.h
@@ -5,7 +5,6 @@
* http://www.gnu.org/licenses/gpl.html
*/
-
enum pstate {
HW_PSTATE_INVALID = 0xff,
HW_PSTATE_0 = 0,
@@ -55,7 +54,6 @@ struct powernow_k8_data {
struct cpumask *available_cores;
};
-
/* processor's cpuid instruction support */
#define CPUID_PROCESSOR_SIGNATURE 1 /* function 1 */
#define CPUID_XFAM 0x0ff00000 /* extended family */
--
1.7.0
* [PATCH 3/6] x86: Unify APERF/MPERF support
2010-03-31 19:56 [-v3 PATCH 0/6] powernow-k8: Core Performance Boost and effective frequency support Borislav Petkov
2010-03-31 19:56 ` [PATCH 1/6] x86, cpu: Add AMD core boosting feature flag to /proc/cpuinfo Borislav Petkov
2010-03-31 19:56 ` [PATCH 2/6] powernow-k8: Add core performance boost support Borislav Petkov
@ 2010-03-31 19:56 ` Borislav Petkov
2010-04-09 22:19 ` [tip:x86/cpu] " tip-bot for Borislav Petkov
2010-05-03 23:21 ` [tip:x86/cpu] x86, cpu: Make APERF/MPERF a normal table-driven flag tip-bot for H. Peter Anvin
2010-03-31 19:56 ` [PATCH 4/6] cpufreq: Add APERF/MPERF support for AMD processors Borislav Petkov
` (2 subsequent siblings)
From: Borislav Petkov @ 2010-03-31 19:56 UTC (permalink / raw)
To: hpa, mingo, tglx, akpm; +Cc: davej, trenn, linux, cpufreq, linux-kernel, x86
From: Borislav Petkov <borislav.petkov@amd.com>
Initialize this CPUID flag feature in common code. It could be made a
standalone function later, maybe, if more functionality is duplicated.
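Should that happen, such a helper would presumably end up looking
something like the sketch below (hypothetical, not part of this patch;
init_aperfmperf() is a made-up name, living in the same
addon_cpuid_features.c context that this patch touches):

/* illustration only: possible standalone form of the new check */
static void __cpuinit init_aperfmperf(struct cpuinfo_x86 *c)
{
        /* CPUID leaf 6, ECX bit 0: effective frequency interface */
        if (c->cpuid_level >= 6 && (cpuid_ecx(6) & 0x01))
                set_cpu_cap(c, X86_FEATURE_APERFMPERF);
}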
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Reviewed-by: Thomas Renninger <trenn@suse.de>
---
arch/x86/kernel/cpu/addon_cpuid_features.c | 8 ++++++++
arch/x86/kernel/cpu/intel.c | 6 ------
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kernel/cpu/addon_cpuid_features.c b/arch/x86/kernel/cpu/addon_cpuid_features.c
index ead2a1c..fd1fc19 100644
--- a/arch/x86/kernel/cpu/addon_cpuid_features.c
+++ b/arch/x86/kernel/cpu/addon_cpuid_features.c
@@ -54,6 +54,14 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c)
if (regs[cb->reg] & (1 << cb->bit))
set_cpu_cap(c, cb->feature);
}
+
+ /*
+ * common AMD/Intel features
+ */
+ if (c->cpuid_level >= 6) {
+ if (cpuid_ecx(6) & 0x1)
+ set_cpu_cap(c, X86_FEATURE_APERFMPERF);
+ }
}
/* leaf 0xb SMT level */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 7e1cca1..3830258 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -352,12 +352,6 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON);
}
- if (c->cpuid_level > 6) {
- unsigned ecx = cpuid_ecx(6);
- if (ecx & 0x01)
- set_cpu_cap(c, X86_FEATURE_APERFMPERF);
- }
-
if (cpu_has_xmm2)
set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
if (cpu_has_ds) {
--
1.7.0
* [PATCH 4/6] cpufreq: Add APERF/MPERF support for AMD processors
2010-03-31 19:56 [-v3 PATCH 0/6] powernow-k8: Core Performance Boost and effective frequency support Borislav Petkov
` (2 preceding siblings ...)
2010-03-31 19:56 ` [PATCH 3/6] x86: Unify APERF/MPERF support Borislav Petkov
@ 2010-03-31 19:56 ` Borislav Petkov
2010-04-01 9:01 ` Marvin
2010-04-01 14:19 ` [-v3.1 PATCH " Borislav Petkov
2010-03-31 19:56 ` [PATCH 5/6] powernow-k8: Fix frequency reporting Borislav Petkov
2010-03-31 19:56 ` [PATCH 6/6] cpufreq: Unify sysfs attribute definition macros Borislav Petkov
From: Borislav Petkov @ 2010-03-31 19:56 UTC (permalink / raw)
To: hpa, mingo, tglx, akpm; +Cc: davej, trenn, linux, cpufreq, linux-kernel, x86
From: Mark Langsdorf <mark.langsdorf@amd.com>
Starting with model 10 of Family 0x10, AMD processors may have
support for APERF/MPERF. Add support for identifying it and using
it within cpufreq. Move the APERF/MPERF functions out of the
acpi-cpufreq code and into their own file so they can easily be
shared.
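For reference, the shared code boils down to scaling the maximum
frequency by the ratio of the two counter deltas (the in-kernel version
does this with a fixed-point shift, APERFMPERF_SHIFT). A simplified,
standalone illustration of the arithmetic, with made-up sample deltas
and a made-up helper name:

#include <stdio.h>

/* effective (C0) frequency = max_freq * delta(APERF) / delta(MPERF) */
static unsigned int effective_khz(unsigned long long aperf_delta,
                                  unsigned long long mperf_delta,
                                  unsigned int max_khz)
{
        if (!mperf_delta)
                return 0;

        return (unsigned int)((aperf_delta * max_khz) / mperf_delta);
}

int main(void)
{
        /* e.g. a 3.2 GHz part averaging half speed in C0: 1600000 kHz */
        printf("%u kHz\n", effective_khz(800000, 1600000, 3200000));
        return 0;
}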
Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Reviewed-by: Thomas Renninger <trenn@suse.de>
---
arch/x86/kernel/cpu/cpufreq/Makefile | 4 +-
arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c | 44 +-----------------------
arch/x86/kernel/cpu/cpufreq/mperf.c | 50 ++++++++++++++++++++++++++++
arch/x86/kernel/cpu/cpufreq/mperf.h | 9 +++++
arch/x86/kernel/cpu/cpufreq/powernow-k8.c | 8 ++++
5 files changed, 71 insertions(+), 44 deletions(-)
create mode 100644 arch/x86/kernel/cpu/cpufreq/mperf.c
create mode 100644 arch/x86/kernel/cpu/cpufreq/mperf.h
diff --git a/arch/x86/kernel/cpu/cpufreq/Makefile b/arch/x86/kernel/cpu/cpufreq/Makefile
index 1840c0a..bd54bf6 100644
--- a/arch/x86/kernel/cpu/cpufreq/Makefile
+++ b/arch/x86/kernel/cpu/cpufreq/Makefile
@@ -2,8 +2,8 @@
# K8 systems. ACPI is preferred to all other hardware-specific drivers.
# speedstep-* is preferred over p4-clockmod.
-obj-$(CONFIG_X86_POWERNOW_K8) += powernow-k8.o
-obj-$(CONFIG_X86_ACPI_CPUFREQ) += acpi-cpufreq.o
+obj-$(CONFIG_X86_POWERNOW_K8) += powernow-k8.o mperf.o
+obj-$(CONFIG_X86_ACPI_CPUFREQ) += acpi-cpufreq.o mperf.o
obj-$(CONFIG_X86_PCC_CPUFREQ) += pcc-cpufreq.o
obj-$(CONFIG_X86_POWERNOW_K6) += powernow-k6.o
obj-$(CONFIG_X86_POWERNOW_K7) += powernow-k7.o
diff --git a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
index 1b1920f..dc68e5c 100644
--- a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
+++ b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
@@ -45,6 +45,7 @@
#include <asm/msr.h>
#include <asm/processor.h>
#include <asm/cpufeature.h>
+#include "mperf.h"
#define dprintk(msg...) cpufreq_debug_printk(CPUFREQ_DEBUG_DRIVER, \
"acpi-cpufreq", msg)
@@ -70,8 +71,6 @@ struct acpi_cpufreq_data {
static DEFINE_PER_CPU(struct acpi_cpufreq_data *, acfreq_data);
-static DEFINE_PER_CPU(struct aperfmperf, acfreq_old_perf);
-
/* acpi_perf_data is a pointer to percpu data. */
static struct acpi_processor_performance *acpi_perf_data;
@@ -239,45 +238,6 @@ static u32 get_cur_val(const struct cpumask *mask)
return cmd.val;
}
-/* Called via smp_call_function_single(), on the target CPU */
-static void read_measured_perf_ctrs(void *_cur)
-{
- struct aperfmperf *am = _cur;
-
- get_aperfmperf(am);
-}
-
-/*
- * Return the measured active (C0) frequency on this CPU since last call
- * to this function.
- * Input: cpu number
- * Return: Average CPU frequency in terms of max frequency (zero on error)
- *
- * We use IA32_MPERF and IA32_APERF MSRs to get the measured performance
- * over a period of time, while CPU is in C0 state.
- * IA32_MPERF counts at the rate of max advertised frequency
- * IA32_APERF counts at the rate of actual CPU frequency
- * Only IA32_APERF/IA32_MPERF ratio is architecturally defined and
- * no meaning should be associated with absolute values of these MSRs.
- */
-static unsigned int get_measured_perf(struct cpufreq_policy *policy,
- unsigned int cpu)
-{
- struct aperfmperf perf;
- unsigned long ratio;
- unsigned int retval;
-
- if (smp_call_function_single(cpu, read_measured_perf_ctrs, &perf, 1))
- return 0;
-
- ratio = calc_aperfmperf_ratio(&per_cpu(acfreq_old_perf, cpu), &perf);
- per_cpu(acfreq_old_perf, cpu) = perf;
-
- retval = (policy->cpuinfo.max_freq * ratio) >> APERFMPERF_SHIFT;
-
- return retval;
-}
-
static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
{
struct acpi_cpufreq_data *data = per_cpu(acfreq_data, cpu);
@@ -701,7 +661,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
/* Check for APERF/MPERF support in hardware */
if (cpu_has(c, X86_FEATURE_APERFMPERF))
- acpi_cpufreq_driver.getavg = get_measured_perf;
+ acpi_cpufreq_driver.getavg = cpufreq_get_measured_perf;
dprintk("CPU%u - ACPI performance management activated.\n", cpu);
for (i = 0; i < perf->state_count; i++)
diff --git a/arch/x86/kernel/cpu/cpufreq/mperf.c b/arch/x86/kernel/cpu/cpufreq/mperf.c
new file mode 100644
index 0000000..bc5a5df
--- /dev/null
+++ b/arch/x86/kernel/cpu/cpufreq/mperf.c
@@ -0,0 +1,50 @@
+#include <linux/kernel.h>
+#include <linux/smp.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/cpufreq.h>
+#include <linux/slab.h>
+
+#include "mperf.h"
+
+static DEFINE_PER_CPU(struct aperfmperf, acfreq_old_perf);
+
+/* Called via smp_call_function_single(), on the target CPU */
+static void read_measured_perf_ctrs(void *_cur)
+{
+ struct aperfmperf *am = _cur;
+
+ get_aperfmperf(am);
+}
+
+/*
+ * Return the measured active (C0) frequency on this CPU since last call
+ * to this function.
+ * Input: cpu number
+ * Return: Average CPU frequency in terms of max frequency (zero on error)
+ *
+ * We use IA32_MPERF and IA32_APERF MSRs to get the measured performance
+ * over a period of time, while CPU is in C0 state.
+ * IA32_MPERF counts at the rate of max advertised frequency
+ * IA32_APERF counts at the rate of actual CPU frequency
+ * Only IA32_APERF/IA32_MPERF ratio is architecturally defined and
+ * no meaning should be associated with absolute values of these MSRs.
+ */
+unsigned int cpufreq_get_measured_perf(struct cpufreq_policy *policy,
+ unsigned int cpu)
+{
+ struct aperfmperf perf;
+ unsigned long ratio;
+ unsigned int retval;
+
+ if (smp_call_function_single(cpu, read_measured_perf_ctrs, &perf, 1))
+ return 0;
+
+ ratio = calc_aperfmperf_ratio(&per_cpu(acfreq_old_perf, cpu), &perf);
+ per_cpu(acfreq_old_perf, cpu) = perf;
+
+ retval = (policy->cpuinfo.max_freq * ratio) >> APERFMPERF_SHIFT;
+
+ return retval;
+}
+EXPORT_SYMBOL_GPL(cpufreq_get_measured_perf);
diff --git a/arch/x86/kernel/cpu/cpufreq/mperf.h b/arch/x86/kernel/cpu/cpufreq/mperf.h
new file mode 100644
index 0000000..5dbf295
--- /dev/null
+++ b/arch/x86/kernel/cpu/cpufreq/mperf.h
@@ -0,0 +1,9 @@
+/*
+ * (c) 2010 Advanced Micro Devices, Inc.
+ * Your use of this code is subject to the terms and conditions of the
+ * GNU general public license version 2. See "COPYING" or
+ * http://www.gnu.org/licenses/gpl.html
+ */
+
+unsigned int cpufreq_get_measured_perf(struct cpufreq_policy *policy,
+ unsigned int cpu);
diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
index 74ca34b..52fce63 100644
--- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -45,6 +45,7 @@
#define PFX "powernow-k8: "
#define VERSION "version 2.20.00"
#include "powernow-k8.h"
+#include "mperf.h"
/* serialize freq changes */
static DEFINE_MUTEX(fidvid_mutex);
@@ -57,6 +58,8 @@ static int cpu_family = CPU_OPTERON;
static bool cpb_capable, cpb_enabled;
static struct msr __percpu *msrs;
+static struct cpufreq_driver cpufreq_amd64_driver;
+
#ifndef CONFIG_SMP
static inline const struct cpumask *cpu_core_mask(int cpu)
{
@@ -1251,6 +1254,7 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
struct powernow_k8_data *data;
struct init_on_cpu init_on_cpu;
int rc;
+ struct cpuinfo_x86 *c = &cpu_data(pol->cpu);
if (!cpu_online(pol->cpu))
return -ENODEV;
@@ -1325,6 +1329,10 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
return -EINVAL;
}
+ /* Check for APERF/MPERF support in hardware */
+ if (cpu_has(c, X86_FEATURE_APERFMPERF))
+ cpufreq_amd64_driver.getavg = cpufreq_get_measured_perf;
+
cpufreq_frequency_table_get_attr(data->powernow_table, pol->cpu);
if (cpu_family == CPU_HW_PSTATE)
--
1.7.0
* [PATCH 5/6] powernow-k8: Fix frequency reporting
2010-03-31 19:56 [-v3 PATCH 0/6] powernow-k8: Core Performance Boost and effective frequency support Borislav Petkov
` (3 preceding siblings ...)
2010-03-31 19:56 ` [PATCH 4/6] cpufreq: Add APERF/MPERF support for AMD processors Borislav Petkov
@ 2010-03-31 19:56 ` Borislav Petkov
2010-04-09 22:19 ` [tip:x86/cpu] " tip-bot for Mark Langsdorf
2010-05-03 13:09 ` [tip:x86/urgent] " tip-bot for Mark Langsdorf
2010-03-31 19:56 ` [PATCH 6/6] cpufreq: Unify sysfs attribute definition macros Borislav Petkov
From: Borislav Petkov @ 2010-03-31 19:56 UTC (permalink / raw)
To: hpa, mingo, tglx, akpm; +Cc: davej, trenn, linux, cpufreq, linux-kernel, x86
From: Mark Langsdorf <mark.langsdorf@amd.com>
With F10, model 10, all valid frequencies are in the ACPI _PST table.
Cc: <stable@kernel.org> # 33.x 32.x
Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Reviewed-by: Thomas Renninger <trenn@suse.de>
---
arch/x86/kernel/cpu/cpufreq/powernow-k8.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
index 52fce63..6f3dc8f 100644
--- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -935,7 +935,8 @@ static int fill_powernow_table_pstate(struct powernow_k8_data *data,
powernow_table[i].index = index;
/* Frequency may be rounded for these */
- if (boot_cpu_data.x86 == 0x10 || boot_cpu_data.x86 == 0x11) {
+ if ((boot_cpu_data.x86 == 0x10 && boot_cpu_data.x86_model < 10)
+ || boot_cpu_data.x86 == 0x11) {
powernow_table[i].frequency =
freq_from_fid_did(lo & 0x3f, (lo >> 6) & 7);
} else
--
1.7.0
* [PATCH 6/6] cpufreq: Unify sysfs attribute definition macros
2010-03-31 19:56 [-v3 PATCH 0/6] powernow-k8: Core Performance Boost and effective frequency support Borislav Petkov
` (4 preceding siblings ...)
2010-03-31 19:56 ` [PATCH 5/6] powernow-k8: Fix frequency reporting Borislav Petkov
@ 2010-03-31 19:56 ` Borislav Petkov
2010-04-09 22:20 ` [tip:x86/cpu] " tip-bot for Borislav Petkov
From: Borislav Petkov @ 2010-03-31 19:56 UTC (permalink / raw)
To: hpa, mingo, tglx, akpm; +Cc: davej, trenn, linux, cpufreq, linux-kernel, x86
From: Borislav Petkov <borislav.petkov@amd.com>
Multiple modules used to define these sysfs attribute macros with
identical functionality; they were needlessly replicated among the
different cpufreq drivers. Push them into the header and remove the
duplication.
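For illustration, a driver that wants a new read-write attribute then
only needs its show/store callbacks plus a single macro invocation;
"foo" below is a made-up attribute name in a hypothetical cpufreq
driver:

static ssize_t show_foo(struct cpufreq_policy *policy, char *buf)
{
        return sprintf(buf, "%u\n", policy->cpuinfo.max_freq);
}

static ssize_t store_foo(struct cpufreq_policy *policy, const char *buf,
                         size_t count)
{
        /* parse and apply the value here */
        return count;
}

/* expands to: static struct freq_attr foo = __ATTR(foo, 0644, show_foo, store_foo) */
cpufreq_freq_attr_rw(foo);

static struct freq_attr *foo_driver_attrs[] = {
        &foo,
        NULL,
};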
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Reviewed-by: Thomas Renninger <trenn@suse.de>
---
drivers/cpufreq/cpufreq.c | 40 +++++++++-----------------
drivers/cpufreq/cpufreq_conservative.c | 48 ++++++++++---------------------
drivers/cpufreq/cpufreq_ondemand.c | 40 ++++++++------------------
include/linux/cpufreq.h | 30 ++++++++++++++++++++
4 files changed, 72 insertions(+), 86 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 2d5d575..e02e417 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -662,32 +662,20 @@ static ssize_t show_bios_limit(struct cpufreq_policy *policy, char *buf)
return sprintf(buf, "%u\n", policy->cpuinfo.max_freq);
}
-#define define_one_ro(_name) \
-static struct freq_attr _name = \
-__ATTR(_name, 0444, show_##_name, NULL)
-
-#define define_one_ro0400(_name) \
-static struct freq_attr _name = \
-__ATTR(_name, 0400, show_##_name, NULL)
-
-#define define_one_rw(_name) \
-static struct freq_attr _name = \
-__ATTR(_name, 0644, show_##_name, store_##_name)
-
-define_one_ro0400(cpuinfo_cur_freq);
-define_one_ro(cpuinfo_min_freq);
-define_one_ro(cpuinfo_max_freq);
-define_one_ro(cpuinfo_transition_latency);
-define_one_ro(scaling_available_governors);
-define_one_ro(scaling_driver);
-define_one_ro(scaling_cur_freq);
-define_one_ro(bios_limit);
-define_one_ro(related_cpus);
-define_one_ro(affected_cpus);
-define_one_rw(scaling_min_freq);
-define_one_rw(scaling_max_freq);
-define_one_rw(scaling_governor);
-define_one_rw(scaling_setspeed);
+cpufreq_freq_attr_ro_perm(cpuinfo_cur_freq, 0400);
+cpufreq_freq_attr_ro(cpuinfo_min_freq);
+cpufreq_freq_attr_ro(cpuinfo_max_freq);
+cpufreq_freq_attr_ro(cpuinfo_transition_latency);
+cpufreq_freq_attr_ro(scaling_available_governors);
+cpufreq_freq_attr_ro(scaling_driver);
+cpufreq_freq_attr_ro(scaling_cur_freq);
+cpufreq_freq_attr_ro(bios_limit);
+cpufreq_freq_attr_ro(related_cpus);
+cpufreq_freq_attr_ro(affected_cpus);
+cpufreq_freq_attr_rw(scaling_min_freq);
+cpufreq_freq_attr_rw(scaling_max_freq);
+cpufreq_freq_attr_rw(scaling_governor);
+cpufreq_freq_attr_rw(scaling_setspeed);
static struct attribute *default_attrs[] = {
&cpuinfo_min_freq.attr,
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index 599a40b..ce5248e 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -178,12 +178,8 @@ static ssize_t show_sampling_rate_min(struct kobject *kobj,
return sprintf(buf, "%u\n", min_sampling_rate);
}
-#define define_one_ro(_name) \
-static struct global_attr _name = \
-__ATTR(_name, 0444, show_##_name, NULL)
-
-define_one_ro(sampling_rate_max);
-define_one_ro(sampling_rate_min);
+define_one_global_ro(sampling_rate_max);
+define_one_global_ro(sampling_rate_min);
/* cpufreq_conservative Governor Tunables */
#define show_one(file_name, object) \
@@ -221,12 +217,8 @@ show_one_old(freq_step);
show_one_old(sampling_rate_min);
show_one_old(sampling_rate_max);
-#define define_one_ro_old(object, _name) \
-static struct freq_attr object = \
-__ATTR(_name, 0444, show_##_name##_old, NULL)
-
-define_one_ro_old(sampling_rate_min_old, sampling_rate_min);
-define_one_ro_old(sampling_rate_max_old, sampling_rate_max);
+cpufreq_freq_attr_ro_old(sampling_rate_min);
+cpufreq_freq_attr_ro_old(sampling_rate_max);
/*** delete after deprecation time ***/
@@ -364,16 +356,12 @@ static ssize_t store_freq_step(struct kobject *a, struct attribute *b,
return count;
}
-#define define_one_rw(_name) \
-static struct global_attr _name = \
-__ATTR(_name, 0644, show_##_name, store_##_name)
-
-define_one_rw(sampling_rate);
-define_one_rw(sampling_down_factor);
-define_one_rw(up_threshold);
-define_one_rw(down_threshold);
-define_one_rw(ignore_nice_load);
-define_one_rw(freq_step);
+define_one_global_rw(sampling_rate);
+define_one_global_rw(sampling_down_factor);
+define_one_global_rw(up_threshold);
+define_one_global_rw(down_threshold);
+define_one_global_rw(ignore_nice_load);
+define_one_global_rw(freq_step);
static struct attribute *dbs_attributes[] = {
&sampling_rate_max.attr,
@@ -409,16 +397,12 @@ write_one_old(down_threshold);
write_one_old(ignore_nice_load);
write_one_old(freq_step);
-#define define_one_rw_old(object, _name) \
-static struct freq_attr object = \
-__ATTR(_name, 0644, show_##_name##_old, store_##_name##_old)
-
-define_one_rw_old(sampling_rate_old, sampling_rate);
-define_one_rw_old(sampling_down_factor_old, sampling_down_factor);
-define_one_rw_old(up_threshold_old, up_threshold);
-define_one_rw_old(down_threshold_old, down_threshold);
-define_one_rw_old(ignore_nice_load_old, ignore_nice_load);
-define_one_rw_old(freq_step_old, freq_step);
+cpufreq_freq_attr_rw_old(sampling_rate);
+cpufreq_freq_attr_rw_old(sampling_down_factor);
+cpufreq_freq_attr_rw_old(up_threshold);
+cpufreq_freq_attr_rw_old(down_threshold);
+cpufreq_freq_attr_rw_old(ignore_nice_load);
+cpufreq_freq_attr_rw_old(freq_step);
static struct attribute *dbs_attributes_old[] = {
&sampling_rate_max_old.attr,
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index bd444dc..c00b25f 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -234,12 +234,8 @@ static ssize_t show_sampling_rate_min(struct kobject *kobj,
return sprintf(buf, "%u\n", min_sampling_rate);
}
-#define define_one_ro(_name) \
-static struct global_attr _name = \
-__ATTR(_name, 0444, show_##_name, NULL)
-
-define_one_ro(sampling_rate_max);
-define_one_ro(sampling_rate_min);
+define_one_global_ro(sampling_rate_max);
+define_one_global_ro(sampling_rate_min);
/* cpufreq_ondemand Governor Tunables */
#define show_one(file_name, object) \
@@ -274,12 +270,8 @@ show_one_old(powersave_bias);
show_one_old(sampling_rate_min);
show_one_old(sampling_rate_max);
-#define define_one_ro_old(object, _name) \
-static struct freq_attr object = \
-__ATTR(_name, 0444, show_##_name##_old, NULL)
-
-define_one_ro_old(sampling_rate_min_old, sampling_rate_min);
-define_one_ro_old(sampling_rate_max_old, sampling_rate_max);
+cpufreq_freq_attr_ro_old(sampling_rate_min);
+cpufreq_freq_attr_ro_old(sampling_rate_max);
/*** delete after deprecation time ***/
@@ -376,14 +368,10 @@ static ssize_t store_powersave_bias(struct kobject *a, struct attribute *b,
return count;
}
-#define define_one_rw(_name) \
-static struct global_attr _name = \
-__ATTR(_name, 0644, show_##_name, store_##_name)
-
-define_one_rw(sampling_rate);
-define_one_rw(up_threshold);
-define_one_rw(ignore_nice_load);
-define_one_rw(powersave_bias);
+define_one_global_rw(sampling_rate);
+define_one_global_rw(up_threshold);
+define_one_global_rw(ignore_nice_load);
+define_one_global_rw(powersave_bias);
static struct attribute *dbs_attributes[] = {
&sampling_rate_max.attr,
@@ -415,14 +403,10 @@ write_one_old(up_threshold);
write_one_old(ignore_nice_load);
write_one_old(powersave_bias);
-#define define_one_rw_old(object, _name) \
-static struct freq_attr object = \
-__ATTR(_name, 0644, show_##_name##_old, store_##_name##_old)
-
-define_one_rw_old(sampling_rate_old, sampling_rate);
-define_one_rw_old(up_threshold_old, up_threshold);
-define_one_rw_old(ignore_nice_load_old, ignore_nice_load);
-define_one_rw_old(powersave_bias_old, powersave_bias);
+cpufreq_freq_attr_rw_old(sampling_rate);
+cpufreq_freq_attr_rw_old(up_threshold);
+cpufreq_freq_attr_rw_old(ignore_nice_load);
+cpufreq_freq_attr_rw_old(powersave_bias);
static struct attribute *dbs_attributes_old[] = {
&sampling_rate_max_old.attr,
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
index 4de02b1..9f15150 100644
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -278,6 +278,27 @@ struct freq_attr {
ssize_t (*store)(struct cpufreq_policy *, const char *, size_t count);
};
+#define cpufreq_freq_attr_ro(_name) \
+static struct freq_attr _name = \
+__ATTR(_name, 0444, show_##_name, NULL)
+
+#define cpufreq_freq_attr_ro_perm(_name, _perm) \
+static struct freq_attr _name = \
+__ATTR(_name, _perm, show_##_name, NULL)
+
+#define cpufreq_freq_attr_ro_old(_name) \
+static struct freq_attr _name##_old = \
+__ATTR(_name, 0444, show_##_name##_old, NULL)
+
+#define cpufreq_freq_attr_rw(_name) \
+static struct freq_attr _name = \
+__ATTR(_name, 0644, show_##_name, store_##_name)
+
+#define cpufreq_freq_attr_rw_old(_name) \
+static struct freq_attr _name##_old = \
+__ATTR(_name, 0644, show_##_name##_old, store_##_name##_old)
+
+
struct global_attr {
struct attribute attr;
ssize_t (*show)(struct kobject *kobj,
@@ -286,6 +307,15 @@ struct global_attr {
const char *c, size_t count);
};
+#define define_one_global_ro(_name) \
+static struct global_attr _name = \
+__ATTR(_name, 0444, show_##_name, NULL)
+
+#define define_one_global_rw(_name) \
+static struct global_attr _name = \
+__ATTR(_name, 0644, show_##_name, store_##_name)
+
+
/*********************************************************************
* CPUFREQ 2.6. INTERFACE *
*********************************************************************/
--
1.7.0
* Re: [PATCH 4/6] cpufreq: Add APERF/MPERF support for AMD processors
2010-03-31 19:56 ` [PATCH 4/6] cpufreq: Add APERF/MPERF support for AMD processors Borislav Petkov
@ 2010-04-01 9:01 ` Marvin
2010-04-01 10:36 ` Borislav Petkov
2010-04-01 14:19 ` [-v3.1 PATCH " Borislav Petkov
From: Marvin @ 2010-04-01 9:01 UTC (permalink / raw)
To: Borislav Petkov; +Cc: linux-kernel
Hi,
> From: Mark Langsdorf <mark.langsdorf@amd.com>
>
> Starting with model 10 of Family 0x10, AMD processors may have
> support for APERF/MPERF. Add support for identifying it and using
> it within cpufreq. Move the APERF/MPERF functions out of the
> acpi-cpufreq code and into their own file so they can easily be
> shared.
just out of interest: what are the real names of these processors (>= Mangy-Core)?
> Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
> Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
> Reviewed-by: Thomas Renninger <trenn@suse.de>
> ---
> arch/x86/kernel/cpu/cpufreq/Makefile | 4 +-
> arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c | 44 +-----------------------
> arch/x86/kernel/cpu/cpufreq/mperf.c | 50 ++++++++++++++++++++++++++++
> arch/x86/kernel/cpu/cpufreq/mperf.h | 9 +++++
> arch/x86/kernel/cpu/cpufreq/powernow-k8.c | 8 ++++
> 5 files changed, 71 insertions(+), 44 deletions(-)
> create mode 100644 arch/x86/kernel/cpu/cpufreq/mperf.c
this file has no copyright or module license, thus tainting my kernel...
> create mode 100644 arch/x86/kernel/cpu/cpufreq/mperf.h
>
Marvin
* Re: [PATCH 4/6] cpufreq: Add APERF/MPERF support for AMD processors
2010-04-01 9:01 ` Marvin
@ 2010-04-01 10:36 ` Borislav Petkov
2010-04-01 11:46 ` Marvin
From: Borislav Petkov @ 2010-04-01 10:36 UTC (permalink / raw)
To: Marvin; +Cc: linux-kernel
From: Marvin <marvin24@gmx.de>
Date: Thu, Apr 01, 2010 at 11:01:04AM +0200
> > From: Mark Langsdorf <mark.langsdorf@amd.com>
> >
> > Starting with model 10 of Family 0x10, AMD processors may have
> > support for APERF/MPERF. Add support for identifying it and using
> > it within cpufreq. Move the APERF/MPERF functions out of the
> > acpi-cpufreq code and into their own file so they can easily be
> > shared.
>
> just out of interest: what are the realnames of these processors (>= Mangy-Core) ?
Well, you're going to have to wait and see. Or search the net :)
> > Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
> > Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
> > Reviewed-by: Thomas Renninger <trenn@suse.de>
> > ---
> > arch/x86/kernel/cpu/cpufreq/Makefile | 4 +-
> > arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c | 44 +-----------------------
> > arch/x86/kernel/cpu/cpufreq/mperf.c | 50 ++++++++++++++++++++++++++++
> > arch/x86/kernel/cpu/cpufreq/mperf.h | 9 +++++
> > arch/x86/kernel/cpu/cpufreq/powernow-k8.c | 8 ++++
> > 5 files changed, 71 insertions(+), 44 deletions(-)
> > create mode 100644 arch/x86/kernel/cpu/cpufreq/mperf.c
>
> this file has no copyright or module license, thus tainting my kernel...
This is not a standalone module but only an auxiliary. How does that
taint your kernel? dmesg, .config please.
--
Regards/Gruss,
Boris.
--
Advanced Micro Devices, Inc.
Operating Systems Research Center
* Re: [PATCH 4/6] cpufreq: Add APERF/MPERF support for AMD processors
2010-04-01 10:36 ` Borislav Petkov
@ 2010-04-01 11:46 ` Marvin
2010-04-01 14:00 ` Borislav Petkov
From: Marvin @ 2010-04-01 11:46 UTC (permalink / raw)
To: Borislav Petkov; +Cc: linux-kernel
> From: Marvin <marvin24@gmx.de>
> Date: Thu, Apr 01, 2010 at 11:01:04AM +0200
>
> > > From: Mark Langsdorf <mark.langsdorf@amd.com>
> > >
> > > Starting with model 10 of Family 0x10, AMD processors may have
> > > support for APERF/MPERF. Add support for identifying it and using
> > > it within cpufreq. Move the APERF/MPERF functions out of the
> > > acpi-cpufreq code and into their own file so they can easily be
> > > shared.
> >
> > just out of interest: what are the realnames of these processors (>=
> > Mangy-Core) ?
>
> Well, you're going to have to wait and see. Or search the net :)
ok - I take this as "not yet".
>
> > > Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
> > > Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
> > > Reviewed-by: Thomas Renninger <trenn@suse.de>
> > > ---
> > >
> > > arch/x86/kernel/cpu/cpufreq/Makefile | 4 +-
> > > arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c | 44 +-----------------------
> > > arch/x86/kernel/cpu/cpufreq/mperf.c | 50 ++++++++++++++++++++++++++++
> > > arch/x86/kernel/cpu/cpufreq/mperf.h | 9 +++++
> > > arch/x86/kernel/cpu/cpufreq/powernow-k8.c | 8 ++++
> > > 5 files changed, 71 insertions(+), 44 deletions(-)
> > > create mode 100644 arch/x86/kernel/cpu/cpufreq/mperf.c
> >
> > this file has no copyright or module license, thus tainting my kernel...
>
> This is not a standalone module but only an auxiliary. How does that
> taint your kernel? dmesg, .config please.
I think compiling powernow_k8 as a module is sufficient.
# lsmod | grep powernow
powernow_k8 16595 1
mperf 1379 1 powernow_k8
processor 39129 1 powernow_k8
# dmesg | grep -B5 powernow
[ 11.676876] ip6_tables: (C) 2000-2006 Netfilter Core Team
[ 12.005031] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 12.051995] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[ 12.096556] mperf: module license 'unspecified' taints kernel.
[ 12.097906] Disabling lock debugging due to kernel taint
[ 12.117403] powernow-k8: Found 1 AMD Phenom(tm) II X4 B55 Processor (4 cpu cores) (version 2.20.00)
[ 12.118714] powernow-k8: 0 : pstate 0 (3200 MHz)
[ 12.120063] powernow-k8: 1 : pstate 1 (2500 MHz)
[ 12.121389] powernow-k8: 2 : pstate 2 (2100 MHz)
[ 12.122713] powernow-k8: 3 : pstate 3 (800 MHz)
Marvin
* Re: [PATCH 4/6] cpufreq: Add APERF/MPERF support for AMD processors
2010-04-01 11:46 ` Marvin
@ 2010-04-01 14:00 ` Borislav Petkov
From: Borislav Petkov @ 2010-04-01 14:00 UTC (permalink / raw)
To: Marvin; +Cc: linux-kernel
From: Marvin <marvin24@gmx.de>
Date: Thu, Apr 01, 2010 at 01:46:10PM +0200
Yep, I could reproduce it, thanks for reporting.
> I think compiling powernow_k8 as a module is sufficient.
Almost: you also need acpi-cpufreq (CONFIG_X86_ACPI_CPUFREQ) built as a
module so that mperf.ko gets compiled as a module too, since it is
shared between the two. Otherwise it is statically linked into
whichever of the two (acpi-cpufreq or powernow-k8) is built-in.
Fix follows.
--
Regards/Gruss,
Boris.
--
Advanced Micro Devices, Inc.
Operating Systems Research Center
* [-v3.1 PATCH 4/6] cpufreq: Add APERF/MPERF support for AMD processors
2010-03-31 19:56 ` [PATCH 4/6] cpufreq: Add APERF/MPERF support for AMD processors Borislav Petkov
2010-04-01 9:01 ` Marvin
@ 2010-04-01 14:19 ` Borislav Petkov
2010-04-09 22:19 ` [tip:x86/cpu] x86, " tip-bot for Mark Langsdorf
From: Borislav Petkov @ 2010-04-01 14:19 UTC (permalink / raw)
To: hpa, mingo, tglx, akpm; +Cc: x86, linux-kernel, cpufreq, linux, davej, trenn
Hi,
here's a refreshed version with the module license taint fixed for the
case when the two users of mperf.c (acpi-cpufreq.c and powernow-k8.c)
are compiled as modules.
--
From: Mark Langsdorf <mark.langsdorf@amd.com>
Date: Thu, 18 Mar 2010 18:41:46 +0100
Subject: [PATCH 4/6] cpufreq: Add APERF/MPERF support for AMD processors
Starting with model 10 of Family 0x10, AMD processors may have
support for APERF/MPERF. Add support for identifying it and using
it within cpufreq. Move the APERF/MPERF functions out of the
acpi-cpufreq code and into their own file so they can easily be
shared.
Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Reviewed-by: Thomas Renninger <trenn@suse.de>
---
arch/x86/kernel/cpu/cpufreq/Makefile | 4 +-
arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c | 44 +-----------------------
arch/x86/kernel/cpu/cpufreq/mperf.c | 51 ++++++++++++++++++++++++++++
arch/x86/kernel/cpu/cpufreq/mperf.h | 9 +++++
arch/x86/kernel/cpu/cpufreq/powernow-k8.c | 8 ++++
5 files changed, 72 insertions(+), 44 deletions(-)
create mode 100644 arch/x86/kernel/cpu/cpufreq/mperf.c
create mode 100644 arch/x86/kernel/cpu/cpufreq/mperf.h
diff --git a/arch/x86/kernel/cpu/cpufreq/Makefile b/arch/x86/kernel/cpu/cpufreq/Makefile
index 1840c0a..bd54bf6 100644
--- a/arch/x86/kernel/cpu/cpufreq/Makefile
+++ b/arch/x86/kernel/cpu/cpufreq/Makefile
@@ -2,8 +2,8 @@
# K8 systems. ACPI is preferred to all other hardware-specific drivers.
# speedstep-* is preferred over p4-clockmod.
-obj-$(CONFIG_X86_POWERNOW_K8) += powernow-k8.o
-obj-$(CONFIG_X86_ACPI_CPUFREQ) += acpi-cpufreq.o
+obj-$(CONFIG_X86_POWERNOW_K8) += powernow-k8.o mperf.o
+obj-$(CONFIG_X86_ACPI_CPUFREQ) += acpi-cpufreq.o mperf.o
obj-$(CONFIG_X86_PCC_CPUFREQ) += pcc-cpufreq.o
obj-$(CONFIG_X86_POWERNOW_K6) += powernow-k6.o
obj-$(CONFIG_X86_POWERNOW_K7) += powernow-k7.o
diff --git a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
index 1b1920f..dc68e5c 100644
--- a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
+++ b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
@@ -45,6 +45,7 @@
#include <asm/msr.h>
#include <asm/processor.h>
#include <asm/cpufeature.h>
+#include "mperf.h"
#define dprintk(msg...) cpufreq_debug_printk(CPUFREQ_DEBUG_DRIVER, \
"acpi-cpufreq", msg)
@@ -70,8 +71,6 @@ struct acpi_cpufreq_data {
static DEFINE_PER_CPU(struct acpi_cpufreq_data *, acfreq_data);
-static DEFINE_PER_CPU(struct aperfmperf, acfreq_old_perf);
-
/* acpi_perf_data is a pointer to percpu data. */
static struct acpi_processor_performance *acpi_perf_data;
@@ -239,45 +238,6 @@ static u32 get_cur_val(const struct cpumask *mask)
return cmd.val;
}
-/* Called via smp_call_function_single(), on the target CPU */
-static void read_measured_perf_ctrs(void *_cur)
-{
- struct aperfmperf *am = _cur;
-
- get_aperfmperf(am);
-}
-
-/*
- * Return the measured active (C0) frequency on this CPU since last call
- * to this function.
- * Input: cpu number
- * Return: Average CPU frequency in terms of max frequency (zero on error)
- *
- * We use IA32_MPERF and IA32_APERF MSRs to get the measured performance
- * over a period of time, while CPU is in C0 state.
- * IA32_MPERF counts at the rate of max advertised frequency
- * IA32_APERF counts at the rate of actual CPU frequency
- * Only IA32_APERF/IA32_MPERF ratio is architecturally defined and
- * no meaning should be associated with absolute values of these MSRs.
- */
-static unsigned int get_measured_perf(struct cpufreq_policy *policy,
- unsigned int cpu)
-{
- struct aperfmperf perf;
- unsigned long ratio;
- unsigned int retval;
-
- if (smp_call_function_single(cpu, read_measured_perf_ctrs, &perf, 1))
- return 0;
-
- ratio = calc_aperfmperf_ratio(&per_cpu(acfreq_old_perf, cpu), &perf);
- per_cpu(acfreq_old_perf, cpu) = perf;
-
- retval = (policy->cpuinfo.max_freq * ratio) >> APERFMPERF_SHIFT;
-
- return retval;
-}
-
static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
{
struct acpi_cpufreq_data *data = per_cpu(acfreq_data, cpu);
@@ -701,7 +661,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
/* Check for APERF/MPERF support in hardware */
if (cpu_has(c, X86_FEATURE_APERFMPERF))
- acpi_cpufreq_driver.getavg = get_measured_perf;
+ acpi_cpufreq_driver.getavg = cpufreq_get_measured_perf;
dprintk("CPU%u - ACPI performance management activated.\n", cpu);
for (i = 0; i < perf->state_count; i++)
diff --git a/arch/x86/kernel/cpu/cpufreq/mperf.c b/arch/x86/kernel/cpu/cpufreq/mperf.c
new file mode 100644
index 0000000..911e193
--- /dev/null
+++ b/arch/x86/kernel/cpu/cpufreq/mperf.c
@@ -0,0 +1,51 @@
+#include <linux/kernel.h>
+#include <linux/smp.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/cpufreq.h>
+#include <linux/slab.h>
+
+#include "mperf.h"
+
+static DEFINE_PER_CPU(struct aperfmperf, acfreq_old_perf);
+
+/* Called via smp_call_function_single(), on the target CPU */
+static void read_measured_perf_ctrs(void *_cur)
+{
+ struct aperfmperf *am = _cur;
+
+ get_aperfmperf(am);
+}
+
+/*
+ * Return the measured active (C0) frequency on this CPU since last call
+ * to this function.
+ * Input: cpu number
+ * Return: Average CPU frequency in terms of max frequency (zero on error)
+ *
+ * We use IA32_MPERF and IA32_APERF MSRs to get the measured performance
+ * over a period of time, while CPU is in C0 state.
+ * IA32_MPERF counts at the rate of max advertised frequency
+ * IA32_APERF counts at the rate of actual CPU frequency
+ * Only IA32_APERF/IA32_MPERF ratio is architecturally defined and
+ * no meaning should be associated with absolute values of these MSRs.
+ */
+unsigned int cpufreq_get_measured_perf(struct cpufreq_policy *policy,
+ unsigned int cpu)
+{
+ struct aperfmperf perf;
+ unsigned long ratio;
+ unsigned int retval;
+
+ if (smp_call_function_single(cpu, read_measured_perf_ctrs, &perf, 1))
+ return 0;
+
+ ratio = calc_aperfmperf_ratio(&per_cpu(acfreq_old_perf, cpu), &perf);
+ per_cpu(acfreq_old_perf, cpu) = perf;
+
+ retval = (policy->cpuinfo.max_freq * ratio) >> APERFMPERF_SHIFT;
+
+ return retval;
+}
+EXPORT_SYMBOL_GPL(cpufreq_get_measured_perf);
+MODULE_LICENSE("GPL");
diff --git a/arch/x86/kernel/cpu/cpufreq/mperf.h b/arch/x86/kernel/cpu/cpufreq/mperf.h
new file mode 100644
index 0000000..5dbf295
--- /dev/null
+++ b/arch/x86/kernel/cpu/cpufreq/mperf.h
@@ -0,0 +1,9 @@
+/*
+ * (c) 2010 Advanced Micro Devices, Inc.
+ * Your use of this code is subject to the terms and conditions of the
+ * GNU general public license version 2. See "COPYING" or
+ * http://www.gnu.org/licenses/gpl.html
+ */
+
+unsigned int cpufreq_get_measured_perf(struct cpufreq_policy *policy,
+ unsigned int cpu);
diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
index 74ca34b..52fce63 100644
--- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -45,6 +45,7 @@
#define PFX "powernow-k8: "
#define VERSION "version 2.20.00"
#include "powernow-k8.h"
+#include "mperf.h"
/* serialize freq changes */
static DEFINE_MUTEX(fidvid_mutex);
@@ -57,6 +58,8 @@ static int cpu_family = CPU_OPTERON;
static bool cpb_capable, cpb_enabled;
static struct msr __percpu *msrs;
+static struct cpufreq_driver cpufreq_amd64_driver;
+
#ifndef CONFIG_SMP
static inline const struct cpumask *cpu_core_mask(int cpu)
{
@@ -1251,6 +1254,7 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
struct powernow_k8_data *data;
struct init_on_cpu init_on_cpu;
int rc;
+ struct cpuinfo_x86 *c = &cpu_data(pol->cpu);
if (!cpu_online(pol->cpu))
return -ENODEV;
@@ -1325,6 +1329,10 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
return -EINVAL;
}
+ /* Check for APERF/MPERF support in hardware */
+ if (cpu_has(c, X86_FEATURE_APERFMPERF))
+ cpufreq_amd64_driver.getavg = cpufreq_get_measured_perf;
+
cpufreq_frequency_table_get_attr(data->powernow_table, pol->cpu);
if (cpu_family == CPU_HW_PSTATE)
--
1.7.0.2
* [tip:x86/cpu] x86, cpu: Add AMD core boosting feature flag to /proc/cpuinfo
2010-03-31 19:56 ` [PATCH 1/6] x86, cpu: Add AMD core boosting feature flag to /proc/cpuinfo Borislav Petkov
@ 2010-04-09 22:18 ` tip-bot for Borislav Petkov
From: tip-bot for Borislav Petkov @ 2010-04-09 22:18 UTC (permalink / raw)
To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, trenn, tglx, borislav.petkov
Commit-ID: 5958f1d5d722df7a9e5d129676614a8e5219bacd
Gitweb: http://git.kernel.org/tip/5958f1d5d722df7a9e5d129676614a8e5219bacd
Author: Borislav Petkov <borislav.petkov@amd.com>
AuthorDate: Wed, 31 Mar 2010 21:56:41 +0200
Committer: H. Peter Anvin <hpa@zytor.com>
CommitDate: Fri, 9 Apr 2010 14:05:23 -0700
x86, cpu: Add AMD core boosting feature flag to /proc/cpuinfo
By semi-popular demand, this adds the Core Performance Boost feature
flag to /proc/cpuinfo. A possible use case for this is userspace tools
like cpufreq-aperf, so that they don't have to jump through the hoops
of accessing "/dev/cpu/%d/cpuid" in order to check for CPB hardware
support, or call cpuid from userspace.
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <1270065406-1814-2-git-send-email-bp@amd64.org>
Reviewed-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
---
arch/x86/include/asm/cpufeature.h | 1 +
arch/x86/kernel/cpu/addon_cpuid_features.c | 5 +++--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 0cd82d0..630e623 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -161,6 +161,7 @@
*/
#define X86_FEATURE_IDA (7*32+ 0) /* Intel Dynamic Acceleration */
#define X86_FEATURE_ARAT (7*32+ 1) /* Always Running APIC Timer */
+#define X86_FEATURE_CPB (7*32+ 2) /* AMD Core Performance Boost */
/* Virtualization flags: Linux defined */
#define X86_FEATURE_TPR_SHADOW (8*32+ 0) /* Intel TPR Shadow */
diff --git a/arch/x86/kernel/cpu/addon_cpuid_features.c b/arch/x86/kernel/cpu/addon_cpuid_features.c
index 97ad79c..ead2a1c 100644
--- a/arch/x86/kernel/cpu/addon_cpuid_features.c
+++ b/arch/x86/kernel/cpu/addon_cpuid_features.c
@@ -30,8 +30,9 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c)
const struct cpuid_bit *cb;
static const struct cpuid_bit __cpuinitconst cpuid_bits[] = {
- { X86_FEATURE_IDA, CR_EAX, 1, 0x00000006 },
- { X86_FEATURE_ARAT, CR_EAX, 2, 0x00000006 },
+ { X86_FEATURE_IDA, CR_EAX, 1, 0x00000006 },
+ { X86_FEATURE_ARAT, CR_EAX, 2, 0x00000006 },
+ { X86_FEATURE_CPB, CR_EDX, 9, 0x80000007 },
{ X86_FEATURE_NPT, CR_EDX, 0, 0x8000000a },
{ X86_FEATURE_LBRV, CR_EDX, 1, 0x8000000a },
{ X86_FEATURE_SVML, CR_EDX, 2, 0x8000000a },
* [tip:x86/cpu] powernow-k8: Add core performance boost support
2010-03-31 19:56 ` [PATCH 2/6] powernow-k8: Add core performance boost support Borislav Petkov
@ 2010-04-09 22:19 ` tip-bot for Borislav Petkov
From: tip-bot for Borislav Petkov @ 2010-04-09 22:19 UTC (permalink / raw)
To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, trenn, tglx, borislav.petkov
Commit-ID: 73860c6b2fd159a35637e233d735e36887c266ad
Gitweb: http://git.kernel.org/tip/73860c6b2fd159a35637e233d735e36887c266ad
Author: Borislav Petkov <borislav.petkov@amd.com>
AuthorDate: Wed, 31 Mar 2010 21:56:42 +0200
Committer: H. Peter Anvin <hpa@zytor.com>
CommitDate: Fri, 9 Apr 2010 14:05:43 -0700
powernow-k8: Add core performance boost support
Starting with F10h, revE, AMD processors add support for a dynamic
core boosting feature called Core Performance Boost. When a specific
condition is present, a subset of the cores on a system are boosted
beyond their P0 operating frequency to speed up the performance of
single-threaded applications.
In the normal case, the system comes out of reset with core boosting
enabled. This patch adds a sysfs knob with which core boosting can be
switched on or off for benchmarking purposes.
While at it, make the CPB code hotplug-aware so that taking cores
offline doesn't interfere with boosting the remaining online cores.
Furthermore, add cpu_online_mask hotplug protection as suggested by
Andrew.
Finally, clean up the driver init codepath and update copyrights.
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <1270065406-1814-3-git-send-email-bp@amd64.org>
Reviewed-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
---
arch/x86/kernel/cpu/cpufreq/powernow-k8.c | 161 +++++++++++++++++++++++++++--
arch/x86/kernel/cpu/cpufreq/powernow-k8.h | 2 -
2 files changed, 151 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
index d360b56..74ca34b 100644
--- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -1,6 +1,5 @@
-
/*
- * (c) 2003-2006 Advanced Micro Devices, Inc.
+ * (c) 2003-2010 Advanced Micro Devices, Inc.
* Your use of this code is subject to the terms and conditions of the
* GNU general public license version 2. See "COPYING" or
* http://www.gnu.org/licenses/gpl.html
@@ -54,6 +53,10 @@ static DEFINE_PER_CPU(struct powernow_k8_data *, powernow_data);
static int cpu_family = CPU_OPTERON;
+/* core performance boost */
+static bool cpb_capable, cpb_enabled;
+static struct msr __percpu *msrs;
+
#ifndef CONFIG_SMP
static inline const struct cpumask *cpu_core_mask(int cpu)
{
@@ -1393,8 +1396,77 @@ out:
return khz;
}
+static void _cpb_toggle_msrs(bool t)
+{
+ int cpu;
+
+ get_online_cpus();
+
+ rdmsr_on_cpus(cpu_online_mask, MSR_K7_HWCR, msrs);
+
+ for_each_cpu(cpu, cpu_online_mask) {
+ struct msr *reg = per_cpu_ptr(msrs, cpu);
+ if (t)
+ reg->l &= ~BIT(25);
+ else
+ reg->l |= BIT(25);
+ }
+ wrmsr_on_cpus(cpu_online_mask, MSR_K7_HWCR, msrs);
+
+ put_online_cpus();
+}
+
+/*
+ * Switch on/off core performance boosting.
+ *
+ * 0=disable
+ * 1=enable.
+ */
+static void cpb_toggle(bool t)
+{
+ if (!cpb_capable)
+ return;
+
+ if (t && !cpb_enabled) {
+ cpb_enabled = true;
+ _cpb_toggle_msrs(t);
+ printk(KERN_INFO PFX "Core Boosting enabled.\n");
+ } else if (!t && cpb_enabled) {
+ cpb_enabled = false;
+ _cpb_toggle_msrs(t);
+ printk(KERN_INFO PFX "Core Boosting disabled.\n");
+ }
+}
+
+static ssize_t store_cpb(struct cpufreq_policy *policy, const char *buf,
+ size_t count)
+{
+ int ret = -EINVAL;
+ unsigned long val = 0;
+
+ ret = strict_strtoul(buf, 10, &val);
+ if (!ret && (val == 0 || val == 1) && cpb_capable)
+ cpb_toggle(val);
+ else
+ return -EINVAL;
+
+ return count;
+}
+
+static ssize_t show_cpb(struct cpufreq_policy *policy, char *buf)
+{
+ return sprintf(buf, "%u\n", cpb_enabled);
+}
+
+#define define_one_rw(_name) \
+static struct freq_attr _name = \
+__ATTR(_name, 0644, show_##_name, store_##_name)
+
+define_one_rw(cpb);
+
static struct freq_attr *powernow_k8_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
+ &cpb,
NULL,
};
@@ -1410,10 +1482,51 @@ static struct cpufreq_driver cpufreq_amd64_driver = {
.attr = powernow_k8_attr,
};
+/*
+ * Clear the boost-disable flag on the CPU_DOWN path so that this cpu
+ * cannot block the remaining ones from boosting. On the CPU_UP path we
+ * simply keep the boost-disable flag in sync with the current global
+ * state.
+ */
+static int __cpuinit cpb_notify(struct notifier_block *nb, unsigned long action,
+ void *hcpu)
+{
+ unsigned cpu = (long)hcpu;
+ u32 lo, hi;
+
+ switch (action) {
+ case CPU_UP_PREPARE:
+ case CPU_UP_PREPARE_FROZEN:
+
+ if (!cpb_enabled) {
+ rdmsr_on_cpu(cpu, MSR_K7_HWCR, &lo, &hi);
+ lo |= BIT(25);
+ wrmsr_on_cpu(cpu, MSR_K7_HWCR, lo, hi);
+ }
+ break;
+
+ case CPU_DOWN_PREPARE:
+ case CPU_DOWN_PREPARE_FROZEN:
+ rdmsr_on_cpu(cpu, MSR_K7_HWCR, &lo, &hi);
+ lo &= ~BIT(25);
+ wrmsr_on_cpu(cpu, MSR_K7_HWCR, lo, hi);
+ break;
+
+ default:
+ break;
+ }
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block __cpuinitdata cpb_nb = {
+ .notifier_call = cpb_notify,
+};
+
/* driver entry point for init */
static int __cpuinit powernowk8_init(void)
{
- unsigned int i, supported_cpus = 0;
+ unsigned int i, supported_cpus = 0, cpu;
for_each_online_cpu(i) {
int rc;
@@ -1422,15 +1535,36 @@ static int __cpuinit powernowk8_init(void)
supported_cpus++;
}
- if (supported_cpus == num_online_cpus()) {
- printk(KERN_INFO PFX "Found %d %s "
- "processors (%d cpu cores) (" VERSION ")\n",
- num_online_nodes(),
- boot_cpu_data.x86_model_id, supported_cpus);
- return cpufreq_register_driver(&cpufreq_amd64_driver);
+ if (supported_cpus != num_online_cpus())
+ return -ENODEV;
+
+ printk(KERN_INFO PFX "Found %d %s (%d cpu cores) (" VERSION ")\n",
+ num_online_nodes(), boot_cpu_data.x86_model_id, supported_cpus);
+
+ if (boot_cpu_has(X86_FEATURE_CPB)) {
+
+ cpb_capable = true;
+
+ register_cpu_notifier(&cpb_nb);
+
+ msrs = msrs_alloc();
+ if (!msrs) {
+ printk(KERN_ERR "%s: Error allocating msrs!\n", __func__);
+ return -ENOMEM;
+ }
+
+ rdmsr_on_cpus(cpu_online_mask, MSR_K7_HWCR, msrs);
+
+ for_each_cpu(cpu, cpu_online_mask) {
+ struct msr *reg = per_cpu_ptr(msrs, cpu);
+ cpb_enabled |= !(!!(reg->l & BIT(25)));
+ }
+
+ printk(KERN_INFO PFX "Core Performance Boosting: %s.\n",
+ (cpb_enabled ? "on" : "off"));
}
- return -ENODEV;
+ return cpufreq_register_driver(&cpufreq_amd64_driver);
}
/* driver entry point for term */
@@ -1438,6 +1572,13 @@ static void __exit powernowk8_exit(void)
{
dprintk("exit\n");
+ if (boot_cpu_has(X86_FEATURE_CPB)) {
+ msrs_free(msrs);
+ msrs = NULL;
+
+ unregister_cpu_notifier(&cpb_nb);
+ }
+
cpufreq_unregister_driver(&cpufreq_amd64_driver);
}
diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.h b/arch/x86/kernel/cpu/cpufreq/powernow-k8.h
index 02ce824..df3529b 100644
--- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.h
+++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.h
@@ -5,7 +5,6 @@
* http://www.gnu.org/licenses/gpl.html
*/
-
enum pstate {
HW_PSTATE_INVALID = 0xff,
HW_PSTATE_0 = 0,
@@ -55,7 +54,6 @@ struct powernow_k8_data {
struct cpumask *available_cores;
};
-
/* processor's cpuid instruction support */
#define CPUID_PROCESSOR_SIGNATURE 1 /* function 1 */
#define CPUID_XFAM 0x0ff00000 /* extended family */
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [tip:x86/cpu] x86: Unify APERF/MPERF support
2010-03-31 19:56 ` [PATCH 3/6] x86: Unify APERF/MPERF support Borislav Petkov
@ 2010-04-09 22:19 ` tip-bot for Borislav Petkov
2010-05-03 23:21 ` [tip:x86/cpu] x86, cpu: Make APERF/MPERF a normal table-driven flag tip-bot for H. Peter Anvin
1 sibling, 0 replies; 20+ messages in thread
From: tip-bot for Borislav Petkov @ 2010-04-09 22:19 UTC (permalink / raw)
To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, trenn, tglx, borislav.petkov
Commit-ID: d65ad45cd82a0db9544469b8c54f5dc5cafbb2d8
Gitweb: http://git.kernel.org/tip/d65ad45cd82a0db9544469b8c54f5dc5cafbb2d8
Author: Borislav Petkov <borislav.petkov@amd.com>
AuthorDate: Wed, 31 Mar 2010 21:56:43 +0200
Committer: H. Peter Anvin <hpa@zytor.com>
CommitDate: Fri, 9 Apr 2010 14:05:50 -0700
x86: Unify APERF/MPERF support
Initialize this CPUID feature flag in common code. It could be split
out into a standalone function later, should more of this functionality
turn out to be duplicated.
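For reference, the check being moved corresponds to CPUID leaf 6, ECX bit 0;
a small userspace sketch of the same test (assuming gcc's <cpuid.h>):

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 6, ECX bit 0: APERF/MPERF MSRs are implemented */
        if (!__get_cpuid(6, &eax, &ebx, &ecx, &edx))
                return 1;       /* leaf 6 not supported on this CPU */

        printf("APERF/MPERF: %s\n", (ecx & 0x1) ? "present" : "absent");
        return 0;
}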
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <1270065406-1814-4-git-send-email-bp@amd64.org>
Reviewed-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
---
arch/x86/kernel/cpu/addon_cpuid_features.c | 8 ++++++++
arch/x86/kernel/cpu/intel.c | 6 ------
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kernel/cpu/addon_cpuid_features.c b/arch/x86/kernel/cpu/addon_cpuid_features.c
index ead2a1c..fd1fc19 100644
--- a/arch/x86/kernel/cpu/addon_cpuid_features.c
+++ b/arch/x86/kernel/cpu/addon_cpuid_features.c
@@ -54,6 +54,14 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c)
if (regs[cb->reg] & (1 << cb->bit))
set_cpu_cap(c, cb->feature);
}
+
+ /*
+ * common AMD/Intel features
+ */
+ if (c->cpuid_level >= 6) {
+ if (cpuid_ecx(6) & 0x1)
+ set_cpu_cap(c, X86_FEATURE_APERFMPERF);
+ }
}
/* leaf 0xb SMT level */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 7e1cca1..3830258 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -352,12 +352,6 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON);
}
- if (c->cpuid_level > 6) {
- unsigned ecx = cpuid_ecx(6);
- if (ecx & 0x01)
- set_cpu_cap(c, X86_FEATURE_APERFMPERF);
- }
-
if (cpu_has_xmm2)
set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
if (cpu_has_ds) {
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [tip:x86/cpu] x86, cpufreq: Add APERF/MPERF support for AMD processors
2010-04-01 14:19 ` [-v3.1 PATCH " Borislav Petkov
@ 2010-04-09 22:19 ` tip-bot for Mark Langsdorf
0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Mark Langsdorf @ 2010-04-09 22:19 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, hpa, mingo, mark.langsdorf, trenn, tglx,
borislav.petkov
Commit-ID: a2fed573f065e526bfd5cbf26e5491973d9e9aaa
Gitweb: http://git.kernel.org/tip/a2fed573f065e526bfd5cbf26e5491973d9e9aaa
Author: Mark Langsdorf <mark.langsdorf@amd.com>
AuthorDate: Thu, 18 Mar 2010 18:41:46 +0100
Committer: H. Peter Anvin <hpa@zytor.com>
CommitDate: Fri, 9 Apr 2010 14:07:40 -0700
x86, cpufreq: Add APERF/MPERF support for AMD processors
Starting with model 10 of family 0x10, AMD processors may support the
APERF/MPERF MSRs. Add code to identify them and use them within
cpufreq. Move the APERF/MPERF functions out of the acpi-cpufreq code
and into their own file so they can easily be shared.
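The shared helper derives the average effective (C0) frequency by scaling the
nominal maximum frequency with the ratio of the APERF and MPERF deltas over
the sampling window. A purely illustrative sketch of that arithmetic (plain
userspace C, not the kernel implementation; the numbers are made up):

#include <stdint.h>
#include <stdio.h>

struct aperfmperf_sample {
        uint64_t aperf;         /* counts at the actual core frequency */
        uint64_t mperf;         /* counts at the nominal maximum frequency */
};

static unsigned int effective_khz(const struct aperfmperf_sample *old,
                                  const struct aperfmperf_sample *cur,
                                  unsigned int max_khz)
{
        uint64_t da = cur->aperf - old->aperf;
        uint64_t dm = cur->mperf - old->mperf;

        if (!dm)
                return 0;       /* no C0 time elapsed, nothing to report */

        return (unsigned int)(max_khz * da / dm);
}

int main(void)
{
        struct aperfmperf_sample old = { 0, 0 };
        struct aperfmperf_sample cur = { 1300000, 1000000 };

        /* boosted core: APERF advanced 30% faster than MPERF */
        printf("%u kHz\n", effective_khz(&old, &cur, 2600000));
        return 0;
}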
Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
LKML-Reference: <20100401141956.GA1930@aftab>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Reviewed-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
---
arch/x86/kernel/cpu/cpufreq/Makefile | 4 +-
arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c | 44 +-----------------------
arch/x86/kernel/cpu/cpufreq/mperf.c | 51 ++++++++++++++++++++++++++++
arch/x86/kernel/cpu/cpufreq/mperf.h | 9 +++++
arch/x86/kernel/cpu/cpufreq/powernow-k8.c | 8 ++++
5 files changed, 72 insertions(+), 44 deletions(-)
diff --git a/arch/x86/kernel/cpu/cpufreq/Makefile b/arch/x86/kernel/cpu/cpufreq/Makefile
index 1840c0a..bd54bf6 100644
--- a/arch/x86/kernel/cpu/cpufreq/Makefile
+++ b/arch/x86/kernel/cpu/cpufreq/Makefile
@@ -2,8 +2,8 @@
# K8 systems. ACPI is preferred to all other hardware-specific drivers.
# speedstep-* is preferred over p4-clockmod.
-obj-$(CONFIG_X86_POWERNOW_K8) += powernow-k8.o
-obj-$(CONFIG_X86_ACPI_CPUFREQ) += acpi-cpufreq.o
+obj-$(CONFIG_X86_POWERNOW_K8) += powernow-k8.o mperf.o
+obj-$(CONFIG_X86_ACPI_CPUFREQ) += acpi-cpufreq.o mperf.o
obj-$(CONFIG_X86_PCC_CPUFREQ) += pcc-cpufreq.o
obj-$(CONFIG_X86_POWERNOW_K6) += powernow-k6.o
obj-$(CONFIG_X86_POWERNOW_K7) += powernow-k7.o
diff --git a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
index 1b1920f..dc68e5c 100644
--- a/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
+++ b/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
@@ -45,6 +45,7 @@
#include <asm/msr.h>
#include <asm/processor.h>
#include <asm/cpufeature.h>
+#include "mperf.h"
#define dprintk(msg...) cpufreq_debug_printk(CPUFREQ_DEBUG_DRIVER, \
"acpi-cpufreq", msg)
@@ -70,8 +71,6 @@ struct acpi_cpufreq_data {
static DEFINE_PER_CPU(struct acpi_cpufreq_data *, acfreq_data);
-static DEFINE_PER_CPU(struct aperfmperf, acfreq_old_perf);
-
/* acpi_perf_data is a pointer to percpu data. */
static struct acpi_processor_performance *acpi_perf_data;
@@ -239,45 +238,6 @@ static u32 get_cur_val(const struct cpumask *mask)
return cmd.val;
}
-/* Called via smp_call_function_single(), on the target CPU */
-static void read_measured_perf_ctrs(void *_cur)
-{
- struct aperfmperf *am = _cur;
-
- get_aperfmperf(am);
-}
-
-/*
- * Return the measured active (C0) frequency on this CPU since last call
- * to this function.
- * Input: cpu number
- * Return: Average CPU frequency in terms of max frequency (zero on error)
- *
- * We use IA32_MPERF and IA32_APERF MSRs to get the measured performance
- * over a period of time, while CPU is in C0 state.
- * IA32_MPERF counts at the rate of max advertised frequency
- * IA32_APERF counts at the rate of actual CPU frequency
- * Only IA32_APERF/IA32_MPERF ratio is architecturally defined and
- * no meaning should be associated with absolute values of these MSRs.
- */
-static unsigned int get_measured_perf(struct cpufreq_policy *policy,
- unsigned int cpu)
-{
- struct aperfmperf perf;
- unsigned long ratio;
- unsigned int retval;
-
- if (smp_call_function_single(cpu, read_measured_perf_ctrs, &perf, 1))
- return 0;
-
- ratio = calc_aperfmperf_ratio(&per_cpu(acfreq_old_perf, cpu), &perf);
- per_cpu(acfreq_old_perf, cpu) = perf;
-
- retval = (policy->cpuinfo.max_freq * ratio) >> APERFMPERF_SHIFT;
-
- return retval;
-}
-
static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
{
struct acpi_cpufreq_data *data = per_cpu(acfreq_data, cpu);
@@ -701,7 +661,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
/* Check for APERF/MPERF support in hardware */
if (cpu_has(c, X86_FEATURE_APERFMPERF))
- acpi_cpufreq_driver.getavg = get_measured_perf;
+ acpi_cpufreq_driver.getavg = cpufreq_get_measured_perf;
dprintk("CPU%u - ACPI performance management activated.\n", cpu);
for (i = 0; i < perf->state_count; i++)
diff --git a/arch/x86/kernel/cpu/cpufreq/mperf.c b/arch/x86/kernel/cpu/cpufreq/mperf.c
new file mode 100644
index 0000000..911e193
--- /dev/null
+++ b/arch/x86/kernel/cpu/cpufreq/mperf.c
@@ -0,0 +1,51 @@
+#include <linux/kernel.h>
+#include <linux/smp.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/cpufreq.h>
+#include <linux/slab.h>
+
+#include "mperf.h"
+
+static DEFINE_PER_CPU(struct aperfmperf, acfreq_old_perf);
+
+/* Called via smp_call_function_single(), on the target CPU */
+static void read_measured_perf_ctrs(void *_cur)
+{
+ struct aperfmperf *am = _cur;
+
+ get_aperfmperf(am);
+}
+
+/*
+ * Return the measured active (C0) frequency on this CPU since last call
+ * to this function.
+ * Input: cpu number
+ * Return: Average CPU frequency in terms of max frequency (zero on error)
+ *
+ * We use IA32_MPERF and IA32_APERF MSRs to get the measured performance
+ * over a period of time, while CPU is in C0 state.
+ * IA32_MPERF counts at the rate of max advertised frequency
+ * IA32_APERF counts at the rate of actual CPU frequency
+ * Only IA32_APERF/IA32_MPERF ratio is architecturally defined and
+ * no meaning should be associated with absolute values of these MSRs.
+ */
+unsigned int cpufreq_get_measured_perf(struct cpufreq_policy *policy,
+ unsigned int cpu)
+{
+ struct aperfmperf perf;
+ unsigned long ratio;
+ unsigned int retval;
+
+ if (smp_call_function_single(cpu, read_measured_perf_ctrs, &perf, 1))
+ return 0;
+
+ ratio = calc_aperfmperf_ratio(&per_cpu(acfreq_old_perf, cpu), &perf);
+ per_cpu(acfreq_old_perf, cpu) = perf;
+
+ retval = (policy->cpuinfo.max_freq * ratio) >> APERFMPERF_SHIFT;
+
+ return retval;
+}
+EXPORT_SYMBOL_GPL(cpufreq_get_measured_perf);
+MODULE_LICENSE("GPL");
diff --git a/arch/x86/kernel/cpu/cpufreq/mperf.h b/arch/x86/kernel/cpu/cpufreq/mperf.h
new file mode 100644
index 0000000..5dbf295
--- /dev/null
+++ b/arch/x86/kernel/cpu/cpufreq/mperf.h
@@ -0,0 +1,9 @@
+/*
+ * (c) 2010 Advanced Micro Devices, Inc.
+ * Your use of this code is subject to the terms and conditions of the
+ * GNU general public license version 2. See "COPYING" or
+ * http://www.gnu.org/licenses/gpl.html
+ */
+
+unsigned int cpufreq_get_measured_perf(struct cpufreq_policy *policy,
+ unsigned int cpu);
diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
index 74ca34b..52fce63 100644
--- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -45,6 +45,7 @@
#define PFX "powernow-k8: "
#define VERSION "version 2.20.00"
#include "powernow-k8.h"
+#include "mperf.h"
/* serialize freq changes */
static DEFINE_MUTEX(fidvid_mutex);
@@ -57,6 +58,8 @@ static int cpu_family = CPU_OPTERON;
static bool cpb_capable, cpb_enabled;
static struct msr __percpu *msrs;
+static struct cpufreq_driver cpufreq_amd64_driver;
+
#ifndef CONFIG_SMP
static inline const struct cpumask *cpu_core_mask(int cpu)
{
@@ -1251,6 +1254,7 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
struct powernow_k8_data *data;
struct init_on_cpu init_on_cpu;
int rc;
+ struct cpuinfo_x86 *c = &cpu_data(pol->cpu);
if (!cpu_online(pol->cpu))
return -ENODEV;
@@ -1325,6 +1329,10 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
return -EINVAL;
}
+ /* Check for APERF/MPERF support in hardware */
+ if (cpu_has(c, X86_FEATURE_APERFMPERF))
+ cpufreq_amd64_driver.getavg = cpufreq_get_measured_perf;
+
cpufreq_frequency_table_get_attr(data->powernow_table, pol->cpu);
if (cpu_family == CPU_HW_PSTATE)
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [tip:x86/cpu] powernow-k8: Fix frequency reporting
2010-03-31 19:56 ` [PATCH 5/6] powernow-k8: Fix frequency reporting Borislav Petkov
@ 2010-04-09 22:19 ` tip-bot for Mark Langsdorf
2010-05-03 13:09 ` [tip:x86/urgent] " tip-bot for Mark Langsdorf
1 sibling, 0 replies; 20+ messages in thread
From: tip-bot for Mark Langsdorf @ 2010-04-09 22:19 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, hpa, mingo, mark.langsdorf, trenn, tglx,
borislav.petkov
Commit-ID: 679370641e3675633cad222449262abbe93a4a2a
Gitweb: http://git.kernel.org/tip/679370641e3675633cad222449262abbe93a4a2a
Author: Mark Langsdorf <mark.langsdorf@amd.com>
AuthorDate: Wed, 31 Mar 2010 21:56:45 +0200
Committer: H. Peter Anvin <hpa@zytor.com>
CommitDate: Fri, 9 Apr 2010 14:07:47 -0700
powernow-k8: Fix frequency reporting
Starting with family 0x10, model 10, all valid frequencies are in the
ACPI _PST table, so don't round them.
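For context, the rounded path derives the frequency from the FID/DID fields
of the P-state MSR instead of using the exact _PST values. A tiny sketch of
that decode (the family 0x10 formula here is an assumption based on the
public BKDG; the kernel's freq_from_fid_did() is the authoritative
implementation):

#include <stdio.h>

/* CoreCOF = 100 MHz * (CpuFid + 0x10) / 2^CpuDid -- assumed, see above */
static unsigned int khz_from_fid_did(unsigned int fid, unsigned int did)
{
        return (100000 * (fid + 0x10)) >> did;
}

int main(void)
{
        /* e.g. FID=0x10, DID=0 -> 3200000 kHz (3.2 GHz) */
        printf("%u kHz\n", khz_from_fid_did(0x10, 0));
        return 0;
}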
Cc: <stable@kernel.org> # 33.x 32.x
Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
LKML-Reference: <1270065406-1814-6-git-send-email-bp@amd64.org>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Reviewed-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
---
arch/x86/kernel/cpu/cpufreq/powernow-k8.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
index 52fce63..6f3dc8f 100644
--- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -935,7 +935,8 @@ static int fill_powernow_table_pstate(struct powernow_k8_data *data,
powernow_table[i].index = index;
/* Frequency may be rounded for these */
- if (boot_cpu_data.x86 == 0x10 || boot_cpu_data.x86 == 0x11) {
+ if ((boot_cpu_data.x86 == 0x10 && boot_cpu_data.x86_model < 10)
+ || boot_cpu_data.x86 == 0x11) {
powernow_table[i].frequency =
freq_from_fid_did(lo & 0x3f, (lo >> 6) & 7);
} else
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [tip:x86/cpu] cpufreq: Unify sysfs attribute definition macros
2010-03-31 19:56 ` [PATCH 6/6] cpufreq: Unify sysfs attribute definition macros Borislav Petkov
@ 2010-04-09 22:20 ` tip-bot for Borislav Petkov
0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Borislav Petkov @ 2010-04-09 22:20 UTC (permalink / raw)
To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, trenn, tglx, borislav.petkov
Commit-ID: 6dad2a29646ce3792c40cfc52d77e9b65a7bb143
Gitweb: http://git.kernel.org/tip/6dad2a29646ce3792c40cfc52d77e9b65a7bb143
Author: Borislav Petkov <borislav.petkov@amd.com>
AuthorDate: Wed, 31 Mar 2010 21:56:46 +0200
Committer: H. Peter Anvin <hpa@zytor.com>
CommitDate: Fri, 9 Apr 2010 14:07:56 -0700
cpufreq: Unify sysfs attribute definition macros
Multiple cpufreq modules used to define these sysfs attribute macros
with identical functionality, needlessly replicating them across the
different drivers and governors. Push the macros into the common header
and remove the duplication.
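For illustration, this is how a driver defines a read-write attribute with
the unified macros after this change (a fragment assumed to live in a cpufreq
driver source file; show_foo/store_foo and the attribute name are
hypothetical):

#include <linux/cpufreq.h>

static ssize_t show_foo(struct cpufreq_policy *policy, char *buf)
{
        return sprintf(buf, "%u\n", policy->cur);
}

static ssize_t store_foo(struct cpufreq_policy *policy, const char *buf,
                         size_t count)
{
        /* parse and apply the new value here */
        return count;
}

/* expands to: static struct freq_attr foo = __ATTR(foo, 0644, ...) */
cpufreq_freq_attr_rw(foo);

static struct freq_attr *foo_attrs[] = {
        &foo,
        NULL,
};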
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <1270065406-1814-7-git-send-email-bp@amd64.org>
Reviewed-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
---
drivers/cpufreq/cpufreq.c | 40 +++++++++-----------------
drivers/cpufreq/cpufreq_conservative.c | 48 ++++++++++---------------------
drivers/cpufreq/cpufreq_ondemand.c | 40 ++++++++------------------
include/linux/cpufreq.h | 30 ++++++++++++++++++++
4 files changed, 72 insertions(+), 86 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 2d5d575..e02e417 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -662,32 +662,20 @@ static ssize_t show_bios_limit(struct cpufreq_policy *policy, char *buf)
return sprintf(buf, "%u\n", policy->cpuinfo.max_freq);
}
-#define define_one_ro(_name) \
-static struct freq_attr _name = \
-__ATTR(_name, 0444, show_##_name, NULL)
-
-#define define_one_ro0400(_name) \
-static struct freq_attr _name = \
-__ATTR(_name, 0400, show_##_name, NULL)
-
-#define define_one_rw(_name) \
-static struct freq_attr _name = \
-__ATTR(_name, 0644, show_##_name, store_##_name)
-
-define_one_ro0400(cpuinfo_cur_freq);
-define_one_ro(cpuinfo_min_freq);
-define_one_ro(cpuinfo_max_freq);
-define_one_ro(cpuinfo_transition_latency);
-define_one_ro(scaling_available_governors);
-define_one_ro(scaling_driver);
-define_one_ro(scaling_cur_freq);
-define_one_ro(bios_limit);
-define_one_ro(related_cpus);
-define_one_ro(affected_cpus);
-define_one_rw(scaling_min_freq);
-define_one_rw(scaling_max_freq);
-define_one_rw(scaling_governor);
-define_one_rw(scaling_setspeed);
+cpufreq_freq_attr_ro_perm(cpuinfo_cur_freq, 0400);
+cpufreq_freq_attr_ro(cpuinfo_min_freq);
+cpufreq_freq_attr_ro(cpuinfo_max_freq);
+cpufreq_freq_attr_ro(cpuinfo_transition_latency);
+cpufreq_freq_attr_ro(scaling_available_governors);
+cpufreq_freq_attr_ro(scaling_driver);
+cpufreq_freq_attr_ro(scaling_cur_freq);
+cpufreq_freq_attr_ro(bios_limit);
+cpufreq_freq_attr_ro(related_cpus);
+cpufreq_freq_attr_ro(affected_cpus);
+cpufreq_freq_attr_rw(scaling_min_freq);
+cpufreq_freq_attr_rw(scaling_max_freq);
+cpufreq_freq_attr_rw(scaling_governor);
+cpufreq_freq_attr_rw(scaling_setspeed);
static struct attribute *default_attrs[] = {
&cpuinfo_min_freq.attr,
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index 599a40b..ce5248e 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -178,12 +178,8 @@ static ssize_t show_sampling_rate_min(struct kobject *kobj,
return sprintf(buf, "%u\n", min_sampling_rate);
}
-#define define_one_ro(_name) \
-static struct global_attr _name = \
-__ATTR(_name, 0444, show_##_name, NULL)
-
-define_one_ro(sampling_rate_max);
-define_one_ro(sampling_rate_min);
+define_one_global_ro(sampling_rate_max);
+define_one_global_ro(sampling_rate_min);
/* cpufreq_conservative Governor Tunables */
#define show_one(file_name, object) \
@@ -221,12 +217,8 @@ show_one_old(freq_step);
show_one_old(sampling_rate_min);
show_one_old(sampling_rate_max);
-#define define_one_ro_old(object, _name) \
-static struct freq_attr object = \
-__ATTR(_name, 0444, show_##_name##_old, NULL)
-
-define_one_ro_old(sampling_rate_min_old, sampling_rate_min);
-define_one_ro_old(sampling_rate_max_old, sampling_rate_max);
+cpufreq_freq_attr_ro_old(sampling_rate_min);
+cpufreq_freq_attr_ro_old(sampling_rate_max);
/*** delete after deprecation time ***/
@@ -364,16 +356,12 @@ static ssize_t store_freq_step(struct kobject *a, struct attribute *b,
return count;
}
-#define define_one_rw(_name) \
-static struct global_attr _name = \
-__ATTR(_name, 0644, show_##_name, store_##_name)
-
-define_one_rw(sampling_rate);
-define_one_rw(sampling_down_factor);
-define_one_rw(up_threshold);
-define_one_rw(down_threshold);
-define_one_rw(ignore_nice_load);
-define_one_rw(freq_step);
+define_one_global_rw(sampling_rate);
+define_one_global_rw(sampling_down_factor);
+define_one_global_rw(up_threshold);
+define_one_global_rw(down_threshold);
+define_one_global_rw(ignore_nice_load);
+define_one_global_rw(freq_step);
static struct attribute *dbs_attributes[] = {
&sampling_rate_max.attr,
@@ -409,16 +397,12 @@ write_one_old(down_threshold);
write_one_old(ignore_nice_load);
write_one_old(freq_step);
-#define define_one_rw_old(object, _name) \
-static struct freq_attr object = \
-__ATTR(_name, 0644, show_##_name##_old, store_##_name##_old)
-
-define_one_rw_old(sampling_rate_old, sampling_rate);
-define_one_rw_old(sampling_down_factor_old, sampling_down_factor);
-define_one_rw_old(up_threshold_old, up_threshold);
-define_one_rw_old(down_threshold_old, down_threshold);
-define_one_rw_old(ignore_nice_load_old, ignore_nice_load);
-define_one_rw_old(freq_step_old, freq_step);
+cpufreq_freq_attr_rw_old(sampling_rate);
+cpufreq_freq_attr_rw_old(sampling_down_factor);
+cpufreq_freq_attr_rw_old(up_threshold);
+cpufreq_freq_attr_rw_old(down_threshold);
+cpufreq_freq_attr_rw_old(ignore_nice_load);
+cpufreq_freq_attr_rw_old(freq_step);
static struct attribute *dbs_attributes_old[] = {
&sampling_rate_max_old.attr,
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index bd444dc..c00b25f 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -234,12 +234,8 @@ static ssize_t show_sampling_rate_min(struct kobject *kobj,
return sprintf(buf, "%u\n", min_sampling_rate);
}
-#define define_one_ro(_name) \
-static struct global_attr _name = \
-__ATTR(_name, 0444, show_##_name, NULL)
-
-define_one_ro(sampling_rate_max);
-define_one_ro(sampling_rate_min);
+define_one_global_ro(sampling_rate_max);
+define_one_global_ro(sampling_rate_min);
/* cpufreq_ondemand Governor Tunables */
#define show_one(file_name, object) \
@@ -274,12 +270,8 @@ show_one_old(powersave_bias);
show_one_old(sampling_rate_min);
show_one_old(sampling_rate_max);
-#define define_one_ro_old(object, _name) \
-static struct freq_attr object = \
-__ATTR(_name, 0444, show_##_name##_old, NULL)
-
-define_one_ro_old(sampling_rate_min_old, sampling_rate_min);
-define_one_ro_old(sampling_rate_max_old, sampling_rate_max);
+cpufreq_freq_attr_ro_old(sampling_rate_min);
+cpufreq_freq_attr_ro_old(sampling_rate_max);
/*** delete after deprecation time ***/
@@ -376,14 +368,10 @@ static ssize_t store_powersave_bias(struct kobject *a, struct attribute *b,
return count;
}
-#define define_one_rw(_name) \
-static struct global_attr _name = \
-__ATTR(_name, 0644, show_##_name, store_##_name)
-
-define_one_rw(sampling_rate);
-define_one_rw(up_threshold);
-define_one_rw(ignore_nice_load);
-define_one_rw(powersave_bias);
+define_one_global_rw(sampling_rate);
+define_one_global_rw(up_threshold);
+define_one_global_rw(ignore_nice_load);
+define_one_global_rw(powersave_bias);
static struct attribute *dbs_attributes[] = {
&sampling_rate_max.attr,
@@ -415,14 +403,10 @@ write_one_old(up_threshold);
write_one_old(ignore_nice_load);
write_one_old(powersave_bias);
-#define define_one_rw_old(object, _name) \
-static struct freq_attr object = \
-__ATTR(_name, 0644, show_##_name##_old, store_##_name##_old)
-
-define_one_rw_old(sampling_rate_old, sampling_rate);
-define_one_rw_old(up_threshold_old, up_threshold);
-define_one_rw_old(ignore_nice_load_old, ignore_nice_load);
-define_one_rw_old(powersave_bias_old, powersave_bias);
+cpufreq_freq_attr_rw_old(sampling_rate);
+cpufreq_freq_attr_rw_old(up_threshold);
+cpufreq_freq_attr_rw_old(ignore_nice_load);
+cpufreq_freq_attr_rw_old(powersave_bias);
static struct attribute *dbs_attributes_old[] = {
&sampling_rate_max_old.attr,
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
index 4de02b1..9f15150 100644
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -278,6 +278,27 @@ struct freq_attr {
ssize_t (*store)(struct cpufreq_policy *, const char *, size_t count);
};
+#define cpufreq_freq_attr_ro(_name) \
+static struct freq_attr _name = \
+__ATTR(_name, 0444, show_##_name, NULL)
+
+#define cpufreq_freq_attr_ro_perm(_name, _perm) \
+static struct freq_attr _name = \
+__ATTR(_name, _perm, show_##_name, NULL)
+
+#define cpufreq_freq_attr_ro_old(_name) \
+static struct freq_attr _name##_old = \
+__ATTR(_name, 0444, show_##_name##_old, NULL)
+
+#define cpufreq_freq_attr_rw(_name) \
+static struct freq_attr _name = \
+__ATTR(_name, 0644, show_##_name, store_##_name)
+
+#define cpufreq_freq_attr_rw_old(_name) \
+static struct freq_attr _name##_old = \
+__ATTR(_name, 0644, show_##_name##_old, store_##_name##_old)
+
+
struct global_attr {
struct attribute attr;
ssize_t (*show)(struct kobject *kobj,
@@ -286,6 +307,15 @@ struct global_attr {
const char *c, size_t count);
};
+#define define_one_global_ro(_name) \
+static struct global_attr _name = \
+__ATTR(_name, 0444, show_##_name, NULL)
+
+#define define_one_global_rw(_name) \
+static struct global_attr _name = \
+__ATTR(_name, 0644, show_##_name, store_##_name)
+
+
/*********************************************************************
* CPUFREQ 2.6. INTERFACE *
*********************************************************************/
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [tip:x86/urgent] powernow-k8: Fix frequency reporting
2010-03-31 19:56 ` [PATCH 5/6] powernow-k8: Fix frequency reporting Borislav Petkov
2010-04-09 22:19 ` [tip:x86/cpu] " tip-bot for Mark Langsdorf
@ 2010-05-03 13:09 ` tip-bot for Mark Langsdorf
1 sibling, 0 replies; 20+ messages in thread
From: tip-bot for Mark Langsdorf @ 2010-05-03 13:09 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, hpa, mingo, mark.langsdorf, tglx, trenn,
borislav.petkov, mingo
Commit-ID: b810e94c9d8e3fff6741b66cd5a6f099a7887871
Gitweb: http://git.kernel.org/tip/b810e94c9d8e3fff6741b66cd5a6f099a7887871
Author: Mark Langsdorf <mark.langsdorf@amd.com>
AuthorDate: Wed, 31 Mar 2010 21:56:45 +0200
Committer: Ingo Molnar <mingo@elte.hu>
CommitDate: Mon, 3 May 2010 15:04:18 +0200
powernow-k8: Fix frequency reporting
Starting with family 0x10, model 10, all valid frequencies are in the
ACPI _PST table, so don't round them.
Cc: <stable@kernel.org> # 33.x 32.x
Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
LKML-Reference: <1270065406-1814-6-git-send-email-bp@amd64.org>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Reviewed-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
arch/x86/kernel/cpu/cpufreq/powernow-k8.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
index d360b56..b6215b9 100644
--- a/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ b/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -929,7 +929,8 @@ static int fill_powernow_table_pstate(struct powernow_k8_data *data,
powernow_table[i].index = index;
/* Frequency may be rounded for these */
- if (boot_cpu_data.x86 == 0x10 || boot_cpu_data.x86 == 0x11) {
+ if ((boot_cpu_data.x86 == 0x10 && boot_cpu_data.x86_model < 10)
+ || boot_cpu_data.x86 == 0x11) {
powernow_table[i].frequency =
freq_from_fid_did(lo & 0x3f, (lo >> 6) & 7);
} else
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [tip:x86/cpu] x86, cpu: Make APERF/MPERF a normal table-driven flag
2010-03-31 19:56 ` [PATCH 3/6] x86: Unify APERF/MPERF support Borislav Petkov
2010-04-09 22:19 ` [tip:x86/cpu] " tip-bot for Borislav Petkov
@ 2010-05-03 23:21 ` tip-bot for H. Peter Anvin
1 sibling, 0 replies; 20+ messages in thread
From: tip-bot for H. Peter Anvin @ 2010-05-03 23:21 UTC (permalink / raw)
To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, trenn, tglx, borislav.petkov
Commit-ID: 097c1bd5673edaf2a162724636858b71f658fdd2
Gitweb: http://git.kernel.org/tip/097c1bd5673edaf2a162724636858b71f658fdd2
Author: H. Peter Anvin <hpa@zytor.com>
AuthorDate: Mon, 3 May 2010 15:49:31 -0700
Committer: H. Peter Anvin <hpa@zytor.com>
CommitDate: Mon, 3 May 2010 15:49:31 -0700
x86, cpu: Make APERF/MPERF a normal table-driven flag
APERF/MPERF can be handled via the table like all the other scattered
CPU flags.
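A userspace analog of the table-driven scan may make the mechanism clearer
(assuming gcc's <cpuid.h>; the structure and helper names below are made up,
only the table entries mirror the kernel's):

#include <stdio.h>
#include <cpuid.h>

enum cpuid_reg { CR_EAX, CR_EBX, CR_ECX, CR_EDX };

struct cpuid_bit {
        const char *name;
        enum cpuid_reg reg;
        unsigned char bit;
        unsigned int level;     /* CPUID leaf to query */
};

static const struct cpuid_bit bits[] = {
        { "ida",        CR_EAX, 1, 0x00000006 },
        { "arat",       CR_EAX, 2, 0x00000006 },
        { "aperfmperf", CR_ECX, 0, 0x00000006 },
        { "cpb",        CR_EDX, 9, 0x80000007 },
        { NULL, CR_EAX, 0, 0 }
};

int main(void)
{
        const struct cpuid_bit *cb;
        unsigned int regs[4];

        for (cb = bits; cb->name; cb++) {
                if (!__get_cpuid(cb->level, &regs[CR_EAX], &regs[CR_EBX],
                                 &regs[CR_ECX], &regs[CR_EDX]))
                        continue;       /* leaf not implemented */

                if (regs[cb->reg] & (1U << cb->bit))
                        printf("%s\n", cb->name);
        }
        return 0;
}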
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Thomas Renninger <trenn@suse.de>
Cc: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <1270065406-1814-4-git-send-email-bp@amd64.org>
---
arch/x86/kernel/cpu/addon_cpuid_features.c | 23 ++++++++---------------
1 files changed, 8 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kernel/cpu/addon_cpuid_features.c b/arch/x86/kernel/cpu/addon_cpuid_features.c
index fd1fc19..10fa568 100644
--- a/arch/x86/kernel/cpu/addon_cpuid_features.c
+++ b/arch/x86/kernel/cpu/addon_cpuid_features.c
@@ -30,13 +30,14 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c)
const struct cpuid_bit *cb;
static const struct cpuid_bit __cpuinitconst cpuid_bits[] = {
- { X86_FEATURE_IDA, CR_EAX, 1, 0x00000006 },
- { X86_FEATURE_ARAT, CR_EAX, 2, 0x00000006 },
- { X86_FEATURE_CPB, CR_EDX, 9, 0x80000007 },
- { X86_FEATURE_NPT, CR_EDX, 0, 0x8000000a },
- { X86_FEATURE_LBRV, CR_EDX, 1, 0x8000000a },
- { X86_FEATURE_SVML, CR_EDX, 2, 0x8000000a },
- { X86_FEATURE_NRIPS, CR_EDX, 3, 0x8000000a },
+ { X86_FEATURE_IDA, CR_EAX, 1, 0x00000006 },
+ { X86_FEATURE_ARAT, CR_EAX, 2, 0x00000006 },
+ { X86_FEATURE_APERFMPERF, CR_ECX, 0, 0x00000006 },
+ { X86_FEATURE_CPB, CR_EDX, 9, 0x80000007 },
+ { X86_FEATURE_NPT, CR_EDX, 0, 0x8000000a },
+ { X86_FEATURE_LBRV, CR_EDX, 1, 0x8000000a },
+ { X86_FEATURE_SVML, CR_EDX, 2, 0x8000000a },
+ { X86_FEATURE_NRIPS, CR_EDX, 3, 0x8000000a },
{ 0, 0, 0, 0 }
};
@@ -54,14 +55,6 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c)
if (regs[cb->reg] & (1 << cb->bit))
set_cpu_cap(c, cb->feature);
}
-
- /*
- * common AMD/Intel features
- */
- if (c->cpuid_level >= 6) {
- if (cpuid_ecx(6) & 0x1)
- set_cpu_cap(c, X86_FEATURE_APERFMPERF);
- }
}
/* leaf 0xb SMT level */
^ permalink raw reply related [flat|nested] 20+ messages in thread
end of thread
Thread overview: 20+ messages
2010-03-31 19:56 [-v3 PATCH 0/6] powernow-k8: Core Performance Boost and effective frequency support Borislav Petkov
2010-03-31 19:56 ` [PATCH 1/6] x86, cpu: Add AMD core boosting feature flag to /proc/cpuinfo Borislav Petkov
2010-04-09 22:18 ` [tip:x86/cpu] " tip-bot for Borislav Petkov
2010-03-31 19:56 ` [PATCH 2/6] powernow-k8: Add core performance boost support Borislav Petkov
2010-04-09 22:19 ` [tip:x86/cpu] " tip-bot for Borislav Petkov
2010-03-31 19:56 ` [PATCH 3/6] x86: Unify APERF/MPERF support Borislav Petkov
2010-04-09 22:19 ` [tip:x86/cpu] " tip-bot for Borislav Petkov
2010-05-03 23:21 ` [tip:x86/cpu] x86, cpu: Make APERF/MPERF a normal table-driven flag tip-bot for H. Peter Anvin
2010-03-31 19:56 ` [PATCH 4/6] cpufreq: Add APERF/MPERF support for AMD processors Borislav Petkov
2010-04-01 9:01 ` Marvin
2010-04-01 10:36 ` Borislav Petkov
2010-04-01 11:46 ` Marvin
2010-04-01 14:00 ` Borislav Petkov
2010-04-01 14:19 ` [-v3.1 PATCH " Borislav Petkov
2010-04-09 22:19 ` [tip:x86/cpu] x86, " tip-bot for Mark Langsdorf
2010-03-31 19:56 ` [PATCH 5/6] powernow-k8: Fix frequency reporting Borislav Petkov
2010-04-09 22:19 ` [tip:x86/cpu] " tip-bot for Mark Langsdorf
2010-05-03 13:09 ` [tip:x86/urgent] " tip-bot for Mark Langsdorf
2010-03-31 19:56 ` [PATCH 6/6] cpufreq: Unify sysfs attribute definition macros Borislav Petkov
2010-04-09 22:20 ` [tip:x86/cpu] " tip-bot for Borislav Petkov