From: tip-bot for Andreas Herrmann <andreas.herrmann3@amd.com>
To: linux-tip-commits@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, hpa@zytor.com, mingo@kernel.org,
	andreas.herrmann3@amd.com, tglx@linutronix.de,
	hpa@linux.intel.com
Subject: [tip:x86/cpu] x86, cacheinfo: Base cache sharing info on CPUID 0x8000001d on AMD
Date: Tue, 13 Nov 2012 14:01:44 -0800	[thread overview]
Message-ID: <tip-27d3a8a26ada7660116fdd6830096008c063ee96@git.kernel.org> (raw)
In-Reply-To: <20121019090209.GG26718@alberich>

Commit-ID:  27d3a8a26ada7660116fdd6830096008c063ee96
Gitweb:     http://git.kernel.org/tip/27d3a8a26ada7660116fdd6830096008c063ee96
Author:     Andreas Herrmann <andreas.herrmann3@amd.com>
AuthorDate: Fri, 19 Oct 2012 11:02:09 +0200
Committer:  H. Peter Anvin <hpa@linux.intel.com>
CommitDate: Tue, 13 Nov 2012 11:22:31 -0800

x86, cacheinfo: Base cache sharing info on CPUID 0x8000001d on AMD

The patch is based on a patch submitted by Hans Rosenfeld.
See http://marc.info/?l=linux-kernel&m=133908777200931

Note that CPUID Fn8000_001D_EAX differs slightly from Intel's CPUID function 4.

Bits 14-25 contain NumSharingCache, the number of cores sharing this
cache; software adds one to the field's value to get the actual count.

The corresponding bits on Intel are defined as "maximum number of threads
sharing this cache" (with a "plus 1" encoding).

Thus a different method has to be used to determine which cores share a
given cache level.
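
For illustration only (this sketch is not part of the patch), the field can
be decoded from user space by walking the CPUID Fn8000_001D subleaves. This
assumes a GCC/clang toolchain that provides __get_cpuid_count() in <cpuid.h>:

/*
 * Illustrative sketch, not kernel code: enumerate the cache leafs
 * via CPUID Fn8000_001D and decode NumSharingCache (EAX bits 14-25,
 * "plus 1" encoding). Assumes GCC/clang's <cpuid.h>.
 */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;
	unsigned int index;

	for (index = 0; ; index++) {
		if (!__get_cpuid_count(0x8000001d, index, &eax, &ebx, &ecx, &edx))
			break;
		if ((eax & 0x1f) == 0)	/* CacheType 0: no more cache leafs */
			break;
		printf("index %u: L%u cache, shared by %u core(s)\n",
		       index,
		       (eax >> 5) & 0x7,		/* CacheLevel, bits 5-7   */
		       ((eax >> 14) & 0xfff) + 1);	/* NumSharingCache + 1    */
	}
	return 0;
}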

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Link: http://lkml.kernel.org/r/20121019090209.GG26718@alberich
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
---
 arch/x86/kernel/cpu/intel_cacheinfo.c |   41 +++++++++++++++++++++-----------
 1 file changed, 27 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c
index cd2e1cc..fe9edec 100644
--- a/arch/x86/kernel/cpu/intel_cacheinfo.c
+++ b/arch/x86/kernel/cpu/intel_cacheinfo.c
@@ -750,37 +750,50 @@ static DEFINE_PER_CPU(struct _cpuid4_info *, ici_cpuid4_info);
 static int __cpuinit cache_shared_amd_cpu_map_setup(unsigned int cpu, int index)
 {
 	struct _cpuid4_info *this_leaf;
-	int ret, i, sibling;
-	struct cpuinfo_x86 *c = &cpu_data(cpu);
+	int i, sibling;
 
-	ret = 0;
-	if (index == 3) {
-		ret = 1;
-		for_each_cpu(i, cpu_llc_shared_mask(cpu)) {
+	if (cpu_has_topoext) {
+		unsigned int apicid, nshared, first, last;
+
+		if (!per_cpu(ici_cpuid4_info, cpu))
+			return 0;
+
+		this_leaf = CPUID4_INFO_IDX(cpu, index);
+		nshared = this_leaf->base.eax.split.num_threads_sharing + 1;
+		apicid = cpu_data(cpu).apicid;
+		first = apicid - (apicid % nshared);
+		last = first + nshared - 1;
+
+		for_each_online_cpu(i) {
+			apicid = cpu_data(i).apicid;
+			if ((apicid < first) || (apicid > last))
+				continue;
 			if (!per_cpu(ici_cpuid4_info, i))
 				continue;
 			this_leaf = CPUID4_INFO_IDX(i, index);
-			for_each_cpu(sibling, cpu_llc_shared_mask(cpu)) {
-				if (!cpu_online(sibling))
+
+			for_each_online_cpu(sibling) {
+				apicid = cpu_data(sibling).apicid;
+				if ((apicid < first) || (apicid > last))
 					continue;
 				set_bit(sibling, this_leaf->shared_cpu_map);
 			}
 		}
-	} else if ((c->x86 == 0x15) && ((index == 1) || (index == 2))) {
-		ret = 1;
-		for_each_cpu(i, cpu_sibling_mask(cpu)) {
+	} else if (index == 3) {
+		for_each_cpu(i, cpu_llc_shared_mask(cpu)) {
 			if (!per_cpu(ici_cpuid4_info, i))
 				continue;
 			this_leaf = CPUID4_INFO_IDX(i, index);
-			for_each_cpu(sibling, cpu_sibling_mask(cpu)) {
+			for_each_cpu(sibling, cpu_llc_shared_mask(cpu)) {
 				if (!cpu_online(sibling))
 					continue;
 				set_bit(sibling, this_leaf->shared_cpu_map);
 			}
 		}
-	}
+	} else
+		return 0;
 
-	return ret;
+	return 1;
 }
 
 static void __cpuinit cache_shared_cpu_map_setup(unsigned int cpu, int index)
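
The sharing test added by the hunk above reduces to an APIC-ID range check:
with nshared threads per cache (a power of two by APIC ID construction), two
CPUs share the cache iff their APIC IDs fall into the same nshared-sized,
nshared-aligned block. A hypothetical stand-alone helper (the name and
signature are mine, not the kernel's) restating that logic:

/*
 * Hypothetical helper, shown only to restate the patch's logic:
 * compute the nshared-aligned APIC ID block containing apicid_a
 * and test whether apicid_b falls into the same block.
 */
static int shares_cache(unsigned int apicid_a, unsigned int apicid_b,
			unsigned int nshared)
{
	unsigned int first = apicid_a - (apicid_a % nshared);
	unsigned int last  = first + nshared - 1;

	return apicid_b >= first && apicid_b <= last;
}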

Thread overview: 12+ messages
2012-10-19  8:55 [PATCH 0/4] x86, cacheinfo: Use AMD topology extensions Andreas Herrmann
2012-10-19  8:58 ` [PATCH 1/4] x86: Add cpu_has_topoext Andreas Herrmann
2012-11-13 21:58   ` [tip:x86/cpu] " tip-bot for Andreas Herrmann
2012-10-19  8:59 ` [PATCH 2/4] x86, cacheinfo: Determine number of cache leafs using CPUID 0x8000001d on AMD Andreas Herrmann
2012-11-13 21:59   ` [tip:x86/cpu] " tip-bot for Andreas Herrmann
2012-10-19  9:00 ` [PATCH 3/4] x86, cacheinfo: Make use of CPUID 0x8000001d for cache information " Andreas Herrmann
2012-11-13 22:00   ` [tip:x86/cpu] " tip-bot for Andreas Herrmann
2012-10-19  9:02 ` [PATCH 4/4] x86, cacheinfo: Base cache sharing info on CPUID 0x8000001d " Andreas Herrmann
2012-11-13 22:01   ` tip-bot for Andreas Herrmann [this message]
2012-11-06 20:47 ` [PATCH 0/4] x86, cacheinfo: Use AMD topology extensions Jacob Shin
2012-11-07  9:48   ` H. Peter Anvin
2012-11-13 18:28     ` Jacob Shin
