From: Borislav Petkov <bp@alien8.de>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Huang Rui <ray.huang@amd.com>,
Thomas Gleixner <tglx@linutronix.de>,
Guenter Roeck <linux@roeck-us.net>,
Jean Delvare <jdelvare@suse.de>,
linux-hwmon@vger.kernel.org, linux-kernel@vger.kernel.org,
spg_linux_kernel@amd.com
Subject: Re: [PATCH v5 2/6] hwmon: (fam15h_power) Add compute unit accumulated power
Date: Tue, 29 Mar 2016 09:57:49 +0200
Message-ID: <20160329075749.GB3705@pd.tnic>
In-Reply-To: <20160329073158.GC3408@twins.programming.kicks-ass.net>
On Tue, Mar 29, 2016 at 09:31:58AM +0200, Peter Zijlstra wrote:
> This will not in fact work for Intel, nor if I manage to one day
> randomize our CPU numbers on AMD.
Oh, I know why. I have this 64-CPU box here:
$ grep "core id" /proc/cpuinfo | uniq
core id : 0
core id : 8
core id : 2
core id : 10
core id : 1
core id : 9
core id : 3
core id : 11
core id : 0
core id : 8
core id : 2
core id : 10
core id : 1
core id : 9
core id : 3
core id : 11
Those core IDs repeat and are almost random too :)
I guess we'll need a mask. Maybe as a future exercise...
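FWIW, the kernel already exports per-CPU sibling masks in sysfs (see
Documentation/cputopology.txt), so poking at those would be a start; a
minimal sketch, assuming the standard topology files:
$ cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
$ cat /sys/devices/system/cpu/cpu0/topology/core_siblings_list
(thread_siblings_list should name CPU0 plus its HT sibling(s),
core_siblings_list all CPUs in the same physical package.)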
That box's topology has other funsies like this:
$ grep -E -B 2 "core id\s+: 0" /proc/cpuinfo
physical id : 0
siblings : 16
core id : 0
--
physical id : 1
siblings : 16
core id : 0
--
physical id : 2
siblings : 16
core id : 0
--
physical id : 3
siblings : 16
core id : 0
--
physical id : 0
siblings : 16
core id : 0
--
physical id : 1
siblings : 16
core id : 0
--
physical id : 2
siblings : 16
core id : 0
--
physical id : 3
siblings : 16
core id : 0
So in order to dig out which HT threads belong together, I need to look
at the (core id, physical id) pair.
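A quick awk sketch of that pairing (untested, and assuming the usual
/proc/cpuinfo layout where "processor" and "physical id" precede
"core id" in each per-CPU block):
$ awk '/^processor/   { cpu  = $NF }
       /^physical id/ { phys = $NF }
       /^core id/     { key = phys "," $NF; sib[key] = sib[key] " " cpu }
       END { for (k in sib) print "(phys,core) (" k "):" sib[k] }' /proc/cpuinfo
Each output line should then list the HT threads sharing one
(physical id, core id) pair.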
I guess this is how we "fix" the schedulers of other OSes - by playing
topology games...
--
Regards/Gruss,
Boris.
ECO tip #101: Trim your mails when you reply.