From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
keir@xen.org, Jan Beulich <JBeulich@suse.com>,
xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 0/2] Credit2: fix per-socket runqueue setup
Date: Tue, 2 Sep 2014 18:46:17 +0200
Message-ID: <1409676377.2673.12.camel@Solace.lan>
In-Reply-To: <54047BBB.3050507@eu.citrix.com>
On Mon, 2014-09-01 at 14:59 +0100, George Dunlap wrote:
> On 08/25/2014 09:31 AM, Jan Beulich wrote:
> >>>> On 22.08.14 at 19:15, <dario.faggioli@citrix.com> wrote:
> >> root@tg03:~# xl dmesg |grep -i runqueue
> >> (XEN) Adding cpu 0 to runqueue 1
> >> (XEN) First cpu on runqueue, activating
> >> (XEN) Adding cpu 1 to runqueue 1
> >> (XEN) Adding cpu 2 to runqueue 1
> >> (XEN) Adding cpu 3 to runqueue 1
> >> (XEN) Adding cpu 4 to runqueue 1
> >> (XEN) Adding cpu 5 to runqueue 1
> >> (XEN) Adding cpu 6 to runqueue 1
> >> (XEN) Adding cpu 7 to runqueue 1
> >> (XEN) Adding cpu 8 to runqueue 1
> >> (XEN) Adding cpu 9 to runqueue 1
> >> (XEN) Adding cpu 10 to runqueue 1
> >> (XEN) Adding cpu 11 to runqueue 1
> >> (XEN) Adding cpu 12 to runqueue 0
> >> (XEN) First cpu on runqueue, activating
> >> (XEN) Adding cpu 13 to runqueue 0
> >> (XEN) Adding cpu 14 to runqueue 0
> >> (XEN) Adding cpu 15 to runqueue 0
> >> (XEN) Adding cpu 16 to runqueue 0
> >> (XEN) Adding cpu 17 to runqueue 0
> >> (XEN) Adding cpu 18 to runqueue 0
> >> (XEN) Adding cpu 19 to runqueue 0
> >> (XEN) Adding cpu 20 to runqueue 0
> >> (XEN) Adding cpu 21 to runqueue 0
> >> (XEN) Adding cpu 22 to runqueue 0
> >> (XEN) Adding cpu 23 to runqueue 0
> >>
> >> Which makes a lot more sense. :-)
> > But it looks suspicious that the low numbered CPUs get assigned to
> > runqueue 1. Is there an explanation for this, or are surprises to be
> > expected on larger than dual-socket systems?
>
Not sure what kind of surprises you're thinking of, but I have a big box
handy. I'll test the new version of the series on it, and report what
happens.
> Well the explanation is most likely from the cpu_topology info from the
> cover letter (0/2): On his machine, cpus 0-11 are on socket 1, and cpus
> 12-23 are on socket 0.
>
Exactly, here it is again, coming from `xl info -n'.
cpu_topology           :
cpu:    core    socket     node
  0:       0        1        0
  1:       0        1        0
  2:       1        1        0
  3:       1        1        0
  4:       2        1        0
  5:       2        1        0
  6:       8        1        0
  7:       8        1        0
  8:       9        1        0
  9:       9        1        0
 10:      10        1        0
 11:      10        1        0
 12:       0        0        1
 13:       0        0        1
 14:       1        0        1
 15:       1        0        1
 16:       2        0        1
 17:       2        0        1
 18:       8        0        1
 19:       8        0        1
 20:       9        0        1
 21:       9        0        1
 22:      10        0        1
 23:      10        0        1
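To make it concrete why the low-numbered cpus end up on runqueue 1, here is a
minimal standalone sketch (not the actual Credit2 code: the cpu-to-socket table
is just copied from the output above, and the assumption that the per-socket
runqueue index is simply the socket id is mine). Compiled and run, it reproduces
the `xl dmesg' lines quoted at the top of the thread:

/* Simplified model of per-socket runqueue assignment (illustrative only). */
#include <stdio.h>

#define NR_CPUS 24

/* socket id of each cpu, copied from the `xl info -n' output above */
static const int cpu_to_socket[NR_CPUS] = {
    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,   /* cpus 0-11  -> socket 1 */
    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0    /* cpus 12-23 -> socket 0 */
};

int main(void)
{
    int cpu, rqi;
    int active[NR_CPUS] = { 0 };   /* runqueues that have been activated */

    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
    {
        rqi = cpu_to_socket[cpu];  /* assumed: runqueue index == socket id */
        printf("Adding cpu %d to runqueue %d\n", cpu, rqi);
        if ( !active[rqi] )
        {
            printf("First cpu on runqueue, activating\n");
            active[rqi] = 1;
        }
    }
    return 0;
}

IOW, with cpus 0-11 on socket 1 and cpus 12-23 on socket 0, runqueue 1 is
simply the first one to see a cpu, which is why it is activated first.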
> Why that's the topology reported (I presume in
> ACPI?) I'm not sure.
>
Me neither. BTW, on bare metal, here's what I see:
root@tg03:~# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22
node 0 size: 18432 MB
node 0 free: 17927 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23
node 1 size: 18419 MB
node 1 free: 17926 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
Also:
root@tg03:~# for i in `seq 0 23`;do echo "CPU$i is on socket `cat /sys/bus/cpu/devices/cpu$i/topology/physical_package_id`";done
CPU0 is on socket 1
CPU1 is on socket 0
CPU2 is on socket 1
CPU3 is on socket 0
CPU4 is on socket 1
CPU5 is on socket 0
CPU6 is on socket 1
CPU7 is on socket 0
CPU8 is on socket 1
CPU9 is on socket 0
CPU10 is on socket 1
CPU11 is on socket 0
CPU12 is on socket 1
CPU13 is on socket 0
CPU14 is on socket 1
CPU15 is on socket 0
CPU16 is on socket 1
CPU17 is on socket 0
CPU18 is on socket 1
CPU19 is on socket 0
CPU20 is on socket 1
CPU21 is on socket 0
CPU22 is on socket 1
CPU23 is on socket 0
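In case it's useful, the same check can also be done with a tiny C program
instead of the shell loop (purely illustrative, not part of any existing tool;
it just reads the same sysfs file for each of the 24 cpus on this box):

/* Print the socket (physical package) id Linux reports for each cpu. */
#include <stdio.h>

int main(void)
{
    char path[128];
    int cpu, socket;
    FILE *f;

    for ( cpu = 0; cpu < 24; cpu++ )
    {
        snprintf(path, sizeof(path),
                 "/sys/bus/cpu/devices/cpu%d/topology/physical_package_id",
                 cpu);
        f = fopen(path, "r");
        if ( !f || fscanf(f, "%d", &socket) != 1 )
        {
            if ( f )
                fclose(f);
            continue;   /* cpu not present or sysfs entry missing */
        }
        fclose(f);
        printf("CPU%d is on socket %d\n", cpu, socket);
    }
    return 0;
}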
I've noticed this before but, TBH, I never dug into the cause of the
discrepancy between what Xen and Linux report.
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)