From: Justin Weaver <jtweaver@hawaii.edu>
To: xen-devel@lists.xen.org
Cc: Marcus.Granado@eu.citrix.com, Justin Weaver <jtweaver@hawaii.edu>,
george.dunlap@eu.citrix.com, dario.faggioli@citrix.com,
esb@ics.hawaii.edu, henric@hawaii.edu,
juergen.gross@ts.fujitsu.com
Subject: [PATCH v3] Xen sched: Fix multiple runqueues in credit2
Date: Sat, 8 Feb 2014 15:57:46 -1000
Message-ID: <1391911066-2572-1-git-send-email-jtweaver@hawaii.edu>
This patch addresses the Xen credit2 scheduler creating only one
vCPU run queue on systems with multiple physical processors. It
should create one run queue per physical processor (socket).
CPU 0 does not get a STARTING callback, so it is hard-coded to run
queue 0. At the time it is initialized, its socket information is
not yet available.
Socket information is available for every other CPU when it gets
its STARTING callback (and by that time it is also available for
CPU 0). Each CPU is then assigned to a run queue based on its
socket.
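As a sketch, the assignment rules above can be written as a standalone helper. pick_runqueue is a name invented here for illustration; the patch itself implements this inline in init_pcpu using cpu_to_socket().

```c
/*
 * Illustrative sketch of the run queue selection described above
 * (hypothetical helper; the actual patch does this inline in
 * init_pcpu). cpu0_socket is the socket CPU 0 actually sits on,
 * and cpu_socket is the socket of the CPU being brought up.
 */
int pick_runqueue(int cpu, int cpu_socket, int cpu0_socket)
{
    if ( cpu == 0 || cpu_socket == cpu0_socket )
        return 0;            /* CPU 0 and its socket-mates share run queue 0 */
    else if ( cpu_socket == 0 )
        return cpu0_socket;  /* socket 0 swaps into CPU 0's slot */
    else
        return cpu_socket;   /* otherwise run queue index == socket index */
}
```

For example, if CPU 0 happens to be on socket 1, then socket 1's CPUs land on run queue 0 and socket 0's CPUs land on run queue 1, so no run queue index is wasted.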
Signed-off-by: Justin Weaver <jtweaver@hawaii.edu>
---
Changes from v2:
* removed extra blank line
Changes from v1:
* moved comments to the top of the section in one long comment block
* collapsed code to improve readability
* fixed else if indentation style
* updated comment about the runqueue plan
---
xen/common/sched_credit2.c | 40 ++++++++++++++++++++++++++--------------
1 file changed, 26 insertions(+), 14 deletions(-)
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 4e68375..14d2e37 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -85,8 +85,8 @@
* to a small value, and a fixed credit is added to everyone.
*
* The plan is for all cores that share an L2 will share the same
- * runqueue. At the moment, there is one global runqueue for all
- * cores.
+ * runqueue. At the moment, all cores that share a socket share the same
+ * runqueue.
*/
/*
@@ -1945,6 +1945,8 @@ static void deactivate_runqueue(struct csched_private *prv, int rqi)
static void init_pcpu(const struct scheduler *ops, int cpu)
{
int rqi;
+ int cpu0_socket;
+ int cpu_socket;
unsigned long flags;
struct csched_private *prv = CSCHED_PRIV(ops);
struct csched_runqueue_data *rqd;
@@ -1959,15 +1961,25 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
return;
}
- /* Figure out which runqueue to put it in */
+ /*
+ * Choose which run queue to add cpu to based on its socket.
+ * If it's CPU 0, hard code it to run queue 0 (it doesn't get a STARTING
+ * callback and socket information is not yet available for it).
+ * If cpu is on the same socket as CPU 0, add it to run queue 0 with CPU 0.
+ * Else if cpu is on socket 0, add it to a run queue based on the socket
+ * CPU 0 is actually on.
+ * Else add it to a run queue based on its own socket.
+ */
rqi = 0;
+ cpu_socket = cpu_to_socket(cpu);
+ cpu0_socket = cpu_to_socket(0);
- /* Figure out which runqueue to put it in */
- /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
- if ( cpu == 0 )
- rqi = 0;
+ if ( cpu == 0 || cpu_socket == cpu0_socket )
+ rqi = 0;
+ else if ( cpu_socket == 0 )
+ rqi = cpu0_socket;
else
- rqi = cpu_to_socket(cpu);
+ rqi = cpu_socket;
if ( rqi < 0 )
{
@@ -2010,13 +2022,11 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
static void *
csched_alloc_pdata(const struct scheduler *ops, int cpu)
{
- /* Check to see if the cpu is online yet */
- /* Note: cpu 0 doesn't get a STARTING callback */
- if ( cpu == 0 || cpu_to_socket(cpu) >= 0 )
+ /* This function only needs to call init_pcpu for CPU 0,
+ * because it does not get a STARTING callback */
+
+ if ( cpu == 0 )
init_pcpu(ops, cpu);
- else
- printk("%s: cpu %d not online yet, deferring initializatgion\n",
- __func__, cpu);
return (void *)1;
}
@@ -2072,6 +2082,8 @@ csched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
static int
csched_cpu_starting(int cpu)
{
+ /* This function calls init_pcpu for every CPU except CPU 0 */
+
struct scheduler *ops;
/* Hope this is safe from cpupools switching things around. :-) */
--
1.7.10.4
Thread overview: 4+ messages
2014-02-09 1:57 Justin Weaver [this message]
2014-02-10 8:52 ` [PATCH v3] Xen sched: Fix multiple runqueues in credit2 Jan Beulich
2014-02-10 9:52 ` Dario Faggioli
2014-02-10 10:01 ` Jan Beulich