From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
	Keir Fraser <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Li Yechen <lccycc123@gmail.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Justin Weaver <jtweaver@hawaii.edu>, Matt Wilson <msw@amazon.com>,
	Elena Ufimtseva <ufimtseva@gmail.com>
Subject: [PATCH v2 06/16] xen: sched: make space for cpu_soft_affinity
Date: Wed, 13 Nov 2013 20:11:52 +0100
Message-ID: <20131113191151.18086.37783.stgit@Solace>
In-Reply-To: <20131113190852.18086.5437.stgit@Solace>

Before this change, each vcpu has its own vcpu-affinity
(in v->cpu_affinity), representing the set of pcpus on
which it is allowed to run. Moreover, since NUMA-aware
scheduling was introduced, the scheduler (credit1 only,
for now) tries as hard as it can to run all the vcpus of
a domain on one of the nodes constituting the domain's
node-affinity.

The idea here is to make the mechanism more general by:
 * allowing this 'preference' for some pcpus/nodes to be
   expressed on a per-vcpu basis, rather than for the
   domain as a whole. That is to say, each vcpu gets its
   own set of preferred pcpus/nodes, instead of sharing
   the very same set with all the other vcpus of the
   domain;
 * generalizing the idea of 'preferred pcpus' beyond NUMA
   awareness and support. That is to say, while this is
   (mostly) useful on NUMA systems, it should be possible
   to specify, for each vcpu, a set of pcpus on which it
   prefers to run, in addition to, and possibly unrelated
   to, the set of pcpus on which it is allowed to run.

We will be calling this set of *preferred* pcpus the vcpu's
soft affinity. This change introduces, allocates, frees and
initializes the data structure required to host it in
struct vcpu (cpu_soft_affinity).

The new field is not used anywhere yet, so there is no
functional change.
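
For illustration only (nothing below is part of this patch): the
natural way for a scheduler to consume the new field is to prefer
the intersection of hard and soft affinity, falling back to hard
affinity alone when that intersection is empty, which is roughly
what the later credit1 patches in this series do. A minimal
sketch, with a hypothetical helper name, assuming xen/sched.h and
xen/cpumask.h:

    static inline void vcpu_effective_cpumask(const struct vcpu *v,
                                              cpumask_t *mask)
    {
        /* Prefer pcpus that are both allowed (hard) and preferred (soft). */
        cpumask_and(mask, v->cpu_affinity, v->cpu_soft_affinity);

        /* If the two sets do not intersect, just obey hard affinity. */
        if ( cpumask_empty(mask) )
            cpumask_copy(mask, v->cpu_affinity);
    }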

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v1:
 * this patch does something similar to what was done, in
   v1, by "5/12 xen: numa-sched: make space for per-vcpu
   node-affinity"
---
 xen/common/domain.c     |    3 +++
 xen/common/keyhandler.c |    2 ++
 xen/common/schedule.c   |    2 ++
 xen/include/xen/sched.h |    3 +++
 4 files changed, 10 insertions(+)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 2cbc489..c33b876 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -128,6 +128,7 @@ struct vcpu *alloc_vcpu(
     if ( !zalloc_cpumask_var(&v->cpu_affinity) ||
          !zalloc_cpumask_var(&v->cpu_affinity_tmp) ||
          !zalloc_cpumask_var(&v->cpu_affinity_saved) ||
+         !zalloc_cpumask_var(&v->cpu_soft_affinity) ||
          !zalloc_cpumask_var(&v->vcpu_dirty_cpumask) )
         goto fail_free;
 
@@ -159,6 +160,7 @@ struct vcpu *alloc_vcpu(
         free_cpumask_var(v->cpu_affinity);
         free_cpumask_var(v->cpu_affinity_tmp);
         free_cpumask_var(v->cpu_affinity_saved);
+        free_cpumask_var(v->cpu_soft_affinity);
         free_cpumask_var(v->vcpu_dirty_cpumask);
         free_vcpu_struct(v);
         return NULL;
@@ -737,6 +739,7 @@ static void complete_domain_destroy(struct rcu_head *head)
             free_cpumask_var(v->cpu_affinity);
             free_cpumask_var(v->cpu_affinity_tmp);
             free_cpumask_var(v->cpu_affinity_saved);
+            free_cpumask_var(v->cpu_soft_affinity);
             free_cpumask_var(v->vcpu_dirty_cpumask);
             free_vcpu_struct(v);
         }
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 8e4b3f8..33c9a37 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -298,6 +298,8 @@ static void dump_domains(unsigned char key)
             printk("dirty_cpus=%s ", tmpstr);
             cpuset_print(tmpstr, sizeof(tmpstr), v->cpu_affinity);
             printk("cpu_affinity=%s\n", tmpstr);
+            cpuset_print(tmpstr, sizeof(tmpstr), v->cpu_soft_affinity);
+            printk("cpu_soft_affinity=%s\n", tmpstr);
             printk("    pause_count=%d pause_flags=%lx\n",
                    atomic_read(&v->pause_count), v->pause_flags);
             arch_dump_vcpu_info(v);
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 0f45f07..5731622 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -198,6 +198,8 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     else
         cpumask_setall(v->cpu_affinity);
 
+    cpumask_setall(v->cpu_soft_affinity);
+
     /* Initialise the per-vcpu timers. */
     init_timer(&v->periodic_timer, vcpu_periodic_timer_fn,
                v, v->processor);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index cbdf377..7e00caf 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -198,6 +198,9 @@ struct vcpu
     /* Used to restore affinity across S3. */
     cpumask_var_t    cpu_affinity_saved;
 
+    /* Bitmask of CPUs on which this VCPU prefers to run. */
+    cpumask_var_t    cpu_soft_affinity;
+
     /* Bitmask of CPUs which are holding onto this VCPU's state. */
     cpumask_var_t    vcpu_dirty_cpumask;

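A side note on the API used throughout the patch: zalloc_cpumask_var()
hands back a zeroed mask and can fail (with a large enough NR_CPUS the
mask is allocated off-stack), which is why alloc_vcpu() checks every
call and pairs each allocation with a free_cpumask_var(). A minimal
sketch of that lifecycle, assuming a caller where returning -ENOMEM
makes sense:

    cpumask_var_t mask;

    if ( !zalloc_cpumask_var(&mask) )  /* zeroed on success, may fail */
        return -ENOMEM;

    cpumask_setall(mask);              /* as sched_init_vcpu() does:
                                          default to "no preference" */

    /* ... use the mask ... */

    free_cpumask_var(mask);            /* safe for on- and off-stack masks */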

Thread overview: 62+ messages
2013-11-13 19:10 [PATCH v2 00/16] Implement vcpu soft affinity for credit1 Dario Faggioli
2013-11-13 19:11 ` [PATCH v2 01/16] xl: match output of vcpu-list with pinning syntax Dario Faggioli
2013-11-14 10:50   ` George Dunlap
2013-11-14 11:11     ` Dario Faggioli
2013-11-14 11:14       ` George Dunlap
2013-11-14 11:13     ` Dario Faggioli
2013-11-14 12:44     ` Ian Jackson
2013-11-14 14:19   ` Ian Jackson
2013-11-13 19:11 ` [PATCH v2 02/16] xl: allow for node-wise specification of vcpu pinning Dario Faggioli
2013-11-14 11:02   ` George Dunlap
2013-11-14 14:24   ` Ian Jackson
2013-11-14 14:37     ` Dario Faggioli
2013-11-13 19:11 ` [PATCH v2 03/16] xl: implement and enable dryrun mode for `xl vcpu-pin' Dario Faggioli
2013-11-13 19:11 ` [PATCH v2 04/16] xl: test script for the cpumap parser (for vCPU pinning) Dario Faggioli
2013-11-13 19:11 ` [PATCH v2 05/16] xen: fix leaking of v->cpu_affinity_saved Dario Faggioli
2013-11-14 11:11   ` George Dunlap
2013-11-14 11:58     ` Dario Faggioli
2013-11-14 14:25   ` Ian Jackson
2013-11-13 19:11 ` Dario Faggioli [this message]
2013-11-14 15:03   ` [PATCH v2 06/16] xen: sched: make space for cpu_soft_affinity George Dunlap
2013-11-14 16:14     ` Dario Faggioli
2013-11-15 10:07       ` George Dunlap
2013-11-13 19:12 ` [PATCH v2 07/16] xen: sched: rename v->cpu_affinity into v->cpu_hard_affinity Dario Faggioli
2013-11-14 14:17   ` George Dunlap
2013-11-13 19:12 ` [PATCH v2 08/16] xen: derive NUMA node affinity from hard and soft CPU affinity Dario Faggioli
2013-11-14 15:21   ` George Dunlap
2013-11-14 16:30     ` Dario Faggioli
2013-11-15 10:52       ` George Dunlap
2013-11-15 14:17         ` Dario Faggioli
2013-11-13 19:12 ` [PATCH v2 09/16] xen: sched: DOMCTL_*vcpuaffinity works with hard and soft affinity Dario Faggioli
2013-11-14 14:42   ` George Dunlap
2013-11-14 16:21     ` Dario Faggioli
2013-11-13 19:12 ` [PATCH v2 10/16] xen: sched: use soft-affinity instead of domain's node-affinity Dario Faggioli
2013-11-14 15:30   ` George Dunlap
2013-11-15  0:39     ` Dario Faggioli
2013-11-15 11:23       ` George Dunlap
2013-11-13 19:12 ` [PATCH v2 11/16] libxc: get and set soft and hard affinity Dario Faggioli
2013-11-14 14:58   ` Ian Jackson
2013-11-14 16:18     ` Dario Faggioli
2013-11-14 15:38   ` George Dunlap
2013-11-14 16:41     ` Dario Faggioli
2013-11-13 19:12 ` [PATCH v2 12/16] libxl: get and set soft affinity Dario Faggioli
2013-11-13 19:16   ` Dario Faggioli
2013-11-14 15:11   ` Ian Jackson
2013-11-14 15:55     ` George Dunlap
2013-11-14 16:25       ` Ian Jackson
2013-11-15  5:13         ` Dario Faggioli
2013-11-15 12:02         ` George Dunlap
2013-11-15 17:29           ` Dario Faggioli
2013-11-15  3:45     ` Dario Faggioli
2013-11-13 19:12 ` [PATCH v2 13/16] xl: show soft affinity in `xl vcpu-list' Dario Faggioli
2013-11-14 15:12   ` Ian Jackson
2013-11-13 19:13 ` [PATCH v2 14/16] xl: enable setting soft affinity Dario Faggioli
2013-11-13 19:13 ` [PATCH v2 15/16] xl: enable for specifying node-affinity in the config file Dario Faggioli
2013-11-14 15:14   ` Ian Jackson
2013-11-14 16:12     ` Dario Faggioli
2013-11-13 19:13 ` [PATCH v2 16/16] libxl: automatic NUMA placement affects soft affinity Dario Faggioli
2013-11-14 15:17   ` Ian Jackson
2013-11-14 16:11     ` Dario Faggioli
2013-11-14 16:03   ` George Dunlap
2013-11-14 16:48     ` Dario Faggioli
2013-11-14 17:49       ` George Dunlap
