From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xen.org
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
Keir Fraser <keir@xen.org>,
Ian Campbell <Ian.Campbell@citrix.com>,
Li Yechen <lccycc123@gmail.com>,
George Dunlap <george.dunlap@eu.citrix.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
Juergen Gross <juergen.gross@ts.fujitsu.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>,
Jan Beulich <JBeulich@suse.com>,
Justin Weaver <jtweaver@hawaii.edu>, Matt Wilson <msw@amazon.com>,
Elena Ufimtseva <ufimtseva@gmail.com>
Subject: [PATCH v5 12/17] xen/libxc: sched: DOMCTL_*vcpuaffinity works with hard and soft affinity
Date: Mon, 02 Dec 2013 19:29:08 +0100
Message-ID: <20131202182908.29026.23720.stgit@Solace>
In-Reply-To: <20131202180129.29026.81543.stgit@Solace>
Make DOMCTL_setvcpuaffinity and DOMCTL_getvcpuaffinity work with both hard
and soft affinity, by adding a flag with which the caller specifies which
of the two (or both) the request refers to.
At the same time, let the caller retrieve the "effective affinity" of the
vCPU. For hard affinity, that is the intersection of the cpupool's cpus and
the (new) hard affinity; for soft affinity, it is the intersection of the
cpupool's cpus, the (new) hard affinity and the (new) soft affinity. In
fact, no matter what has been successfully set with the
DOMCTL_setvcpuaffinity hypercall, the Xen scheduler will never run a vCPU
outside of its hard affinity or outside of its domain's cpupool.
This is done by adding another cpumap to the interface and by making both
cpumaps IN/OUT parameters for DOMCTL_setvcpuaffinity (for
DOMCTL_getvcpuaffinity they are, of course, OUT-only).
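To make the new flags and IN/OUT semantics concrete, here is a minimal,
stand-alone C sketch (not part of the patch, and not Xen code) of the
intersections the hypercall now reports back. CPU masks are modelled as
plain uint64_t bitmaps; apart from the two XEN_VCPUAFFINITY_* constants,
every name in it is made up purely for illustration.

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

/* Flag values as defined by this patch in xen/include/public/domctl.h. */
#define XEN_VCPUAFFINITY_HARD (1U << 0)
#define XEN_VCPUAFFINITY_SOFT (1U << 1)

/*
 * Toy model of the "effective affinity" that XEN_DOMCTL_setvcpuaffinity
 * writes back into the (IN/OUT) cpumaps.  One bit per CPU, for
 * illustration only.
 */
static void report_effective(uint32_t flags, uint64_t online,
                             uint64_t *hard, uint64_t *soft)
{
    if ( flags & XEN_VCPUAFFINITY_HARD )
        /* Effective hard affinity: requested hard mask & cpupool's online CPUs. */
        *hard &= online;

    if ( flags & XEN_VCPUAFFINITY_SOFT )
        /* Effective soft affinity: additionally clipped by the (new) hard affinity. */
        *soft &= online & *hard;
}

int main(void)
{
    uint64_t online = 0x0f;   /* cpupool has CPUs 0-3 online          */
    uint64_t hard   = 0x06;   /* caller asks for hard affinity {1,2}  */
    uint64_t soft   = 0x0c;   /* caller asks for soft affinity {2,3}  */

    report_effective(XEN_VCPUAFFINITY_HARD | XEN_VCPUAFFINITY_SOFT,
                     online, &hard, &soft);

    /* Prints hard=0x6 (CPUs 1,2) and soft=0x4 (CPU 2 only). */
    printf("effective hard=%#" PRIx64 " soft=%#" PRIx64 "\n", hard, soft);
    return 0;
}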
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>
---
Changes since v4:
* make both the cpumaps IN/OUT and use them for reporting back
both effective hard and soft affinity, as requested during
review;
* fix the arguments' names, the comments and the annotations in
the public header accordingly, as requested during review.
Changes since v3:
* no longer discarding possible errors. Also, roll back setting
hard affinity if setting soft affinity fails afterwards, so
that the caller really sees no changes when the call fails,
as requested during review (see the sketch after this changelog);
* fixed -EFAULT --> -ENOMEM in case of a failed memory allocation,
as requested during review;
* removed an unnecessary use of a pointer to pointer, as requested
during review.
Changes since v2:
* in DOMCTL_[sg]etvcpuaffinity, flag is really a flag now,
i.e., we accept requests for setting and getting: (1) only
hard affinity; (2) only soft affinity; (3) both; as
suggested during review.
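To illustrate the rollback mentioned in the v3 notes above, here is a
stand-alone C sketch of the same "all or nothing" pattern the domctl now
follows. It is purely illustrative, not hypervisor code: try_set() is a
hypothetical stand-in for vcpu_set_hard_affinity()/vcpu_set_soft_affinity(),
and its online-mask check exists only so the failure path can be exercised.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define ONLINE_MASK 0x0fULL   /* pretend CPUs 0-3 are online */

/* Hypothetical stand-in for the affinity setters: fail if the requested
 * mask contains no online CPU, so we have an error path to demonstrate. */
static int try_set(uint64_t *cur, uint64_t new_mask)
{
    if ( !(new_mask & ONLINE_MASK) )
        return -EINVAL;
    *cur = new_mask;
    return 0;
}

/*
 * "All or nothing" update, mirroring the domctl: if setting soft affinity
 * fails after hard affinity has already been changed, restore the old hard
 * affinity, so the caller sees no change at all when the call fails.
 */
static int set_both(uint64_t *hard, uint64_t *soft,
                    uint64_t new_hard, uint64_t new_soft)
{
    uint64_t old_hard = *hard;            /* saved for possible rollback */
    int ret = try_set(hard, new_hard);

    if ( ret )
        return ret;

    ret = try_set(soft, new_soft);
    if ( ret )
        try_set(hard, old_hard);          /* roll the hard affinity back */

    return ret;
}

int main(void)
{
    uint64_t hard = 0x0f, soft = 0x0f;

    /* new_soft has no online CPUs: the call fails and hard stays 0xf. */
    int ret = set_both(&hard, &soft, 0x03, 0xf0);
    printf("ret=%d hard=%#llx soft=%#llx\n", ret,
           (unsigned long long)hard, (unsigned long long)soft);
    return 0;
}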
---
tools/libxc/xc_domain.c | 12 +++--
xen/arch/x86/traps.c | 4 +-
xen/common/domctl.c | 111 +++++++++++++++++++++++++++++++++++++++----
xen/common/schedule.c | 35 +++++++++-----
xen/common/wait.c | 6 +-
xen/include/public/domctl.h | 17 ++++++-
xen/include/xen/sched.h | 3 +
7 files changed, 154 insertions(+), 34 deletions(-)
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 1ccafc5..8c807a6 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -215,13 +215,14 @@ int xc_vcpu_setaffinity(xc_interface *xch,
domctl.cmd = XEN_DOMCTL_setvcpuaffinity;
domctl.domain = (domid_t)domid;
- domctl.u.vcpuaffinity.vcpu = vcpu;
+ domctl.u.vcpuaffinity.vcpu = vcpu;
+ domctl.u.vcpuaffinity.flags = XEN_VCPUAFFINITY_HARD;
memcpy(local, cpumap, cpusize);
- set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
+ set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_hard.bitmap, local);
- domctl.u.vcpuaffinity.cpumap.nr_bits = cpusize * 8;
+ domctl.u.vcpuaffinity.cpumap_hard.nr_bits = cpusize * 8;
ret = do_domctl(xch, &domctl);
@@ -259,9 +260,10 @@ int xc_vcpu_getaffinity(xc_interface *xch,
domctl.cmd = XEN_DOMCTL_getvcpuaffinity;
domctl.domain = (domid_t)domid;
domctl.u.vcpuaffinity.vcpu = vcpu;
+ domctl.u.vcpuaffinity.flags = XEN_VCPUAFFINITY_HARD;
- set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
- domctl.u.vcpuaffinity.cpumap.nr_bits = cpusize * 8;
+ set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_hard.bitmap, local);
+ domctl.u.vcpuaffinity.cpumap_hard.nr_bits = cpusize * 8;
ret = do_domctl(xch, &domctl);
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 157031e..ff4523b 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -3103,7 +3103,7 @@ static void nmi_mce_softirq(void)
* Make sure to wakeup the vcpu on the
* specified processor.
*/
- vcpu_set_affinity(st->vcpu, cpumask_of(st->processor));
+ vcpu_set_hard_affinity(st->vcpu, cpumask_of(st->processor));
/* Affinity is restored in the iret hypercall. */
}
@@ -3132,7 +3132,7 @@ void async_exception_cleanup(struct vcpu *curr)
if ( !cpumask_empty(curr->cpu_hard_affinity_tmp) &&
!cpumask_equal(curr->cpu_hard_affinity_tmp, curr->cpu_hard_affinity) )
{
- vcpu_set_affinity(curr, curr->cpu_hard_affinity_tmp);
+ vcpu_set_hard_affinity(curr, curr->cpu_hard_affinity_tmp);
cpumask_clear(curr->cpu_hard_affinity_tmp);
}
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 5e0ac5c..9eecb5e 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -287,6 +287,16 @@ void domctl_lock_release(void)
spin_unlock(&current->domain->hypercall_deadlock_mutex);
}
+static inline
+int vcpuaffinity_params_invalid(const xen_domctl_vcpuaffinity_t *vcpuaff)
+{
+ return vcpuaff->flags == 0 ||
+ (vcpuaff->flags & XEN_VCPUAFFINITY_HARD &&
+ guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) ||
+ (vcpuaff->flags & XEN_VCPUAFFINITY_SOFT &&
+ guest_handle_is_null(vcpuaff->cpumap_soft.bitmap));
+}
+
long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
{
long ret = 0;
@@ -605,31 +615,112 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
case XEN_DOMCTL_getvcpuaffinity:
{
struct vcpu *v;
+ xen_domctl_vcpuaffinity_t *vcpuaff = &op->u.vcpuaffinity;
ret = -EINVAL;
- if ( op->u.vcpuaffinity.vcpu >= d->max_vcpus )
+ if ( vcpuaff->vcpu >= d->max_vcpus )
break;
ret = -ESRCH;
- if ( (v = d->vcpu[op->u.vcpuaffinity.vcpu]) == NULL )
+ if ( (v = d->vcpu[vcpuaff->vcpu]) == NULL )
break;
if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
{
- cpumask_var_t new_affinity;
+ cpumask_var_t new_affinity, old_affinity;
+            cpumask_t *online = cpupool_online_cpumask(v->domain->cpupool);
+
+ /*
+             * We want to be able to restore hard affinity if we are trying to
+             * set both, and changing soft affinity (which happens later, when
+             * hard affinity has already been successfully changed) fails.
+ */
+ if ( !alloc_cpumask_var(&old_affinity) )
+ {
+ ret = -ENOMEM;
+ break;
+ }
+ cpumask_copy(old_affinity, v->cpu_hard_affinity);
- ret = xenctl_bitmap_to_cpumask(
- &new_affinity, &op->u.vcpuaffinity.cpumap);
- if ( !ret )
+ if ( !alloc_cpumask_var(&new_affinity) )
{
- ret = vcpu_set_affinity(v, new_affinity);
- free_cpumask_var(new_affinity);
+ free_cpumask_var(old_affinity);
+ ret = -ENOMEM;
+ break;
}
+
+ ret = -EINVAL;
+            if ( vcpuaffinity_params_invalid(vcpuaff) )
+ goto setvcpuaffinity_out;
+
+ /*
+             * We both set the new affinity and report back to the caller what
+             * the scheduler will effectively be using.
+ */
+ if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
+ {
+ ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity),
+ &vcpuaff->cpumap_hard,
+ vcpuaff->cpumap_hard.nr_bits);
+ if ( !ret )
+ ret = vcpu_set_hard_affinity(v, new_affinity);
+ if ( ret )
+ goto setvcpuaffinity_out;
+
+ /*
+ * For hard affinity, what we return is the intersection of
+ * cpupool's online mask and the new hard affinity.
+ */
+ cpumask_and(new_affinity, online, v->cpu_hard_affinity);
+ ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard,
+ new_affinity);
+ }
+ if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT )
+ {
+ ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity),
+ &vcpuaff->cpumap_soft,
+ vcpuaff->cpumap_soft.nr_bits);
+                if ( !ret )
+ ret = vcpu_set_soft_affinity(v, new_affinity);
+ if ( ret )
+ {
+ /*
+                     * Since we're returning an error, the caller expects that
+                     * nothing has happened, so we roll back the changes to
+                     * hard affinity (if any).
+ */
+ if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
+ vcpu_set_hard_affinity(v, old_affinity);
+ goto setvcpuaffinity_out;
+ }
+
+ /*
+                 * For soft affinity, we return the intersection of the new
+                 * soft affinity, the cpupool's online map and the (new)
+                 * hard affinity.
+ */
+ cpumask_and(new_affinity, new_affinity, online);
+ cpumask_and(new_affinity, new_affinity, v->cpu_hard_affinity);
+ ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft,
+ new_affinity);
+ }
+
+ setvcpuaffinity_out:
+ free_cpumask_var(new_affinity);
+ free_cpumask_var(old_affinity);
}
else
{
- ret = cpumask_to_xenctl_bitmap(
- &op->u.vcpuaffinity.cpumap, v->cpu_hard_affinity);
+ ret = -EINVAL;
+            if ( vcpuaffinity_params_invalid(vcpuaff) )
+ break;
+
+ if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
+ ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard,
+ v->cpu_hard_affinity);
+ if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT )
+ ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft,
+ v->cpu_soft_affinity);
}
}
break;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index c9ae521..b1e9b08 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -654,22 +654,14 @@ void sched_set_node_affinity(struct domain *d, nodemask_t *mask)
SCHED_OP(DOM2OP(d), set_node_affinity, d, mask);
}
-int vcpu_set_affinity(struct vcpu *v, const cpumask_t *affinity)
+static int vcpu_set_affinity(
+ struct vcpu *v, const cpumask_t *affinity, cpumask_t *which)
{
- cpumask_t online_affinity;
- cpumask_t *online;
spinlock_t *lock;
- if ( v->domain->is_pinned )
- return -EINVAL;
- online = VCPU2ONLINE(v);
- cpumask_and(&online_affinity, affinity, online);
- if ( cpumask_empty(&online_affinity) )
- return -EINVAL;
-
lock = vcpu_schedule_lock_irq(v);
- cpumask_copy(v->cpu_hard_affinity, affinity);
+ cpumask_copy(which, affinity);
/* Always ask the scheduler to re-evaluate placement
* when changing the affinity */
@@ -688,6 +680,27 @@ int vcpu_set_affinity(struct vcpu *v, const cpumask_t *affinity)
return 0;
}
+int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity)
+{
+ cpumask_t online_affinity;
+ cpumask_t *online;
+
+ if ( v->domain->is_pinned )
+ return -EINVAL;
+
+ online = VCPU2ONLINE(v);
+ cpumask_and(&online_affinity, affinity, online);
+ if ( cpumask_empty(&online_affinity) )
+ return -EINVAL;
+
+ return vcpu_set_affinity(v, affinity, v->cpu_hard_affinity);
+}
+
+int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity)
+{
+ return vcpu_set_affinity(v, affinity, v->cpu_soft_affinity);
+}
+
/* Block the currently-executing domain until a pertinent event occurs. */
void vcpu_block(void)
{
diff --git a/xen/common/wait.c b/xen/common/wait.c
index 3f6ff41..1f6b597 100644
--- a/xen/common/wait.c
+++ b/xen/common/wait.c
@@ -135,7 +135,7 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
/* Save current VCPU affinity; force wakeup on *this* CPU only. */
wqv->wakeup_cpu = smp_processor_id();
cpumask_copy(&wqv->saved_affinity, curr->cpu_hard_affinity);
- if ( vcpu_set_affinity(curr, cpumask_of(wqv->wakeup_cpu)) )
+ if ( vcpu_set_hard_affinity(curr, cpumask_of(wqv->wakeup_cpu)) )
{
gdprintk(XENLOG_ERR, "Unable to set vcpu affinity\n");
domain_crash_synchronous();
@@ -166,7 +166,7 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
static void __finish_wait(struct waitqueue_vcpu *wqv)
{
wqv->esp = NULL;
- (void)vcpu_set_affinity(current, &wqv->saved_affinity);
+ (void)vcpu_set_hard_affinity(current, &wqv->saved_affinity);
}
void check_wakeup_from_wait(void)
@@ -184,7 +184,7 @@ void check_wakeup_from_wait(void)
/* Re-set VCPU affinity and re-enter the scheduler. */
struct vcpu *curr = current;
cpumask_copy(&wqv->saved_affinity, curr->cpu_hard_affinity);
- if ( vcpu_set_affinity(curr, cpumask_of(wqv->wakeup_cpu)) )
+ if ( vcpu_set_hard_affinity(curr, cpumask_of(wqv->wakeup_cpu)) )
{
gdprintk(XENLOG_ERR, "Unable to set vcpu affinity\n");
domain_crash_synchronous();
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 01a3652..d44a775 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -300,8 +300,21 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_nodeaffinity_t);
/* XEN_DOMCTL_setvcpuaffinity */
/* XEN_DOMCTL_getvcpuaffinity */
struct xen_domctl_vcpuaffinity {
- uint32_t vcpu; /* IN */
- struct xenctl_bitmap cpumap; /* IN/OUT */
+ /* IN variables. */
+ uint32_t vcpu;
+ /* Set/get the hard affinity for vcpu */
+#define _XEN_VCPUAFFINITY_HARD 0
+#define XEN_VCPUAFFINITY_HARD (1U<<_XEN_VCPUAFFINITY_HARD)
+ /* Set/get the soft affinity for vcpu */
+#define _XEN_VCPUAFFINITY_SOFT 1
+#define XEN_VCPUAFFINITY_SOFT (1U<<_XEN_VCPUAFFINITY_SOFT)
+ uint32_t flags;
+ /*
+ * IN/OUT variables. Both are IN/OUT for XEN_DOMCTL_setvcpuaffinity
+ * and OUT-only for XEN_DOMCTL_getvcpuaffinity.
+ */
+ struct xenctl_bitmap cpumap_hard;
+ struct xenctl_bitmap cpumap_soft;
};
typedef struct xen_domctl_vcpuaffinity xen_domctl_vcpuaffinity_t;
DEFINE_XEN_GUEST_HANDLE(xen_domctl_vcpuaffinity_t);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3575312..0f728b3 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -755,7 +755,8 @@ void scheduler_free(struct scheduler *sched);
int schedule_cpu_switch(unsigned int cpu, struct cpupool *c);
void vcpu_force_reschedule(struct vcpu *v);
int cpu_disable_scheduler(unsigned int cpu);
-int vcpu_set_affinity(struct vcpu *v, const cpumask_t *affinity);
+int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity);
+int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity);
void restore_vcpu_affinity(struct domain *d);
void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate);
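As background on why the two affinities are reported separately: hard
affinity is a strict constraint (as the commit message says, the scheduler
never runs a vCPU outside of it), while soft affinity only expresses a
preference among the hard-allowed CPUs. The toy, stand-alone C sketch below
illustrates that distinction with plain bitmaps; it is not the credit
scheduler's actual placement logic, and every name in it is invented for
illustration.

#include <stdint.h>
#include <stdio.h>

/*
 * Toy CPU-picking policy:
 *  - hard affinity is a strict constraint: only CPUs in (idle & hard) are
 *    ever eligible;
 *  - soft affinity is a preference: among the eligible CPUs, those also in
 *    the soft mask are tried first.
 * Returns the chosen CPU index, or -1 if no eligible CPU exists.
 */
static int pick_cpu(uint64_t idle, uint64_t hard, uint64_t soft)
{
    uint64_t eligible  = idle & hard;
    uint64_t preferred = eligible & soft;
    uint64_t pool = preferred ? preferred : eligible;
    int cpu;

    if ( !pool )
        return -1;

    for ( cpu = 0; cpu < 64; cpu++ )
        if ( pool & (1ULL << cpu) )
            return cpu;

    return -1;
}

int main(void)
{
    /* CPUs 0-3 idle; hard affinity {1,2,3}; soft affinity {3}: picks CPU 3. */
    printf("picked CPU %d\n", pick_cpu(0x0f, 0x0e, 0x08));
    /* Soft-preferred CPU 3 busy: falls back to a CPU in the hard mask (1). */
    printf("picked CPU %d\n", pick_cpu(0x07, 0x0e, 0x08));
    return 0;
}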