From: Wei Liu <wei.liu2@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: "Andrew Cooper" <andrew.cooper3@citrix.com>,
"Wei Liu" <wei.liu2@citrix.com>,
"Jan Beulich" <JBeulich@suse.com>,
"Roger Pau Monné" <roger.pau@citrix.com>
Subject: [PATCH v3 2/5] x86: modify setup_dom0_vcpu to use dom0_cpus internally
Date: Mon, 20 Mar 2017 14:14:23 +0000
Message-ID: <20170320141426.20780-3-wei.liu2@citrix.com>
In-Reply-To: <20170320141426.20780-1-wei.liu2@citrix.com>
We will later move the dom0 builders to separate directories. To avoid
having to make dom0_cpus visible outside of dom0_build.c, modify
setup_dom0_vcpu to cycle through dom0_cpus internally instead of
relying on the callers to do that.
No functional change.
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
v3: new
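
Note for reviewers: the vcpu0 hunk relies on cpumask_cycle() wrapping
around, i.e. cycling from cpumask_last(&dom0_cpus) lands on
cpumask_first(&dom0_cpus), so vcpu0 still ends up on the same pcpu as
before, and each later vcpu continues round-robin from its
predecessor's v->processor. The standalone sketch below is not Xen
code; it emulates the cycle semantics on a plain bitmask with
hypothetical names, purely to illustrate the wrap-around:

/*
 * Standalone sketch (not Xen code): emulate cpumask_cycle()'s wrap-around
 * on a plain bitmask.  Cycling from the last set bit returns the first set
 * bit, which is why alloc_dom0_vcpu0() can pass cpumask_last() and vcpu0
 * still lands on cpumask_first().
 */
#include <assert.h>
#include <stdio.h>

#define NR_CPUS 16

/* Next bit set in mask strictly after n, wrapping around. */
static unsigned int cycle(unsigned int n, unsigned int mask)
{
    unsigned int i;

    for ( i = 1; i <= NR_CPUS; i++ )
    {
        unsigned int next = (n + i) % NR_CPUS;

        if ( mask & (1u << next) )
            return next;
    }

    return NR_CPUS; /* empty mask */
}

int main(void)
{
    unsigned int dom0_cpus = 0x5a; /* pcpus 1, 3, 4 and 6 */

    assert(cycle(6, dom0_cpus) == 1); /* last -> wraps around to first */
    assert(cycle(1, dom0_cpus) == 3); /* then round-robin as before */
    printf("wrap-around OK\n");

    return 0;
}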
---
xen/arch/x86/dom0_build.c | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 1c723c9ef1..7ca847e19b 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -158,8 +158,9 @@ static cpumask_t __initdata dom0_cpus;
 static struct vcpu *__init setup_dom0_vcpu(struct domain *d,
                                            unsigned int vcpu_id,
-                                           unsigned int cpu)
+                                           unsigned int prev_cpu)
 {
+    unsigned int cpu = cpumask_cycle(prev_cpu, &dom0_cpus);
     struct vcpu *v = alloc_vcpu(d, vcpu_id, cpu);
 
     if ( v )
     {
@@ -215,7 +216,8 @@ struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
         return NULL;
     dom0->max_vcpus = max_vcpus;
 
-    return setup_dom0_vcpu(dom0, 0, cpumask_first(&dom0_cpus));
+    return setup_dom0_vcpu(dom0, 0,
+                           cpumask_last(&dom0_cpus) /* so it wraps around to first pcpu */);
 }
 
 #ifdef CONFIG_SHADOW_PAGING
@@ -1155,8 +1157,10 @@ static int __init construct_dom0_pv(
     cpu = v->processor;
     for ( i = 1; i < d->max_vcpus; i++ )
     {
-        cpu = cpumask_cycle(cpu, &dom0_cpus);
-        setup_dom0_vcpu(d, i, cpu);
+        struct vcpu *p = setup_dom0_vcpu(d, i, cpu);
+
+        if ( p )
+            cpu = p->processor;
     }
 
     d->arch.paging.mode = 0;
@@ -1902,8 +1906,10 @@ static int __init pvh_setup_cpus(struct domain *d, paddr_t entry,
     cpu = v->processor;
     for ( i = 1; i < d->max_vcpus; i++ )
     {
-        cpu = cpumask_cycle(cpu, &dom0_cpus);
-        setup_dom0_vcpu(d, i, cpu);
+        struct vcpu *p = setup_dom0_vcpu(d, i, cpu);
+
+        if ( p )
+            cpu = p->processor;
     }
 
     rc = arch_set_info_hvm_guest(v, &cpu_ctx);
--
2.11.0