From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jan, "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [PATCHEs]: support more than 32 VCPUs in guests
Date: Mon, 14 Jun 2010 19:49:26 -0700
Message-ID: <20100614194926.2f81ed3d@mantra.us.oracle.com>
In-Reply-To: <4C15F85A.1050804@goop.org>
[-- Attachment #1: Type: text/plain, Size: 1358 bytes --]
On Mon, 14 Jun 2010 10:37:30 +0100
Jeremy Fitzhardinge <jeremy@goop.org> wrote:
> On 06/10/2010 03:13 AM, Mukesh Rathor wrote:
> > Well, the BUG_ON is only triggered when booting more than 32 VCPUs
> > on a *very old* xen (pre xen 3.1.0).
> >
> > Looking at the code closely, we could just set setup_max_cpus to 32
> > somewhere in a xen function, perhaps even in xen_vcpu_setup(). That
> > way later in smp_init() it would just be ok.
> >
>
> Yes.
>
> > One thing though, the per-cpu areas are already set up at that
> > point, so that would need to be cleaned up. BTW, I don't understand
> > why have_vcpu_info_placement is set to 0 in xen_guest_init()?
> >
>
> xen_guest_init is used by the pvhvm path, and hvm domains don't have a
> notion of vcpu info placement.
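
For reference, the reason the legacy interface caps out at 32: the
shared info page only embeds a fixed vcpu_info array. Roughly, per the
public headers (paraphrased sketch, other fields omitted):

	/* xen/interface/xen.h */
	struct shared_info {
		struct vcpu_info vcpu_info[MAX_VIRT_CPUS]; /* 32 on x86 */
		/* ... event channel bits, wallclock, arch info ... */
	};

	/* xen/interface/vcpu.h: VCPUOP_register_vcpu_info (xen 3.1+)
	   lets the guest place each vcpu's vcpu_info in its own memory
	   instead, which is what lifts the 32-vcpu cap. */
	struct vcpu_register_vcpu_info {
		uint64_t mfn;		/* mfn of page holding vcpu_info */
		uint32_t offset;	/* offset within the page */
		uint32_t rsvd;		/* unused */
	};
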
>
> > What minimum version of xen is required to run pvops kernel?
> >
>
> In theory it should be back-compatible with all of Xen 3, but in
> practice it trips over lots of bugs in older Xens (particularly
> 32-on-64). I don't know that anyone has definitively established an
> earliest version. I implemented vcpu info placement for use in pvops
> kernels, but it was never my intention that it be an absolute
> requirement.
>
> J
Ok, attached is the patch without the BUG_ON. Please feel free to
modify it to your liking.
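
The gist is to clamp setup_max_cpus (the limit smp_init() honors) as
soon as we know vcpu_info placement isn't available, i.e. roughly:

	if (setup_max_cpus > MAX_VIRT_CPUS)
		setup_max_cpus = MAX_VIRT_CPUS;

It's done on both paths (when placement is already known to be off,
and when VCPUOP_register_vcpu_info fails), so on an old hypervisor a
guest configured with more than 32 vcpus just comes up with 32 vcpus
instead of hitting the old BUG_ON.
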
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
thanks,
Mukesh
[-- Attachment #2: pvops.diff --]
[-- Type: text/x-patch, Size: 2954 bytes --]
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 615897c..5dc7667 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -111,40 +111,46 @@ static int have_vcpu_info_placement = 1;
 
 static void xen_vcpu_setup(int cpu)
 {
-	struct vcpu_register_vcpu_info info;
-	int err;
-	struct vcpu_info *vcpup;
-
-	BUG_ON(HYPERVISOR_shared_info == &xen_dummy_shared_info);
-	per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu];
-
-	if (!have_vcpu_info_placement)
-		return;		/* already tested, not available */
-
-	vcpup = &per_cpu(xen_vcpu_info, cpu);
-
-	info.mfn = arbitrary_virt_to_mfn(vcpup);
-	info.offset = offset_in_page(vcpup);
-
-	printk(KERN_DEBUG "trying to map vcpu_info %d at %p, mfn %llx, offset %d\n",
-	       cpu, vcpup, info.mfn, info.offset);
-
-	/* Check to see if the hypervisor will put the vcpu_info
-	   structure where we want it, which allows direct access via
-	   a percpu-variable. */
-	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
-
-	if (err) {
-		printk(KERN_DEBUG "register_vcpu_info failed: err=%d\n", err);
-		have_vcpu_info_placement = 0;
-	} else {
-		/* This cpu is using the registered vcpu info, even if
-		   later ones fail to. */
-		per_cpu(xen_vcpu, cpu) = vcpup;
-
-		printk(KERN_DEBUG "cpu %d using vcpu_info at %p\n",
-		       cpu, vcpup);
-	}
+	struct vcpu_register_vcpu_info info;
+	int err;
+	struct vcpu_info *vcpup;
+
+	BUG_ON(HYPERVISOR_shared_info == &xen_dummy_shared_info);
+
+	if (cpu < MAX_VIRT_CPUS)
+		per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu];
+
+	if (!have_vcpu_info_placement) {
+		if (cpu >= MAX_VIRT_CPUS && setup_max_cpus > MAX_VIRT_CPUS)
+			setup_max_cpus = MAX_VIRT_CPUS;
+		return;
+	}
+
+	vcpup = &per_cpu(xen_vcpu_info, cpu);
+	info.mfn = arbitrary_virt_to_mfn(vcpup);
+	info.offset = offset_in_page(vcpup);
+
+	printk(KERN_DEBUG "trying to map vcpu_info %d at %p, mfn %llx, offset %d\n",
+	       cpu, vcpup, info.mfn, info.offset);
+
+	/* Check to see if the hypervisor will put the vcpu_info
+	   structure where we want it, which allows direct access via
+	   a percpu-variable. */
+	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
+
+	if (err) {
+		printk(KERN_DEBUG "register_vcpu_info failed: err=%d\n", err);
+		have_vcpu_info_placement = 0;
+		if (setup_max_cpus > MAX_VIRT_CPUS)
+			setup_max_cpus = MAX_VIRT_CPUS;
+	} else {
+		/* This cpu is using the registered vcpu info, even if
+		   later ones fail to. */
+		per_cpu(xen_vcpu, cpu) = vcpup;
+
+		printk(KERN_DEBUG "cpu %d using vcpu_info at %p\n",
+		       cpu, vcpup);
+	}
 }
 
 /*