From: Mukesh Rathor
Subject: Re: [PATCHEs]: support more than 32 VCPUs in guests
Date: Mon, 14 Jun 2010 19:49:26 -0700
Message-ID: <20100614194926.2f81ed3d@mantra.us.oracle.com>
In-Reply-To: <4C15F85A.1050804@goop.org>
References: <20100609160920.1445fbbe@mantra.us.oracle.com>
 <4C102742.3010108@goop.org>
 <20100609170825.06a67ff9@mantra.us.oracle.com>
 <4C1036B0.4060905@goop.org>
 <20100609191332.588a15d1@mantra.us.oracle.com>
 <4C15F85A.1050804@goop.org>
To: Jeremy Fitzhardinge
Cc: Jan, "Xen-devel@lists.xensource.com"

On Mon, 14 Jun 2010 10:37:30 +0100
Jeremy Fitzhardinge wrote:

> On 06/10/2010 03:13 AM, Mukesh Rathor wrote:
> > Well, the BUG_ON is only triggered if booting more than 32 VCPUs on a
> > *very old* xen (pre xen 3.1.0).
> >
> > Looking at the code closely, we could just set setup_max_cpus to 32
> > somewhere in a xen function, perhaps even in xen_vcpu_setup(). That
> > way later in smp_init() it would just be ok.
> >
>
> Yes.
>
> > One thing tho, the per-cpu areas are already set up at that point,
> > so that would need to be cleaned up. BTW, I don't understand why
> > have_vcpu_info_placement is set to 0 in xen_guest_init()?
> >
>
> xen_guest_init is used by the pvhvm path, and hvm domains don't have a
> notion of vcpu info placement.
>
> > What minimum version of xen is required to run a pvops kernel?
> >
>
> In theory it should be back-compatible with all of Xen 3, but in
> practice it tweaks lots of bugs in older Xens (particularly 32-on-64).
> I don't know that anyone has definitively established an earliest
> version. I implemented vcpu info placement for use in pvops kernels,
> but it was never my intention that it be an absolute requirement.
>
> J

Ok, attached is the patch without the BUG_ON. Please feel free to modify
it to your liking.

Signed-off-by: Mukesh Rathor

thanks,
Mukesh

[attachment: pvops.diff]

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 615897c..5dc7667 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -111,40 +111,46 @@ static int have_vcpu_info_placement = 1;
 
 static void xen_vcpu_setup(int cpu)
 {
-        struct vcpu_register_vcpu_info info;
-        int err;
-        struct vcpu_info *vcpup;
-
-        BUG_ON(HYPERVISOR_shared_info == &xen_dummy_shared_info);
-        per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu];
-
-        if (!have_vcpu_info_placement)
-                return;         /* already tested, not available */
-
-        vcpup = &per_cpu(xen_vcpu_info, cpu);
-
-        info.mfn = arbitrary_virt_to_mfn(vcpup);
-        info.offset = offset_in_page(vcpup);
-
-        printk(KERN_DEBUG "trying to map vcpu_info %d at %p, mfn %llx, offset %d\n",
-               cpu, vcpup, info.mfn, info.offset);
-
-        /* Check to see if the hypervisor will put the vcpu_info
-           structure where we want it, which allows direct access via
-           a percpu-variable. */
-        err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
-
-        if (err) {
-                printk(KERN_DEBUG "register_vcpu_info failed: err=%d\n", err);
-                have_vcpu_info_placement = 0;
-        } else {
-                /* This cpu is using the registered vcpu info, even if
-                   later ones fail to. */
-                per_cpu(xen_vcpu, cpu) = vcpup;
-
-                printk(KERN_DEBUG "cpu %d using vcpu_info at %p\n",
-                       cpu, vcpup);
-        }
+        struct vcpu_register_vcpu_info info;
+        int err;
+        struct vcpu_info *vcpup;
+
+        BUG_ON(HYPERVISOR_shared_info == &xen_dummy_shared_info);
+
+        if (cpu < MAX_VIRT_CPUS)
+                per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu];
+
+        if (!have_vcpu_info_placement) {
+                if (cpu >= MAX_VIRT_CPUS && setup_max_cpus > MAX_VIRT_CPUS)
+                        setup_max_cpus = MAX_VIRT_CPUS;
+                return;
+        }
+
+        vcpup = &per_cpu(xen_vcpu_info, cpu);
+        info.mfn = arbitrary_virt_to_mfn(vcpup);
+        info.offset = offset_in_page(vcpup);
+
+        printk(KERN_DEBUG "trying to map vcpu_info %d at %p, mfn %llx, offset %d\n",
+               cpu, vcpup, info.mfn, info.offset);
+
+        /* Check to see if the hypervisor will put the vcpu_info
+           structure where we want it, which allows direct access via
+           a percpu-variable. */
+        err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
+
+        if (err) {
+                printk(KERN_DEBUG "register_vcpu_info failed: err=%d\n", err);
+                have_vcpu_info_placement = 0;
+                if (setup_max_cpus > MAX_VIRT_CPUS)
+                        setup_max_cpus = MAX_VIRT_CPUS;
+        } else {
+                /* This cpu is using the registered vcpu info, even if
+                   later ones fail to. */
+                per_cpu(xen_vcpu, cpu) = vcpup;
+
+                printk(KERN_DEBUG "cpu %d using vcpu_info at %p\n",
+                       cpu, vcpup);
+        }
 }
 
 /*
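For readers following the thread, here is a minimal user-space sketch of
the clamping rule the patch applies in xen_vcpu_setup(): whenever vcpu_info
placement is unavailable (either already known to be, or because the
registration hypercall fails on this cpu), setup_max_cpus is capped at
MAX_VIRT_CPUS so that smp_init() never tries to bring up vcpus beyond the
legacy shared_info slots. The stub register_vcpu_info() helper, its failure
return value, and the initial setup_max_cpus value below are illustrative
assumptions only, not kernel or hypervisor code.

/* sketch.c - user-space illustration of the setup_max_cpus clamp */
#include <stdbool.h>
#include <stdio.h>

#define MAX_VIRT_CPUS 32                /* legacy vcpu_info slots in shared_info */

static unsigned int setup_max_cpus = 128;       /* assumed boot-time request */
static bool have_vcpu_info_placement = true;

/* Stand-in for the VCPUOP_register_vcpu_info hypercall: returns 0 on
 * success, nonzero when the hypervisor is too old to support it. */
static int register_vcpu_info(int cpu, bool hypervisor_supports_it)
{
        (void)cpu;
        return hypervisor_supports_it ? 0 : -1;
}

/* Mirrors the clamping decisions the patch adds to xen_vcpu_setup(). */
static void vcpu_setup_sketch(int cpu, bool hypervisor_supports_it)
{
        if (!have_vcpu_info_placement) {
                /* Placement already known to be unavailable: only the first
                 * MAX_VIRT_CPUS shared_info slots exist, so cap the cpu count. */
                if (cpu >= MAX_VIRT_CPUS && setup_max_cpus > MAX_VIRT_CPUS)
                        setup_max_cpus = MAX_VIRT_CPUS;
                return;
        }

        if (register_vcpu_info(cpu, hypervisor_supports_it)) {
                /* Registration failed (old hypervisor): fall back to the
                 * shared_info slots and cap the cpu count as well. */
                have_vcpu_info_placement = false;
                if (setup_max_cpus > MAX_VIRT_CPUS)
                        setup_max_cpus = MAX_VIRT_CPUS;
        }
}

int main(void)
{
        /* Pretend the hypervisor rejects the placement hypercall on cpu 0,
         * as a pre-3.1.0 Xen would. */
        vcpu_setup_sketch(0, false);
        printf("setup_max_cpus clamped to %u\n", setup_max_cpus);
        return 0;
}

Built stand-alone (e.g. gcc -Wall sketch.c), it prints the clamped value of
32 when the fake hypercall fails, which mirrors the fallback path the patch
adds for old hypervisors.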