From: Konrad Rzeszutek Wilk
Subject: Re: backport requests for 4.x-testing
Date: Thu, 29 Mar 2012 12:23:18 -0400
Message-ID: <20120329162318.GA9045@phenom.dumpdata.com>
To: Teck Choon Giam
Cc: Andrew Cooper, keir@xen.org, xen-devel@lists.xen.org

On Fri, Mar 30, 2012 at 12:20:05AM +0800, Teck Choon Giam wrote:
> On Thu, Mar 29, 2012 at 11:56 PM, Konrad Rzeszutek Wilk wrote:
> >> >> > Applied 23225 and 24013. The other, toolstack-related, patches I will leave
> >> >> > for a tools maintainer to ack or apply.
> >> >>
> >> > Hey Teck,
> >> >
> >> > Thanks for reporting!
> >> >
> >> >> With the two backport patches committed in xen-4.1-testing (changeset
> >> >> 23271:13741fd6253b), xl list or xl create domU will cause 100% CPU and
> >> >
> >> > xl list?
> >>
> >> After a reboot with no domU running, xl list is fine, but if I start an
> >> HVM domU it gets stuck and causes high load; opening another ssh
> >> terminal to issue xl list gets stuck as well.
> >
> > This fixes it for me:
> >
> > diff -r 13741fd6253b xen/arch/x86/domain.c
> > --- a/xen/arch/x86/domain.c     Thu Mar 29 10:20:58 2012 +0100
> > +++ b/xen/arch/x86/domain.c     Thu Mar 29 11:44:54 2012 -0400
> > @@ -558,9 +558,9 @@ int arch_domain_create(struct domain *d,
> >          d->arch.is_32bit_pv = d->arch.has_32bit_shinfo =
> >              (CONFIG_PAGING_LEVELS != 4);
> >
> > -        spin_lock_init(&d->arch.e820_lock);
> >      }
> >
> > +    spin_lock_init(&d->arch.e820_lock);
> >      memset(d->arch.cpuids, 0, sizeof(d->arch.cpuids));
> >      for ( i = 0; i < MAX_CPUID_INPUT; i++ )
> >      {
> > @@ -605,8 +605,8 @@ void arch_domain_destroy(struct domain *
> >
> >      if ( is_hvm_domain(d) )
> >          hvm_domain_destroy(d);
> > -    else
> > -        xfree(d->arch.e820);
> > +
> > +    xfree(d->arch.e820);
> >
> >      vmce_destroy_msr(d);
> >      free_domain_pirqs(d);
> >
> >
> > The issue is that upstream we have two 'domain structs' - one for PV and
> > one for HVM. In 4.1 it is just 'arch_domain', and the calls to create
> > the guests go through the same interface (at least when using xl; with
> > xm they are separate). And I only initialized the spinlock in the PV case,
> > but not in the HVM case. This fix to the backport resolves the problem.
>
> Thanks for your prompt fix ;)
>
> I am compiling with the fix patch you provided on top of
> xen-4.1-testing changeset 23271:13741fd6253b. Will test and report
> back if you are interested ;)

Yes please! If you find other issues, please report them immediately!

Thanks again for doing this.
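
(The sketch below is not from the original thread or from Xen's source; it is a
minimal userspace illustration, using POSIX spinlocks, of the bug pattern Konrad
describes above: a per-domain lock that is only initialized on the PV path while
the HVM path still takes it. The names fake_domain, domain_create_buggy,
domain_create_fixed and touch_e820 are made up for the example.)

/*
 * Minimal sketch (NOT Xen code) of the uninitialized-lock bug.  Taking a
 * lock that was never initialized is undefined behaviour; depending on
 * what happens to be in that memory, acquisition may never succeed,
 * which matches the 100% CPU hang reported for "xl create"/"xl list".
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct fake_domain {
    bool is_hvm;
    pthread_spinlock_t e820_lock;   /* stands in for d->arch.e820_lock */
};

/* Buggy shape: mirrors the backport before the follow-up fix - the lock
 * is only set up on the PV (non-HVM) branch. */
void domain_create_buggy(struct fake_domain *d, bool is_hvm)
{
    memset(d, 0, sizeof(*d));
    d->is_hvm = is_hvm;
    if ( !is_hvm )
        pthread_spin_init(&d->e820_lock, PTHREAD_PROCESS_PRIVATE);
    /* HVM domains leave e820_lock uninitialized. */
}

/* Fixed shape: initialize the lock unconditionally, as in the patch above. */
void domain_create_fixed(struct fake_domain *d, bool is_hvm)
{
    memset(d, 0, sizeof(*d));
    d->is_hvm = is_hvm;
    pthread_spin_init(&d->e820_lock, PTHREAD_PROCESS_PRIVATE);
}

/* Any path that touches the e820 map takes the lock, PV or HVM alike. */
void touch_e820(struct fake_domain *d)
{
    pthread_spin_lock(&d->e820_lock);   /* hangs/UB if never initialized */
    /* ... read or update the e820 map ... */
    pthread_spin_unlock(&d->e820_lock);
}

int main(void)
{
    struct fake_domain hvm;

    domain_create_fixed(&hvm, /* is_hvm = */ true);
    touch_e820(&hvm);                   /* safe with the fixed variant */
    puts("e820 lock taken and released");
    return 0;
}

(Build with e.g. "cc -pthread sketch.c"; the file name is hypothetical. The
matching hunk in arch_domain_destroy follows the same logic: once HVM domains
also carry an e820 map, xfree(d->arch.e820) has to run unconditionally instead
of only on the PV "else" branch.)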