From: Stefan Bader
Subject: Re: Shutdown panic in disable_nonboot_cpus after cpupool-numa-split
Date: Mon, 28 Jul 2014 11:02:37 +0200
Message-ID: <53D611AD.5090900@canonical.com>
In-Reply-To: <53D60EDD.2020800@suse.com>
References: <53BA857A.8070608@canonical.com> <53BA8BD1.4020506@citrix.com> <53BA94D0.80201@suse.com> <53BA9773.6090004@canonical.com> <53BA9AB7.70105@suse.com> <53BAA9DD.4040403@canonical.com> <53BAAE82.2090200@suse.com> <53BAB20C.1020409@canonical.com> <53D60B7D.1020104@canonical.com> <53D60EDD.2020800@suse.com>
To: Jürgen Groß, Andrew Cooper, xen-devel@lists.xensource.com
List-Id: xen-devel@lists.xenproject.org

On 28.07.2014 10:50, Jürgen Groß wrote:
> On 07/28/2014 10:36 AM, Stefan Bader wrote:
>> On 07.07.2014 16:43, Stefan Bader wrote:
>>> On 07.07.2014 16:28, Juergen Gross wrote:
>>>> On 07/07/2014 04:08 PM, Stefan Bader wrote:
>>>>> On 07.07.2014 15:03, Jürgen Groß wrote:
>>>>>> On 07/07/2014 02:49 PM, Stefan Bader wrote:
>>>>>>> On 07.07.2014 14:38, Jürgen Groß wrote:
>>>>>>>> On 07/07/2014 02:00 PM, Andrew Cooper wrote:
>>>>>>>>> On 07/07/14 12:33, Stefan Bader wrote:
>>>>>>>>>> I recently noticed that I get a panic (rebooting the system) on
>>>>>>>>>> shutdown in some cases.
>>>>>>>>>> This happened only on my AMD system and also not all the time.
>>>>>>>>>> Finally realized that it is related to the use of cpupool-numa-split
>>>>>>>>>> (libxl with xen-4.4, maybe, but not 100% sure 4.3 as well).
>>>>>>>>>>
>>>>>>>>>> What happens is that on shutdown the hypervisor runs
>>>>>>>>>> disable_nonboot_cpus, which calls cpu_down for each online cpu. There
>>>>>>>>>> is a BUG_ON in the code for the case of cpu_down returning -EBUSY.
>>>>>>>>>> This happens in my case as soon as the first cpu that has been moved
>>>>>>>>>> to pool-1 by cpupool-numa-split is attempted. The error is returned by
>>>>>>>>>> running the notifier_call_chain, and I suspect that ends up calling
>>>>>>>>>> cpupool_cpu_remove, which always returns EBUSY for cpus not in pool0.
>>>>>>>>>>
>>>>>>>>>> I am not sure which end needs to be fixed, but looping over all online
>>>>>>>>>> cpus in disable_nonboot_cpus sounds plausible. So maybe the check for
>>>>>>>>>> pool-0 in cpupool_cpu_remove is wrong...?
>>>>>>>>>>
>>>>>>>>>> -Stefan
>>>>>>>>>
>>>>>>>>> Hmm yes - this looks completely broken.
>>>>>>>>>
>>>>>>>>> cpupool_cpu_remove() only has a single caller, which is from cpu_down(),
>>>>>>>>> and will unconditionally fail for cpus outside of the default pool.
>>>>>>>>>
>>>>>>>>> It is not obvious at all how this is supposed to work, and the comment
>>>>>>>>> beside cpupool_cpu_remove() doesn't help.
>>>>>>>>>
>>>>>>>>> Can you try the following (only compile tested) patch, which looks
>>>>>>>>> plausibly like it might DTRT. The for_each_cpupool() is a little nasty,
>>>>>>>>> but there appears to be no cpu_to_cpupool mapping available.
>>>>>>>>
>>>>>>>> Your patch has the disadvantage to support hot-unplug of the last cpu in
>>>>>>>> a cpupool. The following should work, however:
>>>>>>>
>>>>>>> "Disadvantage" and "support" sounded a bit confusing.
>>>>>>> But I think it means hot-unplugging the last cpu of a pool is bad and
>>>>>>> should not be working.
>>>>>>
>>>>>> Correct.
>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
>>>>>>>> index 4a0e569..73249d3 100644
>>>>>>>> --- a/xen/common/cpupool.c
>>>>>>>> +++ b/xen/common/cpupool.c
>>>>>>>> @@ -471,12 +471,24 @@ static void cpupool_cpu_add(unsigned int cpu)
>>>>>>>>   */
>>>>>>>>  static int cpupool_cpu_remove(unsigned int cpu)
>>>>>>>>  {
>>>>>>>> -    int ret = 0;
>>>>>>>> +    int ret = -EBUSY;
>>>>>>>> +    struct cpupool **c;
>>>>>>>>
>>>>>>>>      spin_lock(&cpupool_lock);
>>>>>>>> -    if ( !cpumask_test_cpu(cpu, cpupool0->cpu_valid))
>>>>>>>> -        ret = -EBUSY;
>>>>>>>> +    if ( cpumask_test_cpu(cpu, cpupool0->cpu_valid) )
>>>>>>>> +        ret = 0;
>>>>>>>>      else
>>>>>>>> +    {
>>>>>>>> +        for_each_cpupool(c)
>>>>>>>> +        {
>>>>>>>> +            if ( cpumask_test_cpu(cpu, (*c)->cpu_suspended) )
>>>>>>>
>>>>>>> The rest seems to keep the semantics the same as before (though does
>>>>>>> that mean unplugging the last cpu of pool-0 is ok?). But why test for
>>>>>>> suspended here to succeed (and not valid)?
>>>>>>
>>>>>> Testing valid would again allow removing the last cpu of a cpupool in
>>>>>> case of hotplugging. cpu_suspended is set if all cpus are to be removed
>>>>>> due to shutdown, suspend to ram/disk, ...
>>>>>
>>>>> Ah, ok. Thanks for the detailed explanation. So I was trying this change
>>>>> in parallel and can confirm that it gets rid of the panic on shutdown.
>>>>> But when I try to offline any cpu in pool1 (if echoing 0 into
>>>>> /sys/devices/xen_cpu/xen_cpu? is the correct method) I always get EBUSY.
>>>>> IOW I cannot hot-unplug any cpu that is in a pool other than 0. It does
>>>>> only work after removing it from pool1, then adding it to pool0, and then
>>>>> echoing 0 into online.
>>>>
>>>> That's how it was designed some years ago. I don't want to change the
>>>> behavior in the hypervisor.
>>>> Adding some tool support could make sense, however.
>>>
>>> Ok, so in that case everything works as expected, and the change fixes the
>>> currently broken shutdown and could be properly submitted for inclusion
>>> (with my Tested-by).
>>
>> Does this need anything from my side? I could re-submit the whole patch,
>> but since it is Juergen's work it felt a little rude to do so.
>
> Patch is already in xen-unstable/staging.

Argh, so it is. I always seem to forget about this branch. So I only checked
master. :/

Thanks,
Stefan

> Juergen