From: Jürgen Groß
Subject: Re: Shutdown panic in disable_nonboot_cpus after cpupool-numa-split
Date: Mon, 28 Jul 2014 10:50:37 +0200
To: Stefan Bader, Andrew Cooper, xen-devel@lists.xensource.com

On 07/28/2014 10:36 AM, Stefan Bader wrote:
> On 07.07.2014 16:43, Stefan Bader wrote:
>> On 07.07.2014 16:28, Juergen Gross wrote:
>>> On 07/07/2014 04:08 PM, Stefan Bader wrote:
>>>> On 07.07.2014 15:03, Jürgen Groß wrote:
>>>>> On 07/07/2014 02:49 PM, Stefan Bader wrote:
>>>>>> On 07.07.2014 14:38, Jürgen Groß wrote:
>>>>>>> On 07/07/2014 02:00 PM, Andrew Cooper wrote:
>>>>>>>> On 07/07/14 12:33, Stefan Bader wrote:
>>>>>>>>> I recently noticed that I get a panic (rebooting the system) on
>>>>>>>>> shutdown in some cases. This happened only on my AMD system, and
>>>>>>>>> also not all the time. I finally realized that it is related to the
>>>>>>>>> use of cpupool-numa-split (libxl with xen-4.4; maybe 4.3 as well,
>>>>>>>>> but I am not 100% sure).
>>>>>>>>>
>>>>>>>>> What happens is that on shutdown the hypervisor runs
>>>>>>>>> disable_nonboot_cpus, which calls cpu_down for each online cpu.
>>>>>>>>> There is a BUG_ON in the code for the case of cpu_down returning
>>>>>>>>> -EBUSY. This happens in my case as soon as the first cpu that has
>>>>>>>>> been moved to pool-1 by cpupool-numa-split is attempted. The error
>>>>>>>>> is returned by running the notifier_call_chain, and I suspect that
>>>>>>>>> ends up calling cpupool_cpu_remove, which always returns EBUSY for
>>>>>>>>> cpus not in pool0.
>>>>>>>>>
>>>>>>>>> I am not sure which end needs to be fixed, but looping over all
>>>>>>>>> online cpus in disable_nonboot_cpus sounds plausible. So maybe the
>>>>>>>>> check for pool-0 in cpupool_cpu_remove is wrong...?
>>>>>>>>>
>>>>>>>>> -Stefan
>>>>>>>>
>>>>>>>> Hmm yes - this looks completely broken.
>>>>>>>>
>>>>>>>> cpupool_cpu_remove() only has a single caller, which is from
>>>>>>>> cpu_down(), and it will unconditionally fail for cpus outside of the
>>>>>>>> default pool.
>>>>>>>>
>>>>>>>> It is not obvious at all how this is supposed to work, and the
>>>>>>>> comment beside cpupool_cpu_remove() doesn't help.
>>>>>>>>
>>>>>>>> Can you try the following (only compile tested) patch, which looks
>>>>>>>> plausibly like it might DTRT. The for_each_cpupool() is a little
>>>>>>>> nasty, but there appears to be no cpu_to_cpupool mapping available.
>>>>>>>
>>>>>>> Your patch has the disadvantage of supporting hot-unplug of the last
>>>>>>> cpu in a cpupool. The following should work, however:
>>>>>>
>>>>>> "Disadvantage" and "support" sounded a bit confusing. But I think it
>>>>>> means hot-unplugging the last cpu of a pool is bad and should not be
>>>>>> working.
>>>>>
>>>>> Correct.
>>>>>
>>>>>>
>>>>>>>
>>>>>>> diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
>>>>>>> index 4a0e569..73249d3 100644
>>>>>>> --- a/xen/common/cpupool.c
>>>>>>> +++ b/xen/common/cpupool.c
>>>>>>> @@ -471,12 +471,24 @@ static void cpupool_cpu_add(unsigned int cpu)
>>>>>>>      */
>>>>>>>     static int cpupool_cpu_remove(unsigned int cpu)
>>>>>>>     {
>>>>>>> -    int ret = 0;
>>>>>>> +    int ret = -EBUSY;
>>>>>>> +    struct cpupool **c;
>>>>>>>
>>>>>>>         spin_lock(&cpupool_lock);
>>>>>>> -    if ( !cpumask_test_cpu(cpu, cpupool0->cpu_valid))
>>>>>>> -        ret = -EBUSY;
>>>>>>> +    if ( cpumask_test_cpu(cpu, cpupool0->cpu_valid) )
>>>>>>> +        ret = 0;
>>>>>>>         else
>>>>>>> +    {
>>>>>>> +        for_each_cpupool(c)
>>>>>>> +        {
>>>>>>> +            if ( cpumask_test_cpu(cpu, (*c)->cpu_suspended ) )
>>>>>>
>>>>>> The rest seems to keep the semantics the same as before (though does
>>>>>> that mean unplugging the last cpu of pool-0 is ok?). But why test for
>>>>>> suspended here to succeed (and not valid)?
>>>>>
>>>>> Testing valid would again allow removing the last cpu of a cpupool in
>>>>> case of hotplugging. cpu_suspended is set if all cpus are to be removed
>>>>> due to shutdown, suspend to ram/disk, ...
>>>>
>>>> Ah, ok. Thanks for the detailed explanation. I was trying this change in
>>>> parallel and can confirm that it gets rid of the panic on shutdown. But
>>>> when I try to offline any cpu in pool1 (if echoing 0 into
>>>> /sys/devices/xen_cpu/xen_cpu? is the correct method) I always get EBUSY.
>>>> IOW, I cannot hot-unplug any cpu that is in a pool other than 0. It only
>>>> works after removing the cpu from pool1, then adding it to pool0, and
>>>> then echoing 0 into online.
>>>
>>> That's how it was designed some years ago. I don't want to change the
>>> behavior in the hypervisor. Adding some tool support could make sense,
>>> however.
>>
>> Ok, so in that case everything works as expected, and the change fixes the
>> currently broken shutdown and could be properly submitted for inclusion
>> (with my Tested-by).
>>
>
> Does this need anything further from my side? I could re-submit the whole
> patch, but since it is Juergen's work it felt a little rude to do so.

The patch is already in xen-unstable/staging.

Juergen
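
----------------------------------------------------------------------

To see concretely why the old check panics the shutdown path while the
patched one merely refuses individual hot-unplug outside Pool-0, here is a
minimal standalone sketch. It is not the Xen source: cpumasks are reduced
to plain bitmasks, and the helper cpu_in, the _old/_new function names, and
the 8-cpu/two-pool layout are illustrative assumptions.

    /*
     * Standalone sketch of the behaviour discussed above - NOT the
     * actual Xen source.  Pools and cpumasks are reduced to plain
     * bitmasks so the control flow of cpupool_cpu_remove() can be run
     * in isolation.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define EBUSY   16
    #define NR_CPUS 8

    struct cpupool {
        unsigned long cpu_valid;      /* cpus assigned to this pool */
        unsigned long cpu_suspended;  /* set when the whole pool goes down */
    };

    static struct cpupool pool0 = { .cpu_valid = 0x0fUL };  /* cpus 0-3 */
    static struct cpupool pool1 = { .cpu_valid = 0xf0UL };  /* cpus 4-7 */
    static struct cpupool *pools[] = { &pool0, &pool1, NULL };

    static bool cpu_in(unsigned long mask, unsigned int cpu)
    {
        return (mask >> cpu) & 1UL;
    }

    /* Old behaviour: any cpu outside Pool-0 is refused unconditionally,
     * so disable_nonboot_cpus() hits its BUG_ON on -EBUSY at shutdown. */
    static int cpupool_cpu_remove_old(unsigned int cpu)
    {
        return cpu_in(pool0.cpu_valid, cpu) ? 0 : -EBUSY;
    }

    /* Patched behaviour: a cpu outside Pool-0 may only go down when its
     * pool has marked it suspended, i.e. during shutdown or suspend.
     * An individual hot-unplug of a Pool-1 cpu still gets -EBUSY, which
     * matches the EBUSY Stefan saw when offlining via sysfs. */
    static int cpupool_cpu_remove_new(unsigned int cpu)
    {
        struct cpupool **c;

        if ( cpu_in(pool0.cpu_valid, cpu) )
            return 0;
        for ( c = pools; *c != NULL; c++ )
            if ( cpu_in((*c)->cpu_suspended, cpu) )
                return 0;
        return -EBUSY;
    }

    int main(void)
    {
        unsigned int cpu = 5;  /* a cpu moved to Pool-1 by numa-split */

        /* Individual hot-unplug: no pool is suspended. */
        printf("hot-unplug cpu%u: old=%d new=%d\n", cpu,
               cpupool_cpu_remove_old(cpu), cpupool_cpu_remove_new(cpu));

        /* Shutdown: every pool marks its cpus suspended first. */
        pool0.cpu_suspended = pool0.cpu_valid;
        pool1.cpu_suspended = pool1.cpu_valid;
        for ( cpu = 1; cpu < NR_CPUS; cpu++ )
            printf("shutdown cpu%u: old=%d new=%d\n", cpu,
                   cpupool_cpu_remove_old(cpu), cpupool_cpu_remove_new(cpu));

        return 0;
    }

Compiled with any C99 compiler, the hot-unplug line prints -16 (-EBUSY) for
both variants, while the shutdown loop prints -16 only for the old variant
on the Pool-1 cpus - exactly the case the BUG_ON turns into a panic.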
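
For the hot-unplug case, the manual workaround Stefan describes maps to
something like the following (the pool name Pool-1 and cpu number 5 are
assumptions for illustration; the sysfs path is the one from his mail and
may differ depending on the dom0 kernel):

    xl cpupool-cpu-remove Pool-1 5
    xl cpupool-cpu-add Pool-0 5
    echo 0 > /sys/devices/xen_cpu/xen_cpu5/online

This keeps the hypervisor-side invariant intact (the last cpu of a pool can
never be unplugged), which is why Juergen suggests tool support rather than
a hypervisor change for convenience here.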