From: Jürgen Groß
Subject: Re: Shutdown panic in disable_nonboot_cpus after cpupool-numa-split
Date: Mon, 07 Jul 2014 15:03:51 +0200
Message-ID: <53BA9AB7.70105@suse.com>
References: <53BA857A.8070608@canonical.com> <53BA8BD1.4020506@citrix.com> <53BA94D0.80201@suse.com> <53BA9773.6090004@canonical.com>
In-Reply-To: <53BA9773.6090004@canonical.com>
To: Stefan Bader, Andrew Cooper, xen-devel@lists.xensource.com

On 07/07/2014 02:49 PM, Stefan Bader wrote:
> On 07.07.2014 14:38, Jürgen Groß wrote:
>> On 07/07/2014 02:00 PM, Andrew Cooper wrote:
>>> On 07/07/14 12:33, Stefan Bader wrote:
>>>> I recently noticed that I get a panic (rebooting the system) on
>>>> shutdown in some cases. This happened only on my AMD system, and also
>>>> not all the time. Finally realized that it is related to using
>>>> cpupool-numa-split (libxl with xen-4.4; maybe 4.3 as well, but not
>>>> 100% sure).
>>>>
>>>> What happens is that on shutdown the hypervisor runs
>>>> disable_nonboot_cpus, which calls cpu_down for each online cpu. There
>>>> is a BUG_ON in the code for the case of cpu_down returning -EBUSY.
>>>> This happens in my case as soon as the first cpu that has been moved
>>>> to pool-1 by cpupool-numa-split is attempted. The error is returned
>>>> by running the notifier_call_chain, and I suspect that ends up
>>>> calling cpupool_cpu_remove, which always returns -EBUSY for cpus not
>>>> in pool0.
>>>>
>>>> I am not sure which end needs to be fixed, but looping over all
>>>> online cpus in disable_nonboot_cpus sounds plausible. So maybe the
>>>> check for pool-0 in cpupool_cpu_remove is wrong...?
>>>>
>>>> -Stefan
>>>
>>> Hmm yes - this looks completely broken.
>>>
>>> cpupool_cpu_remove() only has a single caller, which is from
>>> cpu_down(), and will unconditionally fail for cpus outside of the
>>> default pool.
>>>
>>> It is not obvious at all how this is supposed to work, and the comment
>>> beside cpupool_cpu_remove() doesn't help.
>>>
>>> Can you try the following (only compile tested) patch, which looks
>>> plausibly like it might DTRT. The for_each_cpupool() is a little
>>> nasty, but there appears to be no cpu_to_cpupool mapping available.
>>
>> Your patch has the disadvantage of supporting hot-unplug of the last
>> cpu in a cpupool. The following should work, however:
>
> Disadvantage and support sounded a bit confusing. But I think it means
> hot-unplugging the last cpu of a pool is bad and should not be working.

Correct.
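[To make the failure path concrete, here is a rough sketch of the two
pieces of code being discussed, reconstructed from the description above
and Xen sources of that era; names and details are approximate, not a
verbatim quote of the tree.]

/* xen/common/cpu.c (sketch): on shutdown, every non-boot CPU is taken
 * down, and an -EBUSY result is treated as a fatal bug. */
void disable_nonboot_cpus(void)
{
    int cpu, error;

    for_each_online_cpu ( cpu )
    {
        if ( cpu == 0 )
            continue;

        error = cpu_down(cpu);
        if ( error )
        {
            /* With the pre-patch cpupool_cpu_remove() below, this fires
             * for the first CPU that cpupool-numa-split moved out of
             * Pool-0, causing the panic on shutdown. */
            BUG_ON(error == -EBUSY);
            printk("Error taking CPU%d down: %d\n", cpu, error);
        }
    }
}

/* xen/common/cpupool.c (pre-patch): removal is refused for any CPU that
 * is not in cpupool0, even when the whole machine is going down. */
static int cpupool_cpu_remove(unsigned int cpu)
{
    int ret = 0;

    spin_lock(&cpupool_lock);
    if ( !cpumask_test_cpu(cpu, cpupool0->cpu_valid) )
        ret = -EBUSY;
    else
        cpumask_set_cpu(cpu, &cpupool_locked_cpus);
    spin_unlock(&cpupool_lock);

    return ret;
}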
>
>> diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
>> index 4a0e569..73249d3 100644
>> --- a/xen/common/cpupool.c
>> +++ b/xen/common/cpupool.c
>> @@ -471,12 +471,24 @@ static void cpupool_cpu_add(unsigned int cpu)
>>    */
>>   static int cpupool_cpu_remove(unsigned int cpu)
>>   {
>> -    int ret = 0;
>> +    int ret = -EBUSY;
>> +    struct cpupool **c;
>>
>>       spin_lock(&cpupool_lock);
>> -    if ( !cpumask_test_cpu(cpu, cpupool0->cpu_valid) )
>> -        ret = -EBUSY;
>> +    if ( cpumask_test_cpu(cpu, cpupool0->cpu_valid) )
>> +        ret = 0;
>>       else
>> +    {
>> +        for_each_cpupool(c)
>> +        {
>> +            if ( cpumask_test_cpu(cpu, (*c)->cpu_suspended) )
>
> The rest seems to keep the semantics the same as before (though does that
> mean unplugging the last cpu of pool-0 is ok?) But why test for suspended
> here to succeed (and not valid)?

Testing cpu_valid would again make it possible to remove the last cpu of a
cpupool in the hotplug case. cpu_suspended is set when all cpus are to be
removed due to shutdown, suspend to RAM/disk, ...

Juergen

>
>
>> +            {
>> +                ret = 0;
>> +                break;
>> +            }
>> +        }
>> +    }
>> +    if ( !ret )
>>           cpumask_set_cpu(cpu, &cpupool_locked_cpus);
>>       spin_unlock(&cpupool_lock);
>>
>>
>>
>> Juergen
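[For reference, this is what cpupool_cpu_remove() would look like with the
patch applied, assembled from the context and added lines of the hunk
above; the trailing return statement is assumed from the surrounding code,
since the diff does not show the function tail.]

static int cpupool_cpu_remove(unsigned int cpu)
{
    int ret = -EBUSY;
    struct cpupool **c;

    spin_lock(&cpupool_lock);
    if ( cpumask_test_cpu(cpu, cpupool0->cpu_valid) )
        /* CPUs in Pool-0 may always be removed. */
        ret = 0;
    else
    {
        for_each_cpupool(c)
        {
            /* cpu_suspended is only set when all CPUs go down for
             * shutdown or suspend, so ordinary hot-unplug of the last
             * CPU of a pool still fails with -EBUSY. */
            if ( cpumask_test_cpu(cpu, (*c)->cpu_suspended) )
            {
                ret = 0;
                break;
            }
        }
    }
    if ( !ret )
        cpumask_set_cpu(cpu, &cpupool_locked_cpus);
    spin_unlock(&cpupool_lock);

    return ret; /* assumed; not shown in the diff */
}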