From: Dario Faggioli
Subject: Re: Scheduler regression in 4.7
Date: Fri, 12 Aug 2016 05:32:33 +0200
To: Andrew Cooper, George Dunlap, George Dunlap
Cc: Jan Beulich, Xen-devel List

On Thu, 2016-08-11 at 16:42 +0100, Andrew Cooper wrote:
> On 11/08/16 15:28, Dario Faggioli wrote:
> > On Thu, 2016-08-11 at 14:39 +0100, Andrew Cooper wrote:
> > > It will be IS_RUNQ_IDLE() which is the problem.
> > >
> > Ok, that does one step of list traversing (the runq). What I didn't
> > understand from your report is what crashed when.
> IS_RUNQ_IDLE() was traversing a list, and it encountered an element
> which was being concurrently deleted on a different pcpu.
>
Yes, I figured it was a race like this; I was asking whether it happened
at boot, during domain creation, during domain shutdown, etc. (to which
you replied below, so thanks :-) ).

> > AFAICR, during domain destruction we basically move the domain to
> > cpupool0, and without a patch that I sent recently, that is always
> > done as a full-fledged cpupool movement, even if the domain is
> > _already_ in cpupool0. So, even if you are not using cpupools, and
> > since you mention domain shutdown, we probably are looking at 2).
> XenServer doesn't use any cpupools, so all pcpus and vcpus are in
> cpupool0.
>
Right. But since you say the issue manifests during domain destruction,
this may still be triggered by this callchain:

  domain_kill(d)
    cpupool_move_domain(d, cpupool0)
      cpupool_move_domain_locked()
        sched_move_domain()
          SCHED_OP(insert_vcpu)
            csched_vcpu_insert()
              csched_cpu_pick()

Considering that, as I was saying, without f6bde162c4 (which is not in
4.7), sched_move_domain() is called even if the domain is already in
cpupool0, and it then calls insert_vcpu(), etc.

Of course, f6bde162c4 is *not* the solution. It mitigates the issue in
such a way that it won't show up if you don't really use cpupools, but
if there is a race, it may still happen when using cpupools and
destroying a domain in a pool other than cpupool0.

I have to say that, on staging-4.7, I am able to create, reboot and
destroy a domain without seeing this issue... but again, if it's a
race, that certainly does not mean it's not there! Also, below, you say
that you think we're in domain construction (which, I agree, is what
seems to result from the stack trace).

> It is a vm reboot of an HVM domU (CentOS 7 64bit, although I doubt
> that is relevant).
>
> The testcase is vm lifecycle ops on a 32vcpu VM, on a host which
> happens to have 32pcpus.
>
FWIW, I tried 16 vcpus on a host with 16 pcpus (and 16 vcpus in dom0).

> > The questions I'm asking above have the aim of figuring out what
> > the status of the runq could be, and why adding a call to
> > csched_cpu_pick() from insert_vcpu() is making things explode...
> It turns out that the stack trace is rather less stack rubble than I
> first thought.  We are in domain construction, and specifically the
> XEN_DOMCTL_max_vcpus hypercall.  All other pcpus are idle.
>
>     for ( i = 0; i < max; i++ )
>     {
>         if ( d->vcpu[i] != NULL )
>             continue;
>
>         cpu = (i == 0) ?
>             cpumask_any(online) :
>             cpumask_cycle(d->vcpu[i-1]->processor, online);
>
>         if ( alloc_vcpu(d, i, cpu) == NULL )
>             goto maxvcpu_out;
>     }
>
> The cpumask_cycle() call is complete and execution has moved into
> alloc_vcpu().
>
> Unfortunately, none of the code around here spills i or cpu onto the
> stack, so I can't see which values they have from the stack dump.
>
> However, I see that csched_vcpu_insert() plays with vc->processor,
> which surely invalidates the cycle logic behind this loop?
>
Yes, but I don't think that is a problem. The purpose of calling
csched_cpu_pick() from insert_vcpu() is to find a (the best?) placement
for the vcpu. That will then be put in v->processor (and the vcpu
queued in the runq of that processor).

But the point is that _csched_cpu_pick(), in order to come up with a
(potentially) new (and better!) cpu for the vcpu, *reads* v->processor,
to know where v is now, and takes that into account in its load
balancing calculations (mostly, idleness/SMT stuff).

So, those calls to cpumask_any() and cpumask_cycle() are useful, as
they give csched_cpu_pick() something to start with, and it's ok for
the result to be overridden. But the fact is, IS_RUNQ_IDLE() *does*
access the runq of that initial cpu and, as explained here, for
Credit1, this happens without holding the proper spinlock:

    /* This is safe because vc isn't yet being scheduled */
    vc->processor = csched_cpu_pick(ops, vc);

    lock = vcpu_schedule_lock_irq(vc);

    if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
        __runq_insert(svc);

    vcpu_schedule_unlock_irq(lock, vc);

Which I think may well explain the race.

So, the solution appears to me to be to move the call to
csched_cpu_pick() inside the critical section (see the sketch below).
As a matter of fact, for Credit2 it's already like that (while for
RTDS, that may indeed not be necessary).

I guess the fact that the runq was actually being accessed in the
Credit1 case too was hidden deeply enough inside IS_RUNQ_IDLE() for
George not to notice when writing the patch, and for me not to notice
when reviewing it... sorry for that.
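For reference, this is IS_RUNQ_IDLE() as I remember it from
sched_credit.c (quoting from memory, so double check): it dereferences
the head of the pcpu's runq, which is exactly the traversal that races
with a concurrent removal if we don't hold that pcpu's schedule lock:

    #define IS_RUNQ_IDLE(cpu)  (list_empty(RUNQ(cpu)) || \
                                is_idle_vcpu(__runq_elem(RUNQ(cpu)->next)->vcpu))

And something like the following is what I have in mind for
csched_vcpu_insert(). It's only a sketch, not even compile-tested, and
the surrounding code may look a bit different in staging, so take it as
an illustration of the locking change rather than as the actual patch:

    static void
    csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
    {
        struct csched_vcpu *svc = vc->sched_priv;
        spinlock_t *lock;

        BUG_ON( is_idle_vcpu(vc) );

        /*
         * csched_cpu_pick() (via IS_RUNQ_IDLE()) peeks at the runq of
         * vc->processor, so we must hold that pcpu's schedule lock
         * while calling it.
         */
        lock = vcpu_schedule_lock_irq(vc);

        vc->processor = csched_cpu_pick(ops, vc);

        /*
         * The schedule lock is per-pcpu, and which one covers vc is
         * determined by vc->processor, which we have just changed:
         * drop the lock we hold and take the one of the new pcpu
         * before queueing vc in its runq.
         */
        spin_unlock_irq(lock);

        lock = vcpu_schedule_lock_irq(vc);

        if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
            __runq_insert(svc);

        vcpu_schedule_unlock_irq(lock, vc);
    }

The unlock/re-lock dance is needed because the lock protecting a vcpu
is the one of the pcpu in v->processor: once csched_cpu_pick() has
given us a (potentially different) pcpu, we have to switch to that
pcpu's lock before touching its runq.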
I'll send a patch for staging (which then will have to be backported).

Regards,
Dario

-- 
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)