From: Dario Faggioli
Subject: Re: [PATCH 2/3] xen: sched_null: support for hard affinity
Date: Tue, 21 Mar 2017 09:47:10 +0100
Message-ID: <1490086030.15340.12.camel@citrix.com>
References: <148977585611.29510.906390949919041674.stgit@Palanthas.fritz.box> <148977618592.29510.6991110994080248461.stgit@Palanthas.fritz.box>
To: Stefano Stabellini
Cc: Jonathan Davies, George Dunlap, Marcus Granado, Stefano Stabellini, Julien Grall, xen-devel@lists.xenproject.org
List-Id: xen-devel@lists.xenproject.org

On Mon, 2017-03-20 at 16:46 -0700, Stefano Stabellini wrote:
> On Fri, 17 Mar 2017, Dario Faggioli wrote:
> > 
> > --- a/xen/common/sched_null.c
> > +++ b/xen/common/sched_null.c
> > @@ -117,6 +117,14 @@ static inline struct null_dom *null_dom(const struct domain *d)
> >      return d->sched_priv;
> >  }
> >  
> > +static inline bool check_nvc_affinity(struct null_vcpu *nvc, unsigned int cpu)
> > +{
> > +    cpumask_and(cpumask_scratch_cpu(cpu), nvc->vcpu->cpu_hard_affinity,
> > +                cpupool_domain_cpumask(nvc->vcpu->domain));
> > +
> > +    return cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu));
> > +}
> 
> If you make it take a struct vcpu* as first argument, it will be more
> generally usable.
> 
Yes, that's probably a good idea. Thanks.

> >          return cpu;
> >  
> > -    /* If not, just go for a valid free pCPU, if any */
> > +    /* If not, just go for a free pCPU, within our affinity, if any */
> >      cpumask_and(cpumask_scratch_cpu(cpu), &prv->cpus_free, cpus);
> > +    cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
> > +                v->cpu_hard_affinity);
> 
> You can do this with one cpumask_and (in addition to the one above):
> 
>     cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
>                 &prv->cpus_free);
> 
Mmm... right. Wow... Quite an oversight on my side! :-P

> > @@ -308,7 +320,10 @@ static unsigned int pick_cpu(struct null_private *prv, struct vcpu *v)
> >       * only if the pCPU is free.
> >       */
> >      if ( unlikely(cpu == nr_cpu_ids) )
> > -        cpu = cpumask_any(cpus);
> > +    {
> > +        cpumask_and(cpumask_scratch_cpu(cpu), cpus, v->cpu_hard_affinity);
> 
> Could the intersection be 0?
> 
Not really: vcpu_set_hard_affinity() fails when trying to set the
affinity to something that has a zero intersection with the set of
cpus of the pool the domain is in. Other schedulers rely on this too.
However, I need to re-check what happens if everything is ok when
changing the affinity, but then all the cpus in the affinity itself are
removed from the pool... I will do that because, as I said, this is
something general (and this area is really a can of worms... and I
keep finding weird corner cases! :-O)

> > @@ -408,8 +426,7 @@ static void null_vcpu_insert(const struct scheduler *ops, struct vcpu *v)
> >           */
> >          vcpu_assign(prv, v, cpu);
> >      }
> > -    else if ( cpumask_intersects(&prv->cpus_free,
> > -                                 cpupool_domain_cpumask(v->domain)) )
> > +    else if ( cpumask_intersects(&prv->cpus_free, cpumask_scratch_cpu(cpu)) )
> >      {
> >          spin_unlock(lock);
> >          goto retry;
> > @@ -462,7 +479,7 @@ static void null_vcpu_remove(const struct scheduler *ops, struct vcpu *v)
> >  
> >      spin_lock(&prv->waitq_lock);
> >      wvc = list_first_entry_or_null(&prv->waitq, struct null_vcpu, waitq_elem);
> > -    if ( wvc )
> > +    if ( wvc && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu)) )
> 
> shouldn't this be
> 
>     check_nvc_affinity(wvc, cpu)
> 
> ?
> 
Mmm... well, considering that I never touch cpumask_scratch_cpu() in
this function, this is clearly a bug. Will fix.
:-/

Thanks and Regards,
Dario

-- 
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)