From: Dario Faggioli
Subject: Re: [PATCH 1 of 4 v6/leftover] libxl: enable automatic placement of guests on NUMA nodes
Date: Mon, 23 Jul 2012 18:16:13 +0200
Message-ID: <1343060173.4998.44.camel@Solace>
In-Reply-To: <20493.29389.201770.581611@mariner.uk.xensource.com>
References: <20493.14245.392052.643802@mariner.uk.xensource.com> <1343054379.4998.33.camel@Solace> <20493.29389.201770.581611@mariner.uk.xensource.com>
To: Ian Jackson
Cc: Andre Przywara, Ian Campbell, Stefano Stabellini, George Dunlap, Juergen Gross, xen-devel
List-Id: xen-devel@lists.xenproject.org

On Mon, 2012-07-23 at 16:50 +0100, Ian Jackson wrote:
> Dario Faggioli writes ("Re: [PATCH 1 of 4 v6/leftover] libxl: enable automatic placement of guests on NUMA nodes"):
> > On Mon, 2012-07-23 at 12:38 +0100, Ian Jackson wrote:
> > > This probably deserves a log message.
> >
> > It's there: it's being printed by libxl__get_numa_candidate(). It's like
> > that to avoid printing more of them, which would be confusing, e.g.,
> > something like this:
> >
> > libxl: ERROR Too many nodes
> > libxl: ERROR No placement found
> >
> > Is that acceptable?
>
> Two messages is better than one vague one.

I agree, but in this specific case it looked particularly ugly, as we
were telling the user that we had decided not to run placement _and_,
at the same time, that he should be worried because placement did not
succeed!
:-O

> One message would be better but then you have to make sure of course
> that every path prints exactly one message.
> What I want is no path producing two (or more) conflicting
> indications.

Basically, by putting the message saying that we haven't found any
placement in the function that actually looks for the placement --
instead of in its callers -- I ensure that either an error happens
(and is logged) before the placement itself can take place, or the
search runs, finds nothing, and logs exactly that. And this all looks
reasonable to me.

Also, I tested the various paths (by creating fake nodes, etc.), and it
seems to behave as you ask. I'll double check; if it turns out to be
that way, are you fine with it?

> > Well, it is for me, but as we agreed on 8, I went for that. What I
> > wanted was just to make sure we (or whoever else) know/remember that
> > 16 could work too, in case that turns out to be useful in future.
> > That being said, if you agree on raising the cap to 16 right now,
> > I'll be glad to do that. :-D
>
> I don't have a particular opinion on exactly what the cap should be.
> It should be sufficiently tight to prevent runaways. A 2^16 worst
> case computation on domain start is certainly arguably acceptable.

Ok, I'll go for 16 then (and will fix the 65536-->65535).
Thanks and Regards,
Dario

-- 
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel