From mboxrd@z Thu Jan  1 00:00:00 1970
From: Dario Faggioli
Subject: Re: [Hackathon minutes] PV frontends/backends and NUMA machines
Date: Tue, 21 May 2013 12:17:39 +0200
Message-ID: <1369131459.12423.35.camel@Solace>
References: <519B33D2.3020906@citrix.com> <20130521092424.GK32007@zion.uk.xensource.com>
To: George Dunlap
Cc: "xen-devel@lists.xensource.com", Wei Liu, Roger Pau Monné, Stefano Stabellini
List-Id: xen-devel@lists.xenproject.org

On Tue, 2013-05-21 at 10:53 +0100, George Dunlap wrote:
> On Tue, May 21, 2013 at 10:24 AM, Wei Liu wrote:
> > So the core thing in netback is almost ready. I trust the Linux
> > scheduler now and don't pin the kthreads at all, but the relevant
> > code should be easy to add. I just checked my code: all memory
> > allocation is already node-aware.
> >
> > As for the toolstack part, I'm not sure writing the initial node to
> > xenstore will be sufficient. Do we do inter-node migration? If so,
> > should the frontend / backend also update the xenstore information
> > as it migrates?
>
> We can of course migrate the vcpus, but migrating the actual memory
> from one node to another is pretty tricky, particularly for PV guests.
> It won't be something that happens very often; when it does, we will
> need to sort out migrating the backend threads.
>
Indeed.

> > IIRC the memory of a guest is striped across nodes; if that is the
> > case, how does pinning help?
> > (I might be talking crap as I don't know much about NUMA and its
> > current status in Xen)
>
> It's striped across nodes *of its NUMA affinity*. So if you have a
> 4-node box, and you set its NUMA affinity to node 3, then the
> allocator will try to get all of the memory from node 3. If its
> affinity is set to {2,3}, then the allocator will stripe it across
> nodes 2 and 3.
>
Right. And beyond that, the whole point of work items 1, 2 and 3 (in
George's list, at the beginning of this thread) is to make this
striping even "wiser": not just 'memory comes from nodes {2,3}', but
'_this_ memory comes from node {2} and _that_ memory comes from node
{3}'.

That's why we think pinning would do it. But you're right, Wei: that
is not true right now, and it will only become true once we get those
work items done. :-)

Regards,
Dario

--
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)