From: Dario Faggioli
Subject: Re: [PATCH v2] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{,s}
Date: Fri, 23 Oct 2015 15:50:09 +0200
Message-ID: <1445608209.5117.125.camel@citrix.com>
In-Reply-To: <1445596390-31298-1-git-send-email-julien.grall@citrix.com>
To: Julien Grall, xen-devel@lists.xenproject.org
Cc: George Dunlap, Andrew Cooper, Keir Fraser, Jan Beulich
List-Id: xen-devel@lists.xenproject.org

On Fri, 2015-10-23 at 11:33 +0100, Julien Grall wrote:
> The last parameter of alloc_domheap_page{,s} contains the memory
> flags, not the order of the allocation.
>
> Use 0 for the call in p2m_pod_set_cache_target, as it was before
> 1069d63c5ef2510d08b83b2171af660e5bb18c63 "x86/mm/p2m: use defines for
> page sizes". Note that PAGE_ORDER_4K is also equal to 0, so the
> behaviour stays the same.
>
> For the call in p2m_pod_offline_or_broken_replace we want to allocate
> the new page on the same NUMA node as the previous page, so retrieve
> the NUMA node and pass it in the memory flags.
>
> Signed-off-by: Julien Grall
>
> ---
>
> Note that the patch has only been build tested.
>

I've done some basic testing. That means I:
 - created an HVM guest with memory < maxmem
 - played with `xl mem-set' and `xl mem-max' on it
 - locally migrated it
 - played with `xl mem-set' and `xl mem-max' on it again
 - shut it down

All done on a NUMA host, with the guest's memory dancing (during the
'play' phases) up and down around the amount of RAM present in each
NUMA node.

I'm not sure how I should trigger and test memory hotunplug, nor
whether my test box supports it at all. Since it seems that memory
hotunplug is what would really need exercising here, I'm not sure it's
appropriate to add the following tag:

Tested-by: Dario Faggioli

but I'll let you guys (Jan, mainly, I guess) decide. If the above is
deemed enough, feel free to stick it there; if not, fine anyway. :-)
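For reference, the two call sites being discussed boil down to
something like the following. This is a sketch rather than the literal
hunks from the patch (the local variable names are placeholders);
alloc_domheap_page{,s}(), MEMF_node(), phys_to_nid() and
page_to_maddr() are the usual Xen helpers:

    /*
     * The prototype is:
     *
     *   struct page_info *alloc_domheap_pages(struct domain *d,
     *                                         unsigned int order,
     *                                         unsigned int memflags);
     *
     * i.e. the last argument is a set of MEMF_* flags, not an order.
     */

    /*
     * p2m_pod_set_cache_target(): PAGE_ORDER_4K (== 0) was being passed
     * as memflags. Harmless in practice, but the intent is "no flags":
     */
    page = alloc_domheap_pages(d, order, 0);

    /*
     * p2m_pod_offline_or_broken_replace(): record the NUMA node of the
     * page being retired, then ask for the replacement on that node:
     */
    nodeid_t node = phys_to_nid(page_to_maddr(p));

    free_domheap_page(p);
    p = alloc_domheap_page(d, MEMF_node(node));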
Regards,
Dario

-- 
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)