From: Tamas K Lengyel <tamas@tklengyel.com>
Subject: Re: [PATCH] altp2m: Allow the hostp2m to be shared
Date: Wed, 27 Apr 2016 09:37:48 -0600
To: George Dunlap
Cc: Kevin Tian, Keir Fraser, Jan Beulich, George Dunlap, Andrew Cooper,
    Jun Nakajima, Xen-devel
In-Reply-To: <5720DB5E.407@citrix.com>
References: <1461258632-3330-1-git-send-email-tamas@tklengyel.com>
 <5720D450.5020709@citrix.com> <5720DB5E.407@citrix.com>
List-Id: xen-devel@lists.xenproject.org

On Wed, Apr 27, 2016 at 9:31 AM, George Dunlap <george.dunlap@citrix.com> wrote:
> On 27/04/16 16:18, Tamas K Lengyel wrote:
> > On Wed, Apr 27, 2016 at 9:01 AM, George Dunlap <george.dunlap@citrix.com>
> > wrote:
> >
> >> On 21/04/16 18:10, Tamas K Lengyel wrote:
> >>> Don't propagate altp2m changes from ept_set_entry for memshare as
> >> memshare
> >>> already has the lock. We call altp2m propagate changes once memshare
> >>> successfully finishes. Also, allow the hostp2m entries to be of type
> >>> p2m_ram_shared.
> >>>
> >>> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
> >>
> >> Sorry for the delay in reviewing -- trying to get my head back around
> >> the altp2m code.  On the whole looks reasonable, but one question...
> >>
> >>> ---
> >>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> >>> Cc: Keir Fraser <keir@xen.org>
> >>> Cc: Jan Beulich <jbeulich@suse.com>
> >>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> >>> Cc: Jun Nakajima <jun.nakajima@intel.com>
> >>> Cc: Kevin Tian <kevin.tian@intel.com>
> >>> ---
> >>>  xen/arch/x86/mm/mem_sharing.c | 11 +++++++++++
> >>>  xen/arch/x86/mm/p2m-ept.c     |  2 +-
> >>>  xen/arch/x86/mm/p2m.c         |  7 +++----
> >>>  3 files changed, 15 insertions(+), 5 deletions(-)
> >>>
> >>> diff --git a/xen/arch/x86/mm/mem_sharing.c
> >> b/xen/arch/x86/mm/mem_sharing.c
> >>> index a522423..d5b4b2d 100644
> >>> --- a/xen/arch/x86/mm/mem_sharing.c
> >>> +++ b/xen/arch/x86/mm/mem_sharing.c
> >>> @@ -35,6 +35,7 @@
> >>>  #include <asm/p2m.h>
> >>>  #include <asm/atomic.h>
> >>>  #include <asm/event.h>
> >>> +#include <asm/altp2m.h>
> >>>  #include <xsm/xsm.h>
> >>>
> >>>  #include "mm-locks.h"
> >>> @@ -1026,6 +1027,16 @@ int mem_sharing_share_pages(struct domain *sd,
> >> unsigned long sgfn, shr_handle_t
> >>>      /* We managed to free a domain page. */
> >>>      atomic_dec(&nr_shared_mfns);
> >>>      atomic_inc(&nr_saved_mfns);
> >>> +
> >>> +    if( altp2m_active(cd) )
> >>> +    {
> >>> +        p2m_access_t a;
> >>> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd);
> >>> +        p2m->get_entry(p2m, cgfn, NULL, &a, 0, NULL, NULL);
> >>> +        p2m_altp2m_propagate_change(cd, _gfn(cgfn), smfn, PAGE_ORDER_4K,
> >>> +                                    p2m_ram_shared, a);
> >>> +    }
> >>> +
> >>>      ret = 0;
> >>>
> >>>  err_out:
> >>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> >>> index 3cb6868..1ac3018 100644
> >>> --- a/xen/arch/x86/mm/p2m-ept.c
> >>> +++ b/xen/arch/x86/mm/p2m-ept.c
> >>> @@ -846,7 +846,7 @@ out:
> >>>      if ( is_epte_present(&old_entry) )
> >>>          ept_free_entry(p2m, &old_entry, target);
> >>>
> >>> -    if ( rc == 0 && p2m_is_hostp2m(p2m) )
> >>> +    if ( rc == 0 && p2m_is_hostp2m(p2m) && p2mt != p2m_ram_shared )
> >>>          p2m_altp2m_propagate_change(d, _gfn(gfn), mfn, order, p2mt,
> >> p2ma);
> >>>
> >>>      return rc;
> >>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> >>> index b3fce1b..d2aebf7 100644
> >>> --- a/xen/arch/x86/mm/p2m.c
> >>> +++ b/xen/arch/x86/mm/p2m.c
> >>> @@ -1739,11 +1739,10 @@ int p2m_set_altp2m_mem_access(struct domain *d,
> >> struct p2m_domain *hp2m,
> >>>      /* Check host p2m if no valid entry in alternate */
> >>>      if ( !mfn_valid(mfn) )
> >>>      {
> >>> -        mfn = hp2m->get_entry(hp2m, gfn_l, &t, &old_a,
> >>> -                              P2M_ALLOC | P2M_UNSHARE, &page_order,
> >> NULL);
> >>> +        mfn = hp2m->get_entry(hp2m, gfn_l, &t, &old_a, 0, &page_order,
> >> NULL);
> >>
> >> Why are you getting rid of P2M_ALLOC here?  What happens if the hp2m
> >> entry is populate-on-demand?
> >>
> >
> > There is a check further down here that only allows p2m_ram_rw and
> > p2m_ram_shared.
>
> So what P2M_ALLOC means is, "If this entry is PoD, then please
> populate it so I get a ram page."  So the only way you can get a
> p2m_populate_on_demand type returned is if you remove this flag.  Leave
> it and (assuming there's enough ram to go around), you'll always get
> p2m_ram_rw.  :-)
>
> > On the non-altp2m path mem_access doesn't request P2M_ALLOC
> > either (but doesn't check the type), so I would say mem_access is not
> > compatible with PoD.
>
> Off the top of my head I can't see a reason why they couldn't co-exist
> in principle, if you added P2M_ALLOC in a few key places.

Sure, I'd rather do that in a separate patch and for now have the
mem_access paths behave the same way before making that adjustment.

Tamas
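To make the P2M_ALLOC point above concrete, here is a minimal C sketch of the
hostp2m fallback lookup performed in p2m_set_altp2m_mem_access(). It reuses
only the get_entry() call and identifiers quoted in the patch hunks; the
wrapper function name and the error value are hypothetical, so treat it as an
illustration of the discussion rather than the actual Xen code.

/* Sketch only -- not Xen source.  Identifiers follow the patch hunks quoted
 * above; the function name and error value are illustrative. */
static int sketch_hostp2m_lookup(struct p2m_domain *hp2m, unsigned long gfn_l)
{
    p2m_type_t t;
    p2m_access_t old_a;
    unsigned int page_order;
    mfn_t mfn;

    /*
     * With P2M_ALLOC | P2M_UNSHARE (the flags the patch drops), a
     * populate-on-demand entry would be populated first and come back as
     * p2m_ram_rw, given enough free memory.  With a query of 0, the lookup
     * can instead return an entry of type p2m_populate_on_demand.
     */
    mfn = hp2m->get_entry(hp2m, gfn_l, &t, &old_a, 0, &page_order, NULL);

    /*
     * The check referred to in the reply: only plain RAM and shared RAM are
     * accepted, so a PoD-typed (or invalid) entry is rejected rather than
     * populated.
     */
    if ( !mfn_valid(mfn) || (t != p2m_ram_rw && t != p2m_ram_shared) )
        return -EINVAL;  /* illustrative error value */

    return 0;
}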