From: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>
Subject: Re: [PATCH 3 of 3] x86/mm: Teach paging to page table-based p2m
Date: Thu, 22 Mar 2012 07:47:33 -0700
Message-ID: <31e7e73852deab22c95b5c9772fbd821.squirrel@webmail.lagarcavilla.org>
References: <7704c9e0f5ffdf37a290.1332357780@xdev.gridcentric.ca>
 <20120322105536.GE37468@ocelot.phlegethon.org>
In-Reply-To: <20120322105536.GE37468@ocelot.phlegethon.org>
Reply-To: andres@lagarcavilla.org
To: Tim Deegan
Cc: olaf@aepfle.de, keir@xen.org, andres@gridcentric.ca,
 xen-devel@lists.xen.org, wei.wang2@amd.com, hongkaixing@huawei.com,
 adin@gridcentric.ca
List-Id: xen-devel@lists.xenproject.org

> Hi,
>
> The last version of this patch had the beginnings of an interlock to
> avoid iommu-pt-sharing and p2m-fu happening at the same time. I
> suggested that it wasn't complete enough, but it seems to have gone away
> entirely!

Tim,

A follow-up patch would allow enabling paging on AMD, with the interlock
in place. It would be of the form:

diff -r 0f8002f1efad xen/arch/x86/mm/mem_event.c
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -563,8 +563,11 @@ int mem_event_domctl(struct domain *d, x
         if ( !hap_enabled(d) )
             break;

-        /* Currently only EPT is supported */
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
+        /* Currently EPT or AMD with no iommu/hap page table sharing are
+         * supported */
+        if ( !((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) ||
+               ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD) &&
+                !need_iommu(d))) )
             break;

         rc = -EXDEV;

Right now I haven't posted that, as paging on AMD doesn't fully work.
The current patch moves us closer in that direction. The interlock
itself would need more fleshing out, methinks.
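For what it's worth, the two-way interlock could be sketched roughly as
below. This is only an illustration of the idea, not the actual Xen
code: the struct, the field names (iommu_in_use, mem_paging_enabled) and
the two entry points are hypothetical stand-ins for the real domctl and
device-assignment paths.

```c
/* Sketch of a two-way interlock between mem_event paging and IOMMU
 * passthrough.  All names here are illustrative, not real Xen fields. */
#include <errno.h>
#include <stdbool.h>

struct fake_domain {
    bool iommu_in_use;        /* a passthrough device is assigned */
    bool mem_paging_enabled;  /* the "p2m-foo on" flag */
};

/* Enable-time check, as in the diff: refuse while the IOMMU is in use. */
static int mem_paging_enable(struct fake_domain *d)
{
    if ( d->iommu_in_use )
        return -EXDEV;
    d->mem_paging_enabled = true;
    return 0;
}

/* The missing half: device assignment must also check the flag, so
 * passthrough cannot be turned on after paging has been enabled. */
static int assign_device(struct fake_domain *d)
{
    if ( d->mem_paging_enabled )
        return -EXDEV;
    d->iommu_in_use = true;
    return 0;
}
```

With both checks in place, whichever feature is enabled first wins and
the other is refused, which is the "p2m-foo on" flag behaviour asked
about below.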
AMD needs hap_pt_share to be disabled in order to enable paging (and, in
the fullness of time, mem access). But more generally, wouldn't both
Intel and AMD require !need_iommu()? Also, is it enough to check only at
enable-p2m-foo time? What about a "p2m-foo on" flag to prevent
passthrough from happening at a later time as well?

> Also:
>
> At 15:23 -0400 on 21 Mar (1332343380), Andres Lagar-Cavilla wrote:
>> @@ -615,11 +618,12 @@ pod_retry_l1:
>>                              sizeof(l1e));
>>
>>      if ( ret == 0 ) {
>> +        unsigned long l1e_mfn = l1e_get_pfn(l1e);
>>          p2mt = p2m_flags_to_type(l1e_get_flags(l1e));
>> -        ASSERT(l1e_get_pfn(l1e) != INVALID_MFN || !p2m_is_ram(p2mt));
>> +        ASSERT( (l1e_mfn != INVALID_MFN || !p2m_is_ram(p2mt)) ||
>> +                (l1e_mfn == INVALID_MFN && p2m_is_paging(p2mt)) );
>
> I guess, given the discussion in the other subthread, that this ASSERT
> always passes, and should be using mfn_valid() instead?

Correct, right now. It depends on where we end up wrt patch #2 of this
series.

Thanks,
Andres

> Cheers,
>
> Tim.