From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andres Lagar-Cavilla
Subject: [PATCH 1 of 8] x86/mm: Fix paging_load
Date: Wed, 25 Jan 2012 22:53:25 -0500
Message-ID: <143e4982c9bf0da5d8fe.1327550005@xdev.gridcentric.ca>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xensource.com
Errors-To: xen-devel-bounces@lists.xensource.com
To: xen-devel@lists.xensource.com
Cc: olaf@aepfle.de, tim@xen.org, andres@gridcentric.ca, adin@gridcentric.ca
List-Id: xen-devel@lists.xenproject.org

 xen/arch/x86/mm/p2m.c | 18 ++++++++----------
 1 files changed, 8 insertions(+), 10 deletions(-)

When restoring a p2m entry in the paging_load path, we were not updating the
m2p entry correctly.

Also take advantage of this to act on an old suggestion: once done with the
load, promote the p2m entry to the final guest-accessible type. This
simplifies the logic.

Tested to work with xenpaging.

Signed-off-by: Andres Lagar-Cavilla <andres@gridcentric.ca>

diff -r f09f62ae92b7 -r 143e4982c9bf xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -975,7 +975,7 @@ void p2m_mem_paging_populate(struct doma
 int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
 {
     struct page_info *page;
-    p2m_type_t p2mt, target_p2mt;
+    p2m_type_t p2mt;
     p2m_access_t a;
     mfn_t mfn;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
@@ -1033,15 +1033,13 @@ int p2m_mem_paging_prep(struct domain *d
         }
     }
 
-    target_p2mt = (p2mt == p2m_ram_paging_in_start) ?
-        /* If we kicked the pager with a populate event, the pager will send
-         * a resume event back */
-        p2m_ram_paging_in :
-        /* If this was called asynchronously by the pager, then we can
-         * transition directly to the final guest-accessible type */
-        (paging_mode_log_dirty(d) ? p2m_ram_logdirty : p2m_ram_rw);
-    /* Fix p2m mapping */
-    set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, target_p2mt, a);
+    /* Make the page already guest-accessible. If the pager still has a
+     * pending resume operation, it will be idempotent p2m entry-wise,
+     * but will unpause the vcpu */
+    set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
+                  paging_mode_log_dirty(d) ? p2m_ram_logdirty :
+                  p2m_ram_rw, a);
+    set_gpfn_from_mfn(mfn_x(mfn), gfn);
 
     atomic_dec(&d->paged_pages);