From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini
Subject: Re: EPT page fault procedure
Date: Thu, 31 Oct 2013 11:54:21 +0100
Message-ID: <527236DD.5000804@redhat.com>
References:
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
To: Arthur Chunqi Li , "kvm@vger.kernel.org"
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:33350 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751975Ab3JaKyZ (ORCPT ); Thu, 31 Oct 2013 06:54:25 -0400
In-Reply-To:
Sender: kvm-owner@vger.kernel.org
List-ID:

On 31/10/2013 10:07, Arthur Chunqi Li wrote:
> Sorry to disturb you with so many trivial questions in KVM EPT memory
> management and thanks for your patience.

No problem, but please stay on-list. Adding back kvm@vger.kernel.org.

> I got confused in the EPT
> page fault processing function (tdp_page_fault). I think when Qemu
> registers the memory region for a VM, physical memory mapped to this
> PVA region isn't allocated indeed. So the page fault procedure of EPT
> violation which maps GFN to PFN should allocate the real physical
> memory and establish the real mapping from PVA to PFA in Qemu's page

Do you mean HVA to PFN? If so, you can look at the function hva_to_pfn. :)

> table. What is the point in tdp_page_fault() handling such mapping
> from PVA to PFA?

The EPT page table entry is created in __direct_map using the pfn returned by try_async_pf. try_async_pf itself gets the pfn from gfn_to_pfn_async or gfn_to_pfn_prot; both of them call __gfn_to_pfn with different arguments.

__gfn_to_pfn first goes from GFN to HVA using the memslots (gfn_to_memslot and, in __gfn_to_pfn_memslot, __gfn_to_hva_many), then it calls hva_to_pfn. Ultimately, hva_to_pfn_fast and hva_to_pfn_slow are where KVM calls functions from the kernel's get_user_pages family.

Paolo