* Re: EPT page fault procedure
  [not found] <CABpY8MKRSX+RLmqNAAp8pDG6vEGHOwa0AmdH5DHf7gAdeN4nRQ@mail.gmail.com>
@ 2013-10-31 10:54 ` Paolo Bonzini
  2013-11-04  1:05   ` Arthur Chunqi Li
  0 siblings, 1 reply; 3+ messages in thread
From: Paolo Bonzini @ 2013-10-31 10:54 UTC (permalink / raw)
  To: Arthur Chunqi Li, kvm@vger.kernel.org

On 31/10/2013 10:07, Arthur Chunqi Li wrote:
> Sorry to disturb you with so many trivial questions in KVM EPT memory
> management and thanks for your patience.

No problem, please remain onlist though. Adding back kvm@vger.kernel.org.

> I got confused in the EPT
> page fault processing function (tdp_page_fault). I think when Qemu
> registers the memory region for a VM, physical memory mapped to this
> PVA region isn't allocated indeed. So the page fault procedure of EPT
> violation which maps GFN to PFN should allocate the real physical
> memory and establish the real mapping from PVA to PFA in Qemu's page

Do you mean HVA to PFN? If so, you can look at function hva_to_pfn. :)

> table. What is the point in tdp_page_fault() handling such mapping
> from PVA to PFA?

The EPT page table entry is created in __direct_map using the pfn
returned by try_async_pf. try_async_pf itself gets the pfn from
gfn_to_pfn_async and gfn_to_pfn_prot. Both of them call __gfn_to_pfn
with different arguments. __gfn_to_pfn first goes from GFN to HVA using
the memslots (gfn_to_memslot and, in __gfn_to_pfn_memslot,
__gfn_to_hva_many), then it calls hva_to_pfn.

Ultimately, hva_to_pfn_fast and hva_to_pfn_slow are where KVM calls
functions from the kernel's get_user_page family.

Paolo
* Re: EPT page fault procedure
  2013-10-31 10:54 ` EPT page fault procedure Paolo Bonzini
@ 2013-11-04  1:05   ` Arthur Chunqi Li
  2013-11-04 12:20     ` Paolo Bonzini
  0 siblings, 1 reply; 3+ messages in thread
From: Arthur Chunqi Li @ 2013-11-04 1:05 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm@vger.kernel.org

Hi Paolo,

On Thu, Oct 31, 2013 at 6:54 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 31/10/2013 10:07, Arthur Chunqi Li wrote:
>> Sorry to disturb you with so many trivial questions in KVM EPT memory
>> management and thanks for your patience.
>
> No problem, please remain onlist though. Adding back kvm@vger.kernel.org.
>
>> I got confused in the EPT
>> page fault processing function (tdp_page_fault). I think when Qemu
>> registers the memory region for a VM, physical memory mapped to this
>> PVA region isn't allocated indeed. So the page fault procedure of EPT
>> violation which maps GFN to PFN should allocate the real physical
>> memory and establish the real mapping from PVA to PFA in Qemu's page
>
> Do you mean HVA to PFN? If so, you can look at function hva_to_pfn. :)

I mean: in this procedure, how is physical memory actually allocated?
When QEMU first initializes the mapping of its userspace memory region
for the VM, the physical memory corresponding to this region is not
actually allocated, so I think KVM must do this allocation somewhere.

>> table. What is the point in tdp_page_fault() handling such mapping
>> from PVA to PFA?
>
> The EPT page table entry is created in __direct_map using the pfn
> returned by try_async_pf. try_async_pf itself gets the pfn from
> gfn_to_pfn_async and gfn_to_pfn_prot. Both of them call __gfn_to_pfn
> with different arguments. __gfn_to_pfn first goes from GFN to HVA using
> the memslots (gfn_to_memslot and, in __gfn_to_pfn_memslot,
> __gfn_to_hva_many), then it calls hva_to_pfn.
>
> Ultimately, hva_to_pfn_fast and hva_to_pfn_slow are where KVM calls
> functions from the kernel's get_user_page family.
What will KVM do if get_user_page() returns a page that does not really
exist in physical memory?

Thanks,
Arthur

> Paolo

--
Arthur Chunqi Li
Department of Computer Science
School of EECS
Peking University
Beijing, China
* Re: EPT page fault procedure
  2013-11-04  1:05 ` Arthur Chunqi Li
@ 2013-11-04 12:20   ` Paolo Bonzini
  0 siblings, 0 replies; 3+ messages in thread
From: Paolo Bonzini @ 2013-11-04 12:20 UTC (permalink / raw)
  To: Arthur Chunqi Li; +Cc: kvm@vger.kernel.org

On 04/11/2013 02:05, Arthur Chunqi Li wrote:
>> Do you mean HVA to PFN? If so, you can look at function hva_to_pfn. :)
>
> I mean: in this procedure, how is physical memory actually allocated?
> When QEMU first initializes the mapping of its userspace memory region
> for the VM, the physical memory corresponding to this region is not
> actually allocated, so I think KVM must do this allocation somewhere.
>
>>> table. What is the point in tdp_page_fault() handling such mapping
>>> from PVA to PFA?
>>
>> The EPT page table entry is created in __direct_map using the pfn
>> returned by try_async_pf. try_async_pf itself gets the pfn from
>> gfn_to_pfn_async and gfn_to_pfn_prot. Both of them call __gfn_to_pfn
>> with different arguments. __gfn_to_pfn first goes from GFN to HVA using
>> the memslots (gfn_to_memslot and, in __gfn_to_pfn_memslot,
>> __gfn_to_hva_many), then it calls hva_to_pfn.
>>
>> Ultimately, hva_to_pfn_fast and hva_to_pfn_slow are where KVM calls
>> functions from the kernel's get_user_page family.
>
> What will KVM do if get_user_page() returns a page that does not really
> exist in physical memory?

In non-atomic context, hva_to_pfn_slow will swap that page in before
returning (or start the swap-in and return immediately if the guest
supports asynchronous page faults). In atomic context, hva_to_pfn would
fail, but that only happens in debugging code (arch/x86/kvm/mmu_audit.c).

Paolo