linux-kernel.vger.kernel.org archive mirror
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: "Kirti Wankhede" <kwankhede@nvidia.com>,
	"Neo Jia" <cjia@nvidia.com>,
	"Xiao Guangrong" <guangrong.xiao@linux.intel.com>,
	"Andrea Arcangeli" <aarcange@redhat.com>,
	"Radim Krčmář" <rkrcmar@redhat.com>
Subject: [PATCH 2/2] KVM: MMU: try to fix up page faults before giving up
Date: Thu, 30 Jun 2016 15:01:51 +0200	[thread overview]
Message-ID: <1467291711-3230-3-git-send-email-pbonzini@redhat.com> (raw)
In-Reply-To: <1467291711-3230-1-git-send-email-pbonzini@redhat.com>

The vGPU folks would like to trap the first access to a BAR by setting
vm_ops on the VMAs produced by mmap-ing a VFIO device.  The fault handler
can then use remap_pfn_range to place some non-reserved pages in the VMA.
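For illustration, the first-touch trapping described above can be modeled in
plain userspace C.  Everything below (struct vma_model, vma_fault, vma_access)
is a hypothetical stand-in for the kernel machinery, not real kernel code:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model of a VM_PFNMAP VMA whose pages are installed lazily
 * by a fault handler, the way the vGPU mmap path would do it with
 * remap_pfn_range().  All names here are invented for illustration. */

#define NPAGES 4
#define PFN_NONE 0UL

struct vma_model {
	unsigned long pfn[NPAGES];  /* 0 means "not mapped yet" */
	unsigned long backing_base; /* pfns handed out on fault */
	int faults;                 /* how many faults were taken */
};

/* The "fault handler": on first access to a page, install its pfn. */
static void vma_fault(struct vma_model *vma, size_t idx)
{
	vma->faults++;
	vma->pfn[idx] = vma->backing_base + idx;
}

/* Look up the pfn for a page, trapping the first access only. */
static unsigned long vma_access(struct vma_model *vma, size_t idx)
{
	if (vma->pfn[idx] == PFN_NONE)
		vma_fault(vma, idx);    /* first touch: populate */
	return vma->pfn[idx];
}
```

The point of the model is that the pfn for a page only exists after the
fault handler has run, which is why KVM cannot compute it from vm_pgoff
alone, as the patch below explains.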

This kind of non-linear VM_PFNMAP mapping is not handled by KVM, but
follow_pfn and fixup_user_fault together make it possible to support it.
Because these pages are not reserved, they are subject to reference
counting, but there is already a helper (kvm_get_pfn) that gets this
right.
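The control flow the patch adds to hva_to_pfn_remapped can be sketched as a
self-contained userspace model.  The stub_* functions below are hypothetical
stand-ins for follow_pfn(), fixup_user_fault() and kvm_get_pfn(); only the
shape of the logic mirrors the patch:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Userspace model of the mm state; not a real kernel structure. */
struct fake_mm {
	bool mapped;          /* is a pte present at the address? */
	unsigned long pfn;    /* pfn the pte would resolve to */
	int refcount;         /* reference count on the page */
};

/* Stand-in for follow_pfn(): fails until a pte has been installed. */
static int stub_follow_pfn(struct fake_mm *mm, unsigned long *pfn)
{
	if (!mm->mapped)
		return -EFAULT;
	*pfn = mm->pfn;
	return 0;
}

/* Stand-in for fixup_user_fault(): runs the vma's fault handler. */
static int stub_fixup_user_fault(struct fake_mm *mm, bool *unlocked)
{
	mm->mapped = true;   /* the fault handler installed the pte */
	*unlocked = false;   /* pretend mmap_sem stayed held */
	return 0;
}

/* Mirror of the patch's logic: try, fault in, retry, take a ref. */
static int model_hva_to_pfn_remapped(struct fake_mm *mm, unsigned long *p_pfn)
{
	unsigned long pfn;
	int r = stub_follow_pfn(mm, &pfn);

	if (r) {
		bool unlocked = false;

		r = stub_fixup_user_fault(mm, &unlocked);
		if (unlocked)
			return -EAGAIN;  /* vma may be stale; caller retries */
		if (r)
			return r;
		r = stub_follow_pfn(mm, &pfn);
		if (r)
			return r;
	}
	mm->refcount++;          /* kvm_get_pfn() equivalent */
	*p_pfn = pfn;
	return 0;
}
```

The -EAGAIN path models the case where the mmap semaphore was dropped during
the fault, so the caller must look the VMA up again before retrying.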

Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Reported-by: Kirti Wankhede <kwankhede@nvidia.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 mm/gup.c            |  1 +
 virt/kvm/kvm_main.c | 41 ++++++++++++++++++++++++++++++++++++++---
 2 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index c057784c8444..e3ac22f90fa4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -720,6 +720,7 @@ retry:
 	}
 	return 0;
 }
+EXPORT_SYMBOL_GPL(fixup_user_fault);
 
 static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 						struct mm_struct *mm,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5aae59e00bef..2927fb9ca062 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1446,9 +1446,41 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 			       unsigned long addr, bool *async,
 			       bool write_fault, kvm_pfn_t *p_pfn)
 {
-	*p_pfn = ((addr - vma->vm_start) >> PAGE_SHIFT) +
-		vma->vm_pgoff;
-	BUG_ON(!kvm_is_reserved_pfn(*p_pfn));
+	unsigned long pfn;
+	int r;
+
+	r = follow_pfn(vma, addr, &pfn);
+	if (r) {
+		/*
+		 * get_user_pages fails for VM_IO and VM_PFNMAP vmas and does
+		 * not call the fault handler, so do it here.
+		 */
+		bool unlocked = false;
+		r = fixup_user_fault(current, current->mm, addr,
+				     (write_fault ? FAULT_FLAG_WRITE : 0),
+				     &unlocked);
+		if (unlocked)
+			return -EAGAIN;
+		if (r)
+			return r;
+
+		r = follow_pfn(vma, addr, &pfn);
+		if (r)
+			return r;
+
+	}
+
+	/*
+	 * For pages mapped under VM_PFNMAP we assume that whoever called
+	 * remap_pfn_range will also call e.g. unmap_mapping_range before
+	 * the underlying pfns are freed, so that our MMU notifier gets
+	 * called.  We still have to get a reference here to the page,
+	 * because the callers of *hva_to_pfn* and *gfn_to_pfn* ultimately
+	 * end up doing a kvm_release_pfn_clean on the returned pfn.
+	 */
+	kvm_get_pfn(pfn);
+
+	*p_pfn = pfn;
 	return 0;
 }
 
@@ -1493,12 +1525,15 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
 		goto exit;
 	}
 
+retry:
 	vma = find_vma_intersection(current->mm, addr, addr + 1);
 
 	if (vma == NULL)
 		pfn = KVM_PFN_ERR_FAULT;
 	else if (vma->vm_flags & (VM_IO | VM_PFNMAP)) {
 		r = hva_to_pfn_remapped(vma, addr, async, write_fault, &pfn);
+		if (r == -EAGAIN)
+			goto retry;
 		if (r < 0)
 			pfn = KVM_PFN_ERR_FAULT;
 	} else {
-- 
1.8.3.1


Thread overview: 44+ messages
2016-06-30 13:01 [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed Paolo Bonzini
2016-06-30 13:01 ` [PATCH 1/2] KVM: MMU: prepare to support mapping of VM_IO and VM_PFNMAP frames Paolo Bonzini
2016-06-30 13:01 ` Paolo Bonzini [this message]
2016-06-30 21:59 ` [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed Neo Jia
2016-07-04  6:39 ` Xiao Guangrong
2016-07-04  7:03   ` Neo Jia
2016-07-04  7:37     ` Xiao Guangrong
2016-07-04  7:48       ` Paolo Bonzini
2016-07-04  7:59         ` Xiao Guangrong
2016-07-04  8:14           ` Paolo Bonzini
2016-07-04  8:21             ` Xiao Guangrong
2016-07-04  8:48               ` Paolo Bonzini
2016-07-04  7:53       ` Neo Jia
2016-07-04  8:19         ` Xiao Guangrong
2016-07-04  8:41           ` Neo Jia
2016-07-04  8:45             ` Xiao Guangrong
2016-07-04  8:54               ` Xiao Guangrong
2016-07-04  9:16               ` Neo Jia
2016-07-04 10:16                 ` Xiao Guangrong
2016-07-04 15:33                   ` Neo Jia
2016-07-05  1:19                     ` Xiao Guangrong
2016-07-05  1:35                       ` Neo Jia
2016-07-05  4:02                         ` Xiao Guangrong
2016-07-05  5:16                           ` Neo Jia
2016-07-05  6:26                             ` Xiao Guangrong
2016-07-05  7:30                               ` Neo Jia
2016-07-05  9:02                                 ` Xiao Guangrong
2016-07-05 15:07                                   ` Neo Jia
2016-07-06  2:22                                     ` Xiao Guangrong
2016-07-06  4:01                                       ` Neo Jia
2016-07-04  7:38   ` Paolo Bonzini
2016-07-04  7:40     ` Xiao Guangrong
2016-07-05  5:41 ` Neo Jia
2016-07-05 12:18   ` Paolo Bonzini
2016-07-05 14:02     ` Neo Jia
2016-07-06  2:00     ` Xiao Guangrong
2016-07-06  2:18       ` Neo Jia
2016-07-06  2:35         ` Xiao Guangrong
2016-07-06  2:57           ` Neo Jia
2016-07-06  4:02             ` Xiao Guangrong
2016-07-06 11:48               ` Paolo Bonzini
2016-07-07  2:36                 ` Xiao Guangrong
2016-07-06  6:05       ` Paolo Bonzini
2016-07-06 15:50         ` Alex Williamson
