Date: Fri, 31 Aug 2018 03:23:13 +0800
From: Yi Zhang
Subject: Re: [PATCH V4 4/4] kvm: add a check if pfn is from NVDIMM pmem.
Message-ID: <20180830192312.GA84758@tiger-server>
References: <380594559.7598107.1535537748154.JavaMail.zimbra@redhat.com>
In-Reply-To: <380594559.7598107.1535537748154.JavaMail.zimbra@redhat.com>
To: Pankaj Gupta
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-nvdimm@lists.01.org, pbonzini@redhat.com, dan j williams,
    dave jiang, yu c zhang, david@redhat.com, jack@suse.cz, hch@lst.de,
    linux-mm@kvack.org, rkrcmar@redhat.com, jglisse@redhat.com,
    yi z zhang

On 2018-08-29 at 06:15:48 -0400, Pankaj Gupta wrote:
> >
> > For device-specific memory, when we move these pfn ranges into a
> > memory zone, we set the page reserved flag at that time. Some of
> > these pages are reserved for device MMIO, and some are not, such as
> > NVDIMM pmem.
> >
> > Now, we map these dev_dax or fs_dax pages to KVM as the DIMM/NVDIMM
> > backend. Since these pages are reserved, the check in
> > kvm_is_reserved_pfn() misidentifies them as MMIO. Therefore, we
> > introduce two page map types, MEMORY_DEVICE_FS_DAX and
> > MEMORY_DEVICE_DEV_DAX, to identify pages that come from NVDIMM pmem
> > and let KVM treat them as normal pages.
> >
> > Without this patch, many operations will be missed due to this
> > mistreatment of pmem pages. For example, a page may never get the
> > chance to be unpinned for the KVM guest (in kvm_release_pfn_clean),
> > and cannot be marked dirty/accessed (in kvm_set_pfn_dirty/accessed),
> > etc.
> >
> > Signed-off-by: Zhang Yi
> > ---
> >  virt/kvm/kvm_main.c | 8 ++++++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index c44c406..969b6ca 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -147,8 +147,12 @@ __weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
> >
> >  bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
> >  {
> > -	if (pfn_valid(pfn))
> > -		return PageReserved(pfn_to_page(pfn));
> > +	struct page *page;
> > +
> > +	if (pfn_valid(pfn)) {
> > +		page = pfn_to_page(pfn);
> > +		return PageReserved(page) && !is_dax_page(page);
> > +	}
> >
> >  	return true;
> >  }
>
> Acked-by: Pankaj Gupta

Thanks for your kind review, Pankaj. Now that all four patches in the
series have collected Reviewed-by/Acked-by tags, can this series be
queued?

> > --
> > 2.7.4
> >