Date: Tue, 1 Nov 2016 11:45:58 +0800
From: Dong Jia Shi
To: Alex Williamson
Cc: Kirti Wankhede, Jike Song, pbonzini@redhat.com, kraxel@redhat.com,
	cjia@nvidia.com, qemu-devel@nongnu.org, kvm@vger.kernel.org,
	kevin.tian@intel.com, bjsdjshi@linux.vnet.ibm.com,
	linux-kernel@vger.kernel.org
Subject: Re: [Qemu-devel] [PATCH v10 10/19] vfio iommu: Add blocking notifier to notify DMA_UNMAP
Message-Id: <20161101034558.GA7186@bjsdjshi@linux.vnet.ibm.com>
In-Reply-To: <20161029080301.5e464435@t450s.home>
References: <1477517366-27871-1-git-send-email-kwankhede@nvidia.com>
 <1477517366-27871-11-git-send-email-kwankhede@nvidia.com>
 <5812FF66.6020801@intel.com>
 <20161028064045.0e8ca7dc@t450s.home>
 <20161028143350.45df29c1@t450s.home>
 <9cfebf8f-7c30-6d2c-a1ec-cc9c9ee1bdd7@nvidia.com>
 <20161029080301.5e464435@t450s.home>

* Alex Williamson [2016-10-29 08:03:01 -0600]:

> On Sat, 29 Oct 2016 16:07:05 +0530
> Kirti Wankhede wrote:
> 
> > On 10/29/2016 2:03 AM, Alex Williamson wrote:
> > > On Sat, 29 Oct 2016 01:32:35 +0530
> > > Kirti Wankhede wrote:
> > > 
> > >> On 10/28/2016 6:10 PM, Alex Williamson wrote:
> > >>> On Fri, 28 Oct 2016 15:33:58 +0800
> > >>> Jike Song wrote:
> > >>> 
> ...
> > >>>>> 
> > >>>>> +/*
> > >>>>> + * This function finds pfn in domain->external_addr_space->pfn_list for given
> > >>>>> + * iova range. If pfn exist, notify pfn to registered notifier list. On
> > >>>>> + * receiving notifier callback, vendor driver should invalidate the mapping and
> > >>>>> + * call vfio_unpin_pages() to unpin this pfn. With that vfio_pfn for this pfn
> > >>>>> + * gets removed from rb tree of pfn_list. That re-arranges rb tree, so while
> > >>>>> + * searching for next vfio_pfn in rb tree, start search from first node again.
> > >>>>> + * If any vendor driver doesn't unpin that pfn, vfio_pfn would not get removed
> > >>>>> + * from rb tree and so in next search vfio_pfn would be same as previous
> > >>>>> + * vfio_pfn. In that case, exit from loop.
> > >>>>> + */
> > >>>>> +static void vfio_notifier_call_chain(struct vfio_iommu *iommu,
> > >>>>> +				struct vfio_iommu_type1_dma_unmap *unmap)
> > >>>>> +{
> > >>>>> +	struct vfio_domain *domain = iommu->external_domain;
> > >>>>> +	struct rb_node *n;
> > >>>>> +	struct vfio_pfn *vpfn = NULL, *prev_vpfn;
> > >>>>> +
> > >>>>> +	do {
> > >>>>> +		prev_vpfn = vpfn;
> > >>>>> +		mutex_lock(&domain->external_addr_space->pfn_list_lock);
> > >>>>> +
> > >>>>> +		n = rb_first(&domain->external_addr_space->pfn_list);
> > >>>>> +
> > >>>>> +		for (; n; n = rb_next(n), vpfn = NULL) {
> > >>>>> +			vpfn = rb_entry(n, struct vfio_pfn, node);
> > >>>>> +
> > >>>>> +			if ((vpfn->iova >= unmap->iova) &&
> > >>>>> +			    (vpfn->iova < unmap->iova + unmap->size))
> > >>>>> +				break;
> > >>>>> +		}
> > >>>>> +
> > >>>>> +		mutex_unlock(&domain->external_addr_space->pfn_list_lock);
> > >>>>> +
> > >>>>> +		/* Notify any listeners about DMA_UNMAP */
> > >>>>> +		if (vpfn)
> > >>>>> +			blocking_notifier_call_chain(&iommu->notifier,
> > >>>>> +					VFIO_IOMMU_NOTIFY_DMA_UNMAP,
> > >>>>> +					&vpfn->pfn);
> > >>>> 
> > >>>> Hi Kirti,
> > >>>> 
> > >>>> The information carried by notifier is only a pfn.
> > >>>> 
> > >>>> Since your pin/unpin interfaces design, it's the vendor driver who should
> > >>>> guarantee pin/unpin same times. To achieve that, the vendor driver must
> > >>>> cache it's iova->pfn mapping on its side, to avoid pinning a same page
> > >>>> for multiple times.
> > >>>> 
> > >>>> With the notifier carrying only a pfn, to find the iova by this pfn,
> > >>>> the vendor driver must *also* keep a reverse-mapping. That's a bit
> > >>>> too much.
> > >>>> 
> > >>>> Since the vendor could also suffer from IOMMU-compatible problem,
> > >>>> which means a local cache is always helpful, so I'd like to have the
> > >>>> iova carried to the notifier.
> > >>>> 
> > >>>> What'd you say?
> > >>> 
> > >>> I agree, the pfn is not unique, multiple guest pfns (iovas) might be
> > >>> backed by the same host pfn.  DMA_UNMAP calls are based on iova, the
> > >>> notifier through to the vendor driver must be based on the same.
> > >> 
> > >> Host pfn should be unique, right?
> > > 
> > > Let's say a user does a malloc of a single page and does 100 calls to
> > > MAP_DMA populating 100 pages of IOVA space all backed by the same
> > > malloc'd page.  This is valid, I have unit tests that do essentially
> > > this.  Those will all have the same pfn.  The user then does an
> > > UNMAP_DMA to a single one of those IOVA pages.  Did the user unmap
> > > everything matching that pfn?  Of course not, they only unmapped that
> > > one IOVA page.  There is no guarantee of a 1:1 mapping of pfn to IOVA.
> > > UNMAP_DMA works based on IOVA.  Invalidation broadcasts to the vendor
> > > driver MUST therefore also work based on IOVA.  This is not an academic
> > > problem, address space aliases exist in real VMs, imagine a virtual
> > > IOMMU.  Thanks,
> > > 
> > 
> > So struct vfio_iommu_type1_dma_unmap should be passed as argument to
> > notifier callback:
> > 
> >     if (unmapped && iommu->external_domain)
> > -           vfio_notifier_call_chain(iommu, unmap);
> > +           blocking_notifier_call_chain(&iommu->notifier,
> > +                           VFIO_IOMMU_NOTIFY_DMA_UNMAP,
> > +                           unmap);
> > 
> > Then vendor driver should find pfns he has pinned from this range of
> > iovas, then invalidate and unpin pfns. Right?
> 
> That seems like a valid choice.  It's probably better than calling the
> notifier for each page of iova.  Thanks,
> 
> Alex
> 

Hi Kirti,

This version requires the *vendor driver* to call vfio_register_notifier
for an mdev device before any pinning operations.
I guess all of the vendor drivers would end up with very similar code for
notifier registration/unregistration. My question is: how about letting
the mdev framework manage the notifier registration/unregistration
process?

We could add a notifier_fn_t callback to "struct parent_ops"; the mdev
framework would then make sure that the vendor driver assigned a value to
this callback. The mdev core could initialize a notifier_block for each
parent driver with its callback, and register/unregister it with vfio at
the right time.

-- 
Dong Jia