From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 23 Jan 2017 12:40:53 -0700
From: Alex Williamson
Message-ID: <20170123124053.4b2c895f@t450s.home>
References: <1484917736-32056-1-git-send-email-peterx@redhat.com> <1484917736-32056-19-git-send-email-peterx@redhat.com> <490bbb84-213b-1b2a-5a1b-fa42a5c6a359@redhat.com> <20170122090425.GB26526@pxdev.xzpeter.org> <1dd223d1-dc02-bddc-02ea-78d267dd40a4@redhat.com> <20170123033429.GF26526@pxdev.xzpeter.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Subject: Re: [Qemu-devel] [PATCH RFC v4 18/20] intel_iommu: enable vfio devices
To: Jason Wang
Cc: Peter Xu, tianyu.lan@intel.com, kevin.tian@intel.com, mst@redhat.com, jan.kiszka@siemens.com, bd.aviv@gmail.com, qemu-devel@nongnu.org

On Mon, 23 Jan 2017 18:23:44 +0800
Jason Wang wrote:

> On 2017-01-23 11:34, Peter Xu wrote:
> > On Mon, Jan 23, 2017 at 09:55:39AM +0800, Jason Wang wrote:
> >>
> >> On 2017-01-22 17:04, Peter Xu wrote:
> >>> On Sun, Jan 22, 2017 at 04:08:04PM +0800, Jason Wang wrote:
> >>>
> >>> [...]
> >>>
> >>>>> +static void vtd_iotlb_page_invalidate_notify(IntelIOMMUState *s,
> >>>>> +                                             uint16_t domain_id, hwaddr addr,
> >>>>> +                                             uint8_t am)
> >>>>> +{
> >>>>> +    IntelIOMMUNotifierNode *node;
> >>>>> +    VTDContextEntry ce;
> >>>>> +    int ret;
> >>>>> +
> >>>>> +    QLIST_FOREACH(node, &(s->notifiers_list), next) {
> >>>>> +        VTDAddressSpace *vtd_as = node->vtd_as;
> >>>>> +        ret = vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
> >>>>> +                                       vtd_as->devfn, &ce);
> >>>>> +        if (!ret && domain_id == VTD_CONTEXT_ENTRY_DID(ce.hi)) {
> >>>>> +            vtd_page_walk(&ce, addr, addr + (1 << am) * VTD_PAGE_SIZE,
> >>>>> +                          vtd_page_invalidate_notify_hook,
> >>>>> +                          (void *)&vtd_as->iommu, true);
> >>>> Why not simply trigger the notifier here? (or is this required by vfio?)
> >>> Because we may only want to notify part of the region - we have a
> >>> mask here, not the exact size.
> >>>
> >>> Consider this: the guest (with caching mode) maps 12K of memory (3
> >>> 4K pages); the mask will be extended to 16K in the guest. In that
> >>> case, we need to explicitly walk the page entries to know that the
> >>> 4th page should not be notified.
> >> I see. Then it is required by vfio only; I think we can add a fast path for
> >> !CM in this case by triggering the notifier directly.
> > I've noted this down (to be investigated further in my todo), but I don't
> > know whether this can work, since I think it is still legal for the
> > guest to merge more than one PSI into one. For example, I don't know
> > whether the following is legal:
> >
> > - guest invalidates page (0, 4k)
> > - guest maps new page (4k, 8k)
> > - guest sends a single PSI of (0, 8k)
> >
> > In that case, it contains both a map and an unmap, and it doesn't look
> > like it disobeys the spec either?
>
> Not sure I get your meaning - do you mean just sending a single PSI instead of two?
>
> >
> >> Another possible issue is: consider (with CM) a 16K contiguous iova with the
> >> last page already mapped. In this case, if we want to map the first
> >> three pages, then when handling the IOTLB invalidation, am would cover
> >> 16K, and the last page will be mapped twice. Can this lead to some issue?
> > I don't know whether the guest has special handling for this kind of
> > request.
>
> This seems quite usual, I think. E.g. iommu_flush_iotlb_psi() does:
>
> static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
>                                   struct dmar_domain *domain,
>                                   unsigned long pfn, unsigned int pages,
>                                   int ih, int map)
> {
>     unsigned int mask = ilog2(__roundup_pow_of_two(pages));
>     uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
>     u16 did = domain->iommu_did[iommu->seq_id];
>     ...
>
>
> >
> > Besides, imho to completely solve this problem, we still need that
> > per-domain tree. Considering that the tree currently lives inside vfio, I
> > don't see this as a big issue either.
>
> Another issue I found is: with this series, VFIO_IOMMU_MAP_DMA seems to
> become guest-triggerable. And since VFIO allocates its own structures to
> record dma mappings, this seems to open a window for an evil guest to
> exhaust host memory, which is even worse.

You're thinking of pci-assign; vfio does page accounting such that a
user can only lock pages up to their locked-memory limit. Exposing the
mapping ioctl within the guest is not a different problem from exposing
the ioctl to the host user, from a vfio perspective. Thanks,

Alex