From: Yan Zhao <yan.y.zhao@intel.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: "zhenyuw@linux.intel.com" <zhenyuw@linux.intel.com>,
"intel-gvt-dev@lists.freedesktop.org"
<intel-gvt-dev@lists.freedesktop.org>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"Tian, Kevin" <kevin.tian@intel.com>,
"peterx@redhat.com" <peterx@redhat.com>
Subject: Re: [PATCH v3 2/7] vfio: introduce vfio_dma_rw to read/write a range of IOVAs
Date: Sun, 8 Mar 2020 21:00:56 -0400
Message-ID: <20200309010055.GA18137@joy-OptiPlex-7040>
In-Reply-To: <20200306092746.088a01a3@x1.home>
On Sat, Mar 07, 2020 at 12:27:46AM +0800, Alex Williamson wrote:
> On Thu, 5 Mar 2020 20:21:48 -0500
> Yan Zhao <yan.y.zhao@intel.com> wrote:
>
> > On Mon, Feb 24, 2020 at 04:47:15PM +0800, Zhao, Yan Y wrote:
> > > vfio_dma_rw() reads/writes a range of user space memory, pointed to by
> > > IOVA, into/from a kernel buffer without requiring the user space memory
> > > to be pinned.
> > >
> > > TODO: mark the IOVAs to user space memory dirty if they are written in
> > > vfio_dma_rw().
> > >
> > > Cc: Kevin Tian <kevin.tian@intel.com>
> > > Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
> > > ---
> > > drivers/vfio/vfio.c | 49 +++++++++++++++++++++
> > > drivers/vfio/vfio_iommu_type1.c | 77 +++++++++++++++++++++++++++++++++
> > > include/linux/vfio.h | 5 +++
> > > 3 files changed, 131 insertions(+)
> > >
> > > diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> > > index 914bdf4b9d73..902867627cbf 100644
> > > --- a/drivers/vfio/vfio.c
> > > +++ b/drivers/vfio/vfio.c
> > > @@ -1998,6 +1998,55 @@ int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn, int npage)
> > > }
> > > EXPORT_SYMBOL(vfio_unpin_pages);
> > >
> > > +
> > > +/*
> > > + * This interface allows the CPUs to perform a form of virtual DMA on
> > > + * behalf of the device.
> > > + *
> > > + * CPUs read/write a range of IOVAs pointing to user space memory into/from
> > > + * a kernel buffer.
> > > + *
> > > + * As the read/write of user space memory is performed by the CPUs and is
> > > + * not a real device DMA, it is not necessary to pin the user space memory.
> > > + *
> > > + * The caller must call vfio_group_get_external_user() or
> > > + * vfio_group_get_external_user_from_dev() prior to calling this interface,
> > > + * to prevent the VFIO group from being disposed of in the middle of the
> > > + * call. The caller may keep the group reference across several calls into
> > > + * this interface.
> > > + * When finished with the VFIO group, the caller must release it by
> > > + * calling vfio_group_put_external_user().
> > > + *
> > > + * @group [in]: vfio group of a device
> > > + * @iova [in] : base IOVA of a user space buffer
> > > + * @data [in] : pointer to kernel buffer
> > > + * @len [in] : kernel buffer length
> > > + * @write : whether to write (true) or read (false)
> > > + * Return error code on failure or 0 on success.
> > > + */
> > > +int vfio_dma_rw(struct vfio_group *group, dma_addr_t iova,
> > > + void *data, size_t len, bool write)
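For reference, a minimal caller sketch of the contract described in the
comment above, using only the interfaces introduced by this series (patch
1/7 adds vfio_group_get_external_user_from_dev()); the demo_read_guest()
wrapper itself is hypothetical:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/vfio.h>

static int demo_read_guest(struct device *dev, dma_addr_t iova,
			   void *buf, size_t len)
{
	struct vfio_group *group;
	int ret;

	/* Hold a group reference so the group cannot be disposed of
	 * while the access is in flight; the reference may be kept
	 * across several vfio_dma_rw() calls.
	 */
	group = vfio_group_get_external_user_from_dev(dev);
	if (IS_ERR(group))
		return PTR_ERR(group);

	/* write=false: copy from the IOVA range into the kernel buffer */
	ret = vfio_dma_rw(group, iova, buf, len, false);

	vfio_group_put_external_user(group);
	return ret;
}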
> > hi Alex,
> > May I rename this interface to vfio_dma_rw_from_group(), which takes a
> > VFIO group as its argument, and add another interface,
> > vfio_dma_rw(struct device *dev, ...)? The second one might be easier for
> > a driver to use if it does not care much about performance.
>
> Perhaps vfio_group_dma_rw() and vfio_dev_dma_rw()? I'd be reluctant to
> add the latter; if a caller doesn't care about performance, they won't
> mind making a couple of calls to get and release the group reference.
> Thanks,
>
Yes, that makes sense. Then I withdraw this request :)
Thanks
Yan
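To make the trade-off concrete, here is a sketch of what the withdrawn
device-based wrapper might have amounted to; it merely pairs the group
get/put around the group-based call, which is why a separate interface
adds little. The function vfio_dev_dma_rw() is hypothetical and was never
merged:

int vfio_dev_dma_rw(struct device *dev, dma_addr_t iova,
		    void *data, size_t len, bool write)
{
	struct vfio_group *group;
	int ret;

	/* Grab and drop the group reference around a single access;
	 * callers that care about performance hold the reference
	 * themselves and call vfio_dma_rw() directly.
	 */
	group = vfio_group_get_external_user_from_dev(dev);
	if (IS_ERR(group))
		return PTR_ERR(group);

	ret = vfio_dma_rw(group, iova, data, len, write);

	vfio_group_put_external_user(group);
	return ret;
}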
Thread overview: 17+ messages
2020-02-24 8:43 [PATCH v3 0/7] use vfio_dma_rw to read/write IOVAs from CPU side Yan Zhao
2020-02-24 8:46 ` [PATCH v3 1/7] vfio: allow external user to get vfio group from device Yan Zhao
2020-02-24 19:15 ` Alex Williamson
2020-02-25 3:35 ` Yan Zhao
2020-03-05 19:01 ` Alex Williamson
2020-03-06 1:12 ` Yan Zhao
2020-02-24 8:47 ` [PATCH v3 2/7] vfio: introduce vfio_dma_rw to read/write a range of IOVAs Yan Zhao
2020-02-24 19:14 ` Alex Williamson
2020-02-25 3:44 ` Yan Zhao
2020-03-06 1:21 ` Yan Zhao
2020-03-06 16:27 ` Alex Williamson
2020-03-09 1:00 ` Yan Zhao [this message]
2020-02-24 8:47 ` [PATCH v3 3/7] vfio: avoid inefficient lookup of VFIO group in vfio_pin/unpin_pages Yan Zhao
2020-02-24 8:47 ` [PATCH v3 4/7] drm/i915/gvt: hold reference of VFIO group during opening of vgpu Yan Zhao
2020-02-24 8:48 ` [PATCH v3 5/7] drm/i915/gvt: substitute kvm_read/write_guest with vfio_dma_rw Yan Zhao
2020-02-24 8:48 ` [PATCH v3 6/7] drm/i915/gvt: avoid unnecessary lookup in each vfio pin & unpin pages Yan Zhao
2020-02-24 8:48 ` [PATCH v3 7/7] drm/i915/gvt: rw more pages a time for shadow context Yan Zhao