From: Dong Jia <bjsdjshi@linux.vnet.ibm.com>
To: Neo Jia <cjia@nvidia.com>
Cc: Kirti Wankhede <kwankhede@nvidia.com>,
	alex.williamson@redhat.com, pbonzini@redhat.com,
	kraxel@redhat.com, qemu-devel@nongnu.org, kvm@vger.kernel.org,
	kevin.tian@intel.com, shuai.ruan@intel.com, jike.song@intel.com,
	zhiyuan.lv@intel.com
Subject: Re: [Qemu-devel] [RFC PATCH v4 3/3] VFIO Type1 IOMMU: Add support for mediated devices
Date: Fri, 3 Jun 2016 16:32:04 +0800
Message-ID: <20160603163204.1792f62a@oc7835276234>
In-Reply-To: <20160602075647.GA15333@nvidia.com>

On Thu, 2 Jun 2016 00:56:47 -0700
Neo Jia <cjia@nvidia.com> wrote:

> On Wed, Jun 01, 2016 at 04:40:19PM +0800, Dong Jia wrote:
> > On Wed, 25 May 2016 01:28:17 +0530
> > Kirti Wankhede <kwankhede@nvidia.com> wrote:
> > 
> > > +
> > > +/*
> > > + * Pin a set of guest PFNs and return their associated host PFNs for API
> > > + * supported domain only.
> > > + * @vaddr [in]: array of guest PFNs
> > > + * @npage [in]: count of array elements
> > > + * @prot [in] : protection flags
> > > + * @pfn_base[out] : array of host PFNs
> > > + */
> > > +long vfio_pin_pages(void *iommu_data, dma_addr_t *vaddr, long npage,
> > > +		   int prot, dma_addr_t *pfn_base)
> > > +{
> > > +	struct vfio_iommu *iommu = iommu_data;
> > > +	struct vfio_domain *domain = NULL;
> > > +	int i = 0, ret = 0;
> > > +	long retpage;
> > > +	unsigned long remote_vaddr = 0;
> > > +	dma_addr_t *pfn = pfn_base;
> > > +	struct vfio_dma *dma;
> > > +
> > > +	if (!iommu || !vaddr || !pfn_base)
> > > +		return -EINVAL;
> > > +
> > > +	mutex_lock(&iommu->lock);
> > > +
> > > +	if (!iommu->mediated_domain) {
> > > +		ret = -EINVAL;
> > > +		goto pin_done;
> > > +	}
> > > +
> > > +	domain = iommu->mediated_domain;
> > > +
> > > +	for (i = 0; i < npage; i++) {
> > > +		struct vfio_pfn *p, *lpfn;
> > > +		unsigned long tpfn;
> > > +		dma_addr_t iova;
> > > +		long pg_cnt = 1;
> > > +
> > > +		iova = vaddr[i] << PAGE_SHIFT;
> > Dear Kirti:
> > 
> > Got one question for the vaddr-iova conversion here.
> > Is this a common rule that can be applied to all architectures?
> > AFAIK, this is wrong for the s390 case. Or I must be missing something...
> 
> I need more details about the "wrong" part. 
> IIUC, you are thinking about the guest iommu case?
> 
Dear Neo:

Sorry for the mistake I made. When I saw 'vaddr', I intuitively thought
it was a user-space virtual address. Now I see the comment which says it
is an "array of guest PFNs".

After I modified my patches to use this argument correctly, they worked
fine. :>
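
In case it helps anyone else who trips over the naming, here is a
minimal stand-alone sketch of the translation chain as I now understand
it. The example mapping values, find_dma() and the whole program below
are made up for illustration only; just the shift and the offset
arithmetic mirror what vfio_pin_pages() does in the patch:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

typedef uint64_t dma_addr_t;

/* Mirrors just the struct vfio_dma fields the translation needs. */
struct dma_mapping {
	dma_addr_t iova;	/* start of the IOVA range */
	uint64_t vaddr;		/* user VA the range was mapped from */
	uint64_t size;		/* length of the range in bytes */
};

/* One example mapping, standing in for the iommu's dma list. */
static struct dma_mapping mappings[] = {
	{ .iova = 0x100000, .vaddr = 0x7f0000000000ULL, .size = 0x200000 },
};

/* Stand-in for vfio_find_dma(): find the mapping covering @iova. */
static struct dma_mapping *find_dma(dma_addr_t iova)
{
	unsigned int i;

	for (i = 0; i < sizeof(mappings) / sizeof(mappings[0]); i++)
		if (iova >= mappings[i].iova &&
		    iova - mappings[i].iova < mappings[i].size)
			return &mappings[i];
	return NULL;
}

int main(void)
{
	dma_addr_t guest_pfn = 0x101;		   /* one element of vaddr[] */
	dma_addr_t iova = guest_pfn << PAGE_SHIFT; /* guest PFN -> IOVA */
	struct dma_mapping *dma = find_dma(iova);

	if (dma) {
		/* Same arithmetic as the patch:
		 * remote_vaddr = dma->vaddr + iova - dma->iova */
		uint64_t remote_vaddr = dma->vaddr + (iova - dma->iova);

		printf("iova 0x%llx -> host va 0x%llx\n",
		       (unsigned long long)iova,
		       (unsigned long long)remote_vaddr);
	}
	return 0;
}

Running it prints the host user VA backing guest PFN 0x101 under the
example mapping (0x7f0000001000 here).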

> Thanks,
> Neo
> 
> > 
> > If the answer to the above question is 'no', should we introduce a new
> > argument to pass in the iovas? Say 'dma_addr_t *iova'.
> > 
> > > +
> > > +		dma = vfio_find_dma(iommu, iova, 0 /*  size */);
> > > +		if (!dma) {
> > > +			ret = -EINVAL;
> > > +			goto pin_done;
> > > +		}
> > > +
> > > +		remote_vaddr = dma->vaddr + iova - dma->iova;
> > > +
> > > +		retpage = vfio_pin_pages_internal(domain, remote_vaddr,
> > > +						  pg_cnt, prot, &tpfn);
> > > +		if (retpage <= 0) {
> > > +			WARN_ON(!retpage);
> > > +			ret = (int)retpage;
> > > +			goto pin_done;
> > > +		}
> > > +
> > > +		pfn[i] = tpfn;
> > > +
> > > +		/* search if pfn exist */
> > > +		p = vfio_find_pfn(domain, tpfn);
> > > +		if (p) {
> > > +			atomic_inc(&p->ref_count);
> > > +			continue;
> > > +		}
> > > +
> > > +		/* add to pfn_list */
> > > +		lpfn = kzalloc(sizeof(*lpfn), GFP_KERNEL);
> > > +		if (!lpfn) {
> > > +			ret = -ENOMEM;
> > > +			goto pin_done;
> > > +		}
> > > +		lpfn->vaddr = remote_vaddr;
> > > +		lpfn->iova = iova;
> > > +		lpfn->pfn = pfn[i];
> > > +		lpfn->npage = 1;
> > > +		lpfn->prot = prot;
> > > +		atomic_inc(&lpfn->ref_count);
> > > +		vfio_link_pfn(domain, lpfn);
> > > +	}
> > > +
> > > +	ret = i;
> > > +
> > > +pin_done:
> > > +	mutex_unlock(&iommu->lock);
> > > +	return ret;
> > > +}
> > > +EXPORT_SYMBOL(vfio_pin_pages);
> > 
> > 
> > --------
> > Dong Jia
> > 
> 



--------
Dong Jia
