From: Alex Williamson <alex.williamson@redhat.com>
To: Kirti Wankhede <kwankhede@nvidia.com>
Cc: Zhengxiao.zx@Alibaba-inc.com, kevin.tian@intel.com,
yi.l.liu@intel.com, cjia@nvidia.com, kvm@vger.kernel.org,
eskultet@redhat.com, ziye.yang@intel.com, qemu-devel@nongnu.org,
cohuck@redhat.com, shuangtai.tst@alibaba-inc.com,
dgilbert@redhat.com, zhi.a.wang@intel.com, mlevitsk@redhat.com,
pasic@linux.ibm.com, aik@ozlabs.ru, eauger@redhat.com,
felipe@nutanix.com, jonathan.davies@nutanix.com,
yan.y.zhao@intel.com, changpeng.liu@intel.com, Ken.Xue@amd.com
Subject: Re: [PATCH Kernel v24 5/8] vfio iommu: Implementation of ioctl for dirty pages tracking
Date: Tue, 2 Jun 2020 10:44:07 -0600
Message-ID: <20200602104407.31e87e08@x1.home>
In-Reply-To: <1591113090-23640-1-git-send-email-kwankhede@nvidia.com>
On Tue, 2 Jun 2020 21:21:30 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:
> The VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> - Start dirty pages tracking while migration is active.
> - Stop dirty pages tracking.
> - Get dirty pages bitmap. It is the user space application's responsibility to
> copy the content of dirty pages from source to destination during migration.
>
> To prevent a DoS attack, memory for the bitmap is allocated per vfio_dma
> structure. Bitmap size is calculated considering the smallest supported page
> size. Bitmaps are allocated for all vfio_dmas when dirty logging is enabled.
>
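(For scale, assuming a 4KB smallest IOMMU page size: a 1GB vfio_dma needs
1GB / 4KB = 262144 tracking bits, i.e. a 32KB bitmap, and the
DIRTY_BITMAP_PAGES_MAX cap below bounds any single bitmap at 256MB, which
corresponds to 8TB of IOVA.)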
> The bitmap is populated for already pinned pages, at the smallest supported
> page size, when it is allocated for a vfio_dma. The bitmap is also updated
> from the pinning functions while tracking is enabled. When the user
> application queries the bitmap, check that the requested page size matches
> the page size used to populate the bitmap; if it does, copy the bitmap out,
> otherwise return an error.
>
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
>
> Fixed error reported by build bot by changing pgsize type from uint64_t
> to size_t.
> Reported-by: kbuild test robot <lkp@intel.com>
> ---
>
> Fixed errors and sparse warnings reported by kbuild test robot
> Reported-by: kbuild test robot <lkp@intel.com>
>
> ld: drivers/vfio/vfio_iommu_type1.o: in function `vfio_dma_populate_bitmap':
> >> vfio_iommu_type1.c:(.text+0x666): undefined reference to `__udivdi3'
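(The undefined __udivdi3 is the usual 32-bit build issue: once either operand
of a '/' is 64-bit, gcc emits a libgcc division helper that the kernel does
not provide. Changing pgsize to size_t avoids the 64-bit division; the other
common fix, shown here only as an illustrative sketch, is the explicit helper
from linux/math64.h:

    #include <linux/math64.h>

    /* equivalent to dma->size / pgsize, without __udivdi3 on 32-bit */
    uint64_t npages = div_u64(dma->size, pgsize);

Either way the plain 64-bit '/' goes away.)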
Hi Kirti,
This is already in linux-next, could you please send just the
incremental fix with a Fixes: tag? Thanks,
Alex
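(For the record, a sketch of the expected tag format, with a placeholder hash
since the linux-next commit id isn't quoted here:

    Fixes: 123456789abc ("vfio iommu: Implementation of ioctl for dirty pages tracking")

which git log -1 --abbrev=12 --format='Fixes: %h ("%s")' <commit> will
generate from the commit already in linux-next.)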
> drivers/vfio/vfio_iommu_type1.c | 315 +++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 309 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 814c795a2543..8362a36c0de4 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -72,6 +72,7 @@ struct vfio_iommu {
> uint64_t pgsize_bitmap;
> bool v2;
> bool nesting;
> + bool dirty_page_tracking;
> };
>
> struct vfio_domain {
> @@ -92,6 +93,7 @@ struct vfio_dma {
> bool lock_cap; /* capable(CAP_IPC_LOCK) */
> struct task_struct *task;
> struct rb_root pfn_list; /* Ex-user pinned pfn list */
> + unsigned long *bitmap;
> };
>
> struct vfio_group {
> @@ -126,6 +128,19 @@ struct vfio_regions {
> #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) \
> (!list_empty(&iommu->domain_list))
>
> +#define DIRTY_BITMAP_BYTES(n) (ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
> +
> +/*
> + * Input argument of number of bits to bitmap_set() is unsigned integer, which
> + * further casts to signed integer for unaligned multi-bit operation,
> + * __bitmap_set().
> + * Then maximum bitmap size supported is 2^31 bits divided by 2^3 bits/byte,
> + * that is 2^28 (256 MB) which maps to 2^31 * 2^12 = 2^43 (8TB) on 4K page
> + * system.
> + */
> +#define DIRTY_BITMAP_PAGES_MAX ((u64)INT_MAX)
> +#define DIRTY_BITMAP_SIZE_MAX DIRTY_BITMAP_BYTES(DIRTY_BITMAP_PAGES_MAX)
> +
> static int put_pfn(unsigned long pfn, int prot);
>
> /*
> @@ -176,6 +191,81 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
> rb_erase(&old->node, &iommu->dma_list);
> }
>
> +
> +static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, size_t pgsize)
> +{
> + uint64_t npages = dma->size / pgsize;
> +
> + if (npages > DIRTY_BITMAP_PAGES_MAX)
> + return -EINVAL;
> +
> + /*
> + * Allocate extra 64 bits that are used to calculate shift required for
> + * bitmap_shift_left() to manipulate and club unaligned number of pages
> + * in adjacent vfio_dma ranges.
> + */
> + dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages) + sizeof(u64),
> + GFP_KERNEL);
> + if (!dma->bitmap)
> + return -ENOMEM;
> +
> + return 0;
> +}
> +
> +static void vfio_dma_bitmap_free(struct vfio_dma *dma)
> +{
> + kfree(dma->bitmap);
> + dma->bitmap = NULL;
> +}
> +
> +static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
> +{
> + struct rb_node *p;
> + unsigned long pgshift = __ffs(pgsize);
> +
> + for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
> + struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn, node);
> +
> + bitmap_set(dma->bitmap, (vpfn->iova - dma->iova) >> pgshift, 1);
> + }
> +}
> +
> +static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
> +{
> + struct rb_node *n;
> +
> + for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> + struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> + int ret;
> +
> + ret = vfio_dma_bitmap_alloc(dma, pgsize);
> + if (ret) {
> + struct rb_node *p;
> +
> + for (p = rb_prev(n); p; p = rb_prev(p)) {
> + struct vfio_dma *dma = rb_entry(p,
> + struct vfio_dma, node);
> +
> + vfio_dma_bitmap_free(dma);
> + }
> + return ret;
> + }
> + vfio_dma_populate_bitmap(dma, pgsize);
> + }
> + return 0;
> +}
> +
> +static void vfio_dma_bitmap_free_all(struct vfio_iommu *iommu)
> +{
> + struct rb_node *n;
> +
> + for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> + struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +
> + vfio_dma_bitmap_free(dma);
> + }
> +}
> +
> /*
> * Helper Functions for host iova-pfn list
> */
> @@ -598,6 +688,17 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> vfio_unpin_page_external(dma, iova, do_accounting);
> goto pin_unwind;
> }
> +
> + if (iommu->dirty_page_tracking) {
> + unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
> +
> + /*
> + * Bitmap populated with the smallest supported page
> + * size
> + */
> + bitmap_set(dma->bitmap,
> + (iova - dma->iova) >> pgshift, 1);
> + }
> }
>
> ret = i;
> @@ -832,6 +933,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
> vfio_unmap_unpin(iommu, dma, true);
> vfio_unlink_dma(iommu, dma);
> put_task_struct(dma->task);
> + vfio_dma_bitmap_free(dma);
> kfree(dma);
> iommu->dma_avail++;
> }
> @@ -859,6 +961,94 @@ static void vfio_update_pgsize_bitmap(struct vfio_iommu *iommu)
> }
> }
>
> +static int update_user_bitmap(u64 __user *bitmap, struct vfio_dma *dma,
> + dma_addr_t base_iova, size_t pgsize)
> +{
> + unsigned long pgshift = __ffs(pgsize);
> + unsigned long nbits = dma->size >> pgshift;
> + unsigned long bit_offset = (dma->iova - base_iova) >> pgshift;
> + unsigned long copy_offset = bit_offset / BITS_PER_LONG;
> + unsigned long shift = bit_offset % BITS_PER_LONG;
> + unsigned long leftover;
> +
> + /* mark all pages dirty if all pages are pinned and mapped. */
> + if (dma->iommu_mapped)
> + bitmap_set(dma->bitmap, 0, nbits);
> +
> + if (shift) {
> + bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
> + nbits + shift);
> +
> + if (copy_from_user(&leftover,
> + (void __user *)(bitmap + copy_offset),
> + sizeof(leftover)))
> + return -EFAULT;
> +
> + bitmap_or(dma->bitmap, dma->bitmap, &leftover, shift);
> + }
> +
> + if (copy_to_user((void __user *)(bitmap + copy_offset), dma->bitmap,
> + DIRTY_BITMAP_BYTES(nbits + shift)))
> + return -EFAULT;
> +
> + return 0;
> +}
> +
> +static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
> + dma_addr_t iova, size_t size, size_t pgsize)
> +{
> + struct vfio_dma *dma;
> + struct rb_node *n;
> + unsigned long pgshift = __ffs(pgsize);
> + int ret;
> +
> + /*
> + * GET_BITMAP request must fully cover vfio_dma mappings. Multiple
> + * vfio_dma mappings may be clubbed by specifying large ranges, but
> + * there must not be any previous mappings bisected by the range.
> + * An error will be returned if these conditions are not met.
> + */
> + dma = vfio_find_dma(iommu, iova, 1);
> + if (dma && dma->iova != iova)
> + return -EINVAL;
> +
> + dma = vfio_find_dma(iommu, iova + size - 1, 0);
> + if (dma && dma->iova + dma->size != iova + size)
> + return -EINVAL;
> +
> + for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> + struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +
> + if (dma->iova < iova)
> + continue;
> +
> + if (dma->iova > iova + size - 1)
> + break;
> +
> + ret = update_user_bitmap(bitmap, dma, iova, pgsize);
> + if (ret)
> + return ret;
> +
> + /*
> + * Re-populate bitmap to include all pinned pages which are
> + * considered as dirty but exclude pages which are unpinned and
> + * pages which are marked dirty by vfio_dma_rw()
> + */
> + bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
> + vfio_dma_populate_bitmap(dma, pgsize);
> + }
> + return 0;
> +}
> +
> +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> +{
> + if (!npages || !bitmap_size || (bitmap_size > DIRTY_BITMAP_SIZE_MAX) ||
> + (bitmap_size < DIRTY_BITMAP_BYTES(npages)))
> + return -EINVAL;
> +
> + return 0;
> +}
> +
> static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> struct vfio_iommu_type1_dma_unmap *unmap)
> {
> @@ -1076,7 +1266,7 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
> unsigned long vaddr = map->vaddr;
> size_t size = map->size;
> int ret = 0, prot = 0;
> - uint64_t mask;
> + size_t pgsize;
> struct vfio_dma *dma;
>
> /* Verify that none of our __u64 fields overflow */
> @@ -1091,11 +1281,11 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
>
> mutex_lock(&iommu->lock);
>
> - mask = ((uint64_t)1 << __ffs(iommu->pgsize_bitmap)) - 1;
> + pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
>
> - WARN_ON(mask & PAGE_MASK);
> + WARN_ON((pgsize - 1) & PAGE_MASK);
>
> - if (!prot || !size || (size | iova | vaddr) & mask) {
> + if (!prot || !size || (size | iova | vaddr) & (pgsize - 1)) {
> ret = -EINVAL;
> goto out_unlock;
> }
> @@ -1172,6 +1362,12 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
> else
> ret = vfio_pin_map_dma(iommu, dma, size);
>
> + if (!ret && iommu->dirty_page_tracking) {
> + ret = vfio_dma_bitmap_alloc(dma, pgsize);
> + if (ret)
> + vfio_remove_dma(iommu, dma);
> + }
> +
> out_unlock:
> mutex_unlock(&iommu->lock);
> return ret;
> @@ -2318,6 +2514,104 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>
> return copy_to_user((void __user *)arg, &unmap, minsz) ?
> -EFAULT : 0;
> + } else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> + struct vfio_iommu_type1_dirty_bitmap dirty;
> + uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> + VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> + VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> + int ret = 0;
> +
> + if (!iommu->v2)
> + return -EACCES;
> +
> + minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> + flags);
> +
> + if (copy_from_user(&dirty, (void __user *)arg, minsz))
> + return -EFAULT;
> +
> + if (dirty.argsz < minsz || dirty.flags & ~mask)
> + return -EINVAL;
> +
> + /* only one flag should be set at a time */
> + if (__ffs(dirty.flags) != __fls(dirty.flags))
> + return -EINVAL;
> +
> + if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> + size_t pgsize;
> +
> + mutex_lock(&iommu->lock);
> + pgsize = 1 << __ffs(iommu->pgsize_bitmap);
> + if (!iommu->dirty_page_tracking) {
> + ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
> + if (!ret)
> + iommu->dirty_page_tracking = true;
> + }
> + mutex_unlock(&iommu->lock);
> + return ret;
> + } else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> + mutex_lock(&iommu->lock);
> + if (iommu->dirty_page_tracking) {
> + iommu->dirty_page_tracking = false;
> + vfio_dma_bitmap_free_all(iommu);
> + }
> + mutex_unlock(&iommu->lock);
> + return 0;
> + } else if (dirty.flags &
> + VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> + struct vfio_iommu_type1_dirty_bitmap_get range;
> + unsigned long pgshift;
> + size_t data_size = dirty.argsz - minsz;
> + size_t iommu_pgsize;
> +
> + if (!data_size || data_size < sizeof(range))
> + return -EINVAL;
> +
> + if (copy_from_user(&range, (void __user *)(arg + minsz),
> + sizeof(range)))
> + return -EFAULT;
> +
> + if (range.iova + range.size < range.iova)
> + return -EINVAL;
> + if (!access_ok((void __user *)range.bitmap.data,
> + range.bitmap.size))
> + return -EINVAL;
> +
> + pgshift = __ffs(range.bitmap.pgsize);
> + ret = verify_bitmap_size(range.size >> pgshift,
> + range.bitmap.size);
> + if (ret)
> + return ret;
> +
> + mutex_lock(&iommu->lock);
> +
> + iommu_pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
> +
> + /* allow only smallest supported pgsize */
> + if (range.bitmap.pgsize != iommu_pgsize) {
> + ret = -EINVAL;
> + goto out_unlock;
> + }
> + if (range.iova & (iommu_pgsize - 1)) {
> + ret = -EINVAL;
> + goto out_unlock;
> + }
> + if (!range.size || range.size & (iommu_pgsize - 1)) {
> + ret = -EINVAL;
> + goto out_unlock;
> + }
> +
> + if (iommu->dirty_page_tracking)
> + ret = vfio_iova_dirty_bitmap(range.bitmap.data,
> + iommu, range.iova, range.size,
> + range.bitmap.pgsize);
> + else
> + ret = -EINVAL;
> +out_unlock:
> + mutex_unlock(&iommu->lock);
> +
> + return ret;
> + }
> }
>
> return -ENOTTY;
> @@ -2385,10 +2679,19 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
>
> vaddr = dma->vaddr + offset;
>
> - if (write)
> + if (write) {
> *copied = copy_to_user((void __user *)vaddr, data,
> count) ? 0 : count;
> - else
> + if (*copied && iommu->dirty_page_tracking) {
> + unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
> + /*
> + * Bitmap populated with the smallest supported page
> + * size
> + */
> + bitmap_set(dma->bitmap, offset >> pgshift,
> + *copied >> pgshift);
> + }
> + } else
> *copied = copy_from_user(data, (void __user *)vaddr,
> count) ? 0 : count;
> if (kthread)
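For completeness, roughly what the userspace side of the three operations
looks like. This is a sketch only: it assumes the uapi added in patch 4/8 of
this series, hard-codes a 4K IOMMU page size instead of reading iova_pgsizes
from VFIO_IOMMU_GET_INFO, uses hypothetical container_fd/iova/map_size values,
and drops all error handling:

    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    static void sync_dirty_bitmap(int container_fd, __u64 iova, __u64 map_size)
    {
            struct vfio_iommu_type1_dirty_bitmap ctl = {
                    .argsz = sizeof(ctl),
                    .flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_START,
            };
            struct vfio_iommu_type1_dirty_bitmap *dbitmap;
            struct vfio_iommu_type1_dirty_bitmap_get *range;
            __u64 pgsize = 4096;
            __u64 nbits = map_size / pgsize;
            __u64 bitmap_bytes = ((nbits + 63) / 64) * sizeof(__u64);
            __u64 *bitmap = calloc(1, bitmap_bytes);

            /* START: the kernel allocates a bitmap per existing vfio_dma */
            ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &ctl);

            /* ... migration iterations run, pinned pages get marked dirty ... */

            /* GET_BITMAP: the range must cover whole vfio_dma mappings */
            dbitmap = calloc(1, sizeof(*dbitmap) + sizeof(*range));
            dbitmap->argsz = sizeof(*dbitmap) + sizeof(*range);
            dbitmap->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
            range = (struct vfio_iommu_type1_dirty_bitmap_get *)&dbitmap->data;
            range->iova = iova;
            range->size = map_size;
            range->bitmap.pgsize = pgsize;  /* smallest supported size only */
            range->bitmap.size = bitmap_bytes;
            range->bitmap.data = bitmap;
            ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);
            /* bit N set => page at iova + N * pgsize is dirty, re-send it */

            /* STOP: per-vfio_dma bitmaps are freed by the kernel */
            ctl.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP;
            ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &ctl);

            free(dbitmap);
            free(bitmap);
    }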