From: Yan Zhao <yan.y.zhao@intel.com>
To: Kirti Wankhede <kwankhede@nvidia.com>
Cc: "Zhengxiao.zx@Alibaba-inc.com" <Zhengxiao.zx@Alibaba-inc.com>,
"Tian, Kevin" <kevin.tian@intel.com>,
"Liu, Yi L" <yi.l.liu@intel.com>,
"cjia@nvidia.com" <cjia@nvidia.com>,
"eskultet@redhat.com" <eskultet@redhat.com>,
"Yang, Ziye" <ziye.yang@intel.com>,
"yulei.zhang@intel.com" <yulei.zhang@intel.com>,
"cohuck@redhat.com" <cohuck@redhat.com>,
"shuangtai.tst@alibaba-inc.com" <shuangtai.tst@alibaba-inc.com>,
"dgilbert@redhat.com" <dgilbert@redhat.com>,
"Wang, Zhi A" <zhi.a.wang@intel.com>,
"mlevitsk@redhat.com" <mlevitsk@redhat.com>,
"pasic@linux.ibm.com" <pasic@linux.ibm.com>,
"aik@ozlabs.ru" <aik@ozlabs.ru>,
"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
"eauger@redhat.com" <eauger@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"felipe@nutanix.com" <felipe@nutanix.com>,
"jonathan.davies@nutanix.com" <jonathan.davies@nutanix.com>,
"Liu, Changpeng" <changpeng.liu@intel.com>,
"Ken.Xue@amd.com" <Ken.Xue@amd.com>
Subject: Re: [Qemu-devel] [PATCH v4 10/13] vfio: Add function to get dirty page list
Date: Tue, 25 Jun 2019 20:40:39 -0400
Message-ID: <20190626004039.GE6971@joy-OptiPlex-7040>
In-Reply-To: <1561041461-22326-11-git-send-email-kwankhede@nvidia.com>
On Thu, Jun 20, 2019 at 10:37:38PM +0800, Kirti Wankhede wrote:
> Dirty page tracking (.log_sync) is part of the RAM copying state, where
> the vendor driver provides, through the migration region, a bitmap of the
> pages it has dirtied; as part of the RAM copy, those pages get copied to
> the file stream.
>
> To get dirty page bitmap:
> - write start address, page_size and pfn count.
> - read count of pfns copied.
> - Vendor driver should return 0 if the driver doesn't have any page to
> report as dirty in the given range.
> - Vendor driver should return -1 to mark all pages dirty for the given range.
> - read data_offset, where vendor driver has written bitmap.
> - read bitmap from the region or the mmapped part of the region. This copy is
> iterated until the page bitmap for all requested pfns is copied.
>
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
> hw/vfio/migration.c | 119 ++++++++++++++++++++++++++++++++++++++++++
> include/hw/vfio/vfio-common.h | 2 +
> 2 files changed, 121 insertions(+)
>
> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> index e4895f91761d..68775b5dec11 100644
> --- a/hw/vfio/migration.c
> +++ b/hw/vfio/migration.c
> @@ -228,6 +228,125 @@ static int vfio_load_device_config_state(QEMUFile *f, void *opaque)
> return qemu_file_get_error(f);
> }
>
> +void vfio_get_dirty_page_list(VFIODevice *vbasedev,
> + uint64_t start_pfn,
> + uint64_t pfn_count,
> + uint64_t page_size)
> +{
> + VFIOMigration *migration = vbasedev->migration;
> + VFIORegion *region = &migration->region.buffer;
> + uint64_t count = 0;
> + int64_t copied_pfns = 0;
> + int ret;
> +
> + qemu_mutex_lock(&migration->lock);
> + ret = pwrite(vbasedev->fd, &start_pfn, sizeof(start_pfn),
> + region->fd_offset + offsetof(struct vfio_device_migration_info,
> + start_pfn));
> + if (ret < 0) {
> + error_report("Failed to set dirty pages start address %d %s",
> + ret, strerror(errno));
> + goto dpl_unlock;
> + }
> +
> + ret = pwrite(vbasedev->fd, &page_size, sizeof(page_size),
> + region->fd_offset + offsetof(struct vfio_device_migration_info,
> + page_size));
> + if (ret < 0) {
> + error_report("Failed to set dirty page size %d %s",
> + ret, strerror(errno));
> + goto dpl_unlock;
> + }
> +
> + ret = pwrite(vbasedev->fd, &pfn_count, sizeof(pfn_count),
> + region->fd_offset + offsetof(struct vfio_device_migration_info,
> + total_pfns));
> + if (ret < 0) {
> + error_report("Failed to set dirty page total pfns %d %s",
> + ret, strerror(errno));
> + goto dpl_unlock;
> + }
> +
> + do {
> + uint64_t bitmap_size, data_offset = 0;
> + void *buf = NULL;
> + bool buffer_mmaped = false;
> +
> + /* Read copied dirty pfns */
> + ret = pread(vbasedev->fd, &copied_pfns, sizeof(copied_pfns),
> + region->fd_offset + offsetof(struct vfio_device_migration_info,
> + copied_pfns));
> + if (ret < 0) {
> + error_report("Failed to get dirty pages bitmap count %d %s",
> + ret, strerror(errno));
> + goto dpl_unlock;
> + }
> +
> + if (copied_pfns == 0) {
> + /*
> + * copied_pfns could be 0 if driver doesn't have any page to
> + * report dirty in given range
> + */
> + break;
Which range does this copied_pfns count dirty pages for?
If it is read on each iteration, why break here rather than continue?
Consider a big region covering pfn_count that is broken into several
smaller subregions: copied_pfns being 0 in the first subregion doesn't
mean copied_pfns is 0 in all the remaining subregions as well.
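For example, here is a rough sketch of the loop structure I would expect
(purely illustrative, not the real pwrite/pread protocol; query_window()
and mark_dirty() are hypothetical stand-ins for the
start_pfn/total_pfns/copied_pfns register sequence and for
cpu_physical_memory_set_dirty_lebitmap() above), assuming copied_pfns only
covers the sub-range the driver has processed in one iteration:

    #include <stdint.h>

    /* Hypothetical helpers: query_window() stands in for programming
     * start_pfn/total_pfns and reading back copied_pfns; mark_dirty()
     * stands in for cpu_physical_memory_set_dirty_lebitmap(). */
    int64_t query_window(uint64_t start_pfn, uint64_t window_pfns);
    void mark_dirty(uint64_t start_pfn, uint64_t dirty_pfns);

    static void sync_dirty_pages(uint64_t start_pfn, uint64_t pfn_count,
                                 uint64_t window_pfns)
    {
        uint64_t done = 0;

        while (done < pfn_count) {
            uint64_t this_window = (pfn_count - done < window_pfns) ?
                                   (pfn_count - done) : window_pfns;
            int64_t copied = query_window(start_pfn + done, this_window);

            if (copied > 0) {
                mark_dirty(start_pfn + done, (uint64_t)copied);
            }
            /*
             * copied == 0 only says this window is clean; advance to the
             * next window and keep going instead of breaking out.
             */
            done += this_window;
        }
    }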
> + } else if (copied_pfns == -1) {
> + /* Mark all pages dirty for this range */
> + cpu_physical_memory_set_dirty_range(start_pfn * page_size,
> + pfn_count * page_size,
> + DIRTY_MEMORY_MIGRATION);
> + break;
> + }
> +
> + bitmap_size = (BITS_TO_LONGS(copied_pfns) + 1) * sizeof(unsigned long);
> +
> + ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
> + region->fd_offset + offsetof(struct vfio_device_migration_info,
> + data_offset));
> + if (ret != sizeof(data_offset)) {
> + error_report("Failed to get migration buffer data offset %d",
> + ret);
> + goto dpl_unlock;
> + }
> +
> + if (region->mmaps) {
> + int i;
> + for (i = 0; i < region->nr_mmaps; i++) {
> + if ((region->mmaps[i].offset >= data_offset) &&
> + (data_offset < region->mmaps[i].offset +
> + region->mmaps[i].size)) {
> + buf = region->mmaps[i].mmap + (data_offset -
> + region->mmaps[i].offset);
> + buffer_mmaped = true;
> + break;
> + }
> + }
> + }
> +
> + if (!buffer_mmaped) {
> + buf = g_malloc0(bitmap_size);
> +
> + ret = pread(vbasedev->fd, buf, bitmap_size,
> + region->fd_offset + data_offset);
> + if (ret != bitmap_size) {
> + error_report("Failed to get dirty pages bitmap %d", ret);
> + g_free(buf);
> + goto dpl_unlock;
> + }
> + }
> +
> + cpu_physical_memory_set_dirty_lebitmap((unsigned long *)buf,
> + (start_pfn + count) * page_size,
> + copied_pfns);
> + count += copied_pfns;
> +
Same question here: why is it count += copied_pfns?
> + if (!buffer_mmaped) {
> + g_free(buf);
> + }
> + } while (count < pfn_count);
> +
> +dpl_unlock:
> + qemu_mutex_unlock(&migration->lock);
> +}
> +
> /* ---------------------------------------------------------------------- */
>
> static int vfio_save_setup(QEMUFile *f, void *opaque)
> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
> index 1d26e6be8d48..423d6dbccace 100644
> --- a/include/hw/vfio/vfio-common.h
> +++ b/include/hw/vfio/vfio-common.h
> @@ -224,5 +224,7 @@ int vfio_spapr_remove_window(VFIOContainer *container,
>
> int vfio_migration_probe(VFIODevice *vbasedev, Error **errp);
> void vfio_migration_finalize(VFIODevice *vbasedev);
> +void vfio_get_dirty_page_list(VFIODevice *vbasedev, uint64_t start_pfn,
> + uint64_t pfn_count, uint64_t page_size);
>
> #endif /* HW_VFIO_VFIO_COMMON_H */
> --
> 2.7.0
>