From: Alex Williamson <alex.williamson@redhat.com>
To: Kunkun Jiang <jiangkunkun@huawei.com>
Cc: Liu Yi L <yi.l.liu@intel.com>,
"open list:All patches CC here" <qemu-devel@nongnu.org>,
shameerali.kolothum.thodi@huawei.com,
Eric Auger <eric.auger@redhat.com>,
Kirti Wankhede <kwankhede@nvidia.com>,
Zenghui Yu <yuzenghui@huawei.com>,
wanghaibin.wang@huawei.com, Keqian Zhu <zhukeqian1@huawei.com>
Subject: Re: [PATCH] vfio: Support host translation granule size
Date: Tue, 9 Mar 2021 16:17:13 -0700
Message-ID: <20210309161713.1cc8ad2f@omen.home.shazbot.org>
In-Reply-To: <20210304133446.1521-1-jiangkunkun@huawei.com>

On Thu, 4 Mar 2021 21:34:46 +0800
Kunkun Jiang <jiangkunkun@huawei.com> wrote:
> cpu_physical_memory_set_dirty_lebitmap() can quickly handle the dirty
> pages of memory by traversing the bitmap, regardless of whether the
> bitmap is correctly aligned.
>
> cpu_physical_memory_set_dirty_lebitmap() supports bitmaps whose pages
> are of host page size, so it is better to set bitmap_pgsize to the
> host page size in order to support more translation granule sizes.
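
For reference, the bitmap traversal the commit message refers to boils
down to walking the 64-bit words of a little-endian bitmap and marking
the page behind each set bit dirty.  A minimal sketch of the idea (not
QEMU's actual implementation; mark_page_dirty() is a hypothetical
placeholder, and a little-endian host is assumed so no byte swap is
shown):

    /* Each bit covers one page of 'pgsize' bytes starting at 'start'. */
    static void set_dirty_lebitmap_sketch(const uint64_t *bitmap,
                                          uint64_t start, uint64_t pages,
                                          uint64_t pgsize)
    {
        for (uint64_t i = 0; i < (pages + 63) / 64; i++) {
            uint64_t word = bitmap[i];
            while (word) {
                int bit = __builtin_ctzll(word);    /* lowest set bit */
                mark_page_dirty(start + (i * 64 + bit) * pgsize);
                word &= word - 1;                   /* clear that bit */
            }
        }
    }

Which page size each bit is taken to cover is exactly what is at issue
below.
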
>
> Fixes: 87ea529c502 ("vfio: Get migration capability flags for container")
> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
> ---
> hw/vfio/common.c | 44 ++++++++++++++++++++++----------------------
> 1 file changed, 22 insertions(+), 22 deletions(-)
>
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 6ff1daa763..69fb5083a4 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -378,7 +378,7 @@ static int vfio_dma_unmap_bitmap(VFIOContainer *container,
> {
> struct vfio_iommu_type1_dma_unmap *unmap;
> struct vfio_bitmap *bitmap;
> - uint64_t pages = TARGET_PAGE_ALIGN(size) >> TARGET_PAGE_BITS;
> + uint64_t pages = REAL_HOST_PAGE_ALIGN(size) / qemu_real_host_page_size;
> int ret;
>
> unmap = g_malloc0(sizeof(*unmap) + sizeof(*bitmap));
> @@ -390,12 +390,12 @@ static int vfio_dma_unmap_bitmap(VFIOContainer *container,
> bitmap = (struct vfio_bitmap *)&unmap->data;
>
> /*
> - * cpu_physical_memory_set_dirty_lebitmap() expects pages in bitmap of
> - * TARGET_PAGE_SIZE to mark those dirty. Hence set bitmap_pgsize to
> - * TARGET_PAGE_SIZE.
> + * cpu_physical_memory_set_dirty_lebitmap() supports pages in bitmap of
> + * qemu_real_host_page_size to mark those dirty. Hence set bitmap_pgsize
> + * to qemu_real_host_page_size.
I don't see that this change is well supported by the code;
cpu_physical_memory_set_dirty_lebitmap() seems to operate on
TARGET_PAGE_SIZE, and the next three patch chunks take a detour through
memory listener code that seems unrelated to the change described in
the commit log.  The patch claims to fix something; what is actually
broken?
Thanks,
Alex
> */
>
> - bitmap->pgsize = TARGET_PAGE_SIZE;
> + bitmap->pgsize = qemu_real_host_page_size;
> bitmap->size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
> BITS_PER_BYTE;
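
To make the size math concrete (my example numbers, assuming a 64KiB
host page size, a 4KiB TARGET_PAGE_SIZE, and a 1MiB unmap):

    /*
     * pages        = REAL_HOST_PAGE_ALIGN(0x100000) / 65536 = 16
     * bitmap->size = ROUND_UP(16, 64) / 8                   = 8 bytes
     *
     * The old TARGET_PAGE_SIZE math for the same unmap:
     * pages        = TARGET_PAGE_ALIGN(0x100000) >> 12      = 256
     * bitmap->size = ROUND_UP(256, 64) / 8                  = 32 bytes
     */

i.e. one bit per host page rather than one bit per (smaller) target
page.
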
>
> @@ -674,16 +674,16 @@ static void vfio_listener_region_add(MemoryListener *listener,
> return;
> }
>
> - if (unlikely((section->offset_within_address_space & ~TARGET_PAGE_MASK) !=
> - (section->offset_within_region & ~TARGET_PAGE_MASK))) {
> + if (unlikely((section->offset_within_address_space & ~qemu_real_host_page_mask) !=
> + (section->offset_within_region & ~qemu_real_host_page_mask))) {
> error_report("%s received unaligned region", __func__);
> return;
> }
>
> - iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
> + iova = REAL_HOST_PAGE_ALIGN(section->offset_within_address_space);
> llend = int128_make64(section->offset_within_address_space);
> llend = int128_add(llend, section->size);
> - llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
> + llend = int128_and(llend, int128_exts64(qemu_real_host_page_mask));
>
> if (int128_ge(int128_make64(iova), llend)) {
> return;
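
The effect of this align-up/align-down pair is to shrink the section to
whole host pages and skip anything smaller.  With my example values and
a 64KiB host page size, a section covering [0x1000, 0x3000) gives:

    /*
     * iova  = REAL_HOST_PAGE_ALIGN(0x1000)      = 0x10000
     * llend = 0x3000 & qemu_real_host_page_mask = 0x0
     * iova >= llend, so the sub-page region is skipped.
     */
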
> @@ -892,8 +892,8 @@ static void vfio_listener_region_del(MemoryListener *listener,
> return;
> }
>
> - if (unlikely((section->offset_within_address_space & ~TARGET_PAGE_MASK) !=
> - (section->offset_within_region & ~TARGET_PAGE_MASK))) {
> + if (unlikely((section->offset_within_address_space & ~qemu_real_host_page_mask) !=
> + (section->offset_within_region & ~qemu_real_host_page_mask))) {
> error_report("%s received unaligned region", __func__);
> return;
> }
> @@ -921,10 +921,10 @@ static void vfio_listener_region_del(MemoryListener *listener,
> */
> }
>
> - iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
> + iova = REAL_HOST_PAGE_ALIGN(section->offset_within_address_space);
> llend = int128_make64(section->offset_within_address_space);
> llend = int128_add(llend, section->size);
> - llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
> + llend = int128_and(llend, int128_exts64(qemu_real_host_page_mask));
>
> if (int128_ge(int128_make64(iova), llend)) {
> return;
> @@ -1004,13 +1004,13 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
> range->size = size;
>
> /*
> - * cpu_physical_memory_set_dirty_lebitmap() expects pages in bitmap of
> - * TARGET_PAGE_SIZE to mark those dirty. Hence set bitmap's pgsize to
> - * TARGET_PAGE_SIZE.
> + * cpu_physical_memory_set_dirty_lebitmap() supports pages in bitmap of
> + * qemu_real_host_page_size to mark those dirty. Hence set bitmap's pgsize
> + * to qemu_real_host_page_size.
> */
> - range->bitmap.pgsize = TARGET_PAGE_SIZE;
> + range->bitmap.pgsize = qemu_real_host_page_size;
>
> - pages = TARGET_PAGE_ALIGN(range->size) >> TARGET_PAGE_BITS;
> + pages = REAL_HOST_PAGE_ALIGN(range->size) / qemu_real_host_page_size;
> range->bitmap.size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
> BITS_PER_BYTE;
> range->bitmap.data = g_try_malloc0(range->bitmap.size);
> @@ -1114,7 +1114,7 @@ static int vfio_sync_dirty_bitmap(VFIOContainer *container,
> section->offset_within_region;
>
> return vfio_get_dirty_bitmap(container,
> - TARGET_PAGE_ALIGN(section->offset_within_address_space),
> + REAL_HOST_PAGE_ALIGN(section->offset_within_address_space),
> int128_get64(section->size), ram_addr);
> }
>
> @@ -1655,10 +1655,10 @@ static void vfio_get_iommu_info_migration(VFIOContainer *container,
> header);
>
> /*
> - * cpu_physical_memory_set_dirty_lebitmap() expects pages in bitmap of
> - * TARGET_PAGE_SIZE to mark those dirty.
> + * cpu_physical_memory_set_dirty_lebitmap() supports pages in bitmap of
> + * qemu_real_host_page_size to mark those dirty.
> */
> - if (cap_mig->pgsize_bitmap & TARGET_PAGE_SIZE) {
> + if (cap_mig->pgsize_bitmap & qemu_real_host_page_size) {
> container->dirty_pages_supported = true;
> container->max_dirty_bitmap_size = cap_mig->max_dirty_bitmap_size;
> container->dirty_pgsizes = cap_mig->pgsize_bitmap;
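
Since page sizes are powers of two, pgsize_bitmap is effectively a set
and the AND above is a membership test.  Hypothetical values to
illustrate:

    /*
     * cap_mig->pgsize_bitmap   = 0x40201000  (4KiB | 2MiB | 1GiB)
     * qemu_real_host_page_size = 0x10000     (64KiB host page)
     * AND                      = 0           -> dirty tracking unsupported
     */
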