From: Joao Martins <joao.m.martins@oracle.com>
To: "Cédric Le Goater" <clg@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>,
Yishai Hadas <yishaih@nvidia.com>,
Jason Gunthorpe <jgg@nvidia.com>,
Maor Gottlieb <maorg@nvidia.com>,
Kirti Wankhede <kwankhede@nvidia.com>,
Tarun Gupta <targupta@nvidia.com>,
Avihai Horon <avihaih@nvidia.com>,
qemu-devel@nongnu.org
Subject: Re: [PATCH v3 07/13] vfio/common: Record DMA mapped IOVA ranges
Date: Mon, 6 Mar 2023 19:45:36 +0000
Message-ID: <de96c6af-b521-1ffa-1ecf-d96d95820bdd@oracle.com>
In-Reply-To: <aa74d068-2897-88b2-a829-be477e832f20@redhat.com>
On 06/03/2023 18:05, Cédric Le Goater wrote:
> On 3/4/23 02:43, Joao Martins wrote:
>> According to the device DMA logging uAPI, IOVA ranges to be logged by
>> the device must be provided all at once upon DMA logging start.
>>
>> As preparation for the following patches which will add device dirty
>> page tracking, keep a record of all DMA mapped IOVA ranges so later they
>> can be used for DMA logging start.
>>
>> Note that when vIOMMU is enabled, DMA mapped IOVA ranges are not tracked.
>> This is due to the dynamic nature of vIOMMU DMA mapping/unmapping.
>>
>> Signed-off-by: Avihai Horon <avihaih@nvidia.com>
>> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
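
For reference on the "all at once" requirement above: the device logging-start
uAPI takes the complete set of ranges in a single VFIO_DEVICE_FEATURE
(VFIO_DEVICE_FEATURE_SET | VFIO_DEVICE_FEATURE_DMA_LOGGING_START) call, so
ranges cannot be added incrementally afterwards. Roughly, from my reading of
the linux-headers definitions (check linux/vfio.h for the authoritative
layout):

struct vfio_device_feature_dma_logging_range {
    __aligned_u64 iova;
    __aligned_u64 length;
};

struct vfio_device_feature_dma_logging_control {
    __aligned_u64 page_size;
    __u32 num_ranges;
    __u32 __reserved;
    __aligned_u64 ranges;    /* userspace pointer to num_ranges entries */
};

An array of logging ranges is handed over when logging starts, which is why
this patch records the mapped IOVA ranges up front.
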
>
> One question below,
>
> Reviewed-by: Cédric Le Goater <clg@redhat.com>
>
> Thanks,
>
> C.
>
>> ---
>> hw/vfio/common.c              | 84 +++++++++++++++++++++++++++++++++++
>> hw/vfio/trace-events          |  1 +
>> include/hw/vfio/vfio-common.h | 11 +++++
>> 3 files changed, 96 insertions(+)
>>
>> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
>> index ed908e303dbb..d84e5fd86bb4 100644
>> --- a/hw/vfio/common.c
>> +++ b/hw/vfio/common.c
>> @@ -44,6 +44,7 @@
>> #include "migration/blocker.h"
>> #include "migration/qemu-file.h"
>> #include "sysemu/tpm.h"
>> +#include "qemu/iova-tree.h"
>> VFIOGroupList vfio_group_list =
>> QLIST_HEAD_INITIALIZER(vfio_group_list);
>> @@ -1313,11 +1314,94 @@ static int vfio_set_dirty_page_tracking(VFIOContainer *container, bool start)
>> return ret;
>> }
>> +/*
>> + * Called by the dirty tracking memory listener to calculate the iova/end
>> + * for a given memory region section. The checks here replicate the logic
>> + * in vfio_listener_region_{add,del}() used for the same purpose, and thus
>> + * both listeners should be kept in sync.
>> + */
>> +static bool vfio_get_section_iova_range(VFIOContainer *container,
>> + MemoryRegionSection *section,
>> + hwaddr *out_iova, hwaddr *out_end)
>> +{
>> + Int128 llend;
>> + hwaddr iova;
>> +
>> + iova = REAL_HOST_PAGE_ALIGN(section->offset_within_address_space);
>> + llend = int128_make64(section->offset_within_address_space);
>> + llend = int128_add(llend, section->size);
>> + llend = int128_and(llend, int128_exts64(qemu_real_host_page_mask()));
>> +
>> + if (int128_ge(int128_make64(iova), llend)) {
>> + return false;
>> + }
>> +
>> + *out_iova = iova;
>> + *out_end = int128_get64(llend) - 1;
>> + return true;
>> +}
>> +
>> +static void vfio_dirty_tracking_update(MemoryListener *listener,
>> + MemoryRegionSection *section)
>> +{
>> + VFIOContainer *container = container_of(listener, VFIOContainer,
>> + tracking_listener);
>> + VFIODirtyTrackingRange *range = &container->tracking_range;
>> + hwaddr max32 = (1ULL << 32) - 1ULL;
>> + hwaddr iova, end;
>> +
>> + if (!vfio_listener_valid_section(section) ||
>> + !vfio_get_section_iova_range(container, section, &iova, &end)) {
>> + return;
>> + }
>> +
>> + WITH_QEMU_LOCK_GUARD(&container->tracking_mutex) {
>> + if (iova < max32 && end <= max32) {
>> + if (range->min32 > iova) {
>
> With the memset(0) done in vfio_dirty_tracking_init(), min32 will always
> be 0. Is that OK ?
>
While it's OK, it relies on the assumption that the structure is zeroed out. But
Alex's comments will make this clearer (and cover all cases) and avoid the
assumption that the range always starts at 0.
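
To illustrate (a rough sketch only, not what this patch does; the helper names
below are made up, and it assumes the VFIODirtyTrackingRange type added further
down plus hwaddr/MIN/MAX from the usual QEMU headers): seed the minima with a
sentinel so the first recorded section establishes the start of the range,
instead of relying on the zero-filled struct:

static void vfio_dirty_tracking_range_init(VFIODirtyTrackingRange *range)
{
    /* Sentinels meaning "no section recorded yet" */
    range->min32 = UINT32_MAX;
    range->max32 = 0;
    range->min64 = UINT64_MAX;
    range->max64 = 0;
}

static void vfio_dirty_tracking_range_add(VFIODirtyTrackingRange *range,
                                          hwaddr iova, hwaddr end)
{
    /* Simplified 32/64 split: bucket purely on where the section ends */
    bool is32 = end <= UINT32_MAX;
    hwaddr *min = is32 ? &range->min32 : &range->min64;
    hwaddr *max = is32 ? &range->max32 : &range->max64;

    *min = MIN(*min, iova);
    *max = MAX(*max, end);
}

That way an empty range is also detectable (min > max) instead of looking like
[0, 0].
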
>> + range->min32 = iova;
>> + }
>> + if (range->max32 < end) {
>> + range->max32 = end;
>> + }
>> + trace_vfio_device_dirty_tracking_update(iova, end,
>> + range->min32, range->max32);
>> + } else {
>> + if (!range->min64 || range->min64 > iova) {
>> + range->min64 = iova;
>> + }
>> + if (range->max64 < end) {
>> + range->max64 = end;
>> + }
>> + trace_vfio_device_dirty_tracking_update(iova, end,
>> + range->min64, range->max64);
>> + }
>> + }
>> + return;
>> +}
>> +
>> +static const MemoryListener vfio_dirty_tracking_listener = {
>> + .name = "vfio-tracking",
>> + .region_add = vfio_dirty_tracking_update,
>> +};
>> +
>> +static void vfio_dirty_tracking_init(VFIOContainer *container)
>> +{
>> + memset(&container->tracking_range, 0, sizeof(container->tracking_range));
>> + qemu_mutex_init(&container->tracking_mutex);
>> + container->tracking_listener = vfio_dirty_tracking_listener;
>> + memory_listener_register(&container->tracking_listener,
>> + container->space->as);
>> + memory_listener_unregister(&container->tracking_listener);
>> + qemu_mutex_destroy(&container->tracking_mutex);
>> +}
>> +
>> static void vfio_listener_log_global_start(MemoryListener *listener)
>> {
>> VFIOContainer *container = container_of(listener, VFIOContainer, listener);
>> int ret;
>> + vfio_dirty_tracking_init(container);
>> +
>> ret = vfio_set_dirty_page_tracking(container, true);
>> if (ret) {
>> vfio_set_migration_error(ret);
>> diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
>> index 669d9fe07cd9..d97a6de17921 100644
>> --- a/hw/vfio/trace-events
>> +++ b/hw/vfio/trace-events
>> @@ -104,6 +104,7 @@ vfio_known_safe_misalignment(const char *name, uint64_t iova, uint64_t offset_wi
>> vfio_listener_region_add_no_dma_map(const char *name, uint64_t iova, uint64_t size, uint64_t page_size) "Region \"%s\" 0x%"PRIx64" size=0x%"PRIx64" is not aligned to 0x%"PRIx64" and cannot be mapped for DMA"
>> vfio_listener_region_del_skip(uint64_t start, uint64_t end) "SKIPPING region_del 0x%"PRIx64" - 0x%"PRIx64
>> vfio_listener_region_del(uint64_t start, uint64_t end) "region_del 0x%"PRIx64" - 0x%"PRIx64
>> +vfio_device_dirty_tracking_update(uint64_t start, uint64_t end, uint64_t min, uint64_t max) "section 0x%"PRIx64" - 0x%"PRIx64" -> update [0x%"PRIx64" - 0x%"PRIx64"]"
>> vfio_disconnect_container(int fd) "close container->fd=%d"
>> vfio_put_group(int fd) "close group->fd=%d"
>> vfio_get_device(const char * name, unsigned int flags, unsigned int num_regions, unsigned int num_irqs) "Device %s flags: %u, regions: %u, irqs: %u"
>> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
>> index 87524c64a443..96791add2719 100644
>> --- a/include/hw/vfio/vfio-common.h
>> +++ b/include/hw/vfio/vfio-common.h
>> @@ -23,6 +23,7 @@
>> #include "exec/memory.h"
>> #include "qemu/queue.h"
>> +#include "qemu/iova-tree.h"
>> #include "qemu/notify.h"
>> #include "ui/console.h"
>> #include "hw/display/ramfb.h"
>> @@ -68,6 +69,13 @@ typedef struct VFIOMigration {
>> size_t data_buffer_size;
>> } VFIOMigration;
>> +typedef struct VFIODirtyTrackingRange {
>> + hwaddr min32;
>> + hwaddr max32;
>> + hwaddr min64;
>> + hwaddr max64;
>> +} VFIODirtyTrackingRange;
>> +
>> typedef struct VFIOAddressSpace {
>> AddressSpace *as;
>> QLIST_HEAD(, VFIOContainer) containers;
>> @@ -89,6 +97,9 @@ typedef struct VFIOContainer {
>> uint64_t max_dirty_bitmap_size;
>> unsigned long pgsizes;
>> unsigned int dma_max_mappings;
>> + VFIODirtyTrackingRange tracking_range;
>> + QemuMutex tracking_mutex;
>> + MemoryListener tracking_listener;
>> QLIST_HEAD(, VFIOGuestIOMMU) giommu_list;
>> QLIST_HEAD(, VFIOHostDMAWindow) hostwin_list;
>> QLIST_HEAD(, VFIOGroup) group_list;
>