From: Steven Sistare <steven.sistare@oracle.com>
To: Mark Cave-Ayland <mark.caveayland@nutanix.com>, qemu-devel@nongnu.org
Cc: Alex Williamson <alex.williamson@redhat.com>,
Cedric Le Goater <clg@redhat.com>, Yi Liu <yi.l.liu@intel.com>,
Eric Auger <eric.auger@redhat.com>,
Zhenzhong Duan <zhenzhong.duan@intel.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
Peter Xu <peterx@redhat.com>, Fabiano Rosas <farosas@suse.de>
Subject: Re: [PATCH V3 26/42] vfio: return mr from vfio_get_xlat_addr
Date: Thu, 15 May 2025 15:40:46 -0400
Message-ID: <6d269740-b274-4046-bb15-7ce7f2784ca5@oracle.com>
In-Reply-To: <49dad632-fd30-45fd-8dac-80c8bb446809@nutanix.com>
On 5/13/2025 7:12 AM, Mark Cave-Ayland wrote:
> On 12/05/2025 16:32, Steve Sistare wrote:
>
>> Modify memory_get_xlat_addr and vfio_get_xlat_addr to return the memory
>> region that the translated address is found in. This will be needed by
>> CPR in a subsequent patch to map blocks using IOMMU_IOAS_MAP_FILE.
>>
>> Also return the xlat offset, so we can simplify the interface by removing
>> the out parameters that can be trivially derived from mr and xlat.
>>
>> Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
>> ---
>> hw/vfio/listener.c | 29 +++++++++++++++++++----------
>> hw/virtio/vhost-vdpa.c | 8 ++++++--
>> include/system/memory.h | 16 +++++++---------
>> system/memory.c | 25 ++++---------------------
>> 4 files changed, 36 insertions(+), 42 deletions(-)
>>
>> diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
>> index e86ffcf..87b7a3c 100644
>> --- a/hw/vfio/listener.c
>> +++ b/hw/vfio/listener.c
>> @@ -90,16 +90,17 @@ static bool vfio_listener_skipped_section(MemoryRegionSection *section)
>> section->offset_within_address_space & (1ULL << 63);
>> }
>> -/* Called with rcu_read_lock held. */
>> -static bool vfio_get_xlat_addr(IOMMUTLBEntry *iotlb, void **vaddr,
>> - ram_addr_t *ram_addr, bool *read_only,
>> - Error **errp)
>> +/*
>> + * Called with rcu_read_lock held.
>> + * The returned MemoryRegion must not be accessed after calling rcu_read_unlock.
>> + */
>> +static bool vfio_get_xlat_addr(IOMMUTLBEntry *iotlb, MemoryRegion **mr_p,
>> + hwaddr *xlat_p, Error **errp)
>> {
>> - bool ret, mr_has_discard_manager;
>> + bool ret;
>> - ret = memory_get_xlat_addr(iotlb, vaddr, ram_addr, read_only,
>> - &mr_has_discard_manager, errp);
>> - if (ret && mr_has_discard_manager) {
>> + ret = memory_get_xlat_addr(iotlb, mr_p, xlat_p, errp);
>> + if (ret && memory_region_has_ram_discard_manager(*mr_p)) {
>
> I'm trying to understand the underlying intention of this patch: is it just so that you can access the corresponding RAMBlock in vfio_container_dma_map() in patch 31 "vfio/iommufd: use IOMMU_IOAS_MAP_FILE"?
Yes.
> Given that the flatview can theoretically change at any point, it feels as if the current API whereby the vaddr is passed around is the correct approach, and that the final MemoryRegion lookup should be done at the point where it is required.
The existing code already guarantees a stable address space when vfio_container_dma_map()
is called ...
vfio_iommu_map_notify()
    rcu_read_lock();
    vfio_get_xlat_addr()
    vfio_container_dma_map()
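
In sketch form (not compiled; names are taken from the patch above, and the
vfio_container_dma_map() arguments are elided rather than guessed):

    rcu_read_lock();
    if (vfio_get_xlat_addr(iotlb, &mr, &xlat, &local_err)) {
        void *vaddr = memory_region_get_ram_ptr(mr) + xlat;
        bool read_only = !(iotlb->perm & IOMMU_WO) || mr->readonly;

        /* mr (and mr->ram_block) remains valid until rcu_read_unlock() */
        vfio_container_dma_map(...);    /* consumes vaddr / the block */
    } else {
        error_report_err(local_err);
    }
    rcu_read_unlock();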
> If this is the case, is it not simpler to add a call to address_space_translate() in patch 31 to obtain the MemoryRegion pointer there instead?
... so it is simpler and more efficient (saving a translation) if we simply
expose mr->ram_block in that range of code.
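
Concretely, that means something like the following one-liner inside the same
RCU read-side section (a hypothetical illustration only, since the later
patches are not quoted in this mail):

    RAMBlock *block = mr->ram_block;   /* stable until rcu_read_unlock() */

which the container map path can then consume directly, without a second
address_space_translate().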
- Steve
>> /*
>> * Malicious VMs might trigger discarding of IOMMU-mapped memory. The
>> * pages will remain pinned inside vfio until unmapped, resulting in a
>> @@ -126,6 +127,8 @@ static void vfio_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
>> VFIOGuestIOMMU *giommu = container_of(n, VFIOGuestIOMMU, n);
>> VFIOContainerBase *bcontainer = giommu->bcontainer;
>> hwaddr iova = iotlb->iova + giommu->iommu_offset;
>> + MemoryRegion *mr;
>> + hwaddr xlat;
>> void *vaddr;
>> int ret;
>> Error *local_err = NULL;
>> @@ -150,10 +153,13 @@ static void vfio_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
>> if ((iotlb->perm & IOMMU_RW) != IOMMU_NONE) {
>> bool read_only;
>> - if (!vfio_get_xlat_addr(iotlb, &vaddr, NULL, &read_only, &local_err)) {
>> + if (!vfio_get_xlat_addr(iotlb, &mr, &xlat, &local_err)) {
>> error_report_err(local_err);
>> goto out;
>> }
>> + vaddr = memory_region_get_ram_ptr(mr) + xlat;
>> + read_only = !(iotlb->perm & IOMMU_WO) || mr->readonly;
>> +
>> /*
>> * vaddr is only valid until rcu_read_unlock(). But after
>> * vfio_dma_map has set up the mapping the pages will be
>> @@ -1047,6 +1053,8 @@ static void vfio_iommu_map_dirty_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
>> ram_addr_t translated_addr;
>> Error *local_err = NULL;
>> int ret = -EINVAL;
>> + MemoryRegion *mr;
>> + ram_addr_t xlat;
>> trace_vfio_iommu_map_dirty_notify(iova, iova + iotlb->addr_mask);
>> @@ -1058,9 +1066,10 @@ static void vfio_iommu_map_dirty_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
>> }
>> rcu_read_lock();
>> - if (!vfio_get_xlat_addr(iotlb, NULL, &translated_addr, NULL, &local_err)) {
>> + if (!vfio_get_xlat_addr(iotlb, &mr, &xlat, &local_err)) {
>> goto out_unlock;
>> }
>> + translated_addr = memory_region_get_ram_addr(mr) + xlat;
>> ret = vfio_container_query_dirty_bitmap(bcontainer, iova, iotlb->addr_mask + 1,
>> translated_addr, &local_err);
>> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
>> index 1ab2c11..f191360 100644
>> --- a/hw/virtio/vhost-vdpa.c
>> +++ b/hw/virtio/vhost-vdpa.c
>> @@ -209,6 +209,8 @@ static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
>> int ret;
>> Int128 llend;
>> Error *local_err = NULL;
>> + MemoryRegion *mr;
>> + hwaddr xlat;
>> if (iotlb->target_as != &address_space_memory) {
>> error_report("Wrong target AS \"%s\", only system memory is allowed",
>> @@ -228,11 +230,13 @@ static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
>> if ((iotlb->perm & IOMMU_RW) != IOMMU_NONE) {
>> bool read_only;
>> - if (!memory_get_xlat_addr(iotlb, &vaddr, NULL, &read_only, NULL,
>> - &local_err)) {
>> + if (!memory_get_xlat_addr(iotlb, &mr, &xlat, &local_err)) {
>> error_report_err(local_err);
>> return;
>> }
>> + vaddr = memory_region_get_ram_ptr(mr) + xlat;
>> + read_only = !(iotlb->perm & IOMMU_WO) || mr->readonly;
>> +
>> ret = vhost_vdpa_dma_map(s, VHOST_VDPA_GUEST_PA_ASID, iova,
>> iotlb->addr_mask + 1, vaddr, read_only);
>> if (ret) {
>> diff --git a/include/system/memory.h b/include/system/memory.h
>> index fbbf4cf..d743214 100644
>> --- a/include/system/memory.h
>> +++ b/include/system/memory.h
>> @@ -738,21 +738,19 @@ void ram_discard_manager_unregister_listener(RamDiscardManager *rdm,
>> RamDiscardListener *rdl);
>> /**
>> - * memory_get_xlat_addr: Extract addresses from a TLB entry
>> + * memory_get_xlat_addr: Extract addresses from a TLB entry.
>> + * Called with rcu_read_lock held.
>> *
>> * @iotlb: pointer to an #IOMMUTLBEntry
>> - * @vaddr: virtual address
>> - * @ram_addr: RAM address
>> - * @read_only: indicates if writes are allowed
>> - * @mr_has_discard_manager: indicates memory is controlled by a
>> - * RamDiscardManager
>> + * @mr_p: return the MemoryRegion containing the @iotlb translated addr.
>> + * The MemoryRegion must not be accessed after rcu_read_unlock.
>> + * @xlat_p: return the offset of the entry from the start of @mr_p
>> * @errp: pointer to Error*, to store an error if it happens.
>> *
>> * Return: true on success, else false setting @errp with error.
>> */
>> -bool memory_get_xlat_addr(IOMMUTLBEntry *iotlb, void **vaddr,
>> - ram_addr_t *ram_addr, bool *read_only,
>> - bool *mr_has_discard_manager, Error **errp);
>> +bool memory_get_xlat_addr(IOMMUTLBEntry *iotlb, MemoryRegion **mr_p,
>> + hwaddr *xlat_p, Error **errp);
>> typedef struct CoalescedMemoryRange CoalescedMemoryRange;
>> typedef struct MemoryRegionIoeventfd MemoryRegionIoeventfd;
>> diff --git a/system/memory.c b/system/memory.c
>> index 63b983e..4894c0d 100644
>> --- a/system/memory.c
>> +++ b/system/memory.c
>> @@ -2174,18 +2174,14 @@ void ram_discard_manager_unregister_listener(RamDiscardManager *rdm,
>> }
>> /* Called with rcu_read_lock held. */
>> -bool memory_get_xlat_addr(IOMMUTLBEntry *iotlb, void **vaddr,
>> - ram_addr_t *ram_addr, bool *read_only,
>> - bool *mr_has_discard_manager, Error **errp)
>> +bool memory_get_xlat_addr(IOMMUTLBEntry *iotlb, MemoryRegion **mr_p,
>> + hwaddr *xlat_p, Error **errp)
>> {
>> MemoryRegion *mr;
>> hwaddr xlat;
>> hwaddr len = iotlb->addr_mask + 1;
>> bool writable = iotlb->perm & IOMMU_WO;
>> - if (mr_has_discard_manager) {
>> - *mr_has_discard_manager = false;
>> - }
>> /*
>> * The IOMMU TLB entry we have just covers translation through
>> * this IOMMU to its immediate target. We need to translate
>> @@ -2203,9 +2199,6 @@ bool memory_get_xlat_addr(IOMMUTLBEntry *iotlb, void **vaddr,
>> .offset_within_region = xlat,
>> .size = int128_make64(len),
>> };
>> - if (mr_has_discard_manager) {
>> - *mr_has_discard_manager = true;
>> - }
>> /*
>> * Malicious VMs can map memory into the IOMMU, which is expected
>> * to remain discarded. vfio will pin all pages, populating memory.
>> @@ -2229,18 +2222,8 @@ bool memory_get_xlat_addr(IOMMUTLBEntry *iotlb, void **vaddr,
>> return false;
>> }
>> - if (vaddr) {
>> - *vaddr = memory_region_get_ram_ptr(mr) + xlat;
>> - }
>> -
>> - if (ram_addr) {
>> - *ram_addr = memory_region_get_ram_addr(mr) + xlat;
>> - }
>> -
>> - if (read_only) {
>> - *read_only = !writable || mr->readonly;
>> - }
>> -
>> + *xlat_p = xlat;
>> + *mr_p = mr;
>> return true;
>> }
>
>
> ATB,
>
> Mark.
>