From: Paul Durrant <Paul.Durrant@citrix.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
"Jan Beulich (JBeulich@suse.com)" <JBeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
Wei Liu <wei.liu2@citrix.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
"Tim (Xen.org)" <tim@xen.org>,
George Dunlap <George.Dunlap@citrix.com>,
Julien Grall <julien.grall@arm.com>,
Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [PATCH v14 06/11] x86/hvm/ioreq: add a new mappable resource type...
Date: Thu, 14 Dec 2017 09:52:38 +0000 [thread overview]
Message-ID: <fe1c94c33f36456a975bd853ffb8eb2e@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <0ba2e22fae3b424cb63a3743b0a59c63@AMSPEX02CL03.citrite.net>
Actually adding Jan to the To: line this time...
> -----Original Message-----
> From: Paul Durrant
> Sent: 14 December 2017 09:52
> To: Paul Durrant <Paul.Durrant@citrix.com>; xen-devel@lists.xenproject.org
> Cc: George Dunlap <George.Dunlap@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Ian
> Jackson <Ian.Jackson@citrix.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Tim
> (Xen.org) <tim@xen.org>; Julien Grall <julien.grall@arm.com>
> Subject: RE: [PATCH v14 06/11] x86/hvm/ioreq: add a new mappable
> resource type...
>
> > -----Original Message-----
> > From: Paul Durrant [mailto:paul.durrant@citrix.com]
> > Sent: 28 November 2017 15:09
> > To: xen-devel@lists.xenproject.org
> > Cc: Paul Durrant <Paul.Durrant@citrix.com>; George Dunlap
> > <George.Dunlap@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Andrew
> > Cooper <Andrew.Cooper3@citrix.com>; Ian Jackson
> > <Ian.Jackson@citrix.com>; Konrad Rzeszutek Wilk
> > <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>;
> > Tim (Xen.org) <tim@xen.org>; Julien Grall <julien.grall@arm.com>
> > Subject: [PATCH v14 06/11] x86/hvm/ioreq: add a new mappable resource
> > type...
> >
> > ... XENMEM_resource_ioreq_server
> >
> > This patch adds support for a new resource type that can be mapped using
> > the XENMEM_acquire_resource memory op.
> >
> > If an emulator makes use of this resource type then, instead of mapping
> > gfns, the IOREQ server will allocate pages from the heap. These pages
> > will never be present in the P2M of the guest at any point and so are
> > not vulnerable to any direct attack by the guest. They are only ever
> > accessible by Xen and any domain that has mapping privilege over the
> > guest (which may or may not be limited to the domain running the
> > emulator).
> >
> > NOTE: Use of the new resource type is not compatible with use of
> > XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag
> > is set.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > Reviewed-by: Jan Beulich <jbeulich@suse.com>
> > ---
> > Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> > Cc: Wei Liu <wei.liu2@citrix.com>
> > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: Stefano Stabellini <sstabellini@kernel.org>
> > Cc: Tim Deegan <tim@xen.org>
> > Cc: Julien Grall <julien.grall@arm.com>
> >
> > v14:
> > - Addressed more comments from Jan.
> >
> > v13:
> > - Introduce an arch_acquire_resource() as suggested by Julien (and have
> > the ARM variant simply return -EOPNOTSUPP).
> > - Check for ioreq server id truncation as requested by Jan.
> > - Not added Jan's R-b due to substantive change from v12.
> >
> > v12:
> > - Addressed more comments from Jan.
> > - Dropped George's A-b and Wei's R-b because of material change.
> >
> > v11:
> > - Addressed more comments from Jan.
> >
> > v10:
> > - Addressed comments from Jan.
> >
> > v8:
> > - Re-base on new boilerplate.
> > - Adjust function signature of hvm_get_ioreq_server_frame(), and test
> > whether the bufioreq page is present.
> >
> > v5:
> > - Use get_ioreq_server() function rather than indexing array directly.
> > - Add more explanation into comments to state that mapping guest
> > frames and allocation of pages for ioreq servers are not
> > simultaneously permitted.
> > - Add a comment into asm/ioreq.h stating the meaning of the index
> > value passed to hvm_get_ioreq_server_frame().
> > ---
> > xen/arch/x86/hvm/ioreq.c | 156 ++++++++++++++++++++++++++++++++++++++++
> > xen/arch/x86/mm.c | 41 +++++++++++
> > xen/common/memory.c | 3 +-
> > xen/include/asm-arm/mm.h | 7 ++
> > xen/include/asm-x86/hvm/ioreq.h | 2 +
> > xen/include/asm-x86/mm.h | 5 ++
> > xen/include/public/hvm/dm_op.h | 4 ++
> > xen/include/public/memory.h | 9 +++
> > 8 files changed, 226 insertions(+), 1 deletion(-)
> >
> > diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> > index 39de659ddf..d991ac9cdc 100644
> > --- a/xen/arch/x86/hvm/ioreq.c
> > +++ b/xen/arch/x86/hvm/ioreq.c
> > @@ -259,6 +259,19 @@ static int hvm_map_ioreq_gfn(struct
> > hvm_ioreq_server *s, bool buf)
> > struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > int rc;
> >
> > + if ( iorp->page )
> > + {
> > + /*
> > + * If a page has already been allocated (which will happen on
> > + * demand if hvm_get_ioreq_server_frame() is called), then
> > + * mapping a guest frame is not permitted.
> > + */
> > + if ( gfn_eq(iorp->gfn, INVALID_GFN) )
> > + return -EPERM;
> > +
> > + return 0;
> > + }
> > +
> > if ( d->is_dying )
> > return -EINVAL;
> >
> > @@ -281,6 +294,70 @@ static int hvm_map_ioreq_gfn(struct
> > hvm_ioreq_server *s, bool buf)
> > return rc;
> > }
> >
> > +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > +{
> > + struct domain *currd = current->domain;
> > + struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > +
> > + if ( iorp->page )
> > + {
> > + /*
> > + * If a guest frame has already been mapped (which may happen
> > + * on demand if hvm_get_ioreq_server_info() is called), then
> > + * allocating a page is not permitted.
> > + */
> > + if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> > + return -EPERM;
> > +
> > + return 0;
> > + }
> > +
> > + /*
> > + * Allocated IOREQ server pages are assigned to the emulating
> > + * domain, not the target domain. This is because the emulator is
> > + * likely to be destroyed after the target domain has been torn
> > + * down, and we must use MEMF_no_refcount otherwise page allocation
> > + * could fail if the emulating domain has already reached its
> > + * maximum allocation.
> > + */
> > + iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
>
> This is no longer going to work as it is predicated on my original modification
> to HYPERVISOR_mmu_update (which allowed a PV domain to map a foreign
> MFN from a domain over which it had privilege as if the MFN were local).
> Because that mechanism was decided against, this code needs to change to
> use the target domain of the ioreq server rather than the calling domain. I
> will verify this modification and submit v15 of the series.
>
> Jan, are you ok for me to keep your R-b?
>
> Paul
>
> > + if ( !iorp->page )
> > + return -ENOMEM;
> > +
> > + if ( !get_page_type(iorp->page, PGT_writable_page) )
> > + {
> > + ASSERT_UNREACHABLE();
> > + put_page(iorp->page);
> > + iorp->page = NULL;
> > + return -ENOMEM;
> > + }
> > +
> > + iorp->va = __map_domain_page_global(iorp->page);
> > + if ( !iorp->va )
> > + {
> > + put_page_and_type(iorp->page);
> > + iorp->page = NULL;
> > + return -ENOMEM;
> > + }
> > +
> > + clear_page(iorp->va);
> > + return 0;
> > +}
> > +
> > +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > +{
> > + struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > +
> > + if ( !iorp->page )
> > + return;
> > +
> > + unmap_domain_page_global(iorp->va);
> > + iorp->va = NULL;
> > +
> > + put_page_and_type(iorp->page);
> > + iorp->page = NULL;
> > +}
> > +
> > bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
> > {
> > const struct hvm_ioreq_server *s;
> > @@ -484,6 +561,27 @@ static void hvm_ioreq_server_unmap_pages(struct
> > hvm_ioreq_server *s)
> > hvm_unmap_ioreq_gfn(s, false);
> > }
> >
> > +static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
> > +{
> > + int rc;
> > +
> > + rc = hvm_alloc_ioreq_mfn(s, false);
> > +
> > + if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
> > + rc = hvm_alloc_ioreq_mfn(s, true);
> > +
> > + if ( rc )
> > + hvm_free_ioreq_mfn(s, false);
> > +
> > + return rc;
> > +}
> > +
> > +static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
> > +{
> > + hvm_free_ioreq_mfn(s, true);
> > + hvm_free_ioreq_mfn(s, false);
> > +}
> > +
> > static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
> > {
> > unsigned int i;
> > @@ -612,7 +710,18 @@ static int hvm_ioreq_server_init(struct
> > hvm_ioreq_server *s,
> >
> > fail_add:
> > hvm_ioreq_server_remove_all_vcpus(s);
> > +
> > + /*
> > + * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
> > + * hvm_ioreq_server_free_pages() in that order.
> > + * This is because the former will do nothing if the pages
> > + * are not mapped, leaving the page to be freed by the latter.
> > + * However if the pages are mapped then the former will set
> > + * the page_info pointer to NULL, meaning the latter will do
> > + * nothing.
> > + */
> > hvm_ioreq_server_unmap_pages(s);
> > + hvm_ioreq_server_free_pages(s);
> >
> > return rc;
> > }
> > @@ -622,6 +731,7 @@ static void hvm_ioreq_server_deinit(struct
> > hvm_ioreq_server *s)
> > ASSERT(!s->enabled);
> > hvm_ioreq_server_remove_all_vcpus(s);
> > hvm_ioreq_server_unmap_pages(s);
> > + hvm_ioreq_server_free_pages(s);
> > hvm_ioreq_server_free_rangesets(s);
> > }
> >
> > @@ -777,6 +887,52 @@ int hvm_get_ioreq_server_info(struct domain *d,
> > ioservid_t id,
> > return rc;
> > }
> >
> > +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
> > + unsigned long idx, mfn_t *mfn)
> > +{
> > + struct hvm_ioreq_server *s;
> > + int rc;
> > +
> > + spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> > +
> > + if ( id == DEFAULT_IOSERVID )
> > + return -EOPNOTSUPP;
> > +
> > + s = get_ioreq_server(d, id);
> > +
> > + ASSERT(!IS_DEFAULT(s));
> > +
> > + rc = hvm_ioreq_server_alloc_pages(s);
> > + if ( rc )
> > + goto out;
> > +
> > + switch ( idx )
> > + {
> > + case XENMEM_resource_ioreq_server_frame_bufioreq:
> > + rc = -ENOENT;
> > + if ( !HANDLE_BUFIOREQ(s) )
> > + goto out;
> > +
> > + *mfn = _mfn(page_to_mfn(s->bufioreq.page));
> > + rc = 0;
> > + break;
> > +
> > + case XENMEM_resource_ioreq_server_frame_ioreq(0):
> > + *mfn = _mfn(page_to_mfn(s->ioreq.page));
> > + rc = 0;
> > + break;
> > +
> > + default:
> > + rc = -EINVAL;
> > + break;
> > + }
> > +
> > + out:
> > + spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> > +
> > + return rc;
> > +}
> > +
> > int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
> > uint32_t type, uint64_t start,
> > uint64_t end)
> > diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> > index 6ec6e68afe..2656eb181a 100644
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -122,6 +122,7 @@
> > #include <asm/fixmap.h>
> > #include <asm/io_apic.h>
> > #include <asm/pci.h>
> > +#include <asm/hvm/ioreq.h>
> >
> > #include <asm/hvm/grant_table.h>
> > #include <asm/pv/grant_table.h>
> > @@ -4170,6 +4171,46 @@ int xenmem_add_to_physmap_one(
> > return rc;
> > }
> >
> > +int arch_acquire_resource(struct domain *d, unsigned int type,
> > + unsigned int id, unsigned long frame,
> > + unsigned int nr_frames, xen_pfn_t mfn_list[])
> > +{
> > + int rc;
> > +
> > + switch ( type )
> > + {
> > + case XENMEM_resource_ioreq_server:
> > + {
> > + ioservid_t ioservid = id;
> > + unsigned int i;
> > +
> > + rc = -EINVAL;
> > + if ( id != (unsigned int)ioservid )
> > + break;
> > +
> > + rc = 0;
> > + for ( i = 0; i < nr_frames; i++ )
> > + {
> > + mfn_t mfn;
> > +
> > + rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
> > + if ( rc )
> > + break;
> > +
> > + mfn_list[i] = mfn_x(mfn);
> > + }
> > +
> > + break;
> > + }
> > +
> > + default:
> > + rc = -EOPNOTSUPP;
> > + break;
> > + }
> > +
> > + return rc;
> > +}
> > +
> > long arch_memory_op(unsigned long cmd,
> > XEN_GUEST_HANDLE_PARAM(void) arg)
> > {
> > int rc;
> > diff --git a/xen/common/memory.c b/xen/common/memory.c
> > index 6c385a2328..0167d9788b 100644
> > --- a/xen/common/memory.c
> > +++ b/xen/common/memory.c
> > @@ -1016,7 +1016,8 @@ static int acquire_resource(
> > switch ( xmar.type )
> > {
> > default:
> > - rc = -EOPNOTSUPP;
> > + rc = arch_acquire_resource(d, xmar.type, xmar.id, xmar.frame,
> > + xmar.nr_frames, mfn_list);
> > break;
> > }
> >
> > diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> > index ad2f2a43dc..bd146dee3c 100644
> > --- a/xen/include/asm-arm/mm.h
> > +++ b/xen/include/asm-arm/mm.h
> > @@ -381,6 +381,13 @@ static inline void put_page_and_type(struct
> > page_info *page)
> >
> > void clear_and_clean_page(struct page_info *page);
> >
> > +static inline int arch_acquire_resource(
> > + struct domain *d, unsigned int type, unsigned int id,
> > + unsigned long frame, unsigned int nr_frames, xen_pfn_t mfn_list[])
> > +{
> > + return -EOPNOTSUPP;
> > +}
> > +
> > #endif /* __ARCH_ARM_MM__ */
> > /*
> > * Local variables:
> > diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
> > index 1829fcf43e..9e37c97a37 100644
> > --- a/xen/include/asm-x86/hvm/ioreq.h
> > +++ b/xen/include/asm-x86/hvm/ioreq.h
> > @@ -31,6 +31,8 @@ int hvm_get_ioreq_server_info(struct domain *d,
> > ioservid_t id,
> > unsigned long *ioreq_gfn,
> > unsigned long *bufioreq_gfn,
> > evtchn_port_t *bufioreq_port);
> > +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
> > + unsigned long idx, mfn_t *mfn);
> > int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
> > uint32_t type, uint64_t start,
> > uint64_t end);
> > diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
> > index 83626085e0..10e5b6cd14 100644
> > --- a/xen/include/asm-x86/mm.h
> > +++ b/xen/include/asm-x86/mm.h
> > @@ -629,4 +629,9 @@ static inline bool arch_mfn_in_directmap(unsigned
> > long mfn)
> > return mfn <= (virt_to_mfn(eva - 1) + 1);
> > }
> >
> > +int arch_acquire_resource(struct domain *d, unsigned int type,
> > + unsigned int id, unsigned long frame,
> > + unsigned int nr_frames,
> > + xen_pfn_t mfn_list[]);
> > +
> > #endif /* __ASM_X86_MM_H__ */
> > diff --git a/xen/include/public/hvm/dm_op.h
> > b/xen/include/public/hvm/dm_op.h
> > index 13b3737c2f..add68ea192 100644
> > --- a/xen/include/public/hvm/dm_op.h
> > +++ b/xen/include/public/hvm/dm_op.h
> > @@ -90,6 +90,10 @@ struct xen_dm_op_create_ioreq_server {
> > * the frame numbers passed back in gfns <ioreq_gfn> and <bufioreq_gfn>
> > * respectively. (If the IOREQ Server is not handling buffered emulation
> > * only <ioreq_gfn> will be valid).
> > + *
> > + * NOTE: To access the synchronous ioreq structures and buffered ioreq
> > + * ring, it is preferable to use the XENMEM_acquire_resource memory
> > + * op specifying resource type XENMEM_resource_ioreq_server.
> > */
> > #define XEN_DMOP_get_ioreq_server_info 2
> >
> > diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> > index 83e60b6603..838f248a59 100644
> > --- a/xen/include/public/memory.h
> > +++ b/xen/include/public/memory.h
> > @@ -609,9 +609,14 @@ struct xen_mem_acquire_resource {
> > domid_t domid;
> > /* IN - the type of resource */
> > uint16_t type;
> > +
> > +#define XENMEM_resource_ioreq_server 0
> > +
> > /*
> > * IN - a type-specific resource identifier, which must be zero
> > * unless stated otherwise.
> > + *
> > + * type == XENMEM_resource_ioreq_server -> id == ioreq server id
> > */
> > uint32_t id;
> > /* IN/OUT - As an IN parameter number of frames of the resource
> > @@ -625,6 +630,10 @@ struct xen_mem_acquire_resource {
> > * is ignored if nr_frames is 0.
> > */
> > uint64_aligned_t frame;
> > +
> > +#define XENMEM_resource_ioreq_server_frame_bufioreq 0
> > +#define XENMEM_resource_ioreq_server_frame_ioreq(n) (1 + (n))
> > +
> > /* IN/OUT - If the tools domain is PV then, upon return, frame_list
> > * will be populated with the MFNs of the resource.
> > * If the tools domain is HVM then it is expected that, on
> > --
> > 2.11.0