From: Haozhong Zhang <haozhong.zhang@intel.com>
To: xen-devel@lists.xen.org
Cc: Haozhong Zhang <haozhong.zhang@intel.com>,
Stefano Stabellini <sstabellini@kernel.org>,
Wei Liu <wei.liu2@citrix.com>,
George Dunlap <George.Dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Ian Jackson <ian.jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
Jan Beulich <jbeulich@suse.com>,
Chao Peng <chao.p.peng@linux.intel.com>,
Dan Williams <dan.j.williams@intel.com>,
Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [RFC XEN PATCH v4 29/41] xen: add hypercall XENMEM_populate_pmem_map
Date: Thu, 7 Dec 2017 18:10:18 +0800
Message-ID: <20171207101030.22364-30-haozhong.zhang@intel.com>
In-Reply-To: <20171207101030.22364-1-haozhong.zhang@intel.com>
This hypercall will be used by device models to map host PMEM pages into
the guest physical address space.
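As a minimal usage sketch (not part of this patch), a device model could
drive the new hypercall through the libxc wrapper added below. The domain
id, start MFN, start GFN and page count here are made up purely for
illustration:

    #include <stdio.h>
    #include <xenctrl.h>

    /* Map 256 host PMEM pages starting at MFN 0x480000 to guest frame
     * 0x100000 of the given domain (all values hypothetical). */
    static int map_pmem_example(uint32_t domid)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        int rc;

        if ( !xch )
            return -1;

        rc = xc_domain_populate_pmem_map(xch, domid,
                                         0x480000 /* host start MFN */,
                                         0x100000 /* guest start GFN */,
                                         256      /* nr_mfns */);
        if ( rc )
            fprintf(stderr, "XENMEM_populate_pmem_map failed: %d\n", rc);

        xc_interface_close(xch);
        return rc;
    }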
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
---
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
---
 tools/flask/policy/modules/xen.if   |  3 ++-
 tools/libxc/include/xenctrl.h       | 17 ++++++++++++++
 tools/libxc/xc_domain.c             | 15 +++++++++++++
 xen/common/compat/memory.c          |  1 +
 xen/common/memory.c                 | 44 +++++++++++++++++++++++++++++++++++++
 xen/include/public/memory.h         | 14 +++++++++++-
 xen/include/xsm/dummy.h             | 11 ++++++++++
 xen/include/xsm/xsm.h               | 12 ++++++++++
 xen/xsm/dummy.c                     |  4 ++++
 xen/xsm/flask/hooks.c               | 13 +++++++++++
 xen/xsm/flask/policy/access_vectors |  2 ++
 11 files changed, 134 insertions(+), 2 deletions(-)
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 55437496f6..8c2d6776f4 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -55,7 +55,8 @@ define(`create_domain_common', `
psr_cmt_op psr_cat_op soft_reset set_gnttab_limits };
allow $1 $2:security check_context;
allow $1 $2:shadow enable;
- allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
+ allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp
+ populate_pmem_map };
allow $1 $2:grant setup;
allow $1 $2:hvm { cacheattr getparam hvmctl sethvmc
setparam nested altp2mhvm altp2mhvm_op dm };
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 5194d3ff5e..4d66cbed0b 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2678,6 +2678,23 @@ int xc_nvdimm_pmem_setup_data(xc_interface *xch,
unsigned long smfn, unsigned long emfn,
unsigned long mgmt_smfn, unsigned long mgmt_emfn);
+/*
+ * Map specified host PMEM pages to the specified guest address.
+ *
+ * Parameters:
+ * xch: xc interface handle
+ * domid: the target domain id
+ * mfn: the start MFN of the PMEM pages
+ * gfn: the start GFN of the target guest physical pages
+ * nr_mfns: the number of PMEM pages to be mapped
+ *
+ * Return:
+ * On success, return 0. Otherwise, return a non-zero error code.
+ */
+int xc_domain_populate_pmem_map(xc_interface *xch, uint32_t domid,
+ unsigned long mfn, unsigned long gfn,
+ unsigned long nr_mfns);
+
/* Compat shims */
#include "xenctrl_compat.h"
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 3ccd27f101..a62470e6d8 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -2435,6 +2435,21 @@ int xc_domain_soft_reset(xc_interface *xch,
domctl.domain = domid;
return do_domctl(xch, &domctl);
}
+
+int xc_domain_populate_pmem_map(xc_interface *xch, uint32_t domid,
+ unsigned long mfn, unsigned long gfn,
+ unsigned long nr_mfns)
+{
+ struct xen_pmem_map args = {
+ .domid = domid,
+ .mfn = mfn,
+ .gfn = gfn,
+ .nr_mfns = nr_mfns,
+ };
+
+ return do_memory_op(xch, XENMEM_populate_pmem_map, &args, sizeof(args));
+}
+
/*
* Local variables:
* mode: C
diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index 35bb259808..51bec835b9 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -525,6 +525,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
case XENMEM_add_to_physmap:
case XENMEM_remove_from_physmap:
case XENMEM_access_op:
+ case XENMEM_populate_pmem_map:
break;
case XENMEM_get_vnumainfo:
diff --git a/xen/common/memory.c b/xen/common/memory.c
index a6ba33fdcb..2f870ad2b6 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -23,6 +23,7 @@
#include <xen/numa.h>
#include <xen/mem_access.h>
#include <xen/trace.h>
+#include <xen/pmem.h>
#include <asm/current.h>
#include <asm/hardirq.h>
#include <asm/p2m.h>
@@ -1408,6 +1409,49 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
}
#endif
+#ifdef CONFIG_NVDIMM_PMEM
+ case XENMEM_populate_pmem_map:
+ {
+ struct xen_pmem_map map;
+ struct xen_pmem_map_args args;
+
+ if ( copy_from_guest(&map, arg, 1) )
+ return -EFAULT;
+
+ if ( map.domid == DOMID_SELF )
+ return -EINVAL;
+
+ d = rcu_lock_domain_by_any_id(map.domid);
+ if ( !d )
+ return -EINVAL;
+
+ rc = xsm_populate_pmem_map(XSM_TARGET, curr_d, d);
+ if ( rc )
+ {
+ rcu_unlock_domain(d);
+ return rc;
+ }
+
+ args.domain = d;
+ args.mfn = map.mfn;
+ args.gfn = map.gfn;
+ args.nr_mfns = map.nr_mfns;
+ args.nr_done = start_extent;
+ args.preempted = 0;
+
+ rc = pmem_populate(&args);
+
+ rcu_unlock_domain(d);
+
+ if ( rc == -ERESTART && args.preempted )
+ return hypercall_create_continuation(
+ __HYPERVISOR_memory_op, "lh",
+ op | (args.nr_done << MEMOP_EXTENT_SHIFT), arg);
+
+ break;
+ }
+#endif /* CONFIG_NVDIMM_PMEM */
+
default:
rc = arch_memory_op(cmd, arg);
break;
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 29386df98b..d74436e4b0 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -650,7 +650,19 @@ struct xen_vnuma_topology_info {
typedef struct xen_vnuma_topology_info xen_vnuma_topology_info_t;
DEFINE_XEN_GUEST_HANDLE(xen_vnuma_topology_info_t);
-/* Next available subop number is 28 */
+#define XENMEM_populate_pmem_map 28
+
+struct xen_pmem_map {
+ /* IN */
+ domid_t domid;
+ unsigned long mfn;
+ unsigned long gfn;
+ unsigned int nr_mfns;
+};
+typedef struct xen_pmem_map xen_pmem_map_t;
+DEFINE_XEN_GUEST_HANDLE(xen_pmem_map_t);
+
+/* Next available subop number is 29 */
#endif /* __XEN_PUBLIC_MEMORY_H__ */
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index b2cd56cdc5..1eb6595cfa 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -724,3 +724,14 @@ static XSM_INLINE int xsm_xen_version (XSM_DEFAULT_ARG uint32_t op)
return xsm_default_action(XSM_PRIV, current->domain, NULL);
}
}
+
+#ifdef CONFIG_NVDIMM_PMEM
+
+static XSM_INLINE int xsm_populate_pmem_map(XSM_DEFAULT_ARG
+ struct domain *d1, struct domain *d2)
+{
+ XSM_ASSERT_ACTION(XSM_TARGET);
+ return xsm_default_action(action, d1, d2);
+}
+
+#endif /* CONFIG_NVDIMM_PMEM */
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 7f7feffc68..e43e79f719 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -180,6 +180,10 @@ struct xsm_operations {
int (*dm_op) (struct domain *d);
#endif
int (*xen_version) (uint32_t cmd);
+
+#ifdef CONFIG_NVDIMM_PMEM
+ int (*populate_pmem_map) (struct domain *d1, struct domain *d2);
+#endif
};
#ifdef CONFIG_XSM
@@ -692,6 +696,14 @@ static inline int xsm_xen_version (xsm_default_t def, uint32_t op)
return xsm_ops->xen_version(op);
}
+#ifdef CONFIG_NVDIMM_PMEM
+static inline int xsm_populate_pmem_map(xsm_default_t def,
+ struct domain *d1, struct domain *d2)
+{
+ return xsm_ops->populate_pmem_map(d1, d2);
+}
+#endif /* CONFIG_NVDIMM_PMEM */
+
#endif /* XSM_NO_WRAPPERS */
#ifdef CONFIG_MULTIBOOT
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 479b103614..4d65eaca61 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -157,4 +157,8 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
set_to_dummy_if_null(ops, dm_op);
#endif
set_to_dummy_if_null(ops, xen_version);
+
+#ifdef CONFIG_NVDIMM_PMEM
+ set_to_dummy_if_null(ops, populate_pmem_map);
+#endif
}
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index f677755512..47cfb81d64 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1722,6 +1722,15 @@ static int flask_xen_version (uint32_t op)
}
}
+#ifdef CONFIG_NVDIMM_PMEM
+
+static int flask_populate_pmem_map(struct domain *d1, struct domain *d2)
+{
+ return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__POPULATE_PMEM_MAP);
+}
+
+#endif /* CONFIG_NVDIMM_PMEM */
+
long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
@@ -1855,6 +1864,10 @@ static struct xsm_operations flask_ops = {
.dm_op = flask_dm_op,
#endif
.xen_version = flask_xen_version,
+
+#ifdef CONFIG_NVDIMM_PMEM
+ .populate_pmem_map = flask_populate_pmem_map,
+#endif /* CONFIG_NVDIMM_PMEM */
};
void __init flask_init(const void *policy_buffer, size_t policy_size)
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 3bfbb892c7..daa6937c22 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -389,6 +389,8 @@ class mmu
# Allow a privileged domain to install a map of a page it does not own. Used
# for stub domain device models with the PV framebuffer.
target_hack
+# XENMEM_populate_pmem_map
+ populate_pmem_map
}
# control of the paging_domctl split by subop
--
2.15.1