* [PATCH 0/4] Add support for multi-page vm_event ring buffer
@ 2018-09-13 15:01 Petre Pircalabu
2018-09-13 15:01 ` [PATCH 1/4] x86_emulator: Add PHONY uninstall target Petre Pircalabu
` (3 more replies)
0 siblings, 4 replies; 19+ messages in thread
From: Petre Pircalabu @ 2018-09-13 15:01 UTC (permalink / raw)
To: xen-devel
Cc: Petre Pircalabu, Stefano Stabellini, Wei Liu, Razvan Cojocaru,
Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
Tim Deegan, Julien Grall, Tamas K Lengyel, Jan Beulich
This series enables the vm_event ring to use buffers larger than 4K.
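As an illustration, below is a minimal sketch of how a monitor application
could request a larger ring and fall back to the existing single-page
interface. The helper name is made up for illustration; xc_monitor_enable_ex()
and the EOPNOTSUPP fallback are the ones introduced in patch 4, and error
handling is omitted:

    #include <errno.h>
    #include <xenctrl.h>

    /* Sketch only: try a two-page ring first, fall back to the legacy
     * single-page (4K) ring if resource mapping is not available. */
    static void *enable_monitor_ring(xc_interface *xch, uint32_t domid,
                                     unsigned int *page_count, uint32_t *port)
    {
        void *ring;

        *page_count = 2;
        ring = xc_monitor_enable_ex(xch, domid, *page_count, port);
        if ( !ring && errno == EOPNOTSUPP )
        {
            *page_count = 1;
            ring = xc_monitor_enable(xch, domid, port);
        }

        return ring;
    }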
Petre Pircalabu (4):
x86_emulator: Add PHONY uninstall target
tools/libxc: Define VM_EVENT type
x86: Add map_domain_pages_global
vm_event: Add support for multi-page ring buffer
tools/libxc/include/xenctrl.h | 2 +
tools/libxc/xc_monitor.c | 9 +-
tools/libxc/xc_private.h | 9 +-
tools/libxc/xc_vm_event.c | 139 ++++++++++++++-----
tools/tests/x86_emulator/Makefile | 3 +
tools/tests/xen-access/xen-access.c | 33 +++--
xen/arch/x86/domain_page.c | 21 +++
xen/arch/x86/mm.c | 14 ++
xen/common/vm_event.c | 258 +++++++++++++++++++++++++++---------
xen/include/public/domctl.h | 22 ++-
xen/include/public/memory.h | 1 +
xen/include/xen/domain_page.h | 9 ++
xen/include/xen/sched.h | 5 +-
xen/include/xen/vm_event.h | 4 +
14 files changed, 414 insertions(+), 115 deletions(-)
--
2.7.4
* [PATCH 1/4] x86_emulator: Add PHONY uninstall target
2018-09-13 15:01 [PATCH 0/4] Add support for multi-page vm_event ring buffer Petre Pircalabu
@ 2018-09-13 15:01 ` Petre Pircalabu
2018-09-14 9:04 ` Wei Liu
2018-09-14 9:09 ` Jan Beulich
2018-09-13 15:01 ` [PATCH 2/4] tools/libxc: Define VM_EVENT type Petre Pircalabu
` (2 subsequent siblings)
3 siblings, 2 replies; 19+ messages in thread
From: Petre Pircalabu @ 2018-09-13 15:01 UTC (permalink / raw)
To: xen-devel
Cc: Petre Pircalabu, Andrew Cooper, Wei Liu, Ian Jackson, Jan Beulich
The missing uninstall target breaks the top 'tools' uninstall target.
Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
---
tools/tests/x86_emulator/Makefile | 3 +++
1 file changed, 3 insertions(+)
diff --git a/tools/tests/x86_emulator/Makefile b/tools/tests/x86_emulator/Makefile
index e8a3e90..8696a65 100644
--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -126,6 +126,9 @@ distclean: clean
.PHONY: install uninstall
install uninstall:
+.PHONY: uninstall
+uninstall:
+
x86_emulate:
[ -L $@ ] || ln -sf $(XEN_ROOT)/xen/arch/x86/$@
--
2.7.4
* [PATCH 2/4] tools/libxc: Define VM_EVENT type
2018-09-13 15:01 [PATCH 0/4] Add support for multi-page vm_event ring buffer Petre Pircalabu
2018-09-13 15:01 ` [PATCH 1/4] x86_emulator: Add PHONY uninstall target Petre Pircalabu
@ 2018-09-13 15:01 ` Petre Pircalabu
2018-09-14 9:14 ` Jan Beulich
2018-09-13 15:01 ` [PATCH 3/4] x86: Add map_domain_pages_global Petre Pircalabu
2018-09-13 15:02 ` [PATCH 4/4] vm_event: Add support for multi-page ring buffer Petre Pircalabu
3 siblings, 1 reply; 19+ messages in thread
From: Petre Pircalabu @ 2018-09-13 15:01 UTC (permalink / raw)
To: xen-devel
Cc: Petre Pircalabu, Stefano Stabellini, Wei Liu,
Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
Tim Deegan, Julien Grall, Jan Beulich
Define the type for each of the supported vm_event rings (paging,
monitor and sharing) and replace the ring param field with this type.
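To illustrate the caller-visible change (cf. the xc_monitor.c hunk below),
internal libxc callers now pass the vm_event type instead of the HVM
ring-pfn parameter; xch, domain_id and port are as in xc_monitor_enable():

    /* before this patch */
    ring_page = xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN, port);
    /* after this patch */
    ring_page = xc_vm_event_enable(xch, domain_id, XEN_VM_EVENT_TYPE_MONITOR, port);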
Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
---
tools/libxc/xc_monitor.c | 2 +-
tools/libxc/xc_private.h | 6 +--
tools/libxc/xc_vm_event.c | 90 +++++++++++++++++++++++++++++----------------
xen/include/public/domctl.h | 21 +++++++----
4 files changed, 77 insertions(+), 42 deletions(-)
diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
index 4ac823e..15e6a0e 100644
--- a/tools/libxc/xc_monitor.c
+++ b/tools/libxc/xc_monitor.c
@@ -24,7 +24,7 @@
void *xc_monitor_enable(xc_interface *xch, uint32_t domain_id, uint32_t *port)
{
- return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
+ return xc_vm_event_enable(xch, domain_id, XEN_VM_EVENT_TYPE_MONITOR,
port);
}
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 705eaa9..be22986 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -430,10 +430,10 @@ int xc_ffs64(uint64_t x);
int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op,
unsigned int mode, uint32_t *port);
/*
- * Enables vm_event and returns the mapped ring page indicated by param.
- * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN
+ * Enables vm_event and returns the mapped ring page indicated by type.
+ * type can be XEN_VM_EVENT_TYPE_(PAGING/MONITOR/SHARING)
*/
-void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param,
+void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type,
uint32_t *port);
int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...);
diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
index 8674607..dd34cec 100644
--- a/tools/libxc/xc_vm_event.c
+++ b/tools/libxc/xc_vm_event.c
@@ -39,16 +39,71 @@ int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op,
return rc;
}
-void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param,
+static int xc_vm_event_ring_pfn_param(int type, int *param)
+{
+ if ( !param )
+ return -EINVAL;
+
+ switch ( type )
+ {
+ case XEN_VM_EVENT_TYPE_PAGING:
+ *param = HVM_PARAM_PAGING_RING_PFN;
+ break;
+
+ case XEN_VM_EVENT_TYPE_MONITOR:
+ *param = HVM_PARAM_MONITOR_RING_PFN;
+ break;
+
+ case XEN_VM_EVENT_TYPE_SHARING:
+ *param = HVM_PARAM_SHARING_RING_PFN;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int xc_vm_event_domctl(int type, unsigned int *param)
+{
+ if ( !param )
+ return -EINVAL;
+
+ switch ( type )
+ {
+ case XEN_VM_EVENT_TYPE_PAGING:
+ *param = XEN_DOMCTL_VM_EVENT_OP_PAGING;
+ break;
+
+ case XEN_VM_EVENT_TYPE_MONITOR:
+ *param = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
+ break;
+
+ case XEN_VM_EVENT_TYPE_SHARING:
+ *param = XEN_DOMCTL_VM_EVENT_OP_SHARING;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type,
uint32_t *port)
{
void *ring_page = NULL;
uint64_t pfn;
xen_pfn_t ring_pfn, mmap_pfn;
- unsigned int op, mode;
+ unsigned int mode;
int rc1, rc2, saved_errno;
+ int param;
- if ( !port )
+ if ( !port ||
+ xc_vm_event_ring_pfn_param(type, &param) != 0 ||
+ xc_vm_event_domctl(type, &mode) )
{
errno = EINVAL;
return NULL;
@@ -94,34 +149,7 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param,
goto out;
}
- switch ( param )
- {
- case HVM_PARAM_PAGING_RING_PFN:
- op = XEN_VM_EVENT_ENABLE;
- mode = XEN_DOMCTL_VM_EVENT_OP_PAGING;
- break;
-
- case HVM_PARAM_MONITOR_RING_PFN:
- op = XEN_VM_EVENT_ENABLE;
- mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
- break;
-
- case HVM_PARAM_SHARING_RING_PFN:
- op = XEN_VM_EVENT_ENABLE;
- mode = XEN_DOMCTL_VM_EVENT_OP_SHARING;
- break;
-
- /*
- * This is for the outside chance that the HVM_PARAM is valid but is invalid
- * as far as vm_event goes.
- */
- default:
- errno = EINVAL;
- rc1 = -1;
- goto out;
- }
-
- rc1 = xc_vm_event_control(xch, domain_id, op, mode, port);
+ rc1 = xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_ENABLE, mode, port);
if ( rc1 != 0 )
{
PERROR("Failed to enable vm_event\n");
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 82b6967..ac4ced2 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -757,10 +757,17 @@ struct xen_domctl_gdbsx_domstatus {
/*
* There are currently three rings available for VM events:
- * sharing, monitor and paging. This hypercall allows one to
- * control these rings (enable/disable), as well as to signal
- * to the hypervisor to pull responses (resume) from the given
- * ring.
+ * sharing, monitor and paging
+ */
+
+#define XEN_VM_EVENT_TYPE_PAGING 1
+#define XEN_VM_EVENT_TYPE_MONITOR 2
+#define XEN_VM_EVENT_TYPE_SHARING 3
+
+/*
+ * This hypercall allows one to control the vm_event rings (enable/disable),
+ * as well as to signal to the hypervisor to pull responses (resume) from
+ * the given ring.
*/
#define XEN_VM_EVENT_ENABLE 0
#define XEN_VM_EVENT_DISABLE 1
@@ -780,7 +787,7 @@ struct xen_domctl_gdbsx_domstatus {
* EXDEV - guest has PoD enabled
* EBUSY - guest has or had paging enabled, ring buffer still active
*/
-#define XEN_DOMCTL_VM_EVENT_OP_PAGING 1
+#define XEN_DOMCTL_VM_EVENT_OP_PAGING XEN_VM_EVENT_TYPE_PAGING
/*
* Monitor helper.
@@ -804,7 +811,7 @@ struct xen_domctl_gdbsx_domstatus {
* EBUSY - guest has or had access enabled, ring buffer still active
*
*/
-#define XEN_DOMCTL_VM_EVENT_OP_MONITOR 2
+#define XEN_DOMCTL_VM_EVENT_OP_MONITOR XEN_VM_EVENT_TYPE_MONITOR
/*
* Sharing ENOMEM helper.
@@ -819,7 +826,7 @@ struct xen_domctl_gdbsx_domstatus {
* Note that shring can be turned on (as per the domctl below)
* *without* this ring being setup.
*/
-#define XEN_DOMCTL_VM_EVENT_OP_SHARING 3
+#define XEN_DOMCTL_VM_EVENT_OP_SHARING XEN_VM_EVENT_TYPE_SHARING
/* Use for teardown/setup of helper<->hypervisor interface for paging,
* access and sharing.*/
--
2.7.4
* [PATCH 3/4] x86: Add map_domain_pages_global
2018-09-13 15:01 [PATCH 0/4] Add support for multi-page vm_event ring buffer Petre Pircalabu
2018-09-13 15:01 ` [PATCH 1/4] x86_emulator: Add PHONY uninstall target Petre Pircalabu
2018-09-13 15:01 ` [PATCH 2/4] tools/libxc: Define VM_EVENT type Petre Pircalabu
@ 2018-09-13 15:01 ` Petre Pircalabu
2018-09-18 10:51 ` Jan Beulich
2018-09-24 13:13 ` Julien Grall
2018-09-13 15:02 ` [PATCH 4/4] vm_event: Add support for multi-page ring buffer Petre Pircalabu
3 siblings, 2 replies; 19+ messages in thread
From: Petre Pircalabu @ 2018-09-13 15:01 UTC (permalink / raw)
To: xen-devel
Cc: Petre Pircalabu, Stefano Stabellini, Wei Liu,
Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
Tim Deegan, Julien Grall, Jan Beulich
Create a single mapping for multiple domain pages.
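A minimal usage sketch of the new helpers (hypervisor context; here 'pg'
is assumed to point at 'nr' contiguous struct page_info entries, e.g.
obtained from alloc_domheap_pages()):

    void *va = __map_domain_pages_global(pg, nr);

    if ( va )
    {
        memset(va, 0, PAGE_SIZE * nr);   /* e.g. clear a multi-page buffer */
        /* ... use the single contiguous virtual mapping ... */
        unmap_domain_page_global(va);
    }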
Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
---
tools/libxc/xc_vm_event.c | 2 +-
xen/arch/x86/domain_page.c | 22 ++++++++++++++++++++++
xen/include/xen/domain_page.h | 9 +++++++++
3 files changed, 32 insertions(+), 1 deletion(-)
diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
index dd34cec..de37ca5 100644
--- a/tools/libxc/xc_vm_event.c
+++ b/tools/libxc/xc_vm_event.c
@@ -74,7 +74,7 @@ static int xc_vm_event_domctl(int type, unsigned int *param)
{
case XEN_VM_EVENT_TYPE_PAGING:
*param = XEN_DOMCTL_VM_EVENT_OP_PAGING;
- break;
+ break;
case XEN_VM_EVENT_TYPE_MONITOR:
*param = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index 4a07cfb..0d23e52 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -317,6 +317,28 @@ void *map_domain_page_global(mfn_t mfn)
return vmap(&mfn, 1);
}
+void *map_domain_pages_global(const mfn_t *mfn, unsigned int nr)
+{
+ ASSERT(!in_irq() &&
+ ((system_state >= SYS_STATE_boot &&
+ system_state < SYS_STATE_active) ||
+ local_irq_is_enabled()));
+
+ return vmap(mfn, nr);
+}
+
+void *__map_domain_pages_global(const struct page_info *pg, unsigned int nr)
+{
+ mfn_t mfn[nr];
+ int i;
+ struct page_info *cur_pg = (struct page_info *)&pg[0];
+
+ for (i = 0; i < nr; i++)
+ mfn[i] = page_to_mfn(cur_pg++);
+
+ return map_domain_pages_global(mfn, nr);
+}
+
void unmap_domain_page_global(const void *ptr)
{
unsigned long va = (unsigned long)ptr;
diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
index 32669a3..76422f9 100644
--- a/xen/include/xen/domain_page.h
+++ b/xen/include/xen/domain_page.h
@@ -42,6 +42,7 @@ mfn_t domain_page_map_to_mfn(const void *va);
* mappings can also be unmapped from any context.
*/
void *map_domain_page_global(mfn_t mfn);
+void *map_domain_pages_global(const mfn_t *mfn, unsigned int nr);
void unmap_domain_page_global(const void *va);
#define __map_domain_page(pg) map_domain_page(page_to_mfn(pg))
@@ -51,6 +52,8 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
return map_domain_page_global(page_to_mfn(pg));
}
+void *__map_domain_pages_global(const struct page_info *pg, unsigned int nr);
+
#else /* !CONFIG_DOMAIN_PAGE */
#define map_domain_page(mfn) __mfn_to_virt(mfn_x(mfn))
@@ -68,6 +71,12 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
return page_to_virt(pg);
}
+static inline void *__map_domain_pages_global(const struct page_info *pg,
+ unsigned int nr)
+{
+ return __map_domain_page_global(pg);
+}
+
static inline void unmap_domain_page_global(const void *va) {};
#endif /* !CONFIG_DOMAIN_PAGE */
--
2.7.4
* [PATCH 4/4] vm_event: Add support for multi-page ring buffer
2018-09-13 15:01 [PATCH 0/4] Add support for multi-page vm_event ring buffer Petre Pircalabu
` (2 preceding siblings ...)
2018-09-13 15:01 ` [PATCH 3/4] x86: Add map_domain_pages_global Petre Pircalabu
@ 2018-09-13 15:02 ` Petre Pircalabu
2018-09-13 16:42 ` Tamas K Lengyel
` (2 more replies)
3 siblings, 3 replies; 19+ messages in thread
From: Petre Pircalabu @ 2018-09-13 15:02 UTC (permalink / raw)
To: xen-devel
Cc: Petre Pircalabu, Stefano Stabellini, Wei Liu, Razvan Cojocaru,
Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
Tim Deegan, Julien Grall, Tamas K Lengyel, Jan Beulich
In high-throughput introspection scenarios where lots of monitor
vm_events are generated, the ring buffer can fill up before the monitor
application gets a chance to handle all the requests, thus blocking
other vcpus, which will have to wait for a slot to become available.
This patch adds support for extending the ring buffer by allocating a
number of pages from domheap and mapping them to the monitor
application's domain using the foreignmemory_map_resource interface.
Unlike the current implementation, the ring buffer pages are not part of
the introspected DomU, so they will not be reclaimed when the monitor is
disabled.
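For reference, a sketch of the consumer-side ring setup once the buffer
has been mapped (this mirrors the xen-access.c change below; 'ring_buffer'
and 'ring_page_count' are the values obtained via xc_monitor_enable_ex()):

    vm_event_back_ring_t back_ring;

    SHARED_RING_INIT((vm_event_sring_t *)ring_buffer);
    BACK_RING_INIT(&back_ring, (vm_event_sring_t *)ring_buffer,
                   XC_PAGE_SIZE * ring_page_count);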
Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
---
tools/libxc/include/xenctrl.h | 2 +
tools/libxc/xc_monitor.c | 7 +
tools/libxc/xc_private.h | 3 +
tools/libxc/xc_vm_event.c | 49 +++++++
tools/tests/xen-access/xen-access.c | 33 +++--
xen/arch/x86/domain_page.c | 3 +-
xen/arch/x86/mm.c | 14 ++
xen/common/vm_event.c | 258 +++++++++++++++++++++++++++---------
xen/include/public/domctl.h | 1 +
xen/include/public/memory.h | 1 +
xen/include/xen/sched.h | 5 +-
xen/include/xen/vm_event.h | 4 +
12 files changed, 305 insertions(+), 75 deletions(-)
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index bb75bcc..4f91ee9 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2005,6 +2005,8 @@ int xc_get_mem_access(xc_interface *xch, uint32_t domain_id,
* Caller has to unmap this page when done.
*/
void *xc_monitor_enable(xc_interface *xch, uint32_t domain_id, uint32_t *port);
+void *xc_monitor_enable_ex(xc_interface *xch, uint32_t domain_id,
+ int order, uint32_t *port);
int xc_monitor_disable(xc_interface *xch, uint32_t domain_id);
int xc_monitor_resume(xc_interface *xch, uint32_t domain_id);
/*
diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
index 15e6a0e..5188835 100644
--- a/tools/libxc/xc_monitor.c
+++ b/tools/libxc/xc_monitor.c
@@ -28,6 +28,13 @@ void *xc_monitor_enable(xc_interface *xch, uint32_t domain_id, uint32_t *port)
port);
}
+void *xc_monitor_enable_ex(xc_interface *xch, uint32_t domain_id, int order,
+ uint32_t *port)
+{
+ return xc_vm_event_enable_ex(xch, domain_id, XEN_VM_EVENT_TYPE_MONITOR,
+ order, port);
+}
+
int xc_monitor_disable(xc_interface *xch, uint32_t domain_id)
{
return xc_vm_event_control(xch, domain_id,
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index be22986..03d9460 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -436,6 +436,9 @@ int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op,
void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type,
uint32_t *port);
+void *xc_vm_event_enable_ex(xc_interface *xch, uint32_t domain_id, int type,
+ int order, uint32_t *port);
+
int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...);
#endif /* __XC_PRIVATE_H__ */
diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
index de37ca5..216bbe2 100644
--- a/tools/libxc/xc_vm_event.c
+++ b/tools/libxc/xc_vm_event.c
@@ -21,6 +21,7 @@
*/
#include "xc_private.h"
+#include "xenforeignmemory.h"
int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op,
unsigned int mode, uint32_t *port)
@@ -184,6 +185,54 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type,
return ring_page;
}
+void *xc_vm_event_enable_ex(xc_interface *xch, uint32_t domain_id, int type,
+ int order, uint32_t *port)
+{
+ xenforeignmemory_resource_handle *fres = NULL;
+ int saved_errno;
+ void *ring_buffer = NULL;
+
+ if ( !port )
+ {
+ errno = EINVAL;
+ return NULL;
+ }
+
+ /* Pause the domain for ring page setup */
+ if ( xc_domain_pause(xch, domain_id) )
+ {
+ PERROR("Unable to pause domain\n");
+ return NULL;
+ }
+
+ fres = xenforeignmemory_map_resource(xch->fmem, domain_id,
+ XENMEM_resource_vm_event, type, 0,
+ order, &ring_buffer,
+ PROT_READ | PROT_WRITE, 0);
+ if ( !fres )
+ {
+ PERROR("Unable to map vm_event ring pages resource\n");
+ goto out;
+ }
+
+ if ( xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_GET_PORT, type, port) )
+ PERROR("Unable to get vm_event channel port\n");
+
+out:
+ saved_errno = errno;
+ if ( xc_domain_unpause(xch, domain_id) != 0 )
+ {
+ if (fres)
+ saved_errno = errno;
+ PERROR("Unable to unpause domain");
+ }
+
+ free(fres);
+ errno = saved_errno;
+ return ring_buffer;
+}
+
+
/*
* Local variables:
* mode: C
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 6aaee16..f4c4eda 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -68,7 +68,8 @@ typedef struct vm_event {
int port;
vm_event_back_ring_t back_ring;
uint32_t evtchn_port;
- void *ring_page;
+ void *ring_buffer;
+ unsigned int ring_page_count;
} vm_event_t;
typedef struct xenaccess {
@@ -136,8 +137,9 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
return 0;
/* Tear down domain xenaccess in Xen */
- if ( xenaccess->vm_event.ring_page )
- munmap(xenaccess->vm_event.ring_page, XC_PAGE_SIZE);
+ if ( xenaccess->vm_event.ring_buffer )
+ munmap(xenaccess->vm_event.ring_buffer,
+ xenaccess->vm_event.ring_page_count * XC_PAGE_SIZE );
if ( mem_access_enable )
{
@@ -210,12 +212,25 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
/* Set domain id */
xenaccess->vm_event.domain_id = domain_id;
- /* Enable mem_access */
- xenaccess->vm_event.ring_page =
+ /* Ring buffer page count */
+ xenaccess->vm_event.ring_page_count = 2;
+
+ xenaccess->vm_event.ring_buffer = xc_monitor_enable_ex(
+ xenaccess->xc_handle,
+ xenaccess->vm_event.domain_id,
+ xenaccess->vm_event.ring_page_count,
+ &xenaccess->vm_event.evtchn_port);
+
+ if (xenaccess->vm_event.ring_buffer == NULL && errno == EOPNOTSUPP)
+ {
+ xenaccess->vm_event.ring_page_count = 1;
+ xenaccess->vm_event.ring_buffer =
xc_monitor_enable(xenaccess->xc_handle,
xenaccess->vm_event.domain_id,
&xenaccess->vm_event.evtchn_port);
- if ( xenaccess->vm_event.ring_page == NULL )
+ }
+
+ if ( xenaccess->vm_event.ring_buffer == NULL )
{
switch ( errno ) {
case EBUSY:
@@ -254,10 +269,10 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
xenaccess->vm_event.port = rc;
/* Initialise ring */
- SHARED_RING_INIT((vm_event_sring_t *)xenaccess->vm_event.ring_page);
+ SHARED_RING_INIT((vm_event_sring_t *)xenaccess->vm_event.ring_buffer);
BACK_RING_INIT(&xenaccess->vm_event.back_ring,
- (vm_event_sring_t *)xenaccess->vm_event.ring_page,
- XC_PAGE_SIZE);
+ (vm_event_sring_t *)xenaccess->vm_event.ring_buffer,
+ XC_PAGE_SIZE * xenaccess->vm_event.ring_page_count );
/* Get max_gpfn */
rc = xc_domain_maximum_gpfn(xenaccess->xc_handle,
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index 0d23e52..2a9cbf3 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -331,10 +331,9 @@ void *__map_domain_pages_global(const struct page_info *pg, unsigned int nr)
{
mfn_t mfn[nr];
int i;
- struct page_info *cur_pg = (struct page_info *)&pg[0];
for (i = 0; i < nr; i++)
- mfn[i] = page_to_mfn(cur_pg++);
+ mfn[i] = page_to_mfn(pg++);
return map_domain_pages_global(mfn, nr);
}
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index baea2f5..bec09d0 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -103,6 +103,7 @@
#include <xen/efi.h>
#include <xen/grant_table.h>
#include <xen/hypercall.h>
+#include <xen/vm_event.h>
#include <asm/paging.h>
#include <asm/shadow.h>
#include <asm/page.h>
@@ -4415,6 +4416,19 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
}
#endif
+ case XENMEM_resource_vm_event:
+ {
+ rc = vm_event_get_ring_frames(d, id, frame, nr_frames, mfn_list);
+ if ( rc )
+ break;
+ /*
+ * The frames will have been assigned to the domain that created
+ * the ioreq server.
+ */
+ *flags |= XENMEM_rsrc_acq_caller_owned;
+ break;
+ }
+
default:
rc = -EOPNOTSUPP;
break;
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 4793aac..faece3c 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -39,16 +39,66 @@
#define vm_event_ring_lock(_ved) spin_lock(&(_ved)->ring_lock)
#define vm_event_ring_unlock(_ved) spin_unlock(&(_ved)->ring_lock)
+#define XEN_VM_EVENT_ALLOC_FROM_DOMHEAP 0xFFFFFFFF
+
+static int vm_event_alloc_ring(struct domain *d, struct vm_event_domain *ved)
+{
+ struct page_info *page;
+ void *va = NULL;
+ int i, rc = -ENOMEM;
+
+ page = alloc_domheap_pages(d, ved->ring_order, MEMF_no_refcount);
+ if ( !page )
+ return -ENOMEM;
+
+ for ( i = 0; i < (1 << ved->ring_order); i++ )
+ if ( !get_page_type(&page[i], PGT_writable_page) )
+ {
+ rc = -EINVAL;
+ goto fail;
+ }
+
+ va = __map_domain_pages_global(page, (1 << ved->ring_order));
+ if ( !va )
+ goto fail;
+
+ memset(va, 0, PAGE_SIZE << ved->ring_order);
+
+ ved->ring_buffer = va;
+ ved->ring_pg_struct = page;
+ return 0;
+
+fail:
+ i--;
+ for ( ; i >= 0; i-- )
+ put_page_type(&page[i]);
+ free_domheap_pages(page, ved->ring_order);
+
+ return rc;
+ }
+
+static void vm_event_destroy_ring(struct vm_event_domain *ved)
+{
+ if ( ved->ring_buffer )
+ {
+ int i;
+
+ unmap_domain_page_global(ved->ring_buffer);
+ for ( i = 0; i < (1 << ved->ring_order); i++ )
+ put_page_and_type(&(ved->ring_pg_struct[i]));
+ ved->ring_buffer = NULL;
+ }
+}
+
static int vm_event_enable(
struct domain *d,
- struct xen_domctl_vm_event_op *vec,
struct vm_event_domain **ved,
+ unsigned long param,
+ unsigned int nr_frames,
int pause_flag,
- int param,
xen_event_channel_notification_t notification_fn)
{
int rc;
- unsigned long ring_gfn = d->arch.hvm.params[param];
if ( !*ved )
*ved = xzalloc(struct vm_event_domain);
@@ -58,26 +108,39 @@ static int vm_event_enable(
/* Only one helper at a time. If the helper crashed,
* the ring is in an undefined state and so is the guest.
*/
- if ( (*ved)->ring_page )
- return -EBUSY;;
-
- /* The parameter defaults to zero, and it should be
- * set to something */
- if ( ring_gfn == 0 )
- return -ENOSYS;
+ if ( (*ved)->ring_buffer )
+ return -EBUSY;
vm_event_ring_lock_init(*ved);
vm_event_ring_lock(*ved);
rc = vm_event_init_domain(d);
-
if ( rc < 0 )
goto err;
- rc = prepare_ring_for_helper(d, ring_gfn, &(*ved)->ring_pg_struct,
- &(*ved)->ring_page);
- if ( rc < 0 )
- goto err;
+ (*ved)->ring_order = get_order_from_pages(nr_frames);
+
+ if ( param == XEN_VM_EVENT_ALLOC_FROM_DOMHEAP )
+ {
+ rc = vm_event_alloc_ring(current->domain, *ved);
+ if ( rc < 0 )
+ goto err;
+ }
+ else
+ {
+ /* param points to a specific gfn */
+ unsigned long ring_gfn = d->arch.hvm.params[param];
+
+ /* The parameter defaults to zero, and it should be
+ * set to something */
+ if ( ring_gfn == 0 )
+ return -ENOSYS;
+
+ rc = prepare_ring_for_helper(d, ring_gfn, &(*ved)->ring_pg_struct,
+ &(*ved)->ring_buffer);
+ if ( rc < 0 )
+ goto err;
+ }
/* Set the number of currently blocked vCPUs to 0. */
(*ved)->blocked = 0;
@@ -88,12 +151,12 @@ static int vm_event_enable(
if ( rc < 0 )
goto err;
- (*ved)->xen_port = vec->port = rc;
+ (*ved)->xen_port = rc;
/* Prepare ring buffer */
FRONT_RING_INIT(&(*ved)->front_ring,
- (vm_event_sring_t *)(*ved)->ring_page,
- PAGE_SIZE);
+ (vm_event_sring_t *)(*ved)->ring_buffer,
+ PAGE_SIZE * nr_frames);
/* Save the pause flag for this particular ring. */
(*ved)->pause_flag = pause_flag;
@@ -105,8 +168,8 @@ static int vm_event_enable(
return 0;
err:
- destroy_ring_for_helper(&(*ved)->ring_page,
- (*ved)->ring_pg_struct);
+ vm_event_destroy_ring(*ved);
+ vm_event_cleanup_domain(d);
vm_event_ring_unlock(*ved);
xfree(*ved);
*ved = NULL;
@@ -221,9 +284,7 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **ved)
}
}
- destroy_ring_for_helper(&(*ved)->ring_page,
- (*ved)->ring_pg_struct);
-
+ vm_event_destroy_ring(*ved);
vm_event_cleanup_domain(d);
vm_event_ring_unlock(*ved);
@@ -459,7 +520,7 @@ static int vm_event_grab_slot(struct vm_event_domain *ved, int foreign)
{
unsigned int avail_req;
- if ( !ved->ring_page )
+ if ( !ved->ring_buffer )
return -ENOSYS;
vm_event_ring_lock(ved);
@@ -498,7 +559,7 @@ static int vm_event_wait_slot(struct vm_event_domain *ved)
bool_t vm_event_check_ring(struct vm_event_domain *ved)
{
- return (ved && ved->ring_page);
+ return (ved && ved->ring_buffer);
}
/*
@@ -587,6 +648,46 @@ void vm_event_cleanup(struct domain *d)
#endif
}
+#ifdef CONFIG_HAS_MEM_PAGING
+static int vm_event_op_paging_is_supported(struct domain *d)
+{
+ struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+ /* hvm fixme: p2m_is_foreign types need addressing */
+ if ( is_hvm_domain(hardware_domain) )
+ return -EOPNOTSUPP;
+
+ /* Only HAP is supported */
+ if ( !hap_enabled(d) )
+ return -ENODEV;
+
+ /* No paging if iommu is used */
+ if ( unlikely(need_iommu(d)) )
+ return -EMLINK;
+
+ /* Disallow paging in a PoD guest */
+ if ( p2m->pod.entry_count )
+ return -EXDEV;
+
+ return 0;
+}
+#endif /* CONFIG_HAS_MEM_PAGING */
+
+#ifdef CONFIG_HAS_MEM_SHARING
+static int vm_event_op_sharing_is_supported(struct domain *d)
+{
+ /* hvm fixme: p2m_is_foreign types need addressing */
+ if ( is_hvm_domain(hardware_domain) )
+ return -EOPNOTSUPP;
+
+ /* Only HAP is supported */
+ if ( !hap_enabled(d) )
+ return -ENODEV;
+
+ return 0;
+}
+#endif /* CONFIG_HAS_MEM_SHARING */
+
int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
XEN_GUEST_HANDLE_PARAM(void) u_domctl)
{
@@ -629,35 +730,19 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
switch( vec->op )
{
case XEN_VM_EVENT_ENABLE:
- {
- struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
- rc = -EOPNOTSUPP;
- /* hvm fixme: p2m_is_foreign types need addressing */
- if ( is_hvm_domain(hardware_domain) )
- break;
-
- rc = -ENODEV;
- /* Only HAP is supported */
- if ( !hap_enabled(d) )
- break;
-
- /* No paging if iommu is used */
- rc = -EMLINK;
- if ( unlikely(need_iommu(d)) )
- break;
-
- rc = -EXDEV;
- /* Disallow paging in a PoD guest */
- if ( p2m->pod.entry_count )
+ rc = vm_event_op_paging_is_supported(d);
+ if ( rc )
break;
/* domain_pause() not required here, see XSA-99 */
- rc = vm_event_enable(d, vec, &d->vm_event_paging, _VPF_mem_paging,
- HVM_PARAM_PAGING_RING_PFN,
+ rc = vm_event_enable(d, &d->vm_event_paging,
+ HVM_PARAM_PAGING_RING_PFN, 1,
+ _VPF_mem_paging,
mem_paging_notification);
- }
- break;
+ if ( !rc )
+ vec->port = d->vm_event_paging->xen_port;
+
+ break;
case XEN_VM_EVENT_DISABLE:
if ( vm_event_check_ring(d->vm_event_paging) )
@@ -694,9 +779,14 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
rc = arch_monitor_init_domain(d);
if ( rc )
break;
- rc = vm_event_enable(d, vec, &d->vm_event_monitor, _VPF_mem_access,
- HVM_PARAM_MONITOR_RING_PFN,
+
+ rc = vm_event_enable(d, &d->vm_event_monitor,
+ HVM_PARAM_MONITOR_RING_PFN, 1,
+ _VPF_mem_access,
monitor_notification);
+ if ( !rc )
+ vec->port = d->vm_event_monitor->xen_port;
+
break;
case XEN_VM_EVENT_DISABLE:
@@ -716,6 +806,15 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
rc = -ENODEV;
break;
+ case XEN_VM_EVENT_GET_PORT:
+ rc = -ENODEV;
+ if ( vm_event_check_ring(d->vm_event_monitor) )
+ {
+ vec->port = d->vm_event_monitor->xen_port;
+ rc = 0;
+ }
+ break;
+
default:
rc = -ENOSYS;
break;
@@ -731,20 +830,18 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
switch( vec->op )
{
case XEN_VM_EVENT_ENABLE:
- rc = -EOPNOTSUPP;
- /* hvm fixme: p2m_is_foreign types need addressing */
- if ( is_hvm_domain(hardware_domain) )
- break;
-
- rc = -ENODEV;
- /* Only HAP is supported */
- if ( !hap_enabled(d) )
+ rc = vm_event_op_sharing_is_supported(d);
+ if ( rc )
break;
/* domain_pause() not required here, see XSA-99 */
- rc = vm_event_enable(d, vec, &d->vm_event_share, _VPF_mem_sharing,
- HVM_PARAM_SHARING_RING_PFN,
+ rc = vm_event_enable(d, &d->vm_event_share,
+ HVM_PARAM_SHARING_RING_PFN, 1,
+ _VPF_mem_sharing,
mem_sharing_notification);
+ if ( !rc )
+ vec->port = d->vm_event_share->xen_port;
+
break;
case XEN_VM_EVENT_DISABLE:
@@ -778,6 +875,43 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec,
return rc;
}
+int vm_event_get_ring_frames(struct domain *d, unsigned int id,
+ unsigned long frame, unsigned int nr_frames,
+ xen_pfn_t mfn_list[])
+{
+ int rc, i;
+ int pause_flag;
+ struct vm_event_domain **ved;
+ xen_event_channel_notification_t notification_fn;
+
+ switch(id)
+ {
+ case XEN_VM_EVENT_TYPE_MONITOR:
+ ved = &d->vm_event_monitor;
+ pause_flag = _VPF_mem_access;
+ notification_fn = monitor_notification;
+
+ rc = arch_monitor_init_domain(d);
+ if ( rc )
+ return rc;
+ break;
+
+ default:
+ return -ENOSYS;
+ }
+
+ rc = vm_event_enable(d, ved, XEN_VM_EVENT_ALLOC_FROM_DOMHEAP,
+ nr_frames, pause_flag,
+ notification_fn);
+ if ( rc )
+ return rc;
+
+ for ( i = 0; i < nr_frames; i++ )
+ mfn_list[i] = mfn_x(page_to_mfn(&(*ved)->ring_pg_struct[i]));
+
+ return 0;
+}
+
void vm_event_vcpu_pause(struct vcpu *v)
{
ASSERT(v == current);
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index ac4ced2..066d4da 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -772,6 +772,7 @@ struct xen_domctl_gdbsx_domstatus {
#define XEN_VM_EVENT_ENABLE 0
#define XEN_VM_EVENT_DISABLE 1
#define XEN_VM_EVENT_RESUME 2
+#define XEN_VM_EVENT_GET_PORT 3
/*
* Domain memory paging
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 8fc27ce..09400f5 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -612,6 +612,7 @@ struct xen_mem_acquire_resource {
#define XENMEM_resource_ioreq_server 0
#define XENMEM_resource_grant_table 1
+#define XENMEM_resource_vm_event 2
/*
* IN - a type-specific resource identifier, which must be zero
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 0ba80cb..be01f93 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -286,9 +286,10 @@ struct vm_event_domain
/* The ring has 64 entries */
unsigned char foreign_producers;
unsigned char target_producers;
- /* shared ring page */
- void *ring_page;
+ /* shared ring pages */
+ void *ring_buffer;
struct page_info *ring_pg_struct;
+ unsigned int ring_order;
/* front-end ring */
vm_event_front_ring_t front_ring;
/* event channel port (vcpu0 only) */
diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h
index 2ff6e1c..d9c5e93 100644
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -80,6 +80,10 @@ void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp);
void vm_event_monitor_next_interrupt(struct vcpu *v);
+int vm_event_get_ring_frames(struct domain *d, unsigned int id,
+ unsigned long frame, unsigned int nr_frames,
+ xen_pfn_t mfn_list[]);
+
#endif /* __VM_EVENT_H__ */
/*
--
2.7.4
* Re: [PATCH 4/4] vm_event: Add support for multi-page ring buffer
2018-09-13 15:02 ` [PATCH 4/4] vm_event: Add support for multi-page ring buffer Petre Pircalabu
@ 2018-09-13 16:42 ` Tamas K Lengyel
2018-09-14 8:10 ` Petre Pircalabu
2018-09-17 14:41 ` Andrew Cooper
2018-09-18 12:58 ` Jan Beulich
2 siblings, 1 reply; 19+ messages in thread
From: Tamas K Lengyel @ 2018-09-13 16:42 UTC (permalink / raw)
To: Petre Pircalabu
Cc: Stefano Stabellini, Wei Liu, Razvan Cojocaru,
Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
Tim Deegan, Julien Grall, Jan Beulich, Xen-devel
On Thu, Sep 13, 2018 at 9:02 AM Petre Pircalabu
<ppircalabu@bitdefender.com> wrote:
>
> In high throughput introspection scenarios where lots of monitor
> vm_events are generated, the ring buffer can fill up before the monitor
> application gets a chance to handle all the requests thus blocking
> other vcpus which will have to wait for a slot to become available.
>
> This patch adds support for extending the ring buffer by allocating a
> number of pages from domheap and mapping them to the monitor
> application's domain using the foreignmemory_map_resource interface.
> Unlike the current implementation, the ring buffer pages are not part of
> the introspected DomU, so they will not be reclaimed when the monitor is
> disabled.
>
> Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
Thanks for this addition; it has been on the TODO list for a long while
now. Could you also please push the patches as a git branch somewhere?
> ---
> tools/libxc/include/xenctrl.h | 2 +
> tools/libxc/xc_monitor.c | 7 +
> tools/libxc/xc_private.h | 3 +
> tools/libxc/xc_vm_event.c | 49 +++++++
> tools/tests/xen-access/xen-access.c | 33 +++--
> xen/arch/x86/domain_page.c | 3 +-
> xen/arch/x86/mm.c | 14 ++
> xen/common/vm_event.c | 258 +++++++++++++++++++++++++++---------
> xen/include/public/domctl.h | 1 +
> xen/include/public/memory.h | 1 +
> xen/include/xen/sched.h | 5 +-
> xen/include/xen/vm_event.h | 4 +
> 12 files changed, 305 insertions(+), 75 deletions(-)
>
> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index bb75bcc..4f91ee9 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h
> @@ -2005,6 +2005,8 @@ int xc_get_mem_access(xc_interface *xch, uint32_t domain_id,
> * Caller has to unmap this page when done.
> */
> void *xc_monitor_enable(xc_interface *xch, uint32_t domain_id, uint32_t *port);
> +void *xc_monitor_enable_ex(xc_interface *xch, uint32_t domain_id,
> + int order, uint32_t *port);
I don't think there is value in keeping both versions of the API
around. As libxc is not considered to be a stable API, extending the
existing API would be the route I would prefer. Also, perhaps instead
of "order", name the input variable "page_count" to make it clearer
what it's about.
> int xc_monitor_disable(xc_interface *xch, uint32_t domain_id);
> int xc_monitor_resume(xc_interface *xch, uint32_t domain_id);
> /*
> diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
> index 15e6a0e..5188835 100644
> --- a/tools/libxc/xc_monitor.c
> +++ b/tools/libxc/xc_monitor.c
> @@ -28,6 +28,13 @@ void *xc_monitor_enable(xc_interface *xch, uint32_t domain_id, uint32_t *port)
> port);
> }
>
> +void *xc_monitor_enable_ex(xc_interface *xch, uint32_t domain_id, int order,
> + uint32_t *port)
> +{
> + return xc_vm_event_enable_ex(xch, domain_id, XEN_VM_EVENT_TYPE_MONITOR,
> + order, port);
> +}
> +
> int xc_monitor_disable(xc_interface *xch, uint32_t domain_id)
> {
> return xc_vm_event_control(xch, domain_id,
> diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
> index be22986..03d9460 100644
> --- a/tools/libxc/xc_private.h
> +++ b/tools/libxc/xc_private.h
> @@ -436,6 +436,9 @@ int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op,
> void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type,
> uint32_t *port);
>
> +void *xc_vm_event_enable_ex(xc_interface *xch, uint32_t domain_id, int type,
> + int order, uint32_t *port);
> +
> int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...);
>
> #endif /* __XC_PRIVATE_H__ */
> diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
> index de37ca5..216bbe2 100644
> --- a/tools/libxc/xc_vm_event.c
> +++ b/tools/libxc/xc_vm_event.c
> @@ -21,6 +21,7 @@
> */
>
> #include "xc_private.h"
> +#include "xenforeignmemory.h"
>
> int xc_vm_event_control(xc_interface *xch, uint32_t domain_id, unsigned int op,
> unsigned int mode, uint32_t *port)
> @@ -184,6 +185,54 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int type,
> return ring_page;
> }
>
> +void *xc_vm_event_enable_ex(xc_interface *xch, uint32_t domain_id, int type,
> + int order, uint32_t *port)
> +{
> + xenforeignmemory_resource_handle *fres = NULL;
> + int saved_errno;
> + void *ring_buffer = NULL;
> +
> + if ( !port )
> + {
> + errno = EINVAL;
> + return NULL;
> + }
> +
> + /* Pause the domain for ring page setup */
> + if ( xc_domain_pause(xch, domain_id) )
> + {
> + PERROR("Unable to pause domain\n");
> + return NULL;
> + }
> +
> + fres = xenforeignmemory_map_resource(xch->fmem, domain_id,
> + XENMEM_resource_vm_event, type, 0,
> + order, &ring_buffer,
> + PROT_READ | PROT_WRITE, 0);
> + if ( !fres )
> + {
> + PERROR("Unable to map vm_event ring pages resource\n");
> + goto out;
> + }
> +
> + if ( xc_vm_event_control(xch, domain_id, XEN_VM_EVENT_GET_PORT, type, port) )
> + PERROR("Unable to get vm_event channel port\n");
> +
> +out:
> + saved_errno = errno;
> + if ( xc_domain_unpause(xch, domain_id) != 0 )
> + {
> + if (fres)
> + saved_errno = errno;
> + PERROR("Unable to unpause domain");
> + }
> +
> + free(fres);
> + errno = saved_errno;
> + return ring_buffer;
> +}
> +
> +
> /*
> * Local variables:
> * mode: C
> diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
> index 6aaee16..f4c4eda 100644
> --- a/tools/tests/xen-access/xen-access.c
> +++ b/tools/tests/xen-access/xen-access.c
> @@ -68,7 +68,8 @@ typedef struct vm_event {
> int port;
> vm_event_back_ring_t back_ring;
> uint32_t evtchn_port;
> - void *ring_page;
> + void *ring_buffer;
> + unsigned int ring_page_count;
> } vm_event_t;
>
> typedef struct xenaccess {
> @@ -136,8 +137,9 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
> return 0;
>
> /* Tear down domain xenaccess in Xen */
> - if ( xenaccess->vm_event.ring_page )
> - munmap(xenaccess->vm_event.ring_page, XC_PAGE_SIZE);
> + if ( xenaccess->vm_event.ring_buffer )
> + munmap(xenaccess->vm_event.ring_buffer,
> + xenaccess->vm_event.ring_page_count * XC_PAGE_SIZE );
>
> if ( mem_access_enable )
> {
> @@ -210,12 +212,25 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
> /* Set domain id */
> xenaccess->vm_event.domain_id = domain_id;
>
> - /* Enable mem_access */
> - xenaccess->vm_event.ring_page =
> + /* Ring buffer page count */
> + xenaccess->vm_event.ring_page_count = 2;
> +
> + xenaccess->vm_event.ring_buffer = xc_monitor_enable_ex(
> + xenaccess->xc_handle,
> + xenaccess->vm_event.domain_id,
> + xenaccess->vm_event.ring_page_count,
> + &xenaccess->vm_event.evtchn_port);
> +
> + if (xenaccess->vm_event.ring_buffer == NULL && errno == EOPNOTSUPP)
How would this situation ever arise? If there is a chance that you
can't set up multi-page rings, perhaps adding a hypercall that tells
the user the maximum number of pages available for the ring is the
better route. This just seems like guessing right now.
> + {
> + xenaccess->vm_event.ring_page_count = 1;
> + xenaccess->vm_event.ring_buffer =
> xc_monitor_enable(xenaccess->xc_handle,
> xenaccess->vm_event.domain_id,
> &xenaccess->vm_event.evtchn_port);
* Re: [PATCH 4/4] vm_event: Add support for multi-page ring buffer
2018-09-13 16:42 ` Tamas K Lengyel
@ 2018-09-14 8:10 ` Petre Pircalabu
0 siblings, 0 replies; 19+ messages in thread
From: Petre Pircalabu @ 2018-09-14 8:10 UTC (permalink / raw)
To: Tamas K Lengyel
Cc: Stefano Stabellini, Wei Liu, Razvan Cojocaru,
Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
Tim Deegan, Julien Grall, Jan Beulich, Xen-devel
On Thu, 2018-09-13 at 10:42 -0600, Tamas K Lengyel wrote:
> On Thu, Sep 13, 2018 at 9:02 AM Petre Pircalabu
> <ppircalabu@bitdefender.com> wrote:
> >
> >
> > In high throughput introspection scenarios where lots of monitor
> > vm_events are generated, the ring buffer can fill up before the
> > monitor
> > application gets a chance to handle all the requests thus blocking
> > other vcpus which will have to wait for a slot to become available.
> >
> > This patch adds support for extending the ring buffer by allocating
> > a
> > number of pages from domheap and mapping them to the monitor
> > application's domain using the foreignmemory_map_resource
> > interface.
> > Unlike the current implementation, the ring buffer pages are not
> > part of
> > the introspected DomU, so they will not be reclaimed when the
> > monitor is
> > disabled.
> >
> > Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
> Thanks for this addition, it has been on the TODO for a long while
> now. Could you also please push the patches as a git branch
> somewhere?
I've pushed it to my GitHub repository (branch:
multi_page_ring_buffer/devel_new):
https://github.com/petrepircalabu/xen/tree/multi_page_ring_buffer/devel_new
>
> >
> > ---
> > tools/libxc/include/xenctrl.h | 2 +
> > tools/libxc/xc_monitor.c | 7 +
> > tools/libxc/xc_private.h | 3 +
> > tools/libxc/xc_vm_event.c | 49 +++++++
> >
> > + xenaccess->vm_event.domain_id,
> > + xenaccess->vm_event.ring_page_count,
> > + &xenaccess->vm_event.evtchn_port);
> > +
> > + if (xenaccess->vm_event.ring_buffer == NULL && errno ==
> > EOPNOTSUPP)
> How would this situation ever arise? If there is a chance that you
> can't setup multi-page rings, perhaps adding a hypercall that would
> tell the user how many pages are max available for the ring is the
> better route. This just seems like guessing right now.
>
The multi-page ring buffer is mapped using
xenforeignmemory_map_resource, which relies on IOCTL_PRIVCMD_MMAP_RESOURCE.
This ioctl was added in kernel 4.18.1, which is a relatively new
kernel. If the monitor domain's kernel doesn't support this ioctl, the
call fails with errno set to EOPNOTSUPP.
> >
> > + {
> > + xenaccess->vm_event.ring_page_count = 1;
> > + xenaccess->vm_event.ring_buffer =
> > xc_monitor_enable(xenaccess->xc_handle,
> > xenaccess->vm_event.domain_id,
> > &xenaccess->vm_event.evtchn_port);
* Re: [PATCH 1/4] x86_emulator: Add PHONY uninstall target
2018-09-13 15:01 ` [PATCH 1/4] x86_emulator: Add PHONY uninstall target Petre Pircalabu
@ 2018-09-14 9:04 ` Wei Liu
2018-09-14 9:09 ` Jan Beulich
1 sibling, 0 replies; 19+ messages in thread
From: Wei Liu @ 2018-09-14 9:04 UTC (permalink / raw)
To: Petre Pircalabu
Cc: xen-devel, Wei Liu, Ian Jackson, Jan Beulich, Andrew Cooper
I think the patch subject prefix should be changed to tools/tests.
On Thu, Sep 13, 2018 at 06:01:57PM +0300, Petre Pircalabu wrote:
> The missing uninstall target breaks the top 'tools' uninstall target.
>
> Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
* Re: [PATCH 1/4] x86_emulator: Add PHONY uninstall target
2018-09-13 15:01 ` [PATCH 1/4] x86_emulator: Add PHONY uninstall target Petre Pircalabu
2018-09-14 9:04 ` Wei Liu
@ 2018-09-14 9:09 ` Jan Beulich
2018-09-14 11:19 ` Petre Pircalabu
1 sibling, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2018-09-14 9:09 UTC (permalink / raw)
To: Petre Pircalabu; +Cc: Andrew Cooper, Wei Liu, Ian Jackson, xen-devel
>>> On 13.09.18 at 17:01, <ppircalabu@bitdefender.com> wrote:
> The missing uninstall target breaks the top 'tools' uninstall target.
Missing?
> --- a/tools/tests/x86_emulator/Makefile
> +++ b/tools/tests/x86_emulator/Makefile
> @@ -126,6 +126,9 @@ distclean: clean
> .PHONY: install uninstall
> install uninstall:
>
> +.PHONY: uninstall
> +uninstall:
Please look at the patch context above your change. It was added
a couple of weeks ago (09613d3b5f), and you really should have
noticed this at the latest when re-basing on top of that change.
Jan
* Re: [PATCH 2/4] tools/libxc: Define VM_EVENT type
2018-09-13 15:01 ` [PATCH 2/4] tools/libxc: Define VM_EVENT type Petre Pircalabu
@ 2018-09-14 9:14 ` Jan Beulich
2018-09-14 11:11 ` Petre Pircalabu
0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2018-09-14 9:14 UTC (permalink / raw)
To: Petre Pircalabu
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, xen-devel
>>> On 13.09.18 at 17:01, <ppircalabu@bitdefender.com> wrote:
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -757,10 +757,17 @@ struct xen_domctl_gdbsx_domstatus {
>
> /*
> * There are currently three rings available for VM events:
> - * sharing, monitor and paging. This hypercall allows one to
> - * control these rings (enable/disable), as well as to signal
> - * to the hypervisor to pull responses (resume) from the given
> - * ring.
> + * sharing, monitor and paging
> + */
> +
> +#define XEN_VM_EVENT_TYPE_PAGING 1
> +#define XEN_VM_EVENT_TYPE_MONITOR 2
> +#define XEN_VM_EVENT_TYPE_SHARING 3
> +
> +/*
> + * This hypercall allows one to control the vm_event rings (enable/disable),
> + * as well as to signal to the hypervisor to pull responses (resume) from
> + * the given ring.
> */
What's the reason to modify the comment, the more with a style
violation (malformed single line comment) as the result?
> @@ -780,7 +787,7 @@ struct xen_domctl_gdbsx_domstatus {
> * EXDEV - guest has PoD enabled
> * EBUSY - guest has or had paging enabled, ring buffer still active
> */
> -#define XEN_DOMCTL_VM_EVENT_OP_PAGING 1
> +#define XEN_DOMCTL_VM_EVENT_OP_PAGING XEN_VM_EVENT_TYPE_PAGING
>
> /*
> * Monitor helper.
> @@ -804,7 +811,7 @@ struct xen_domctl_gdbsx_domstatus {
> * EBUSY - guest has or had access enabled, ring buffer still active
> *
> */
> -#define XEN_DOMCTL_VM_EVENT_OP_MONITOR 2
> +#define XEN_DOMCTL_VM_EVENT_OP_MONITOR XEN_VM_EVENT_TYPE_MONITOR
>
> /*
> * Sharing ENOMEM helper.
> @@ -819,7 +826,7 @@ struct xen_domctl_gdbsx_domstatus {
> * Note that shring can be turned on (as per the domctl below)
> * *without* this ring being setup.
> */
> -#define XEN_DOMCTL_VM_EVENT_OP_SHARING 3
> +#define XEN_DOMCTL_VM_EVENT_OP_SHARING XEN_VM_EVENT_TYPE_SHARING
And what's the reason to retain these (now redundant)
XEN_DOMCTL_VM_EVENT_OP_* definitions? Either they're independent
(in which case they shouldn't resolve to XEN_VM_EVENT_TYPE_*) or
they're true aliases (tolerating arbitrary future changes to
XEN_VM_EVENT_TYPE_* without further adjustment here), and then
unnecessary to retain.
Jan
* Re: [PATCH 2/4] tools/libxc: Define VM_EVENT type
2018-09-14 9:14 ` Jan Beulich
@ 2018-09-14 11:11 ` Petre Pircalabu
2018-09-14 11:16 ` Jan Beulich
0 siblings, 1 reply; 19+ messages in thread
From: Petre Pircalabu @ 2018-09-14 11:11 UTC (permalink / raw)
To: Jan Beulich
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, xen-devel
On Fri, 2018-09-14 at 03:14 -0600, Jan Beulich wrote:
> >
> > >
> > > >
> > > > On 13.09.18 at 17:01, <ppircalabu@bitdefender.com> wrote:
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -757,10 +757,17 @@ struct xen_domctl_gdbsx_domstatus {
> >
> > /*
> > * There are currently three rings available for VM events:
> > - * sharing, monitor and paging. This hypercall allows one to
> > - * control these rings (enable/disable), as well as to signal
> > - * to the hypervisor to pull responses (resume) from the given
> > - * ring.
> > + * sharing, monitor and paging
> > + */
> > +
> > +#define XEN_VM_EVENT_TYPE_PAGING 1
> > +#define XEN_VM_EVENT_TYPE_MONITOR 2
> > +#define XEN_VM_EVENT_TYPE_SHARING 3
> > +
> > +/*
> > + * This hypercall allows one to control the vm_event rings
> > (enable/disable),
> > + * as well as to signal to the hypervisor to pull responses
> > (resume) from
> > + * the given ring.
> > */
> What's the reason to modify the comment, the more with a style
> violation (malformed single line comment) as the result?
>
I have split the comment into two parts. The first ("There are ... sharing,
monitor and paging") describes the three XEN_VM_EVENT_TYPE_* rings, and
the second describes the vm_event hypercalls (XEN_VM_EVENT_ENABLE, ...).
Other than the missing period at the end of "paging" (which I will
fix in the next iteration), I cannot identify any other coding style
violations.
> >
> > @@ -780,7 +787,7 @@ struct xen_domctl_gdbsx_domstatus {
> > * EXDEV - guest has PoD enabled
> > * EBUSY - guest has or had paging enabled, ring buffer still
> > active
> > */
> > -#define XEN_DOMCTL_VM_EVENT_OP_PAGING 1
> > +#define XEN_DOMCTL_VM_EVENT_OP_PAGING XEN_VM_EVENT_TYPE_PAGING
> >
> > /*
> > * Monitor helper.
> > @@ -804,7 +811,7 @@ struct xen_domctl_gdbsx_domstatus {
> > * EBUSY - guest has or had access enabled, ring buffer still
> > active
> > *
> > */
> > -#define XEN_DOMCTL_VM_EVENT_OP_MONITOR 2
> > +#define XEN_DOMCTL_VM_EVENT_OP_MONITOR XEN_VM_EVENT_TYPE_MONITOR
> >
> > /*
> > * Sharing ENOMEM helper.
> > @@ -819,7 +826,7 @@ struct xen_domctl_gdbsx_domstatus {
> > * Note that shring can be turned on (as per the domctl below)
> > * *without* this ring being setup.
> > */
> > -#define XEN_DOMCTL_VM_EVENT_OP_SHARING 3
> > +#define XEN_DOMCTL_VM_EVENT_OP_SHARING XEN_VM_EVENT_TYPE_SHARING
> And what's the reason to retain these (now redundant)
> XEN_DOMCTL_VM_EVENT_OP_* definitions? Either they're independent
> (in which case they shouldn't resolve to XEN_VM_EVENT_TYPE_*) or
> they're true aliases (tolerating arbitrary future changes to
> XEN_VM_EVENT_TYPE_* without further adjustment here), and then
> unnecessary to retain.
>
> Jan
>
>
I kept them despite the redundancy because the domctl.h header is
public and I didn't want to break any existing (external) clients of
this interface.
However, if you think removing them altogether is best, I will
replace them with the XEN_VM_EVENT_TYPE_* values and increment
XEN_DOMCTL_INTERFACE_VERSION.
//Petre
* Re: [PATCH 2/4] tools/libxc: Define VM_EVENT type
2018-09-14 11:11 ` Petre Pircalabu
@ 2018-09-14 11:16 ` Jan Beulich
0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2018-09-14 11:16 UTC (permalink / raw)
To: Petre Pircalabu
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, xen-devel
>>> On 14.09.18 at 13:11, <ppircalabu@bitdefender.com> wrote:
> On Vi, 2018-09-14 at 03:14 -0600, Jan Beulich wrote:
>> > > > On 13.09.18 at 17:01, <ppircalabu@bitdefender.com> wrote:
>> > --- a/xen/include/public/domctl.h
>> > +++ b/xen/include/public/domctl.h
>> > @@ -757,10 +757,17 @@ struct xen_domctl_gdbsx_domstatus {
>> >
>> > /*
>> > * There are currently three rings available for VM events:
>> > - * sharing, monitor and paging. This hypercall allows one to
>> > - * control these rings (enable/disable), as well as to signal
>> > - * to the hypervisor to pull responses (resume) from the given
>> > - * ring.
>> > + * sharing, monitor and paging
>> > + */
>> > +
>> > +#define XEN_VM_EVENT_TYPE_PAGING 1
>> > +#define XEN_VM_EVENT_TYPE_MONITOR 2
>> > +#define XEN_VM_EVENT_TYPE_SHARING 3
>> > +
>> > +/*
>> > + * This hypercall allows one to control the vm_event rings
>> > (enable/disable),
>> > + * as well as to signal to the hypervisor to pull responses
>> > (resume) from
>> > + * the given ring.
>> > */
>> What's the reason to modify the comment, the more with a style
>> violation (malformed single line comment) as the result?
>>
> I have split the comment in 2 parts.
I realize you did this, but you still don't really clarify why.
> The first ("There are ... sharing,
> monitor and paging ") describes the 3 XEN_VM_EVENT_TYPE_* rings, and
> the second describes the vm_event hypercalls ( XEN_VM_EVENT_ENABLE ....
> ). Other than the missing period at the end of "paging" (which I will
> fix in the next iteration) I cannot identify other coding style
> violations.
Well, I've already told you: The first of the two resulting comments
is now supposed to be all on a single line.
>> > @@ -780,7 +787,7 @@ struct xen_domctl_gdbsx_domstatus {
>> > * EXDEV - guest has PoD enabled
>> > * EBUSY - guest has or had paging enabled, ring buffer still
>> > active
>> > */
>> > -#define XEN_DOMCTL_VM_EVENT_OP_PAGING 1
>> > +#define XEN_DOMCTL_VM_EVENT_OP_PAGING XEN_VM_EVENT_TYPE_PAGING
>> >
>> > /*
>> > * Monitor helper.
>> > @@ -804,7 +811,7 @@ struct xen_domctl_gdbsx_domstatus {
>> > * EBUSY - guest has or had access enabled, ring buffer still
>> > active
>> > *
>> > */
>> > -#define XEN_DOMCTL_VM_EVENT_OP_MONITOR 2
>> > +#define XEN_DOMCTL_VM_EVENT_OP_MONITOR XEN_VM_EVENT_TYPE_MONITOR
>> >
>> > /*
>> > * Sharing ENOMEM helper.
>> > @@ -819,7 +826,7 @@ struct xen_domctl_gdbsx_domstatus {
>> > * Note that shring can be turned on (as per the domctl below)
>> > * *without* this ring being setup.
>> > */
>> > -#define XEN_DOMCTL_VM_EVENT_OP_SHARING 3
>> > +#define XEN_DOMCTL_VM_EVENT_OP_SHARING XEN_VM_EVENT_TYPE_SHARING
>> And what's the reason to retain these (now redundant)
>> XEN_DOMCTL_VM_EVENT_OP_* definitions? Either they're independent
>> (in which case they shouldn't resolve to XEN_VM_EVENT_TYPE_*) or
>> they're true aliases (tolerating arbitrary future changes to
>> XEN_VM_EVENT_TYPE_* without further adjustment here), and then
>> unnecessary to retain.
>>
> I kept them despite the redundancy because the domctl.h header is
> public and I didn't wanted to break any existent (external) clients of
> this interface.
> However, if you think removing them altogether it's best, I will
> replace them with the XEN_VM_EVENT_TYPE_* and increment
> XEN_DOMCTL_INTERFACE_VERSION.
Yes, that's what I think should be happening, unless there were other
reasons.
Jan
* Re: [PATCH 1/4] x86_emulator: Add PHONY uninstall target
2018-09-14 9:09 ` Jan Beulich
@ 2018-09-14 11:19 ` Petre Pircalabu
0 siblings, 0 replies; 19+ messages in thread
From: Petre Pircalabu @ 2018-09-14 11:19 UTC (permalink / raw)
To: Jan Beulich; +Cc: Andrew Cooper, Wei Liu, Ian Jackson, xen-devel
On Vi, 2018-09-14 at 03:09 -0600, Jan Beulich wrote:
> >
> > >
> > > >
> > > > On 13.09.18 at 17:01, <ppircalabu@bitdefender.com> wrote:
> > The missing uninstall target breaks the top 'tools' uninstall
> > target.
> Missing?
>
> >
> > --- a/tools/tests/x86_emulator/Makefile
> > +++ b/tools/tests/x86_emulator/Makefile
> > @@ -126,6 +126,9 @@ distclean: clean
> > .PHONY: install uninstall
> > install uninstall:
> >
> > +.PHONY: uninstall
> > +uninstall:
> Please look at the patch context above your change. It was added
> a couple of weeks ago (09613d3b5f), and you really should have
> noticed this the latest when re-basing on top of that change.
>
> Jan
>
Thank you very much for pointing it out; I missed it completely.
I will drop this patch in the next iteration of the series.
//Petre
* Re: [PATCH 4/4] vm_event: Add support for multi-page ring buffer
2018-09-13 15:02 ` [PATCH 4/4] vm_event: Add support for multi-page ring buffer Petre Pircalabu
2018-09-13 16:42 ` Tamas K Lengyel
@ 2018-09-17 14:41 ` Andrew Cooper
2018-09-24 16:32 ` Petre Pircalabu
2018-09-18 12:58 ` Jan Beulich
2 siblings, 1 reply; 19+ messages in thread
From: Andrew Cooper @ 2018-09-17 14:41 UTC (permalink / raw)
To: Petre Pircalabu, xen-devel
Cc: Stefano Stabellini, Wei Liu, Razvan Cojocaru,
Konrad Rzeszutek Wilk, George Dunlap, Tim Deegan, Ian Jackson,
Julien Grall, Tamas K Lengyel, Jan Beulich
On 13/09/18 16:02, Petre Pircalabu wrote:
> In high throughput introspection scenarios where lots of monitor
> vm_events are generated, the ring buffer can fill up before the monitor
> application gets a chance to handle all the requests thus blocking
> other vcpus which will have to wait for a slot to become available.
>
> This patch adds support for extending the ring buffer by allocating a
> number of pages from domheap and mapping them to the monitor
> application's domain using the foreignmemory_map_resource interface.
> Unlike the current implementation, the ring buffer pages are not part of
> the introspected DomU, so they will not be reclaimed when the monitor is
> disabled.
>
> Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
What about the slotted format for the synchronous events? While this is
fine for the async bits, I don't think we want to end up changing the
mapping API twice.
Simply increasing the size of the ring puts more pressure on the
> diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
> index 0d23e52..2a9cbf3 100644
> --- a/xen/arch/x86/domain_page.c
> +++ b/xen/arch/x86/domain_page.c
> @@ -331,10 +331,9 @@ void *__map_domain_pages_global(const struct page_info *pg, unsigned int nr)
> {
> mfn_t mfn[nr];
> int i;
> - struct page_info *cur_pg = (struct page_info *)&pg[0];
>
> for (i = 0; i < nr; i++)
> - mfn[i] = page_to_mfn(cur_pg++);
> + mfn[i] = page_to_mfn(pg++);
This hunk looks like it should be in the previous patch? That said...
>
> return map_domain_pages_global(mfn, nr);
> }
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 4793aac..faece3c 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -39,16 +39,66 @@
> #define vm_event_ring_lock(_ved) spin_lock(&(_ved)->ring_lock)
> #define vm_event_ring_unlock(_ved) spin_unlock(&(_ved)->ring_lock)
>
> +#define XEN_VM_EVENT_ALLOC_FROM_DOMHEAP 0xFFFFFFFF
> +
> +static int vm_event_alloc_ring(struct domain *d, struct vm_event_domain *ved)
> +{
> + struct page_info *page;
> + void *va = NULL;
> + int i, rc = -ENOMEM;
> +
> + page = alloc_domheap_pages(d, ved->ring_order, MEMF_no_refcount);
> + if ( !page )
> + return -ENOMEM;
... what is wrong with vzalloc()?
You don't want to be making a ring_order allocation, especially as the
order grows. All you need are some mappings which are virtually
contiguous, not physically contiguous.
~Andrew
* Re: [PATCH 3/4] x86: Add map_domain_pages_global
2018-09-13 15:01 ` [PATCH 3/4] x86: Add map_domain_pages_global Petre Pircalabu
@ 2018-09-18 10:51 ` Jan Beulich
2018-09-24 13:13 ` Julien Grall
1 sibling, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2018-09-18 10:51 UTC (permalink / raw)
To: Petre Pircalabu
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, xen-devel
>>> On 13.09.18 at 17:01, <ppircalabu@bitdefender.com> wrote:
> Create a single mapping for multiple domain pages.
At least as far as map_domain_pages_global() goes, you will want to justify
why what we have isn't enough.
> --- a/tools/libxc/xc_vm_event.c
> +++ b/tools/libxc/xc_vm_event.c
> @@ -74,7 +74,7 @@ static int xc_vm_event_domctl(int type, unsigned int *param)
> {
> case XEN_VM_EVENT_TYPE_PAGING:
> *param = XEN_DOMCTL_VM_EVENT_OP_PAGING;
> - break;
> + break;
>
> case XEN_VM_EVENT_TYPE_MONITOR:
> *param = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
Clearly not something you want to do in an otherwise hypervisor-only
patch.
> +void *__map_domain_pages_global(const struct page_info *pg, unsigned int nr)
If we really think we need this function, no new name space violations
please (here: no undue leading underscores).
> +{
> + mfn_t mfn[nr];
I think we'd like to avoid any (new) VLAs on the stack.
> + int i;
unsigned int
> @@ -68,6 +71,12 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
> return page_to_virt(pg);
> }
>
> +static inline void *__map_domain_pages_global(const struct page_info *pg,
> + unsigned int nr)
> +{
> + return __map_domain_page_global(pg);
> +}
Even if functionally this might be correct, you shouldn't do so, as you
drop nr without any explanation.
Overall, considering the other comment you've got on the last patch,
assuming contiguous physical pages here is likely the wrong route
anyway.
Jan
* Re: [PATCH 4/4] vm_event: Add support for multi-page ring buffer
2018-09-13 15:02 ` [PATCH 4/4] vm_event: Add support for multi-page ring buffer Petre Pircalabu
2018-09-13 16:42 ` Tamas K Lengyel
2018-09-17 14:41 ` Andrew Cooper
@ 2018-09-18 12:58 ` Jan Beulich
2018-09-24 16:54 ` Petre Pircalabu
2 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2018-09-18 12:58 UTC (permalink / raw)
To: Petre Pircalabu
Cc: Stefano Stabellini, Wei Liu, Razvan Cojocaru,
Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
Tim Deegan, Julien Grall, Tamas K Lengyel, xen-devel
>>> On 13.09.18 at 17:02, <ppircalabu@bitdefender.com> wrote:
> --- a/xen/arch/x86/domain_page.c
> +++ b/xen/arch/x86/domain_page.c
> @@ -331,10 +331,9 @@ void *__map_domain_pages_global(const struct page_info *pg, unsigned int nr)
> {
> mfn_t mfn[nr];
> int i;
> - struct page_info *cur_pg = (struct page_info *)&pg[0];
>
> for (i = 0; i < nr; i++)
> - mfn[i] = page_to_mfn(cur_pg++);
> + mfn[i] = page_to_mfn(pg++);
>
> return map_domain_pages_global(mfn, nr);
> }
Please could you avoid having such random unrelated changes in your patches?
> @@ -4415,6 +4416,19 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
> }
> #endif
>
> + case XENMEM_resource_vm_event:
> + {
> + rc = vm_event_get_ring_frames(d, id, frame, nr_frames, mfn_list);
> + if ( rc )
> + break;
> + /*
> + * The frames will have been assigned to the domain that created
> + * the ioreq server.
> + */
> + *flags |= XENMEM_rsrc_acq_caller_owned;
> + break;
> + }
Isn't it wrong to assume that the monitor application will run in the same
domain as the / one ioreq server?
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -39,16 +39,66 @@
> #define vm_event_ring_lock(_ved) spin_lock(&(_ved)->ring_lock)
> #define vm_event_ring_unlock(_ved) spin_unlock(&(_ved)->ring_lock)
>
> +#define XEN_VM_EVENT_ALLOC_FROM_DOMHEAP 0xFFFFFFFF
Note this constant's type vs ...
> static int vm_event_enable(
> struct domain *d,
> - struct xen_domctl_vm_event_op *vec,
> struct vm_event_domain **ved,
> + unsigned long param,
... the function parameter type here. I don't see why this can't be
"unsigned int".
> @@ -88,12 +151,12 @@ static int vm_event_enable(
> if ( rc < 0 )
> goto err;
>
> - (*ved)->xen_port = vec->port = rc;
> + (*ved)->xen_port = rc;
Yet another unrelated change? It looks to be replaced by code further
down (in the callers), but it's not clear to me why the change is needed
(here). Perhaps worth splitting up the patch a little?
Jan
* Re: [PATCH 3/4] x86: Add map_domain_pages_global
2018-09-13 15:01 ` [PATCH 3/4] x86: Add map_domain_pages_global Petre Pircalabu
2018-09-18 10:51 ` Jan Beulich
@ 2018-09-24 13:13 ` Julien Grall
1 sibling, 0 replies; 19+ messages in thread
From: Julien Grall @ 2018-09-24 13:13 UTC (permalink / raw)
To: Petre Pircalabu, xen-devel
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
Andrew Cooper, Ian Jackson, Tim Deegan, Jan Beulich
Hi,
On 09/13/2018 04:01 PM, Petre Pircalabu wrote:
> Create a single mapping for multiple domain pages.
>
> Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
> ---
> tools/libxc/xc_vm_event.c | 2 +-
> xen/arch/x86/domain_page.c | 22 ++++++++++++++++++++++
> xen/include/xen/domain_page.h | 9 +++++++++
> 3 files changed, 32 insertions(+), 1 deletion(-)
>
> diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
> index dd34cec..de37ca5 100644
> --- a/tools/libxc/xc_vm_event.c
> +++ b/tools/libxc/xc_vm_event.c
> @@ -74,7 +74,7 @@ static int xc_vm_event_domctl(int type, unsigned int *param)
> {
> case XEN_VM_EVENT_TYPE_PAGING:
> *param = XEN_DOMCTL_VM_EVENT_OP_PAGING;
> - break;
> + break;
>
> case XEN_VM_EVENT_TYPE_MONITOR:
> *param = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
> diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
> index 4a07cfb..0d23e52 100644
> --- a/xen/arch/x86/domain_page.c
> +++ b/xen/arch/x86/domain_page.c
> @@ -317,6 +317,28 @@ void *map_domain_page_global(mfn_t mfn)
> return vmap(&mfn, 1);
> }
>
> +void *map_domain_pages_global(const mfn_t *mfn, unsigned int nr)
> +{
> + ASSERT(!in_irq() &&
> + ((system_state >= SYS_STATE_boot &&
> + system_state < SYS_STATE_active) ||
> + local_irq_is_enabled()));
> +
> + return vmap(mfn, nr);
> +}
> +
> +void *__map_domain_pages_global(const struct page_info *pg, unsigned int nr)
> +{
> + mfn_t mfn[nr];
> + int i;
> + struct page_info *cur_pg = (struct page_info *)&pg[0];
> +
> + for (i = 0; i < nr; i++)
> + mfn[i] = page_to_mfn(cur_pg++);
> +
> + return map_domain_pages_global(mfn, nr);
> +}
> +
> void unmap_domain_page_global(const void *ptr)
> {
> unsigned long va = (unsigned long)ptr;
> diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
> index 32669a3..76422f9 100644
> --- a/xen/include/xen/domain_page.h
> +++ b/xen/include/xen/domain_page.h
> @@ -42,6 +42,7 @@ mfn_t domain_page_map_to_mfn(const void *va);
> * mappings can also be unmapped from any context.
> */
> void *map_domain_page_global(mfn_t mfn);
> +void *map_domain_pages_global(const mfn_t *mfn, unsigned int nr);
> void unmap_domain_page_global(const void *va);
>
> #define __map_domain_page(pg) map_domain_page(page_to_mfn(pg))
> @@ -51,6 +52,8 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
> return map_domain_page_global(page_to_mfn(pg));
> }
>
> +void *__map_domain_pages_global(const struct page_info *pg, unsigned int nr);
This would require an implementation for Arm, assuming this is going ahead.
> +
> #else /* !CONFIG_DOMAIN_PAGE */
>
> #define map_domain_page(mfn) __mfn_to_virt(mfn_x(mfn))
> @@ -68,6 +71,12 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
> return page_to_virt(pg);
> }
>
> +static inline void *__map_domain_pages_global(const struct page_info *pg,
> + unsigned int nr)
> +{
> + return __map_domain_page_global(pg);
> +}
> +
> static inline void unmap_domain_page_global(const void *va) {};
>
> #endif /* !CONFIG_DOMAIN_PAGE */
>
Cheers,
--
Julien Grall
* Re: [PATCH 4/4] vm_event: Add support for multi-page ring buffer
2018-09-17 14:41 ` Andrew Cooper
@ 2018-09-24 16:32 ` Petre Pircalabu
0 siblings, 0 replies; 19+ messages in thread
From: Petre Pircalabu @ 2018-09-24 16:32 UTC (permalink / raw)
To: Andrew Cooper, xen-devel
Cc: Stefano Stabellini, Wei Liu, Razvan Cojocaru,
Konrad Rzeszutek Wilk, George Dunlap, Tim Deegan, Ian Jackson,
Julien Grall, Tamas K Lengyel, Jan Beulich
On Mon, 2018-09-17 at 15:41 +0100, Andrew Cooper wrote:
> On 13/09/18 16:02, Petre Pircalabu wrote:
> > In high throughput introspection scenarios where lots of monitor
> > vm_events are generated, the ring buffer can fill up before the
> > monitor
> > application gets a chance to handle all the requests thus blocking
> > other vcpus which will have to wait for a slot to become available.
> >
> > This patch adds support for extending the ring buffer by allocating
> > a
> > number of pages from domheap and mapping them to the monitor
> > application's domain using the foreignmemory_map_resource
> > interface.
> > Unlike the current implementation, the ring buffer pages are not
> > part of
> > the introspected DomU, so they will not be reclaimed when the
> > monitor is
> > disabled.
> >
> > Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
>
> What about the slotted format for the synchronous events? While this
> is
> fine for the async bits, I don't think we want to end up changing the
> mapping API twice.
>
> Simply increasing the size of the ring puts more pressure on the
I'm currently investigating this approach and will send an
implementation proposal with the next version of the patch.
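Purely as an illustration of the direction being considered (none of these
names are taken from the posted series, and the actual proposal may differ),
a per-vCPU slot layout could look roughly like:

/* Hypothetical sketch: one fixed-size slot per vCPU for synchronous events,
 * so no vCPU can be blocked by others filling a shared ring.
 * vm_event_request_t / vm_event_response_t are the existing public types. */
struct vm_event_slot
{
    uint32_t state;               /* idle / request pending / response ready */
    union
    {
        vm_event_request_t  req;
        vm_event_response_t rsp;
    } u;
};

/* The shared area would then simply be an array indexed by vcpu_id,
 * e.g. struct vm_event_slot slots[d->max_vcpus]. */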
>
> > diff --git a/xen/arch/x86/domain_page.c
> > b/xen/arch/x86/domain_page.c
> > index 0d23e52..2a9cbf3 100644
> > --- a/xen/arch/x86/domain_page.c
> > +++ b/xen/arch/x86/domain_page.c
> > @@ -331,10 +331,9 @@ void *__map_domain_pages_global(const struct
> > page_info *pg, unsigned int nr)
> > {
> > mfn_t mfn[nr];
> > int i;
> > - struct page_info *cur_pg = (struct page_info *)&pg[0];
> >
> > for (i = 0; i < nr; i++)
> > - mfn[i] = page_to_mfn(cur_pg++);
> > + mfn[i] = page_to_mfn(pg++);
>
> This hunk looks like it should be in the previous patch? That
> said...
Yep. I've completely missed it. This piece of code will be removed
along with the map_domain_pages_global patch.
>
> >
> > return map_domain_pages_global(mfn, nr);
> > }
> > diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> > index 4793aac..faece3c 100644
> > --- a/xen/common/vm_event.c
> > +++ b/xen/common/vm_event.c
> > @@ -39,16 +39,66 @@
> > #define vm_event_ring_lock(_ved) spin_lock(&(_ved)-
> > >ring_lock)
> > #define vm_event_ring_unlock(_ved) spin_unlock(&(_ved)-
> > >ring_lock)
> >
> > +#define XEN_VM_EVENT_ALLOC_FROM_DOMHEAP 0xFFFFFFFF
> > +
> > +static int vm_event_alloc_ring(struct domain *d, struct
> > vm_event_domain *ved)
> > +{
> > + struct page_info *page;
> > + void *va = NULL;
> > + int i, rc = -ENOMEM;
> > +
> > + page = alloc_domheap_pages(d, ved->ring_order,
> > MEMF_no_refcount);
> > + if ( !page )
> > + return -ENOMEM;
>
> ... what is wrong with vzalloc()?
>
> You don't want to be making a ring_order allocation, especially as
> the
> order grows. All you need are some mappings which are virtually
> contiguous, not physically contiguous.
Unfortunately, vzalloc doesn't work here: the acquire_resource
succeeds, but the subsequent mapping fails:
(XEN) mm.c:1024:d0v5 pg_owner d0 l1e_owner d0, but real_pg_owner d-1
(XEN) mm.c:1100:d0v5 Error getting mfn dd24d (pfn ffffffffffffffff)
from L1 entry 80000000dd24d227 for l1e_owner d0, pg_owner d0
(XEN) mm.c:1024:d0v5 pg_owner d0 l1e_owner d0, but real_pg_owner d-1
(XEN) mm.c:1100:d0v5 Error getting mfn dd24c (pfn ffffffffffffffff)
from L1 entry 80000000dd24c227 for l1e_owner d0, pg_owner d0
However, allocating the pages individually with alloc_domheap_page and
then mapping them using vmap does the trick.
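Roughly along these lines (a sketch with a placeholder name, without the
error unwinding and locking the real patch needs):

/* Sketch: allocate the frames individually, assigned to the target domain,
 * and map them through a single virtually contiguous mapping. */
static void *vm_event_alloc_and_map_ring(struct domain *d, unsigned int nr)
{
    mfn_t *mfn = xmalloc_array(mfn_t, nr);
    void *va = NULL;
    unsigned int i;

    if ( !mfn )
        return NULL;

    for ( i = 0; i < nr; i++ )
    {
        struct page_info *pg = alloc_domheap_page(d, MEMF_no_refcount);

        if ( !pg )
            goto out; /* real code must also free the pages already allocated */
        mfn[i] = page_to_mfn(pg);
    }

    /* Virtually contiguous; no order-N physically contiguous allocation. */
    va = vmap(mfn, nr);

 out:
    xfree(mfn);
    return va;
}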
Until the next version (using the slotted format for synchronous events)
is ready, I have pushed an intermediate version which addresses the
issues raised by you and Jan to my github fork of the xen repository:
https://github.com/petrepircalabu/xen/tree/multi_page_ring_buffer/devel_new
> ~Andrew
Many thanks for your support,
Petre
* Re: [PATCH 4/4] vm_event: Add support for multi-page ring buffer
2018-09-18 12:58 ` Jan Beulich
@ 2018-09-24 16:54 ` Petre Pircalabu
0 siblings, 0 replies; 19+ messages in thread
From: Petre Pircalabu @ 2018-09-24 16:54 UTC (permalink / raw)
To: Jan Beulich
Cc: Stefano Stabellini, Wei Liu, Razvan Cojocaru,
Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
Tim Deegan, Julien Grall, Tamas K Lengyel, xen-devel
On Tue, 2018-09-18 at 06:58 -0600, Jan Beulich wrote:
> > > > On 13.09.18 at 17:02, <ppircalabu@bitdefender.com> wrote:
> >
> > --- a/xen/arch/x86/domain_page.c
> > +++ b/xen/arch/x86/domain_page.c
> > @@ -331,10 +331,9 @@ void *__map_domain_pages_global(const struct
> > page_info *pg, unsigned int nr)
> > {
> > mfn_t mfn[nr];
> > int i;
> > - struct page_info *cur_pg = (struct page_info *)&pg[0];
> >
> > for (i = 0; i < nr; i++)
> > - mfn[i] = page_to_mfn(cur_pg++);
> > + mfn[i] = page_to_mfn(pg++);
> >
> > return map_domain_pages_global(mfn, nr);
> > }
>
> Please could you avoid having such random unrelated changes in your
> patches?
>
Thank you very much for noticing it; I completely missed it. However,
I will drop the map_domain_pages functionality altogether, as Andrew
pointed out it is not necessary to have a physically contiguous area.
> > @@ -4415,6 +4416,19 @@ int arch_acquire_resource(struct domain *d,
> > unsigned int type,
> > }
> > #endif
> >
> > + case XENMEM_resource_vm_event:
> > + {
> > + rc = vm_event_get_ring_frames(d, id, frame, nr_frames,
> > mfn_list);
> > + if ( rc )
> > + break;
> > + /*
> > + * The frames will have been assigned to the domain that
> > created
> > + * the ioreq server.
> > + */
> > + *flags |= XENMEM_rsrc_acq_caller_owned;
> > + break;
> > + }
>
> Isn't it wrong to assume that the monitor application will run in the
> same
> domain as the / one ioreq server?
I will remove the ioreq server comment as it's misleading. This call
does not use an ioreq server; it only allocates the memory for the
vm_event ring buffer.
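(For completeness: the monitor side would presumably consume this through the
existing resource-mapping call, along the lines of the sketch below.
XENMEM_resource_vm_event and the meaning of its id argument come from this
series and are not an established interface.)

/* Userspace sketch: map the vm_event ring frames into the monitor process. */
#include <sys/mman.h>
#include <xenforeignmemory.h>
#include <xen/memory.h>   /* XENMEM_resource_vm_event, added by this series */

static void *map_vm_event_ring(xenforeignmemory_handle *fmem, domid_t domid,
                               unsigned int id, unsigned long nr_frames,
                               xenforeignmemory_resource_handle **res)
{
    void *ring = NULL;

    *res = xenforeignmemory_map_resource(fmem, domid, XENMEM_resource_vm_event,
                                         id, 0 /* frame */, nr_frames, &ring,
                                         PROT_READ | PROT_WRITE, 0);

    /* Keep *res; it is needed later for xenforeignmemory_unmap_resource(). */
    return *res ? ring : NULL;
}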
> > --- a/xen/common/vm_event.c
> > +++ b/xen/common/vm_event.c
> > @@ -39,16 +39,66 @@
> > #define vm_event_ring_lock(_ved) spin_lock(&(_ved)-
> > >ring_lock)
> > #define vm_event_ring_unlock(_ved) spin_unlock(&(_ved)-
> > >ring_lock)
> >
> > +#define XEN_VM_EVENT_ALLOC_FROM_DOMHEAP 0xFFFFFFFF
>
> Note this constant's type vs ...
>
> > static int vm_event_enable(
> > struct domain *d,
> > - struct xen_domctl_vm_event_op *vec,
> > struct vm_event_domain **ved,
> > + unsigned long param,
>
> ... the function parameter type here. I don't see why this can't be
> "unsigned int".
Fixed.
>
> > @@ -88,12 +151,12 @@ static int vm_event_enable(
> > if ( rc < 0 )
> > goto err;
> >
> > - (*ved)->xen_port = vec->port = rc;
> > + (*ved)->xen_port = rc;
>
> Yet another unrelated change? It looks to be replaced by code further
> down (in the callers), but it's not clear to me why the change is
> needed
> (here). Perhaps worth splitting up the patch a little?
vm_event_enable can be called from both vm_event_domctl and
vm_event_get_ring_frames (the acquire_resource call path). I have
removed the vec parameter (struct xen_domctl_vm_event_op) to make the
function caller-agnostic.
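Roughly (sketch only - the argument list is abbreviated compared to the
actual patch):

/* domctl path: the port is copied back by the caller, not by
 * vm_event_enable() itself. */
rc = vm_event_enable(d, &d->vm_event_monitor, HVM_PARAM_MONITOR_RING_PFN,
                     monitor_notification);
if ( !rc )
    vec->port = d->vm_event_monitor->xen_port;

/* acquire_resource path: vm_event_get_ring_frames() calls vm_event_enable()
 * without any xen_domctl_vm_event_op being involved. */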
Many thanks for your comments,
Petre
>
> Jan
>
>
Thread overview: 19+ messages
2018-09-13 15:01 [PATCH 0/4] Add support for multi-page vm_event ring buffer Petre Pircalabu
2018-09-13 15:01 ` [PATCH 1/4] x86_emulator: Add PHONY uninstall target Petre Pircalabu
2018-09-14 9:04 ` Wei Liu
2018-09-14 9:09 ` Jan Beulich
2018-09-14 11:19 ` Petre Pircalabu
2018-09-13 15:01 ` [PATCH 2/4] tools/libxc: Define VM_EVENT type Petre Pircalabu
2018-09-14 9:14 ` Jan Beulich
2018-09-14 11:11 ` Petre Pircalabu
2018-09-14 11:16 ` Jan Beulich
2018-09-13 15:01 ` [PATCH 3/4] x86: Add map_domain_pages_global Petre Pircalabu
2018-09-18 10:51 ` Jan Beulich
2018-09-24 13:13 ` Julien Grall
2018-09-13 15:02 ` [PATCH 4/4] vm_event: Add support for multi-page ring buffer Petre Pircalabu
2018-09-13 16:42 ` Tamas K Lengyel
2018-09-14 8:10 ` Petre Pircalabu
2018-09-17 14:41 ` Andrew Cooper
2018-09-24 16:32 ` Petre Pircalabu
2018-09-18 12:58 ` Jan Beulich
2018-09-24 16:54 ` Petre Pircalabu