* [PATCH 01/17] xenpaging: close xch handle in xenpaging_init error path
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-14 18:52 ` Ian Jackson
2010-12-06 20:59 ` [PATCH 02/17] xenpaging: remove perror usage " Olaf Hering
` (16 subsequent siblings)
17 siblings, 1 reply; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.xenpaging_init.xc_interface_close.patch --]
[-- Type: text/plain, Size: 553 bytes --]
Just for correctness, close the xch handle in the error path.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
tools/xenpaging/xenpaging.c | 1 +
1 file changed, 1 insertion(+)
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -224,6 +224,7 @@ xenpaging_t *xenpaging_init(xc_interface
err:
if ( paging )
{
+ xc_interface_close(xch);
if ( paging->mem_event.shared_page )
{
munlock(paging->mem_event.shared_page, PAGE_SIZE);
* [PATCH 02/17] xenpaging: remove perror usage in xenpaging_init error path
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
2010-12-06 20:59 ` [PATCH 01/17] xenpaging: close xch handle in xenpaging_init error path Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 03/17] xenpaging: print DPRINTF output if XENPAGING_DEBUG is in environment Olaf Hering
` (15 subsequent siblings)
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.xenpaging_init.perror.patch --]
[-- Type: text/plain, Size: 730 bytes --]
Use the libxc error macro to report errors if initialising xenpaging
fails. Also report the actual errno string.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
tools/xenpaging/xenpaging.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -141,7 +141,7 @@ xenpaging_t *xenpaging_init(xc_interface
ERROR("EPT not supported for this guest");
break;
default:
- perror("Error initialising shared page");
+ ERROR("Error initialising shared page: %s", strerror(errno));
break;
}
goto err;
* [PATCH 03/17] xenpaging: print DPRINTF output if XENPAGING_DEBUG is in environment
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
2010-12-06 20:59 ` [PATCH 01/17] xenpaging: close xch handle in xenpaging_init error path Olaf Hering
2010-12-06 20:59 ` [PATCH 02/17] xenpaging: remove perror usage " Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 04/17] xenpaging: print number of evicted pages Olaf Hering
` (14 subsequent siblings)
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.xenpaging_init.xentoollog_logger.patch --]
[-- Type: text/plain, Size: 1082 bytes --]
No DPRINTF output is logged because the default loglevel in libxc is
too low. Recognize the XENPAGING_DEBUG environment variable to change
the default loglevel at runtime.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
tools/xenpaging/xenpaging.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -39,12 +39,6 @@
#include "policy.h"
#include "xenpaging.h"
-
-#if 0
-#undef DPRINTF
-#define DPRINTF(...) ((void)0)
-#endif
-
static char filename[80];
static int interrupted;
static void close_handler(int sig)
@@ -83,9 +77,12 @@ xenpaging_t *xenpaging_init(xc_interface
{
xenpaging_t *paging;
xc_interface *xch;
+ xentoollog_logger *dbg = NULL;
int rc;
- xch = xc_interface_open(NULL, NULL, 0);
+ if ( getenv("XENPAGING_DEBUG") )
+ dbg = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr, XTL_DEBUG, 0);
+ xch = xc_interface_open(dbg, NULL, 0);
if ( !xch )
goto err_iface;
* [PATCH 04/17] xenpaging: print number of evicted pages
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (2 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 03/17] xenpaging: print DPRINTF output if XENPAGING_DEBUG is in environment Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 05/17] xenpaging: remove duplicate xc_interface_close call Olaf Hering
` (13 subsequent siblings)
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.xenpaging_init.num_pages.patch --]
[-- Type: text/plain, Size: 557 bytes --]
Print the number of evicted pages after the evict loop.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
tools/xenpaging/xenpaging.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -583,7 +583,7 @@ int main(int argc, char *argv[])
DPRINTF("%d pages evicted\n", i);
}
- DPRINTF("pages evicted\n");
+ DPRINTF("%d pages evicted. Done.\n", i);
/* Swap pages in and out */
while ( !interrupted )
* [PATCH 05/17] xenpaging: remove duplicate xc_interface_close call
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (3 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 04/17] xenpaging: print number of evicted pages Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 06/17] xenpaging: do not use DPRINTF/ERROR if xch handle is unavailable Olaf Hering
` (12 subsequent siblings)
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.xc_handle.double_free.patch --]
[-- Type: text/plain, Size: 571 bytes --]
Fix a double-free in xc_interface_close(): xenpaging_teardown() already
releases the xch handle. Remove the second xc_interface_close() call.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
tools/xenpaging/xenpaging.c | 2 --
1 file changed, 2 deletions(-)
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -693,8 +693,6 @@ int main(int argc, char *argv[])
if ( rc == 0 )
rc = rc1;
- xc_interface_close(xch);
-
DPRINTF("xenpaging exit code %d\n", rc);
return rc;
}
* [PATCH 06/17] xenpaging: do not use DPRINTF/ERROR if xch handle is unavailable
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (4 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 05/17] xenpaging: remove duplicate xc_interface_close call Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 07/17] xenpaging: update xch usage Olaf Hering
` (11 subsequent siblings)
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.xc_handle.dprintf.patch --]
[-- Type: text/plain, Size: 1208 bytes --]
Fix DPRINTF/ERROR usage. Both macros reference an xch variable in local scope.
If xc_interface_open() fails, and after xc_interface_close(), neither can be
used anymore. Use plain fprintf() for these cases.
Also remove the code that prints the exit value; it's not really useful
and is a left-over from debugging an earlier patch.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
tools/xenpaging/xenpaging.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -539,7 +539,7 @@ int main(int argc, char *argv[])
paging = xenpaging_init(&xch, domain_id);
if ( paging == NULL )
{
- ERROR("Error initialising paging");
+ fprintf(stderr, "Error initialising paging");
return 1;
}
@@ -688,12 +688,10 @@ int main(int argc, char *argv[])
/* Tear down domain paging */
rc1 = xenpaging_teardown(xch, paging);
if ( rc1 != 0 )
- ERROR("Error tearing down paging");
+ fprintf(stderr, "Error tearing down paging");
if ( rc == 0 )
rc = rc1;
-
- DPRINTF("xenpaging exit code %d\n", rc);
return rc;
}
* [PATCH 07/17] xenpaging: update xch usage
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (5 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 06/17] xenpaging: do not use DPRINTF/ERROR if xch handle is unavailable Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 08/17] xenpaging: make vcpu_sleep_nosync() optional in mem_event_check_ring() Olaf Hering
` (10 subsequent siblings)
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.xc_handle.xch.patch --]
[-- Type: text/plain, Size: 9129 bytes --]
Instead of passing xch around, use the handle from xenpaging_t.
In the updated functions, use a local xch variable.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
tools/xenpaging/policy.h | 3 --
tools/xenpaging/policy_default.c | 4 +-
tools/xenpaging/xenpaging.c | 54 ++++++++++++++++++++-------------------
3 files changed, 32 insertions(+), 29 deletions(-)
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/policy.h
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/policy.h
@@ -29,8 +29,7 @@
int policy_init(xenpaging_t *paging);
-int policy_choose_victim(xc_interface *xch,
- xenpaging_t *paging, domid_t domain_id,
+int policy_choose_victim(xenpaging_t *paging, domid_t domain_id,
xenpaging_victim_t *victim);
void policy_notify_paged_out(domid_t domain_id, unsigned long gfn);
void policy_notify_paged_in(domid_t domain_id, unsigned long gfn);
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/policy_default.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/policy_default.c
@@ -67,10 +67,10 @@ int policy_init(xenpaging_t *paging)
return rc;
}
-int policy_choose_victim(xc_interface *xch,
- xenpaging_t *paging, domid_t domain_id,
+int policy_choose_victim(xenpaging_t *paging, domid_t domain_id,
xenpaging_victim_t *victim)
{
+ struct xc_interface *xch = paging->xc_handle;
unsigned long wrap = current_gfn;
ASSERT(victim != NULL);
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -73,7 +73,7 @@ static void *init_page(void)
return NULL;
}
-xenpaging_t *xenpaging_init(xc_interface **xch_r, domid_t domain_id)
+xenpaging_t *xenpaging_init(domid_t domain_id)
{
xenpaging_t *paging;
xc_interface *xch;
@@ -87,7 +87,6 @@ xenpaging_t *xenpaging_init(xc_interface
goto err_iface;
DPRINTF("xenpaging init\n");
- *xch_r = xch;
/* Allocate memory */
paging = malloc(sizeof(xenpaging_t));
@@ -125,7 +124,7 @@ xenpaging_t *xenpaging_init(xc_interface
mem_event_ring_lock_init(&paging->mem_event);
/* Initialise Xen */
- rc = xc_mem_event_enable(paging->xc_handle, paging->mem_event.domain_id,
+ rc = xc_mem_event_enable(xch, paging->mem_event.domain_id,
paging->mem_event.shared_page,
paging->mem_event.ring_page);
if ( rc != 0 )
@@ -172,7 +171,7 @@ xenpaging_t *xenpaging_init(xc_interface
goto err;
}
- rc = xc_get_platform_info(paging->xc_handle, domain_id,
+ rc = xc_get_platform_info(xch, domain_id,
paging->platform_info);
if ( rc != 1 )
{
@@ -188,7 +187,7 @@ xenpaging_t *xenpaging_init(xc_interface
goto err;
}
- rc = xc_domain_getinfolist(paging->xc_handle, domain_id, 1,
+ rc = xc_domain_getinfolist(xch, domain_id, 1,
paging->domain_info);
if ( rc != 1 )
{
@@ -244,15 +243,18 @@ xenpaging_t *xenpaging_init(xc_interface
return NULL;
}
-int xenpaging_teardown(xc_interface *xch, xenpaging_t *paging)
+int xenpaging_teardown(xenpaging_t *paging)
{
int rc;
+ struct xc_interface *xch;
if ( paging == NULL )
return 0;
+ xch = paging->xc_handle;
+ paging->xc_handle = NULL;
/* Tear down domain paging in Xen */
- rc = xc_mem_event_disable(paging->xc_handle, paging->mem_event.domain_id);
+ rc = xc_mem_event_disable(xch, paging->mem_event.domain_id);
if ( rc != 0 )
{
ERROR("Error tearing down domain paging in xen");
@@ -275,12 +277,11 @@ int xenpaging_teardown(xc_interface *xch
paging->mem_event.xce_handle = -1;
/* Close connection to Xen */
- rc = xc_interface_close(paging->xc_handle);
+ rc = xc_interface_close(xch);
if ( rc != 0 )
{
ERROR("Error closing connection to xen");
}
- paging->xc_handle = NULL;
return 0;
@@ -334,9 +335,10 @@ static int put_response(mem_event_t *mem
return 0;
}
-int xenpaging_evict_page(xc_interface *xch, xenpaging_t *paging,
+int xenpaging_evict_page(xenpaging_t *paging,
xenpaging_victim_t *victim, int fd, int i)
{
+ struct xc_interface *xch = paging->xc_handle;
void *page;
unsigned long gfn;
int ret;
@@ -346,7 +348,7 @@ int xenpaging_evict_page(xc_interface *x
/* Map page */
gfn = victim->gfn;
ret = -EFAULT;
- page = xc_map_foreign_pages(paging->xc_handle, victim->domain_id,
+ page = xc_map_foreign_pages(xch, victim->domain_id,
PROT_READ | PROT_WRITE, &gfn, 1);
if ( page == NULL )
{
@@ -369,7 +371,7 @@ int xenpaging_evict_page(xc_interface *x
munmap(page, PAGE_SIZE);
/* Tell Xen to evict page */
- ret = xc_mem_paging_evict(paging->xc_handle, paging->mem_event.domain_id,
+ ret = xc_mem_paging_evict(xch, paging->mem_event.domain_id,
victim->gfn);
if ( ret != 0 )
{
@@ -407,10 +409,10 @@ static int xenpaging_resume_page(xenpagi
return ret;
}
-static int xenpaging_populate_page(
- xc_interface *xch, xenpaging_t *paging,
+static int xenpaging_populate_page(xenpaging_t *paging,
uint64_t *gfn, int fd, int i)
{
+ struct xc_interface *xch = paging->xc_handle;
unsigned long _gfn;
void *page;
int ret;
@@ -420,7 +422,7 @@ static int xenpaging_populate_page(
do
{
/* Tell Xen to allocate a page for the domain */
- ret = xc_mem_paging_prep(paging->xc_handle, paging->mem_event.domain_id,
+ ret = xc_mem_paging_prep(xch, paging->mem_event.domain_id,
_gfn);
if ( ret != 0 )
{
@@ -439,7 +441,7 @@ static int xenpaging_populate_page(
/* Map page */
ret = -EFAULT;
- page = xc_map_foreign_pages(paging->xc_handle, paging->mem_event.domain_id,
+ page = xc_map_foreign_pages(xch, paging->mem_event.domain_id,
PROT_READ | PROT_WRITE, &_gfn, 1);
*gfn = _gfn;
if ( page == NULL )
@@ -462,15 +464,16 @@ static int xenpaging_populate_page(
return ret;
}
-static int evict_victim(xc_interface *xch, xenpaging_t *paging, domid_t domain_id,
+static int evict_victim(xenpaging_t *paging, domid_t domain_id,
xenpaging_victim_t *victim, int fd, int i)
{
+ struct xc_interface *xch = paging->xc_handle;
int j = 0;
int ret;
do
{
- ret = policy_choose_victim(xch, paging, domain_id, victim);
+ ret = policy_choose_victim(paging, domain_id, victim);
if ( ret != 0 )
{
if ( ret != -ENOSPC )
@@ -483,10 +486,10 @@ static int evict_victim(xc_interface *xc
ret = -EINTR;
goto out;
}
- ret = xc_mem_paging_nominate(paging->xc_handle,
+ ret = xc_mem_paging_nominate(xch,
paging->mem_event.domain_id, victim->gfn);
if ( ret == 0 )
- ret = xenpaging_evict_page(xch, paging, victim, fd, i);
+ ret = xenpaging_evict_page(paging, victim, fd, i);
else
{
if ( j++ % 1000 == 0 )
@@ -536,12 +539,13 @@ int main(int argc, char *argv[])
srand(time(NULL));
/* Initialise domain paging */
- paging = xenpaging_init(&xch, domain_id);
+ paging = xenpaging_init(domain_id);
if ( paging == NULL )
{
fprintf(stderr, "Error initialising paging");
return 1;
}
+ xch = paging->xc_handle;
DPRINTF("starting %s %u %d\n", argv[0], domain_id, num_pages);
@@ -574,7 +578,7 @@ int main(int argc, char *argv[])
memset(victims, 0, sizeof(xenpaging_victim_t) * num_pages);
for ( i = 0; i < num_pages; i++ )
{
- rc = evict_victim(xch, paging, domain_id, &victims[i], fd, i);
+ rc = evict_victim(paging, domain_id, &victims[i], fd, i);
if ( rc == -ENOSPC )
break;
if ( rc == -EINTR )
@@ -627,7 +631,7 @@ int main(int argc, char *argv[])
}
/* Populate the page */
- rc = xenpaging_populate_page(xch, paging, &req.gfn, fd, i);
+ rc = xenpaging_populate_page(paging, &req.gfn, fd, i);
if ( rc != 0 )
{
ERROR("Error populating page");
@@ -648,7 +652,7 @@ int main(int argc, char *argv[])
}
/* Evict a new page to replace the one we just paged in */
- evict_victim(xch, paging, domain_id, &victims[i], fd, i);
+ evict_victim(paging, domain_id, &victims[i], fd, i);
}
else
{
@@ -686,7 +690,7 @@ int main(int argc, char *argv[])
free(victims);
/* Tear down domain paging */
- rc1 = xenpaging_teardown(xch, paging);
+ rc1 = xenpaging_teardown(paging);
if ( rc1 != 0 )
fprintf(stderr, "Error tearing down paging");
* [PATCH 08/17] xenpaging: make vcpu_sleep_nosync() optional in mem_event_check_ring()
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (6 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 07/17] xenpaging: update xch usage Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 09/17] xenpaging: update machine_to_phys_mapping[] during page deallocation Olaf Hering
` (9 subsequent siblings)
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.mem_event_check_ring.no_vcpu_sleep.patch --]
[-- Type: text/plain, Size: 2625 bytes --]
Add a new option to mem_event_check_ring() to disable the
vcpu_sleep_nosync. This is needed for an upcoming patch which sends a
one-way request to the pager.
Also add a micro-optimization: check ring_full first because its value
was just evaluated.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
xen/arch/x86/mm/mem_event.c | 4 ++--
xen/arch/x86/mm/mem_sharing.c | 2 +-
xen/arch/x86/mm/p2m.c | 2 +-
xen/include/asm-x86/mem_event.h | 2 +-
4 files changed, 5 insertions(+), 5 deletions(-)
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/mem_event.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/mem_event.c
@@ -143,7 +143,7 @@ void mem_event_unpause_vcpus(struct doma
vcpu_wake(v);
}
-int mem_event_check_ring(struct domain *d)
+int mem_event_check_ring(struct domain *d, int do_vcpu_sleep)
{
struct vcpu *curr = current;
int free_requests;
@@ -159,7 +159,7 @@ int mem_event_check_ring(struct domain *
}
ring_full = free_requests < MEM_EVENT_RING_THRESHOLD;
- if ( (curr->domain->domain_id == d->domain_id) && ring_full )
+ if ( ring_full && do_vcpu_sleep && (curr->domain->domain_id == d->domain_id) )
{
set_bit(_VPF_mem_event, &curr->pause_flags);
vcpu_sleep_nosync(curr);
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/mem_sharing.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/mem_sharing.c
@@ -321,7 +321,7 @@ static struct page_info* mem_sharing_all
}
/* XXX: Need to reserve a request, not just check the ring! */
- if(mem_event_check_ring(d)) return page;
+ if(mem_event_check_ring(d, 1)) return page;
req.flags |= MEM_EVENT_FLAG_OUT_OF_MEM;
req.gfn = gfn;
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
@@ -2758,7 +2758,7 @@ void p2m_mem_paging_populate(struct p2m_
struct domain *d = p2m->domain;
/* Check that there's space on the ring for this request */
- if ( mem_event_check_ring(d) )
+ if ( mem_event_check_ring(d, 1) )
return;
memset(&req, 0, sizeof(req));
--- xen-unstable.hg-4.1.22459.orig/xen/include/asm-x86/mem_event.h
+++ xen-unstable.hg-4.1.22459/xen/include/asm-x86/mem_event.h
@@ -24,7 +24,7 @@
#ifndef __MEM_EVENT_H__
#define __MEM_EVENT_H__
-int mem_event_check_ring(struct domain *d);
+int mem_event_check_ring(struct domain *d, int do_vcpu_sleep);
void mem_event_put_request(struct domain *d, mem_event_request_t *req);
void mem_event_get_response(struct domain *d, mem_event_response_t *rsp);
void mem_event_unpause_vcpus(struct domain *d);
* [PATCH 09/17] xenpaging: update machine_to_phys_mapping[] during page deallocation
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (7 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 08/17] xenpaging: make vcpu_sleep_nosync() optional in mem_event_check_ring() Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 10/17] xenpaging: update machine_to_phys_mapping[] during page-in Olaf Hering
` (8 subsequent siblings)
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.machine_to_phys_mapping.free_domheap_pages.patch --]
[-- Type: text/plain, Size: 2029 bytes --]
The machine_to_phys_mapping[] array needs updating during page
deallocation. If that page is allocated again, a call to
get_gpfn_from_mfn() will still return an old gfn from another guest.
This will cause trouble because this gfn number has no or different
meaning in the context of the current guest.
This happens when the entire guest ram is paged-out before
xen_vga_populate_vram() runs. Then XENMEM_populate_physmap is called
with gfn 0xff000. A new page is allocated with alloc_domheap_pages.
This new page does not have a gfn yet. However, in
guest_physmap_add_entry() the passed mfn still maps to an old gfn
(perhaps from another, older guest). This old gfn is in paged-out state
in this guest's context and has no mfn anymore. As a result, the ASSERT()
triggers because p2m_is_ram() is true for p2m_ram_paging* types.
If the machine_to_phys_mapping[] array is updated properly, both loops
in guest_physmap_add_entry() turn into no-ops for the new page and the
mfn/gfn mapping will be done at the end of the function.
If XENMEM_add_to_physmap is used with XENMAPSPACE_gmfn,
get_gpfn_from_mfn() will return an apparently valid gfn. As a result,
guest_physmap_remove_page() is called. The ASSERT in p2m_remove_page
triggers because the passed mfn does not match the old mfn for the
passed gfn.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
xen/common/page_alloc.c | 6 ++++++
1 file changed, 6 insertions(+)
--- xen-unstable.hg-4.1.22459.orig/xen/common/page_alloc.c
+++ xen-unstable.hg-4.1.22459/xen/common/page_alloc.c
@@ -1200,9 +1200,15 @@ void free_domheap_pages(struct page_info
{
int i, drop_dom_ref;
struct domain *d = page_get_owner(pg);
+ unsigned long mfn;
ASSERT(!in_irq());
+ /* this page is not a gfn anymore */
+ mfn = page_to_mfn(pg);
+ for ( i = 0; i < (1 << order); i++ )
+ set_gpfn_from_mfn(mfn + i, INVALID_M2P_ENTRY);
+
if ( unlikely(is_xen_heap_page(pg)) )
{
/* NB. May recursively lock from relinquish_memory(). */
* [PATCH 10/17] xenpaging: update machine_to_phys_mapping[] during page-in
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (8 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 09/17] xenpaging: update machine_to_phys_mapping[] during page deallocation Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-14 22:58 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 11/17] xenpaging: drop paged pages in guest_remove_page Olaf Hering
` (7 subsequent siblings)
17 siblings, 1 reply; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.machine_to_phys_mapping.patch --]
[-- Type: text/plain, Size: 642 bytes --]
Update the machine_to_phys_mapping[] array during page-in. The gfn is
now backed by a different page, and the array still has
INVALID_M2P_ENTRY at that index.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
xen/arch/x86/mm/p2m.c | 1 +
1 file changed, 1 insertion(+)
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
@@ -2827,6 +2827,7 @@ void p2m_mem_paging_resume(struct p2m_do
mfn = gfn_to_mfn(p2m, rsp.gfn, &p2mt);
p2m_lock(p2m);
set_p2m_entry(p2m, rsp.gfn, mfn, 0, p2m_ram_rw);
+ set_gpfn_from_mfn(mfn_x(mfn), gfn);
audit_p2m(p2m, 1);
p2m_unlock(p2m);
* Re: [PATCH 10/17] xenpaging: update machine_to_phys_mapping[] during page-in
2010-12-06 20:59 ` [PATCH 10/17] xenpaging: update machine_to_phys_mapping[] during page-in Olaf Hering
@ 2010-12-14 22:58 ` Olaf Hering
2010-12-15 10:47 ` Tim Deegan
0 siblings, 1 reply; 27+ messages in thread
From: Olaf Hering @ 2010-12-14 22:58 UTC (permalink / raw)
To: xen-devel
On Mon, Dec 06, Olaf Hering wrote:
> Update the machine_to_phys_mapping[] array during page-in. The gfn is
> now at a different page and the array has still INVALID_M2P_ENTRY in the
> index.
Does anyone know what the "best" location for this array update is?
p2m_mem_paging_prep() allocates a new page for the guest and assigns a
gfn to that mfn. So in theory the array could be updated right away,
even if the gfn will still have a p2m_ram_paging_* type until
p2m_mem_paging_resume() is called.
Olaf
> --- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
> +++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
> @@ -2827,6 +2827,7 @@ void p2m_mem_paging_resume(struct p2m_do
> mfn = gfn_to_mfn(p2m, rsp.gfn, &p2mt);
> p2m_lock(p2m);
> set_p2m_entry(p2m, rsp.gfn, mfn, 0, p2m_ram_rw);
> + set_gpfn_from_mfn(mfn_x(mfn), gfn);
> audit_p2m(p2m, 1);
> p2m_unlock(p2m);
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel
>
* Re: [PATCH 10/17] xenpaging: update machine_to_phys_mapping[] during page-in
2010-12-14 22:58 ` Olaf Hering
@ 2010-12-15 10:47 ` Tim Deegan
0 siblings, 0 replies; 27+ messages in thread
From: Tim Deegan @ 2010-12-15 10:47 UTC (permalink / raw)
To: Olaf Hering; +Cc: xen-devel@lists.xensource.com
At 22:58 +0000 on 14 Dec (1292367517), Olaf Hering wrote:
> On Mon, Dec 06, Olaf Hering wrote:
>
> > Update the machine_to_phys_mapping[] array during page-in. The gfn is
> > now at a different page and the array has still INVALID_M2P_ENTRY in the
> > index.
>
> Does anyone know what the "best" location for this array update is?
> p2m_mem_paging_prep() allocates a new page for the guest and assigns a
> gfn to that mfn. So in theory the array could be updated right away,
> even if the gfn will still have a p2m_ram_paging_* type until
> p2m_mem_paging_resume() is called.
I slightly prefer it where you have put it in this patch, but either
would probably be fine.
Tim.
> Olaf
>
> > --- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
> > +++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
> > @@ -2827,6 +2827,7 @@ void p2m_mem_paging_resume(struct p2m_do
> > mfn = gfn_to_mfn(p2m, rsp.gfn, &p2mt);
> > p2m_lock(p2m);
> > set_p2m_entry(p2m, rsp.gfn, mfn, 0, p2m_ram_rw);
> > + set_gpfn_from_mfn(mfn_x(mfn), gfn);
> > audit_p2m(p2m, 1);
> > p2m_unlock(p2m);
> >
--
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
* [PATCH 11/17] xenpaging: drop paged pages in guest_remove_page
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (9 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 10/17] xenpaging: update machine_to_phys_mapping[] during page-in Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user Olaf Hering
` (6 subsequent siblings)
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.guest_remove_page.patch --]
[-- Type: text/plain, Size: 7834 bytes --]
Simply drop paged-out pages in guest_remove_page(), and notify xenpaging
to drop its reference to the gfn. If the ring is full, the page will
remain in paged-out state in xenpaging. This is not an issue; it just
means this gfn will not be nominated again.
This patch depends on an earlier patch for mem_event_check_ring(),
which adds an additional option to mem_event_check_ring().
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
v3:
send one-way notification to pager to release page
use new mem_event_check_ring() feature to not pause vcpu when ring is full
v2:
resume dropped page to unpause vcpus
tools/xenpaging/xenpaging.c | 46 ++++++++++++++++++++++------------
xen/arch/x86/mm/p2m.c | 54 +++++++++++++++++++++++++++++++----------
xen/common/memory.c | 6 ++++
xen/include/asm-x86/p2m.h | 4 +++
xen/include/public/mem_event.h | 1
5 files changed, 83 insertions(+), 28 deletions(-)
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/xenpaging.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/xenpaging.c
@@ -386,6 +386,12 @@ int xenpaging_evict_page(xenpaging_t *pa
return ret;
}
+static void xenpaging_drop_page(xenpaging_t *paging, unsigned long gfn)
+{
+ /* Notify policy of page being dropped */
+ policy_notify_paged_in(paging->mem_event.domain_id, gfn);
+}
+
static int xenpaging_resume_page(xenpaging_t *paging, mem_event_response_t *rsp, int notify_policy)
{
int ret;
@@ -630,25 +636,33 @@ int main(int argc, char *argv[])
goto out;
}
- /* Populate the page */
- rc = xenpaging_populate_page(paging, &req.gfn, fd, i);
- if ( rc != 0 )
+ if ( req.flags & MEM_EVENT_FLAG_DROP_PAGE )
{
- ERROR("Error populating page");
- goto out;
+ DPRINTF("Dropping page %"PRIx64"\n", req.gfn);
+ xenpaging_drop_page(paging, req.gfn);
}
-
- /* Prepare the response */
- rsp.gfn = req.gfn;
- rsp.p2mt = req.p2mt;
- rsp.vcpu_id = req.vcpu_id;
- rsp.flags = req.flags;
-
- rc = xenpaging_resume_page(paging, &rsp, 1);
- if ( rc != 0 )
+ else
{
- ERROR("Error resuming page");
- goto out;
+ /* Populate the page */
+ rc = xenpaging_populate_page(paging, &req.gfn, fd, i);
+ if ( rc != 0 )
+ {
+ ERROR("Error populating page");
+ goto out;
+ }
+
+ /* Prepare the response */
+ rsp.gfn = req.gfn;
+ rsp.p2mt = req.p2mt;
+ rsp.vcpu_id = req.vcpu_id;
+ rsp.flags = req.flags;
+
+ rc = xenpaging_resume_page(paging, &rsp, 1);
+ if ( rc != 0 )
+ {
+ ERROR("Error resuming page");
+ goto out;
+ }
}
/* Evict a new page to replace the one we just paged in */
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
@@ -2194,12 +2194,15 @@ p2m_remove_page(struct p2m_domain *p2m,
P2M_DEBUG("removing gfn=%#lx mfn=%#lx\n", gfn, mfn);
- for ( i = 0; i < (1UL << page_order); i++ )
+ if ( mfn_valid(_mfn(mfn)) )
{
- mfn_return = p2m->get_entry(p2m, gfn + i, &t, p2m_query);
- if ( !p2m_is_grant(t) )
- set_gpfn_from_mfn(mfn+i, INVALID_M2P_ENTRY);
- ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) );
+ for ( i = 0; i < (1UL << page_order); i++ )
+ {
+ mfn_return = p2m->get_entry(p2m, gfn + i, &t, p2m_query);
+ if ( !p2m_is_grant(t) )
+ set_gpfn_from_mfn(mfn+i, INVALID_M2P_ENTRY);
+ ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) );
+ }
}
set_p2m_entry(p2m, gfn, _mfn(INVALID_MFN), page_order, p2m_invalid);
}
@@ -2750,6 +2753,30 @@ int p2m_mem_paging_evict(struct p2m_doma
return 0;
}
+void p2m_mem_paging_drop_page(struct p2m_domain *p2m, unsigned long gfn)
+{
+ struct vcpu *v = current;
+ mem_event_request_t req;
+ struct domain *d = p2m->domain;
+
+ /* Check that there's space on the ring for this request */
+ if ( mem_event_check_ring(d, 0) )
+ {
+ /* This just means this gfn will not be paged again */
+ gdprintk(XENLOG_ERR, "dropped gfn %lx not released in xenpaging\n", gfn);
+ }
+ else
+ {
+ /* Send release notification to pager */
+ memset(&req, 0, sizeof(req));
+ req.flags |= MEM_EVENT_FLAG_DROP_PAGE;
+ req.gfn = gfn;
+ req.vcpu_id = v->vcpu_id;
+
+ mem_event_put_request(d, &req);
+ }
+}
+
void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn)
{
struct vcpu *v = current;
@@ -2823,13 +2850,16 @@ void p2m_mem_paging_resume(struct p2m_do
/* Pull the response off the ring */
mem_event_get_response(d, &rsp);
- /* Fix p2m entry */
- mfn = gfn_to_mfn(p2m, rsp.gfn, &p2mt);
- p2m_lock(p2m);
- set_p2m_entry(p2m, rsp.gfn, mfn, 0, p2m_ram_rw);
- set_gpfn_from_mfn(mfn_x(mfn), gfn);
- audit_p2m(p2m, 1);
- p2m_unlock(p2m);
+ /* Fix p2m entry if the page was not dropped */
+ if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
+ {
+ mfn = gfn_to_mfn(p2m, rsp.gfn, &p2mt);
+ p2m_lock(p2m);
+ set_p2m_entry(p2m, rsp.gfn, mfn, 0, p2m_ram_rw);
+ set_gpfn_from_mfn(mfn_x(mfn), rsp.gfn);
+ audit_p2m(p2m, 1);
+ p2m_unlock(p2m);
+ }
/* Unpause domain */
if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
--- xen-unstable.hg-4.1.22459.orig/xen/common/memory.c
+++ xen-unstable.hg-4.1.22459/xen/common/memory.c
@@ -163,6 +163,12 @@ int guest_remove_page(struct domain *d,
#ifdef CONFIG_X86
mfn = mfn_x(gfn_to_mfn(p2m_get_hostp2m(d), gmfn, &p2mt));
+ if ( unlikely(p2m_is_paging(p2mt)) )
+ {
+ guest_physmap_remove_page(d, gmfn, mfn, 0);
+ p2m_mem_paging_drop_page(p2m_get_hostp2m(d), gmfn);
+ return 1;
+ }
#else
mfn = gmfn_to_mfn(d, gmfn);
#endif
--- xen-unstable.hg-4.1.22459.orig/xen/include/asm-x86/p2m.h
+++ xen-unstable.hg-4.1.22459/xen/include/asm-x86/p2m.h
@@ -471,6 +471,8 @@ int set_shared_p2m_entry(struct p2m_doma
int p2m_mem_paging_nominate(struct p2m_domain *p2m, unsigned long gfn);
/* Evict a frame */
int p2m_mem_paging_evict(struct p2m_domain *p2m, unsigned long gfn);
+/* Tell xenpaging to drop a paged out frame */
+void p2m_mem_paging_drop_page(struct p2m_domain *p2m, unsigned long gfn);
/* Start populating a paged out frame */
void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn);
/* Prepare the p2m for paging a frame in */
@@ -478,6 +480,8 @@ int p2m_mem_paging_prep(struct p2m_domai
/* Resume normal operation (in case a domain was paused) */
void p2m_mem_paging_resume(struct p2m_domain *p2m);
#else
+static inline void p2m_mem_paging_drop_page(struct p2m_domain *p2m, unsigned long gfn)
+{ }
static inline void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn)
{ }
#endif
--- xen-unstable.hg-4.1.22459.orig/xen/include/public/mem_event.h
+++ xen-unstable.hg-4.1.22459/xen/include/public/mem_event.h
@@ -37,6 +37,7 @@
#define MEM_EVENT_FLAG_VCPU_PAUSED (1 << 0)
#define MEM_EVENT_FLAG_DOM_PAUSED (1 << 1)
#define MEM_EVENT_FLAG_OUT_OF_MEM (1 << 2)
+#define MEM_EVENT_FLAG_DROP_PAGE (1 << 3)
typedef struct mem_event_shared_page {
* [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (10 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 11/17] xenpaging: drop paged pages in guest_remove_page Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-07 9:27 ` Jan Beulich
2010-12-15 11:35 ` Keir Fraser
2010-12-06 20:59 ` [PATCH 13/17] xenpaging: page only pagetables for debugging Olaf Hering
` (5 subsequent siblings)
17 siblings, 2 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.HVMCOPY_gfn_paged_out.patch --]
[-- Type: text/plain, Size: 7100 bytes --]
copy_from_user_hvm can fail when __hvm_copy returns
HVMCOPY_gfn_paged_out for a referenced gfn, for example during a
guest's pagetable walk. This has to be handled in some way.
Use the recently added wait_queue feature to preempt the current vcpu
while a page is being populated, then resume execution later once the
page has been paged back in. This is only done if the active domain
needs to access the page, because in that case the vcpu would leave the
active state anyway.
This patch adds a return code to p2m_mem_paging_populate() to indicate
to the caller that the page is ready, so it can retry the gfn_to_mfn call.
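The populate/retry contract can be sketched with a tiny stand-in model
(illustrative Python only; FakeP2M, map_gfn and the state strings are
invented for this sketch and are not Xen APIs):

```python
# Toy model of the populate/retry contract (not Xen code): populate()
# returns 1 only when the calling vcpu belongs to the paged domain and
# therefore waited until the page came back; the caller then retries
# the translation. A cross-domain caller gets 0 and must not spin.

PAGED, PRESENT = "paged", "present"

class FakeP2M:
    def __init__(self, paged_gfns):
        self.state = {g: PAGED for g in paged_gfns}

    def lookup(self, gfn):
        return self.state.get(gfn, PRESENT)

    def populate(self, gfn, same_domain=True):
        if not same_domain:
            return 0                  # request queued, caller gets no wait
        self.state[gfn] = PRESENT     # stands in for wait_event() until paged in
        return 1                      # page present on return -> retry lookup

def map_gfn(p2m, gfn):
    while p2m.lookup(gfn) == PAGED:
        if not p2m.populate(gfn):
            return None               # propagate the paged-out error
    return "mfn-of-%d" % gfn
```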
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
xen/arch/x86/hvm/hvm.c | 3 ++-
xen/arch/x86/mm/guest_walk.c | 5 +++--
xen/arch/x86/mm/hap/guest_walk.c | 10 ++++++----
xen/arch/x86/mm/p2m.c | 19 ++++++++++++++-----
xen/common/domain.c | 1 +
xen/include/asm-x86/p2m.h | 7 ++++---
xen/include/xen/sched.h | 3 +++
7 files changed, 33 insertions(+), 15 deletions(-)
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/hvm/hvm.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/hvm/hvm.c
@@ -1939,7 +1939,8 @@ static enum hvm_copy_result __hvm_copy(
if ( p2m_is_paging(p2mt) )
{
- p2m_mem_paging_populate(p2m, gfn);
+ if ( p2m_mem_paging_populate(p2m, gfn) )
+ continue;
return HVMCOPY_gfn_paged_out;
}
if ( p2m_is_shared(p2mt) )
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/guest_walk.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/guest_walk.c
@@ -93,11 +93,12 @@ static inline void *map_domain_gfn(struc
uint32_t *rc)
{
/* Translate the gfn, unsharing if shared */
+retry:
*mfn = gfn_to_mfn_unshare(p2m, gfn_x(gfn), p2mt, 0);
if ( p2m_is_paging(*p2mt) )
{
- p2m_mem_paging_populate(p2m, gfn_x(gfn));
-
+ if ( p2m_mem_paging_populate(p2m, gfn_x(gfn)) )
+ goto retry;
*rc = _PAGE_PAGED;
return NULL;
}
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/hap/guest_walk.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/hap/guest_walk.c
@@ -46,12 +46,13 @@ unsigned long hap_gva_to_gfn(GUEST_PAGIN
struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
/* Get the top-level table's MFN */
+retry_cr3:
cr3 = v->arch.hvm_vcpu.guest_cr[3];
top_mfn = gfn_to_mfn_unshare(p2m, cr3 >> PAGE_SHIFT, &p2mt, 0);
if ( p2m_is_paging(p2mt) )
{
- p2m_mem_paging_populate(p2m, cr3 >> PAGE_SHIFT);
-
+ if ( p2m_mem_paging_populate(p2m, cr3 >> PAGE_SHIFT) )
+ goto retry_cr3;
pfec[0] = PFEC_page_paged;
return INVALID_GFN;
}
@@ -79,11 +80,12 @@ unsigned long hap_gva_to_gfn(GUEST_PAGIN
if ( missing == 0 )
{
gfn_t gfn = guest_l1e_get_gfn(gw.l1e);
+retry_missing:
gfn_to_mfn_unshare(p2m, gfn_x(gfn), &p2mt, 0);
if ( p2m_is_paging(p2mt) )
{
- p2m_mem_paging_populate(p2m, gfn_x(gfn));
-
+ if ( p2m_mem_paging_populate(p2m, gfn_x(gfn)) )
+ goto retry_missing;
pfec[0] = PFEC_page_paged;
return INVALID_GFN;
}
--- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
+++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
@@ -2777,16 +2777,17 @@ void p2m_mem_paging_drop_page(struct p2m
}
}
-void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn)
+int p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn)
{
struct vcpu *v = current;
mem_event_request_t req;
p2m_type_t p2mt;
struct domain *d = p2m->domain;
+ int ret = 0;
/* Check that there's space on the ring for this request */
if ( mem_event_check_ring(d, 1) )
- return;
+ return ret;
memset(&req, 0, sizeof(req));
@@ -2805,13 +2806,13 @@ void p2m_mem_paging_populate(struct p2m_
/* Pause domain */
if ( v->domain->domain_id == d->domain_id )
{
- vcpu_pause_nosync(v);
req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
+ ret = 1;
}
else if ( p2mt != p2m_ram_paging_out && p2mt != p2m_ram_paged )
{
/* gfn is already on its way back and vcpu is not paused */
- return;
+ goto populate_out;
}
/* Send request to pager */
@@ -2820,6 +2821,14 @@ void p2m_mem_paging_populate(struct p2m_
req.vcpu_id = v->vcpu_id;
mem_event_put_request(d, &req);
+
+ if ( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
+ {
+ wait_event(d->wq, mfn_valid(gfn_to_mfn(p2m, gfn, &p2mt)) && !p2m_is_paging(p2mt));
+ }
+
+populate_out:
+ return ret;
}
int p2m_mem_paging_prep(struct p2m_domain *p2m, unsigned long gfn)
@@ -2863,7 +2872,7 @@ void p2m_mem_paging_resume(struct p2m_do
/* Unpause domain */
if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
- vcpu_unpause(d->vcpu[rsp.vcpu_id]);
+ wake_up(&d->wq);
/* Unpause any domains that were paused because the ring was full */
mem_event_unpause_vcpus(d);
--- xen-unstable.hg-4.1.22459.orig/xen/common/domain.c
+++ xen-unstable.hg-4.1.22459/xen/common/domain.c
@@ -244,6 +244,7 @@ struct domain *domain_create(
spin_lock_init(&d->node_affinity_lock);
spin_lock_init(&d->shutdown_lock);
+ init_waitqueue_head(&d->wq);
d->shutdown_code = -1;
if ( domcr_flags & DOMCRF_hvm )
--- xen-unstable.hg-4.1.22459.orig/xen/include/asm-x86/p2m.h
+++ xen-unstable.hg-4.1.22459/xen/include/asm-x86/p2m.h
@@ -474,7 +474,8 @@ int p2m_mem_paging_evict(struct p2m_doma
/* Tell xenpaging to drop a paged out frame */
void p2m_mem_paging_drop_page(struct p2m_domain *p2m, unsigned long gfn);
/* Start populating a paged out frame */
-void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn);
+/* retval 1 means the page is present on return */
+int p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn);
/* Prepare the p2m for paging a frame in */
int p2m_mem_paging_prep(struct p2m_domain *p2m, unsigned long gfn);
/* Resume normal operation (in case a domain was paused) */
@@ -482,8 +483,8 @@ void p2m_mem_paging_resume(struct p2m_do
#else
static inline void p2m_mem_paging_drop_page(struct p2m_domain *p2m, unsigned long gfn)
{ }
-static inline void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn)
-{ }
+static inline int p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn)
+{ return 0; }
#endif
struct page_info *p2m_alloc_ptp(struct p2m_domain *p2m, unsigned long type);
--- xen-unstable.hg-4.1.22459.orig/xen/include/xen/sched.h
+++ xen-unstable.hg-4.1.22459/xen/include/xen/sched.h
@@ -26,6 +26,7 @@
#include <xen/cpumask.h>
#include <xen/nodemask.h>
#include <xen/multicall.h>
+#include <xen/wait.h>
#ifdef CONFIG_COMPAT
#include <compat/vcpu.h>
@@ -332,6 +333,8 @@ struct domain
nodemask_t node_affinity;
unsigned int last_alloc_node;
spinlock_t node_affinity_lock;
+
+ struct waitqueue_head wq;
};
struct domain_setup_info
* Re: [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user
2010-12-06 20:59 ` [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user Olaf Hering
@ 2010-12-07 9:27 ` Jan Beulich
2010-12-07 9:45 ` Olaf Hering
2010-12-15 11:35 ` Keir Fraser
1 sibling, 1 reply; 27+ messages in thread
From: Jan Beulich @ 2010-12-07 9:27 UTC (permalink / raw)
To: Olaf Hering; +Cc: xen-devel
>>> On 06.12.10 at 21:59, Olaf Hering <olaf@aepfle.de> wrote:
> --- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/guest_walk.c
> +++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/guest_walk.c
> @@ -93,11 +93,12 @@ static inline void *map_domain_gfn(struc
> uint32_t *rc)
> {
> /* Translate the gfn, unsharing if shared */
> +retry:
> *mfn = gfn_to_mfn_unshare(p2m, gfn_x(gfn), p2mt, 0);
> if ( p2m_is_paging(*p2mt) )
> {
> - p2m_mem_paging_populate(p2m, gfn_x(gfn));
> -
> + if ( p2m_mem_paging_populate(p2m, gfn_x(gfn)) )
> + goto retry;
> *rc = _PAGE_PAGED;
> return NULL;
> }
Is this retry loop (and similar ones later in the patch) guaranteed
to be bounded in some way?
> --- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/p2m.c
> +++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/p2m.c
> @@ -2805,13 +2806,13 @@ void p2m_mem_paging_populate(struct p2m_
> /* Pause domain */
> if ( v->domain->domain_id == d->domain_id )
> {
> - vcpu_pause_nosync(v);
> req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
> + ret = 1;
> }
> else if ( p2mt != p2m_ram_paging_out && p2mt != p2m_ram_paged )
> {
> /* gfn is already on its way back and vcpu is not paused */
> - return;
> + goto populate_out;
Do you really need a goto here (i.e. do you foresee stuff getting
added between the label and the return below)?
> }
>
> /* Send request to pager */
> @@ -2820,6 +2821,14 @@ void p2m_mem_paging_populate(struct p2m_
> req.vcpu_id = v->vcpu_id;
>
> mem_event_put_request(d, &req);
> +
> + if ( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
> + {
> + wait_event(d->wq, mfn_valid(gfn_to_mfn(p2m, gfn, &p2mt)) && !p2m_is_paging(p2mt));
> + }
> +
> +populate_out:
> + return ret;
> }
>
> int p2m_mem_paging_prep(struct p2m_domain *p2m, unsigned long gfn)
> --- xen-unstable.hg-4.1.22459.orig/xen/include/asm-x86/p2m.h
> +++ xen-unstable.hg-4.1.22459/xen/include/asm-x86/p2m.h
> @@ -474,7 +474,8 @@ int p2m_mem_paging_evict(struct p2m_doma
> /* Tell xenpaging to drop a paged out frame */
> void p2m_mem_paging_drop_page(struct p2m_domain *p2m, unsigned long gfn);
> /* Start populating a paged out frame */
> -void p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn);
> +/* retval 1 means the page is present on return */
> +int p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn);
Isn't this a case where you absolutely need the return value checked?
If so, you will want to add __must_check here.
Jan
* Re: [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user
2010-12-07 9:27 ` Jan Beulich
@ 2010-12-07 9:45 ` Olaf Hering
0 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-07 9:45 UTC (permalink / raw)
To: Jan Beulich; +Cc: xen-devel
On Tue, Dec 07, Jan Beulich wrote:
> >>> On 06.12.10 at 21:59, Olaf Hering <olaf@aepfle.de> wrote:
> > --- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/guest_walk.c
> > +++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/guest_walk.c
> > @@ -93,11 +93,12 @@ static inline void *map_domain_gfn(struc
> > uint32_t *rc)
> > {
> > /* Translate the gfn, unsharing if shared */
> > +retry:
> > *mfn = gfn_to_mfn_unshare(p2m, gfn_x(gfn), p2mt, 0);
> > if ( p2m_is_paging(*p2mt) )
> > {
> > - p2m_mem_paging_populate(p2m, gfn_x(gfn));
> > -
> > + if ( p2m_mem_paging_populate(p2m, gfn_x(gfn)) )
> > + goto retry;
> > *rc = _PAGE_PAGED;
> > return NULL;
> > }
>
> Is this retry loop (and similar ones later in the patch) guaranteed
> to be bounded in some way?
This needs to be fixed, yes.
For the plain __hvm_copy case, with nothing else being modified, the
'return HVMCOPY_gfn_paged_out' could be just a 'continue'. But even
then, something needs to break the loop.
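One possible shape for such a bound (a hedged sketch, not part of the
patch; MAX_RETRIES and the helper callables are invented here):

```python
# Sketch of a bounded retry loop for the __hvm_copy case discussed above
# (illustrative, not Xen code): give up after a fixed number of attempts
# and fall back to the HVMCOPY_gfn_paged_out error path.

MAX_RETRIES = 16  # hypothetical bound

def hvm_copy(lookup, populate, gfn):
    for _ in range(MAX_RETRIES):
        if lookup(gfn) != "paged":
            return "ok"
        if not populate(gfn):
            break                     # request only queued; don't spin
    return "HVMCOPY_gfn_paged_out"
```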
> > /* gfn is already on its way back and vcpu is not paused */
> > - return;
> > + goto populate_out;
>
> Do you really need a goto here (i.e. do you foresee stuff getting
> added between the label and the return below)?
That's something for my debug patch; I have a trace_var at the end of
each function.
> > +/* retval 1 means the page is present on return */
> > +int p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn);
>
> Isn't this a case where you absolutely need the return value checked?
> If so, you will want to add __must_check here.
Yes, that would be a good addition.
Maybe the wait_event/wake_up could be done unconditionally, independent
of whether the p2m domain differs from the vcpu's domain.
Olaf
>
* Re: [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user
2010-12-06 20:59 ` [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user Olaf Hering
2010-12-07 9:27 ` Jan Beulich
@ 2010-12-15 11:35 ` Keir Fraser
2010-12-15 13:51 ` Olaf Hering
1 sibling, 1 reply; 27+ messages in thread
From: Keir Fraser @ 2010-12-15 11:35 UTC (permalink / raw)
To: Olaf Hering, xen-devel
On 06/12/2010 20:59, "Olaf Hering" <olaf@aepfle.de> wrote:
> mem_event_put_request(d, &req);
> +
> + if ( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
> + {
> + wait_event(d->wq, mfn_valid(gfn_to_mfn(p2m, gfn, &p2mt)) &&
> !p2m_is_paging(p2mt));
> + }
> +
This I find interesting. Do you not race the xenpaging daemon satisfying
your page-in request, but then very quickly paging it out again? In which
case you might never wake up!
I think the condition you wait on should be for a response to your paging
request. A wake_up() alone is not really sufficient; you need some kind of
explicit flagging to the vcpu too. Could the paging daemon stick a response
in a shared ring, or otherwise explicitly flag to this vcpu that its
request has been fully satisfied and it's time to wake up and retry its
operation? Well, really that's a rhetorical question, because that is
exactly what you need to implement for this waitqueue strategy to work
properly!
-- Keir
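The distinction Keir draws can be made concrete with a purely
illustrative sketch (plain Python threads, not Xen code): a sleeper
that waits on an explicit, latched per-request response cannot miss the
wake-up, whereas one that re-checks the page's current state can.

```python
# Sketch of the suggested fix: the vcpu blocks on a response to *its own*
# request, set exactly once by the pager, instead of re-checking the page
# state (which may already be paged out again by the time it runs).

import threading

class PageInRequest:
    def __init__(self, gfn):
        self.gfn = gfn
        self.done = threading.Event()   # latched: set once, never cleared

    def satisfy(self):                  # pager side: page-in finished
        self.done.set()

    def wait(self, timeout=5.0):        # vcpu side: wake on *this* request
        return self.done.wait(timeout)

req = PageInRequest(gfn=42)
pager = threading.Thread(target=req.satisfy)
pager.start()
ok = req.wait()     # True even if the frame is paged out again afterwards
pager.join()
assert ok
```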
* Re: [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user
2010-12-15 11:35 ` Keir Fraser
@ 2010-12-15 13:51 ` Olaf Hering
2010-12-15 14:08 ` Keir Fraser
0 siblings, 1 reply; 27+ messages in thread
From: Olaf Hering @ 2010-12-15 13:51 UTC (permalink / raw)
To: Keir Fraser; +Cc: xen-devel
On Wed, Dec 15, Keir Fraser wrote:
> On 06/12/2010 20:59, "Olaf Hering" <olaf@aepfle.de> wrote:
>
> > mem_event_put_request(d, &req);
> > +
> > + if ( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
> > + {
> > + wait_event(d->wq, mfn_valid(gfn_to_mfn(p2m, gfn, &p2mt)) &&
> > !p2m_is_paging(p2mt));
> > + }
> > +
>
> This I find interesting. Do you not race the xenpaging daemon satisfying
> your page-in request, but then very quickly paging it out again? In which
> case you might never wake up!
That probably depends on the MRU size in the xenpaging policy. Right
now a lot of page-out/page-in activity will happen before the gfn is
nominated again.
> I think the condition you wait on should be for a response to your paging
> request. A wake_up() alone is not really sufficient; you need some kind of
> explicit flagging to the vcpu too. Could the paging daemon stick a response
> in a shared ring, or otherwise explicitly flag to this vcpu that its
> request has been fully satisfied and it's time to wake up and retry its
> operation? Well, really that's a rhetorical question, because that is
> exactly what you need to implement for this waitqueue strategy to work
> properly!
Yes, there needs to be some reliable event which the vcpu has to pick up.
I will return to work on this issue, but most likely not this year anymore.
Olaf
* Re: [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user
2010-12-15 13:51 ` Olaf Hering
@ 2010-12-15 14:08 ` Keir Fraser
0 siblings, 0 replies; 27+ messages in thread
From: Keir Fraser @ 2010-12-15 14:08 UTC (permalink / raw)
To: Olaf Hering; +Cc: xen-devel
On 15/12/2010 13:51, "Olaf Hering" <olaf@aepfle.de> wrote:
>> I think the condition you wait on should be for a response to your paging
>> request. A wake_up() alone is not really sufficient; you need some kind of
>> explicit flagging to the vcpu too. Could the paging daemon stick a response
>> in a shared ring, or otherwise explicitly flag to this vcpu that it's
>> request has been fully satisfied and it's time to wake up and retry its
>> operation? Well, really that's a rhetorical question, because that is
>> exactly what you need to implement for this waitqueue strategy to work
>> properly!
>
> Yes, there needs to be some reliable event which the vcpu has to pick up.
> I will return to work on this issue, but most likely not this year anymore.
This is all bugfix stuff which can be slipped into 4.1 during feature
freeze. Also, what doesn't get done in time for 4.1.0 can go into 4.1.1
instead, which will likely be 6-8 weeks later.
-- Keir
* [PATCH 13/17] xenpaging: page only pagetables for debugging
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (11 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 12/17] xenpaging: handle HVMCOPY_gfn_paged_out in copy_from/to_user Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 14/17] xenpaging: prevent page-out of first 16MB Olaf Hering
` (4 subsequent siblings)
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.page_pagetables.patch --]
[-- Type: text/plain, Size: 845 bytes --]
Page out only pagetables with a Linux guest; needed to exercise the __hvm_copy code paths
---
tools/xenpaging/policy_default.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/policy_default.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/policy_default.c
@@ -26,7 +26,7 @@
#include "policy.h"
-#define MRU_SIZE (1024 * 16)
+#define MRU_SIZE (1 << 4)
static unsigned long mru[MRU_SIZE];
@@ -60,8 +60,11 @@ int policy_init(xenpaging_t *paging)
for ( i = 0; i < MRU_SIZE; i++ )
mru[i] = INVALID_MFN;
- /* Don't page out page 0 */
- set_bit(0, bitmap);
+ /* Leave a hole for pagetables */
+ for ( i = 0; i < max_pages; i++ )
+ set_bit(i, bitmap);
+ for ( i = 0x1800; i < 0x18ff; i++ )
+ clear_bit(i, bitmap);
out:
return rc;
* [PATCH 14/17] xenpaging: prevent page-out of first 16MB
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (12 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 13/17] xenpaging: page only pagetables for debugging Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 15/17] xenpaging: start xenpaging via config option Olaf Hering
` (3 subsequent siblings)
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.blacklist.patch --]
[-- Type: text/plain, Size: 824 bytes --]
This is more a workaround than a bugfix:
don't page out the first 16MB of memory.
If xenpaging removes pages while the BIOS is still going through its
initialization, crashes will occur because the BIOS has no support for
xenpaging.
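As a quick sanity check of the arithmetic in the hunk below (assuming
the usual 4 KiB page size):

```python
# Illustrative check, not Xen code: the workaround marks every gfn in the
# first 16 MiB as unpageable, i.e. (16*1024*1024)/4096 == 4096 pages.
PAGE_SIZE = 4096
reserved_pages = (16 * 1024 * 1024) // PAGE_SIZE
bitmap = set(range(reserved_pages))     # models set_bit(i, bitmap)
assert reserved_pages == 4096
assert 4095 in bitmap and 4096 not in bitmap
```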
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
tools/xenpaging/policy_default.c | 4 ++++
1 file changed, 4 insertions(+)
--- xen-unstable.hg-4.1.22459.orig/tools/xenpaging/policy_default.c
+++ xen-unstable.hg-4.1.22459/tools/xenpaging/policy_default.c
@@ -60,6 +60,10 @@ int policy_init(xenpaging_t *paging)
for ( i = 0; i < MRU_SIZE; i++ )
mru[i] = INVALID_MFN;
+ /* Don't page out first 16MB */
+ for ( i = 0; i < ((16*1024*1024)/4096); i++ )
+ set_bit(i, bitmap);
+
/* Leave a hole for pagetables */
for ( i = 0; i < max_pages; i++ )
set_bit(i, bitmap);
* [PATCH 15/17] xenpaging: start xenpaging via config option
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (13 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 14/17] xenpaging: prevent page-out of first 16MB Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 16/17] xenpaging: add dynamic startup delay for xenpaging Olaf Hering
` (2 subsequent siblings)
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.autostart.patch --]
[-- Type: text/plain, Size: 9729 bytes --]
Start xenpaging via config option.
TODO: make it actually work with xen-unstable ('None' is passed as size arg?)
TODO: add config option for different pagefile directory
TODO: add libxl support
TODO: parse config values like 42K, 42M, 42G, 42%
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
v2:
unlink logfile instead of truncating it.
allows hardlinking for further inspection
tools/examples/xmexample.hvm | 3 +
tools/python/README.XendConfig | 1
tools/python/README.sxpcfg | 1
tools/python/xen/xend/XendConfig.py | 3 +
tools/python/xen/xend/XendDomainInfo.py | 6 ++
tools/python/xen/xend/image.py | 91 ++++++++++++++++++++++++++++++++
tools/python/xen/xm/create.py | 5 +
tools/python/xen/xm/xenapi_create.py | 1
8 files changed, 111 insertions(+)
--- xen-unstable.hg-4.1.22459.orig/tools/examples/xmexample.hvm
+++ xen-unstable.hg-4.1.22459/tools/examples/xmexample.hvm
@@ -127,6 +127,9 @@ disk = [ 'file:/var/images/min-el3-i386.
# Device Model to be used
device_model = 'qemu-dm'
+# xenpaging, number of pages
+xenpaging = 42
+
#-----------------------------------------------------------------------------
# boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
--- xen-unstable.hg-4.1.22459.orig/tools/python/README.XendConfig
+++ xen-unstable.hg-4.1.22459/tools/python/README.XendConfig
@@ -120,6 +120,7 @@ otherConfig
image.vncdisplay
image.vncunused
image.hvm.device_model
+ image.hvm.xenpaging
image.hvm.display
image.hvm.xauthority
image.hvm.vncconsole
--- xen-unstable.hg-4.1.22459.orig/tools/python/README.sxpcfg
+++ xen-unstable.hg-4.1.22459/tools/python/README.sxpcfg
@@ -51,6 +51,7 @@ image
- vncunused
(HVM)
- device_model
+ - xenpaging
- display
- xauthority
- vncconsole
--- xen-unstable.hg-4.1.22459.orig/tools/python/xen/xend/XendConfig.py
+++ xen-unstable.hg-4.1.22459/tools/python/xen/xend/XendConfig.py
@@ -147,6 +147,7 @@ XENAPI_PLATFORM_CFG_TYPES = {
'apic': int,
'boot': str,
'device_model': str,
+ 'xenpaging': int,
'loader': str,
'display' : str,
'fda': str,
@@ -508,6 +509,8 @@ class XendConfig(dict):
self['platform']['nomigrate'] = 0
if self.is_hvm():
+ if 'xenpaging' not in self['platform']:
+ self['platform']['xenpaging'] = None
if 'timer_mode' not in self['platform']:
self['platform']['timer_mode'] = 1
if 'viridian' not in self['platform']:
--- xen-unstable.hg-4.1.22459.orig/tools/python/xen/xend/XendDomainInfo.py
+++ xen-unstable.hg-4.1.22459/tools/python/xen/xend/XendDomainInfo.py
@@ -2390,6 +2390,7 @@ class XendDomainInfo:
if self.image:
self.image.createDeviceModel()
+ self.image.createXenPaging()
#if have pass-through devs, need the virtual pci slots info from qemu
self.pci_device_configure_boot()
@@ -2402,6 +2403,11 @@ class XendDomainInfo:
self.image.destroyDeviceModel()
except Exception, e:
log.exception("Device model destroy failed %s" % str(e))
+ try:
+ log.debug("stopping xenpaging")
+ self.image.destroyXenPaging()
+ except Exception, e:
+ log.exception("stopping xenpaging failed %s" % str(e))
else:
log.debug("No device model")
--- xen-unstable.hg-4.1.22459.orig/tools/python/xen/xend/image.py
+++ xen-unstable.hg-4.1.22459/tools/python/xen/xend/image.py
@@ -122,12 +122,14 @@ class ImageHandler:
self.vm.permissionsVm("image/cmdline", { 'dom': self.vm.getDomid(), 'read': True } )
self.device_model = vmConfig['platform'].get('device_model')
+ self.xenpaging = vmConfig['platform'].get('xenpaging')
self.display = vmConfig['platform'].get('display')
self.xauthority = vmConfig['platform'].get('xauthority')
self.vncconsole = int(vmConfig['platform'].get('vncconsole', 0))
self.dmargs = self.parseDeviceModelArgs(vmConfig)
self.pid = None
+ self.xenpaging_pid = None
rtc_timeoffset = int(vmConfig['platform'].get('rtc_timeoffset', 0))
if int(vmConfig['platform'].get('localtime', 0)):
if time.localtime(time.time())[8]:
@@ -392,6 +394,95 @@ class ImageHandler:
sentinel_fifos_inuse[sentinel_path_fifo] = 1
self.sentinel_path_fifo = sentinel_path_fifo
+ def createXenPaging(self):
+ if self.xenpaging is None:
+ return
+ if self.xenpaging == 0:
+ return
+ if self.xenpaging_pid:
+ return
+ xenpaging_bin = auxbin.pathTo("xenpaging")
+ args = [xenpaging_bin]
+ args = args + ([ "%d" % self.vm.getDomid()])
+ args = args + ([ "%s" % self.xenpaging])
+ env = dict(os.environ)
+ self.xenpaging_logfile = "/var/log/xen/xenpaging-%s.log" % str(self.vm.info['name_label'])
+ logfile_mode = os.O_WRONLY|os.O_CREAT|os.O_APPEND|os.O_TRUNC
+ null = os.open("/dev/null", os.O_RDONLY)
+ try:
+ os.unlink(self.xenpaging_logfile)
+ except:
+ pass
+ logfd = os.open(self.xenpaging_logfile, logfile_mode, 0644)
+ sys.stderr.flush()
+ contract = osdep.prefork("%s:%d" % (self.vm.getName(), self.vm.getDomid()))
+ xenpaging_pid = os.fork()
+ if xenpaging_pid == 0: #child
+ try:
+ xenpaging_dir = "/var/lib/xen/xenpaging"
+ osdep.postfork(contract)
+ os.dup2(null, 0)
+ os.dup2(logfd, 1)
+ os.dup2(logfd, 2)
+ try:
+ os.mkdir(xenpaging_dir)
+ except:
+ log.info("mkdir %s failed" % xenpaging_dir)
+ pass
+ try:
+ os.chdir(xenpaging_dir)
+ except:
+ log.warn("chdir %s failed" % xenpaging_dir)
+ try:
+ log.info("starting %s" % args)
+ os.execve(xenpaging_bin, args, env)
+ except Exception, e:
+ print >>sys.stderr, (
+ 'failed to execute xenpaging: %s: %s' %
+ xenpaging_bin, utils.exception_string(e))
+ os._exit(126)
+ except Exception, e:
+ log.warn("staring xenpaging in %s failed" % xenpaging_dir)
+ os._exit(127)
+ else:
+ osdep.postfork(contract, abandon=True)
+ self.xenpaging_pid = xenpaging_pid
+ os.close(null)
+ os.close(logfd)
+
+ def destroyXenPaging(self):
+ if self.xenpaging is None:
+ return
+ if self.xenpaging_pid:
+ try:
+ os.kill(self.xenpaging_pid, signal.SIGHUP)
+ except OSError, exn:
+ log.exception(exn)
+ for i in xrange(100):
+ try:
+ (p, rv) = os.waitpid(self.xenpaging_pid, os.WNOHANG)
+ if p == self.xenpaging_pid:
+ break
+ except OSError:
+ # This is expected if Xend has been restarted within
+ # the life of this domain. In this case, we can kill
+ # the process, but we can't wait for it because it's
+ # not our child. We continue this loop, and after it is
+ # terminated make really sure the process is going away
+ # (SIGKILL).
+ pass
+ time.sleep(0.1)
+ else:
+ log.warning("xenpaging %d took more than 10s "
+ "to terminate: sending SIGKILL" % self.xenpaging_pid)
+ try:
+ os.kill(self.xenpaging_pid, signal.SIGKILL)
+ os.waitpid(self.xenpaging_pid, 0)
+ except OSError:
+ # This happens if the process doesn't exist.
+ pass
+ self.xenpaging_pid = None
+
def createDeviceModel(self, restore = False):
if self.device_model is None:
return
--- xen-unstable.hg-4.1.22459.orig/tools/python/xen/xm/create.py
+++ xen-unstable.hg-4.1.22459/tools/python/xen/xm/create.py
@@ -491,6 +491,10 @@ gopts.var('nfs_root', val="PATH",
fn=set_value, default=None,
use="Set the path of the root NFS directory.")
+gopts.var('xenpaging', val='NUM',
+ fn=set_int, default=None,
+ use="Number of pages to swap.")
+
gopts.var('device_model', val='FILE',
fn=set_value, default=None,
use="Path to device model program.")
@@ -1076,6 +1080,7 @@ def configure_hvm(config_image, vals):
args = [ 'acpi', 'apic',
'boot',
'cpuid', 'cpuid_check',
+ 'xenpaging',
'device_model', 'display',
'fda', 'fdb',
'gfx_passthru', 'guest_os_type',
--- xen-unstable.hg-4.1.22459.orig/tools/python/xen/xm/xenapi_create.py
+++ xen-unstable.hg-4.1.22459/tools/python/xen/xm/xenapi_create.py
@@ -1085,6 +1085,7 @@ class sxp2xml:
'acpi',
'apic',
'boot',
+ 'xenpaging',
'device_model',
'loader',
'fda',
* [PATCH 16/17] xenpaging: add dynamic startup delay for xenpaging
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (14 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 15/17] xenpaging: start xenpaging via config option Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 20:59 ` [PATCH 17/17] xenpaging: (sparse) documentation Olaf Hering
2010-12-06 21:16 ` [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.autostart_delay.patch --]
[-- Type: text/plain, Size: 4588 bytes --]
This is a debug helper. Since the xenpaging support is still fragile, run
xenpaging at different stages of the boot process; different delays will
trigger more bugs. This implementation starts without delay for 5 reboots,
then increments the delay by 0.1 seconds. It uses xenstore for persistent
storage of the delay values.
TODO: find the correct place to remove the xenstore directory when the guest is shutdown or crashed
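The delay schedule can be summarised with a hedged sketch (not the xend
code itself; the real patch keeps these counters in xenstore, a dict
stands in for that here):

```python
# Sketch of the schedule the hunk below implements: the current delay is
# reused `use` times per value, then bumped by `inc`.

def next_delay(state):
    """Advance the per-boot counters and return the delay to use now."""
    if state['used'] < state['use']:
        state['used'] += 1
    else:
        state['used'] = 0
        state['delay'] = round(state['delay'] + state['inc'], 10)
    return state['delay']

state = {'delay': 0.0, 'inc': 0.1, 'use': 5, 'used': 0}
schedule = [next_delay(state) for _ in range(12)]
# early boots run undelayed, later ones get progressively larger delays
```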
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
tools/python/xen/xend/image.py | 32 ++++++++++++++++++++++++++++++++
1 file changed, 32 insertions(+)
--- xen-unstable.hg-4.1.22459.orig/tools/python/xen/xend/image.py
+++ xen-unstable.hg-4.1.22459/tools/python/xen/xend/image.py
@@ -123,6 +123,18 @@ class ImageHandler:
self.device_model = vmConfig['platform'].get('device_model')
self.xenpaging = vmConfig['platform'].get('xenpaging')
+ self.xenpaging_delay = xstransact.Read("/local/domain/0/xenpaging/%s/xenpaging_delay" % self.vm.info['name_label'])
+ if self.xenpaging_delay == None:
+ log.warn("XXX creating /local/domain/0/xenpaging/%s" % self.vm.info['name_label'])
+ xstransact.Mkdir("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'])
+ xstransact.Store("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'], ('xenpaging_delay', '0.0'))
+ xstransact.Store("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'], ('xenpaging_delay_inc', '0.1'))
+ xstransact.Store("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'], ('xenpaging_delay_use', '5'))
+ xstransact.Store("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'], ('xenpaging_delay_used', '0'))
+ self.xenpaging_delay = float(xstransact.Read("/local/domain/0/xenpaging/%s/xenpaging_delay" % self.vm.info['name_label']))
+ self.xenpaging_delay_inc = float(xstransact.Read("/local/domain/0/xenpaging/%s/xenpaging_delay_inc" % self.vm.info['name_label']))
+ self.xenpaging_delay_use = int(xstransact.Read("/local/domain/0/xenpaging/%s/xenpaging_delay_use" % self.vm.info['name_label']))
+ self.xenpaging_delay_used = int(xstransact.Read("/local/domain/0/xenpaging/%s/xenpaging_delay_used" % self.vm.info['name_label']))
self.display = vmConfig['platform'].get('display')
self.xauthority = vmConfig['platform'].get('xauthority')
@@ -401,6 +413,17 @@ class ImageHandler:
return
if self.xenpaging_pid:
return
+ if self.xenpaging_delay_used < self.xenpaging_delay_use:
+ self.xenpaging_delay_used += 1
+ else:
+ self.xenpaging_delay_used = 0
+ self.xenpaging_delay += self.xenpaging_delay_inc
+ log.info("delay_used %s" % self.xenpaging_delay_used)
+ log.info("delay_use %s" % self.xenpaging_delay_use)
+ log.info("delay %s" % self.xenpaging_delay)
+ log.info("delay_inc %s" % self.xenpaging_delay_inc)
+ xstransact.Store("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'], ('xenpaging_delay', self.xenpaging_delay))
+ xstransact.Store("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'], ('xenpaging_delay_used', self.xenpaging_delay_used))
xenpaging_bin = auxbin.pathTo("xenpaging")
args = [xenpaging_bin]
args = args + ([ "%d" % self.vm.getDomid()])
@@ -434,6 +457,9 @@ class ImageHandler:
except:
log.warn("chdir %s failed" % xenpaging_dir)
try:
+ if self.xenpaging_delay != 0.0:
+ log.info("delaying xenpaging startup %s seconds ..." % self.xenpaging_delay)
+ time.sleep(self.xenpaging_delay)
log.info("starting %s" % args)
os.execve(xenpaging_bin, args, env)
except Exception, e:
@@ -449,10 +475,16 @@ class ImageHandler:
self.xenpaging_pid = xenpaging_pid
os.close(null)
os.close(logfd)
+ if self.xenpaging_delay == 0.0:
+ log.warn("waiting for xenpaging ...")
+ time.sleep(22)
+ log.warn("waiting for xenpaging done.")
def destroyXenPaging(self):
if self.xenpaging is None:
return
+ # FIXME find correct place for guest shutdown or crash
+ #xstransact.Remove("/local/domain/0/xenpaging/%s" % self.vm.info['name_label'])
if self.xenpaging_pid:
try:
os.kill(self.xenpaging_pid, signal.SIGHUP)
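The xenstore reads and the first-boot seeding in the hunk above can be
modelled with a plain dict standing in for xstransact (FakeXenstore and
load_delay_state are hypothetical names for illustration, not xend API):

```python
class FakeXenstore:
    """Minimal stand-in for xend's xstransact, keyed by xenstore-style paths."""
    def __init__(self):
        self.store = {}

    def read(self, path):
        # xstransact.Read returns None for a missing key.
        return self.store.get(path)

    def write(self, path, key, value):
        # xstransact.Store takes (path, (key, value)) pairs; values are strings.
        self.store["%s/%s" % (path, key)] = str(value)


def load_delay_state(xs, name):
    """Return (delay, inc, use, used), seeding defaults on first boot."""
    base = "/local/domain/0/xenpaging/%s" % name
    if xs.read(base + "/xenpaging_delay") is None:
        # First boot of this guest: seed the defaults the patch uses.
        xs.write(base, "xenpaging_delay", "0.0")
        xs.write(base, "xenpaging_delay_inc", "0.1")
        xs.write(base, "xenpaging_delay_use", "5")
        xs.write(base, "xenpaging_delay_used", "0")
    return (float(xs.read(base + "/xenpaging_delay")),
            float(xs.read(base + "/xenpaging_delay_inc")),
            int(xs.read(base + "/xenpaging_delay_use")),
            int(xs.read(base + "/xenpaging_delay_used")))
```

A second call with the same guest name finds the keys already present
and skips the seeding, which is what makes the delay survive reboots.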
^ permalink raw reply [flat|nested] 27+ messages in thread* [PATCH 17/17] xenpaging: (sparse) documentation
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (15 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 16/17] xenpaging: add dynamic startup delay for xenpaging Olaf Hering
@ 2010-12-06 20:59 ` Olaf Hering
2010-12-06 21:16 ` [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 20:59 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: xen-unstable.xenpaging.doc.patch --]
[-- Type: text/plain, Size: 1775 bytes --]
Write up some sparse documentation about xenpaging usage.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
docs/misc/xenpaging.txt | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 48 insertions(+)
--- /dev/null
+++ xen-unstable.hg-4.1.22459/docs/misc/xenpaging.txt
@@ -0,0 +1,48 @@
+Warning:
+
+The xenpaging code is new and not fully debugged.
+Usage of xenpaging can crash Xen or cause severe data corruption in the
+guest memory and its filesystems!
+
+Description:
+
+xenpaging writes memory pages of a given guest to a file and returns
+the pages to the pool of available memory. Once the guest wants to
+access a paged-out page, it is read back from disk and placed into
+memory. This allows the running guests, in sum, to use more memory
+than is physically available on the host.
+
+Usage:
+
+Once the guest is running, run xenpaging with the guest_id and the
+number of pages to page-out:
+
+ cd /var/lib/xen/xenpaging
+ xenpaging <guest_id> <number_of_pages>
+
+To obtain the guest_id, run 'xm list'.
+xenpaging will write the pagefile to the current directory.
+Example with 128MB pagefile on guest 1:
+
+ xenpaging 1 32768
+
+Caution: stopping xenpaging manually will cause the guest to stall or
+crash because the paged-out memory is not written back into the guest!
+
+After a guest reboot, its guest_id changes and the running xenpaging
+binary no longer has a target. To restart xenpaging automatically
+after a guest reboot, specify the number of pages in the guest
+configuration file /etc/xen/vm/<guest_name>:
+
+xenpaging=32768
+
+Recreate the guest with 'xm create /etc/xen/vm/<guest_name>' to
+activate the changes.
+
+
+Todo:
+- implement stopping of xenpaging
+- implement/test live migration
+
+
+# vim: tw=72
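The page-count arithmetic in the usage example above (a 128MB pagefile
is 32768 pages) follows from the 4KiB x86 page size; a one-line helper
makes the conversion explicit:

```python
PAGE_SIZE = 4096  # x86 page size, as used by xenpaging

def mb_to_pages(mb):
    """Convert a size in MB to the page count passed to xenpaging."""
    return mb * 1024 * 1024 // PAGE_SIZE
```

For example, mb_to_pages(128) gives the 32768 used in the example.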
^ permalink raw reply [flat|nested] 27+ messages in thread* Re: [PATCH 00/17] xenpaging changes for xen-unstable
2010-12-06 20:59 [PATCH 00/17] xenpaging changes for xen-unstable Olaf Hering
` (16 preceding siblings ...)
2010-12-06 20:59 ` [PATCH 17/17] xenpaging: (sparse) documentation Olaf Hering
@ 2010-12-06 21:16 ` Olaf Hering
17 siblings, 0 replies; 27+ messages in thread
From: Olaf Hering @ 2010-12-06 21:16 UTC (permalink / raw)
To: xen-devel
On Mon, Dec 06, Olaf Hering wrote:
> This series uses the recently added wait_event feature. The __hvm_copy
> patch crashes Xen with what looks like stack corruption. After a few
> populate/resume iterations. I have added some printk to the
> populate/resume functions and also wait.c. This leads to crashes. It
> rarely prints a clean backtrace like the one shown below (most of the
> time a few cpus crash at once).
I should add that sometimes the ASSERT in do_softirq() triggers.
But I haven't had a chance to see which of the 3 checks actually triggers.
Olaf
^ permalink raw reply [flat|nested] 27+ messages in thread