* [PATCH 0 of 2] xenpaging:speed up page-in
@ 2012-01-05 3:50 hongkaixing
2012-01-05 3:50 ` [PATCH 1 of 2] xenpaging:add a new array to speed up page-in in xenpaging hongkaixing
2012-01-05 3:50 ` [PATCH 2 of 2] xenpaging:change page-in process " hongkaixing
0 siblings, 2 replies; 4+ messages in thread
From: hongkaixing @ 2012-01-05 3:50 UTC (permalink / raw)
To: Olaf Hering; +Cc: xen-devel
The following two patches speed up the page-in path in xenpaging.
On suse11-64 with 4G of memory, paging out 2G of pages takes about 15.5 seconds,
but paging them back in takes 2088 seconds. When page-in is that slow, unpredictable
problems can occur whenever the VM or dom0 accesses a paged-out page, such as a BSOD
or a crash. What's more, dom0 stays under heavy I/O pressure the whole time.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
* [PATCH 1 of 2] xenpaging:add a new array to speed up page-in in xenpaging
2012-01-05 3:50 [PATCH 0 of 2] xenpaging:speed up page-in hongkaixing
@ 2012-01-05 3:50 ` hongkaixing
2012-01-05 3:50 ` [PATCH 2 of 2] xenpaging:change page-in process " hongkaixing
1 sibling, 0 replies; 4+ messages in thread
From: hongkaixing @ 2012-01-05 3:50 UTC (permalink / raw)
To: Olaf Hering; +Cc: xen-devel
# HG changeset patch
# User hongkaixing<hongkaixing@huawei.com>
# Date 1325149704 -28800
# Node ID 052727b8165ce6e05002184ae894096214c8b537
# Parent 54a5e994a241a506900ee0e197bb42e5f1d8e759
xenpaging:add a new array to speed up page-in in xenpaging
This patch adds a new array, page_out_index, which records each victim's index at
eviction time. When paging a page back in, the pager currently loops from 0 to
num_pages over the victims array to find the right slot in the paging file to read,
and most of the page-in time is spent in that loop. With the page_out_index array it
reads a single entry to find the right slot, which saves that time.
The following is a xenpaging test on suse11-64 with 4G of memory.
Nums of paged-out pages   Page-out time   Page-in time (unstable code)   Page-in time (with this patch)
512M (131072)             2.6s            540s                           530s
2G (524288)               15.5s           2088s                          2055s
Signed-off-by: hongkaixing <hongkaixing@huawei.com>, shizhen <bicky.shi@huawei.com>
diff -r 54a5e994a241 -r 052727b8165c tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c Wed Nov 02 17:09:09 2011 +0000
+++ b/tools/xenpaging/xenpaging.c Thu Dec 29 17:08:24 2011 +0800
@@ -599,6 +599,7 @@
struct sigaction act;
xenpaging_t *paging;
xenpaging_victim_t *victims;
+ victim_to_i_t *page_out_index = NULL;
mem_event_request_t req;
mem_event_response_t rsp;
int i;
@@ -637,6 +638,17 @@
}
victims = calloc(paging->num_pages, sizeof(xenpaging_victim_t));
+ if (NULL == victims)
+ {
+ ERROR("Failed to alloc memory\n");
+ goto out;
+ }
+ page_out_index = (victim_to_i_t *)calloc(paging->domain_info->max_pages, sizeof(victim_to_i_t));
+ if ( NULL == page_out_index )
+ {
+ ERROR("Failed to alloc memory\n");
+ goto out;
+ }
/* ensure that if we get a signal, we'll do cleanup, then exit */
act.sa_handler = close_handler;
@@ -660,6 +672,7 @@
break;
if ( i % 100 == 0 )
DPRINTF("%d pages evicted\n", i);
+ page_out_index[victims[i].gfn].index=i;
}
DPRINTF("%d pages evicted. Done.\n", i);
@@ -687,17 +700,7 @@
if ( test_and_clear_bit(req.gfn, paging->bitmap) )
{
/* Find where in the paging file to read from */
- for ( i = 0; i < paging->num_pages; i++ )
- {
- if ( victims[i].gfn == req.gfn )
- break;
- }
-
- if ( i >= paging->num_pages )
- {
- DPRINTF("Couldn't find page %"PRIx64"\n", req.gfn);
- goto out;
- }
+ i = page_out_index[req.gfn].index ;
if ( req.flags & MEM_EVENT_FLAG_DROP_PAGE )
{
@@ -733,7 +736,11 @@
if ( interrupted )
victims[i].gfn = INVALID_MFN;
else
+ {
evict_victim(paging, &victims[i], fd, i);
+ if( victims[i].gfn !=INVALID_MFN )
+ page_out_index[victims[i].gfn].index = i;
+ }
}
else
{
@@ -798,7 +805,15 @@
out:
close(fd);
unlink_pagefile();
- free(victims);
+ if ( NULL != victims )
+ {
+ free(victims);
+ }
+
+ if ( NULL != page_out_index )
+ {
+ free(page_out_index);
+ }
/* Tear down domain paging */
rc1 = xenpaging_teardown(paging);
diff -r 54a5e994a241 -r 052727b8165c tools/xenpaging/xenpaging.h
--- a/tools/xenpaging/xenpaging.h Wed Nov 02 17:09:09 2011 +0000
+++ b/tools/xenpaging/xenpaging.h Thu Dec 29 17:08:24 2011 +0800
@@ -54,6 +54,10 @@
unsigned long pagein_queue[XENPAGING_PAGEIN_QUEUE_SIZE];
} xenpaging_t;
+typedef struct victim_to_i {
+ /* the index of victim array to read from */
+ int index;
+} victim_to_i_t;
typedef struct xenpaging_victim {
/* the gfn of the page to evict */
* [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging
2012-01-05 3:50 [PATCH 0 of 2] xenpaging:speed up page-in hongkaixing
2012-01-05 3:50 ` [PATCH 1 of 2] xenpaging:add a new array to speed up page-in in xenpaging hongkaixing
@ 2012-01-05 3:50 ` hongkaixing
2012-01-05 9:05 ` Tim Deegan
1 sibling, 1 reply; 4+ messages in thread
From: hongkaixing @ 2012-01-05 3:50 UTC (permalink / raw)
To: Olaf Hering; +Cc: xen-devel
# HG changeset patch
# User hongkaixing<hongkaixing@huawei.com>
# Date 1325158899 -28800
# Node ID 978daceef147273920f298556489b60dc32ce458
# Parent 052727b8165ce6e05002184ae894096214c8b537
xenpaging:change page-in process to speed up page-in in xenpaging
This patch changes the page-in process. First, it adds a new function,
paging_in_trigger_sync, which issues page-in requests directly; once 32 requests
have been issued, they are handled as a batch. Most importantly, the test_bit()
scan now resumes from an increasing gfn instead of restarting from zero each
round, which saves most of the time.
In p2m.c, p2m_mem_paging_populate() is changed to return a value.
The following is a xenpaging test on suse11-64 with 4G of memory.
Nums of paged-out pages   Page-out time   Page-in time (unstable code)   Page-in time (with this patch)
512M (131072)             2.6s            540s                           4.7s
2G (524288)               15.5s           2088s                          17.7s
Signed-off-by: hongkaixing <hongkaixing@huawei.com>, shizhen <bicky.shi@huawei.com>
diff -r 052727b8165c -r 978daceef147 tools/libxc/xc_mem_paging.c
--- a/tools/libxc/xc_mem_paging.c Thu Dec 29 17:08:24 2011 +0800
+++ b/tools/libxc/xc_mem_paging.c Thu Dec 29 19:41:39 2011 +0800
@@ -73,6 +73,13 @@
NULL, NULL, gfn);
}
+int xc_mem_paging_in(xc_interface *xch, domid_t domain_id, unsigned long gfn)
+{
+ return xc_mem_event_control(xch, domain_id,
+ XEN_DOMCTL_MEM_EVENT_OP_PAGING_IN,
+ XEN_DOMCTL_MEM_EVENT_OP_PAGING,
+ NULL, NULL, gfn);
+}
/*
* Local variables:
diff -r 052727b8165c -r 978daceef147 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Thu Dec 29 17:08:24 2011 +0800
+++ b/tools/libxc/xenctrl.h Thu Dec 29 19:41:39 2011 +0800
@@ -1841,6 +1841,7 @@
int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn);
int xc_mem_paging_resume(xc_interface *xch, domid_t domain_id,
unsigned long gfn);
+int xc_mem_paging_in(xc_interface *xch, domid_t domain_id,unsigned long gfn);
int xc_mem_access_enable(xc_interface *xch, domid_t domain_id,
void *shared_page, void *ring_page);
diff -r 052727b8165c -r 978daceef147 tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c Thu Dec 29 17:08:24 2011 +0800
+++ b/tools/xenpaging/xenpaging.c Thu Dec 29 19:41:39 2011 +0800
@@ -594,6 +594,13 @@
return ret;
}
+static int paging_in_trigger_sync(xenpaging_t *paging,unsigned long gfn)
+{
+ int rc = 0;
+ rc = xc_mem_paging_in(paging->xc_handle, paging->mem_event.domain_id,gfn);
+ return rc;
+}
+
int main(int argc, char *argv[])
{
struct sigaction act;
@@ -605,6 +612,9 @@
int i;
int rc = -1;
int rc1;
+ int request_count = 0;
+ unsigned long page_in_start_gfn = 0;
+ unsigned long real_page = 0;
xc_interface *xch;
int open_flags = O_CREAT | O_TRUNC | O_RDWR;
@@ -773,24 +783,51 @@
/* Write all pages back into the guest */
if ( interrupted == SIGTERM || interrupted == SIGINT )
{
- int num = 0;
+ request_count = 0;
for ( i = 0; i < paging->domain_info->max_pages; i++ )
{
- if ( test_bit(i, paging->bitmap) )
+ real_page = i + page_in_start_gfn;
+ real_page %= paging->domain_info->max_pages;
+ if ( test_bit(real_page, paging->bitmap) )
{
- paging->pagein_queue[num] = i;
- num++;
- if ( num == XENPAGING_PAGEIN_QUEUE_SIZE )
- break;
+ rc = paging_in_trigger_sync(paging,real_page);
+ if ( 0 == rc )
+ {
+ request_count++;
+ /* If page_in requests up to 32 then handle them */
+ if( request_count >= 32 )
+ {
+ page_in_start_gfn=real_page + 1;
+ break;
+ }
+ }
+ else
+ {
+ /* If IO ring is full then handle requests to free space */
+ if( ENOSPC == errno )
+ {
+ page_in_start_gfn = real_page;
+ break;
+ }
+ /* If p2mt is not p2m_is_paging,then clear bitmap;
+ * e.g. a page is paged then it is dropped by balloon.
+ */
+ else if ( EINVAL == errno )
+ {
+ clear_bit(i,paging->bitmap);
+ policy_notify_paged_in(i);
+ }
+ /* If hypercall fails then go to teardown xenpaging */
+ else
+ {
+ ERROR("Error paging in page");
+ goto out;
+ }
+ }
}
}
- /*
- * One more round if there are still pages to process.
- * If no more pages to process, exit loop.
- */
- if ( num )
- page_in_trigger();
- else if ( i == paging->domain_info->max_pages )
+ if( (i==paging->domain_info->max_pages) &&
+ !RING_HAS_UNCONSUMED_REQUESTS(&paging->mem_event.back_ring) )
break;
}
else
diff -r 052727b8165c -r 978daceef147 xen/arch/x86/mm/mem_paging.c
--- a/xen/arch/x86/mm/mem_paging.c Thu Dec 29 17:08:24 2011 +0800
+++ b/xen/arch/x86/mm/mem_paging.c Thu Dec 29 19:41:39 2011 +0800
@@ -57,7 +57,14 @@
return 0;
}
break;
-
+
+ case XEN_DOMCTL_MEM_EVENT_OP_PAGING_IN:
+ {
+ unsigned long gfn = mec->gfn;
+ return p2m_mem_paging_populate(d, gfn);
+ }
+ break;
+
default:
return -ENOSYS;
break;
diff -r 052727b8165c -r 978daceef147 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c Thu Dec 29 17:08:24 2011 +0800
+++ b/xen/arch/x86/mm/p2m.c Thu Dec 29 19:41:39 2011 +0800
@@ -874,7 +874,7 @@
* already sent to the pager. In this case the caller has to try again until the
* gfn is fully paged in again.
*/
-void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
+int p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
{
struct vcpu *v = current;
mem_event_request_t req;
@@ -882,10 +882,12 @@
p2m_access_t a;
mfn_t mfn;
struct p2m_domain *p2m = p2m_get_hostp2m(d);
+ int ret;
/* Check that there's space on the ring for this request */
+ ret = -ENOSPC;
if ( mem_event_check_ring(d, &d->mem_paging) )
- return;
+ goto out;
memset(&req, 0, sizeof(req));
req.type = MEM_EVENT_TYPE_PAGING;
@@ -905,19 +907,27 @@
}
p2m_unlock(p2m);
+ ret = -EINVAL;
/* Pause domain if request came from guest and gfn has paging type */
- if ( p2m_is_paging(p2mt) && v->domain->domain_id == d->domain_id )
+ if ( p2m_is_paging(p2mt) && v->domain->domain_id == d->domain_id )
{
vcpu_pause_nosync(v);
req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
}
/* No need to inform pager if the gfn is not in the page-out path */
- else if ( p2mt != p2m_ram_paging_out && p2mt != p2m_ram_paged )
+ else if ( p2mt == p2m_ram_paging_in_start || p2mt == p2m_ram_paging_in )
{
/* gfn is already on its way back and vcpu is not paused */
mem_event_put_req_producers(&d->mem_paging);
- return;
+ return 0;
}
+ else if ( !p2m_is_paging(p2mt) )
+ {
+ /* please clear the bit in paging->bitmap; */
+ mem_event_put_req_producers(&d->mem_paging);
+ goto out;
+ }
+
/* Send request to pager */
req.gfn = gfn;
@@ -925,8 +935,13 @@
req.vcpu_id = v->vcpu_id;
mem_event_put_request(d, &d->mem_paging, &req);
+
+ ret = 0;
+ out:
+ return ret;
}
+
/**
* p2m_mem_paging_prep - Allocate a new page for the guest
* @d: guest domain
diff -r 052727b8165c -r 978daceef147 xen/include/asm-x86/p2m.h
--- a/xen/include/asm-x86/p2m.h Thu Dec 29 17:08:24 2011 +0800
+++ b/xen/include/asm-x86/p2m.h Thu Dec 29 19:41:39 2011 +0800
@@ -485,7 +485,7 @@
/* Tell xenpaging to drop a paged out frame */
void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn);
/* Start populating a paged out frame */
-void p2m_mem_paging_populate(struct domain *d, unsigned long gfn);
+int p2m_mem_paging_populate(struct domain *d, unsigned long gfn);
/* Prepare the p2m for paging a frame in */
int p2m_mem_paging_prep(struct domain *d, unsigned long gfn);
/* Resume normal operation (in case a domain was paused) */
diff -r 052727b8165c -r 978daceef147 xen/include/public/domctl.h
--- a/xen/include/public/domctl.h Thu Dec 29 17:08:24 2011 +0800
+++ b/xen/include/public/domctl.h Thu Dec 29 19:41:39 2011 +0800
@@ -721,6 +721,7 @@
#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_EVICT 3
#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_PREP 4
#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_RESUME 5
+#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_IN 6
/*
* Access permissions.
* Re: [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging
2012-01-05 3:50 ` [PATCH 2 of 2] xenpaging:change page-in process " hongkaixing
@ 2012-01-05 9:05 ` Tim Deegan
0 siblings, 0 replies; 4+ messages in thread
From: Tim Deegan @ 2012-01-05 9:05 UTC (permalink / raw)
To: hongkaixing; +Cc: Olaf Hering, xen-devel
Hello,
At 03:50 +0000 on 05 Jan (1325735430), hongkaixing@huawei.com wrote:
> xenpaging:change page-in process to speed up page-in in xenpaging
> In this patch,we change the page-in process.Firstly,we add a new function paging_in_trigger_sync
> to handle page-in requests directly.and when the requests' count is up to 32,then handle them
> batchly;Most importantly,we use an increasing gfn to test_bit,which saves much time.
> In p2m.c,we changes p2m_mem_paging_populate() to return a value;
> The following is a xenpaging test on suse11-64 with 4G memory.
>
> Nums of page_out pages Page out time Page in time(in unstable code) Page in time(apply this patch)
> 512M(131072) 2.6s 540s 4.7s
> 2G(524288) 15.5s 2088s 17.7s
>
Thanks for the patch! That's an impressive difference. You're changing
quite a few things in this patch, though. Can you send them as separate
patches so they can be reviewed one at a time? Is one of them in
particular making the difference? I suspect it's mostly the change to
test_bit(), and the rest is not necessary.
Cheers,
Tim.