From: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>
To: xen-devel@lists.xensource.com
Cc: olaf@aepfle.de, tim@xen.org, hongkaixing@huawei.com
Subject: Re: [PATCH 2 of 2] xenpaging: change page-in process to speed up page-in in xenpaging
Date: Fri, 6 Jan 2012 19:51:32 -0800	[thread overview]
Message-ID: <a344dc410cb70b471e31c922ce80d94a.squirrel@webmail.lagarcavilla.org> (raw)
In-Reply-To: <mailman.5929.1325754331.12970.xen-devel@lists.xensource.com>

> Date: Thu, 5 Jan 2012 09:05:16 +0000
> From: Tim Deegan <tim@xen.org>
> To: hongkaixing@huawei.com
> Cc: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] [PATCH 2 of 2] xenpaging: change page-in
> 	process to speed up page-in in xenpaging
> Message-ID: <20120105090516.GE15595@ocelot.phlegethon.org>
> Content-Type: text/plain; charset=iso-8859-1
>
> Hello,
>
> At 03:50 +0000 on 05 Jan (1325735430), hongkaixing@huawei.com wrote:
>> xenpaging: change page-in process to speed up page-in in xenpaging
>> In this patch we change the page-in process. First, we add a new
>> function, paging_in_trigger_sync(), to handle page-in requests
>> directly; once the request count reaches 32, the requests are handled
>> as a batch. Most importantly, we use an increasing gfn for test_bit(),
>> which saves much time.
>> In p2m.c, we change p2m_mem_paging_populate() to return a value.
>> The following is a xenpaging test on suse11-64 with 4G memory.
>>
>> Pages paged out        Page-out time   Page-in time       Page-in time
>>                                        (unstable code)    (with this patch)
>> 512M (131072 pages)    2.6s            540s               4.7s
>> 2G (524288 pages)      15.5s           2088s              17.7s
>>
>
> Thanks for the patch!  That's an impressive difference.  You're changing
> quite a few things in this patch, though.  Can you send them as separate
> patches so they can be reviewed one at a time?  Is one of them in
> particular making the difference?  I suspect it's mostly the change to
> test_bit(), and the rest is not necessary.

Second that, on all counts. Impressive numbers, and I'm a bit puzzled as
to what actually made the difference.
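
If I had to bet, I'd bet with Tim on the test_bit() change: the current
SIGTERM/SIGINT loop restarts its bitmap scan at gfn 0 on every pass,
re-walking bits it has already cleared before it finds the next batch of
32, which is quadratic in max_pages. The patch resumes the scan at the
gfn after the last batch, wrapping modulo max_pages. A minimal sketch of
the resumed scan, just to make the shape explicit (my own names, not the
patch's code; test_bit() and do_page_in() stand in for the real
xenpaging helpers):

    /* Sketch only: one scan pass that resumes where the previous pass
     * stopped, instead of re-walking the bitmap from gfn 0 each time. */
    static unsigned long scan_pass(const unsigned long *bitmap,
                                   unsigned long max_pages,
                                   unsigned long start,
                                   int (*do_page_in)(unsigned long))
    {
        unsigned long i, count = 0;

        for ( i = 0; i < max_pages; i++ )
        {
            unsigned long gfn = (i + start) % max_pages; /* wrap around */

            if ( test_bit(gfn, bitmap) && do_page_in(gfn) == 0 &&
                 ++count == 32 )
                return gfn + 1;   /* next pass resumes here */
        }
        return start;             /* whole bitmap walked, nothing left */
    }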

I would also like to see changes to xenpaging teased out from changes to
the hypervisor.

I've been sitting on a patch to page in synchronously, which shortcuts
the page-in path even more aggressively: instead of calling populate, we
go straight into paging_load. This does not require an additional
domctl, and would save even more hypervisor<->pager control round-trips.
Do you foresee any conflicts with your current approach?
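
For concreteness, the pager side of what I have in mind is roughly the
following (hypothetical sketch of my unposted patch; the load call and
the pagefile helper are invented names for illustration, not existing
libxc API):

    /* Hypothetical sketch of the synchronous page-in path.  The pager
     * already owns the page contents, so it can read them back and hand
     * them to Xen in one shot, with no populate request and no extra
     * trip around the mem_event ring. */
    static int page_in_sync(xenpaging_t *paging, unsigned long gfn,
                            void *buf)
    {
        /* Read the saved page out of the pager's pagefile (invented
         * helper, standing in for the real pagefile I/O). */
        if ( read_page_from_pagefile(paging, gfn, buf) < 0 )
            return -1;

        /* One call: allocate the frame, copy the contents in, and fix
         * up the p2m entry (assumed wrapper, not current libxc). */
        return xc_mem_paging_load(paging->xc_handle,
                                  paging->mem_event.domain_id, gfn, buf);
    }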

Thanks!
Andres

>
> Cheers,
>
> Tim.
>
>> Signed-off-by: hongkaixing <hongkaixing@huawei.com>, shizhen <bicky.shi@huawei.com>
>>
>> diff -r 052727b8165c -r 978daceef147 tools/libxc/xc_mem_paging.c
>> --- a/tools/libxc/xc_mem_paging.c	Thu Dec 29 17:08:24 2011 +0800
>> +++ b/tools/libxc/xc_mem_paging.c	Thu Dec 29 19:41:39 2011 +0800
>> @@ -73,6 +73,13 @@
>>                                  NULL, NULL, gfn);
>>  }
>>
>> +int xc_mem_paging_in(xc_interface *xch, domid_t domain_id, unsigned long gfn)
>> +{
>> +    return xc_mem_event_control(xch, domain_id,
>> +                                XEN_DOMCTL_MEM_EVENT_OP_PAGING_IN,
>> +                                XEN_DOMCTL_MEM_EVENT_OP_PAGING,
>> +                                NULL, NULL, gfn);
>> +}
>>
>>  /*
>>   * Local variables:
>> diff -r 052727b8165c -r 978daceef147 tools/libxc/xenctrl.h
>> --- a/tools/libxc/xenctrl.h	Thu Dec 29 17:08:24 2011 +0800
>> +++ b/tools/libxc/xenctrl.h	Thu Dec 29 19:41:39 2011 +0800
>> @@ -1841,6 +1841,7 @@
>>  int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn);
>>  int xc_mem_paging_resume(xc_interface *xch, domid_t domain_id,
>>                           unsigned long gfn);
>> +int xc_mem_paging_in(xc_interface *xch, domid_t domain_id, unsigned long gfn);
>>
>>  int xc_mem_access_enable(xc_interface *xch, domid_t domain_id,
>>                          void *shared_page, void *ring_page);
>> diff -r 052727b8165c -r 978daceef147 tools/xenpaging/xenpaging.c
>> --- a/tools/xenpaging/xenpaging.c	Thu Dec 29 17:08:24 2011 +0800
>> +++ b/tools/xenpaging/xenpaging.c	Thu Dec 29 19:41:39 2011 +0800
>> @@ -594,6 +594,13 @@
>>      return ret;
>>  }
>>
>> +static int paging_in_trigger_sync(xenpaging_t *paging, unsigned long gfn)
>> +{
>> +    int rc = 0;
>> +    rc = xc_mem_paging_in(paging->xc_handle, paging->mem_event.domain_id, gfn);
>> +    return rc;
>> +}
>
> This function is
>
>> +
>>  int main(int argc, char *argv[])
>>  {
>>      struct sigaction act;
>> @@ -605,6 +612,9 @@
>>      int i;
>>      int rc = -1;
>>      int rc1;
>> +    int request_count = 0;
>> +    unsigned long page_in_start_gfn = 0;
>> +    unsigned long real_page = 0;
>>      xc_interface *xch;
>>
>>      int open_flags = O_CREAT | O_TRUNC | O_RDWR;
>> @@ -773,24 +783,51 @@
>>          /* Write all pages back into the guest */
>>          if ( interrupted == SIGTERM || interrupted == SIGINT )
>>          {
>> -            int num = 0;
>> +            request_count = 0;
>>              for ( i = 0; i < paging->domain_info->max_pages; i++ )
>>              {
>> -                if ( test_bit(i, paging->bitmap) )
>> +                real_page = i + page_in_start_gfn;
>> +                real_page %= paging->domain_info->max_pages;
>> +                if ( test_bit(real_page, paging->bitmap) )
>>                  {
>> -                    paging->pagein_queue[num] = i;
>> -                    num++;
>> -                    if ( num == XENPAGING_PAGEIN_QUEUE_SIZE )
>> -                        break;
>> +                    rc = paging_in_trigger_sync(paging,real_page);
>> +                    if ( 0 == rc )
>> +                    {
>> +                        request_count++;
>> +                        /* Once 32 page-in requests are queued, handle them */
>> +                        if( request_count >= 32 )
>> +                        {
>> +                            page_in_start_gfn=real_page + 1;
>> +                            break;
>> +                        }
>> +                    }
>> +                    else
>> +                    {
>> +                        /* If IO ring is full then handle requests to free space */
>> +                        if( ENOSPC == errno )
>> +                        {
>> +                            page_in_start_gfn = real_page;
>> +                            break;
>> +                        }
>> +                        /* If p2mt is not a paging type then clear the
>> +                         * bitmap; e.g. a page was paged out and then
>> +                         * dropped by the balloon.
>> +                         */
>> +                        else if ( EINVAL == errno )
>> +                        {
>> +                            clear_bit(i,paging->bitmap);
>> +                            policy_notify_paged_in(i);
>> +                        }
>> +                        /* If the hypercall fails then tear down xenpaging */
>> +                        else
>> +                        {
>> +                            ERROR("Error paging in page");
>> +                            goto out;
>> +                        }
>> +                    }
>>                  }
>>              }
>> -            /*
>> -             * One more round if there are still pages to process.
>> -             * If no more pages to process, exit loop.
>> -             */
>> -            if ( num )
>> -                page_in_trigger();
>> -            else if ( i == paging->domain_info->max_pages )
>> +            if( (i==paging->domain_info->max_pages) &&
>> +                !RING_HAS_UNCONSUMED_REQUESTS(&paging->mem_event.back_ring) )
>>                  break;
>>          }
>>          else
>> diff -r 052727b8165c -r 978daceef147 xen/arch/x86/mm/mem_paging.c
>> --- a/xen/arch/x86/mm/mem_paging.c	Thu Dec 29 17:08:24 2011 +0800
>> +++ b/xen/arch/x86/mm/mem_paging.c	Thu Dec 29 19:41:39 2011 +0800
>> @@ -57,7 +57,14 @@
>>          return 0;
>>      }
>>      break;
>> -
>> +
>> +    case XEN_DOMCTL_MEM_EVENT_OP_PAGING_IN:
>> +    {
>> +        unsigned long gfn = mec->gfn;
>> +        return p2m_mem_paging_populate(d, gfn);
>> +    }
>> +    break;
>> +
>>      default:
>>          return -ENOSYS;
>>          break;
>> diff -r 052727b8165c -r 978daceef147 xen/arch/x86/mm/p2m.c
>> --- a/xen/arch/x86/mm/p2m.c	Thu Dec 29 17:08:24 2011 +0800
>> +++ b/xen/arch/x86/mm/p2m.c	Thu Dec 29 19:41:39 2011 +0800
>> @@ -874,7 +874,7 @@
>>   * already sent to the pager. In this case the caller has to try again until the
>>   * gfn is fully paged in again.
>>   */
>> -void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
>> +int p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
>>  {
>>      struct vcpu *v = current;
>>      mem_event_request_t req;
>> @@ -882,10 +882,12 @@
>>      p2m_access_t a;
>>      mfn_t mfn;
>>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +    int ret;
>>
>>      /* Check that there's space on the ring for this request */
>> +    ret = -ENOSPC;
>>      if ( mem_event_check_ring(d, &d->mem_paging) )
>> -        return;
>> +        goto out;
>>
>>      memset(&req, 0, sizeof(req));
>>      req.type = MEM_EVENT_TYPE_PAGING;
>> @@ -905,19 +907,27 @@
>>      }
>>      p2m_unlock(p2m);
>>
>> +    ret = -EINVAL;
>>      /* Pause domain if request came from guest and gfn has paging type */
>> -    if (  p2m_is_paging(p2mt) && v->domain->domain_id == d->domain_id )
>> +    if ( p2m_is_paging(p2mt) && v->domain->domain_id == d->domain_id )
>>      {
>>          vcpu_pause_nosync(v);
>>          req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
>>      }
>>      /* No need to inform pager if the gfn is not in the page-out path */
>> -    else if ( p2mt != p2m_ram_paging_out && p2mt != p2m_ram_paged )
>> +    else if ( p2mt == p2m_ram_paging_in_start || p2mt == p2m_ram_paging_in )
>>      {
>>          /* gfn is already on its way back and vcpu is not paused */
>>          mem_event_put_req_producers(&d->mem_paging);
>> -        return;
>> +        return 0;
>>      }
>> +    else if ( !p2m_is_paging(p2mt) )
>> +    {
>> +        /* please clear the bit in paging->bitmap; */
>> +        mem_event_put_req_producers(&d->mem_paging);
>> +        goto out;
>> +    }
>> +
>>
>>      /* Send request to pager */
>>      req.gfn = gfn;
>> @@ -925,8 +935,13 @@
>>      req.vcpu_id = v->vcpu_id;
>>
>>      mem_event_put_request(d, &d->mem_paging, &req);
>> +
>> +    ret = 0;
>> + out:
>> +    return ret;
>>  }
>>
>> +
>>  /**
>>   * p2m_mem_paging_prep - Allocate a new page for the guest
>>   * @d: guest domain
>> diff -r 052727b8165c -r 978daceef147 xen/include/asm-x86/p2m.h
>> --- a/xen/include/asm-x86/p2m.h	Thu Dec 29 17:08:24 2011 +0800
>> +++ b/xen/include/asm-x86/p2m.h	Thu Dec 29 19:41:39 2011 +0800
>> @@ -485,7 +485,7 @@
>>  /* Tell xenpaging to drop a paged out frame */
>>  void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn);
>>  /* Start populating a paged out frame */
>> -void p2m_mem_paging_populate(struct domain *d, unsigned long gfn);
>> +int p2m_mem_paging_populate(struct domain *d, unsigned long gfn);
>>  /* Prepare the p2m for paging a frame in */
>>  int p2m_mem_paging_prep(struct domain *d, unsigned long gfn);
>>  /* Resume normal operation (in case a domain was paused) */
>> diff -r 052727b8165c -r 978daceef147 xen/include/public/domctl.h
>> --- a/xen/include/public/domctl.h	Thu Dec 29 17:08:24 2011 +0800
>> +++ b/xen/include/public/domctl.h	Thu Dec 29 19:41:39 2011 +0800
>> @@ -721,6 +721,7 @@
>>  #define XEN_DOMCTL_MEM_EVENT_OP_PAGING_EVICT      3
>>  #define XEN_DOMCTL_MEM_EVENT_OP_PAGING_PREP       4
>>  #define XEN_DOMCTL_MEM_EVENT_OP_PAGING_RESUME     5
>> +#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_IN         6
>>
>>  /*
>>   * Access permissions.
>>

Thread overview: 5+ messages
     [not found] <mailman.5929.1325754331.12970.xen-devel@lists.xensource.com>
2012-01-07  3:51 ` Andres Lagar-Cavilla [this message]
2012-01-07  8:38   ` [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging Hongkaixing
2012-01-10  3:33     ` Andres Lagar-Cavilla
2012-01-05  3:50 [PATCH 0 of 2] xenpaging:speed up page-in hongkaixing
2012-01-05  3:50 ` [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging hongkaixing
2012-01-05  9:05   ` Tim Deegan
