From: Chao Gao <chao.gao@intel.com>
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	"Nakajima, Jun" <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH v2 3/5] VT-d PI: restrict the vcpu number on a given pcpu
Date: Mon, 15 May 2017 17:27:45 +0800
Message-ID: <20170515092745.GB7052@skl-2s3>
In-Reply-To: <AADFC41AFE54684AB9EE6CBC0274A5D190CD1BB7@SHSMSX101.ccr.corp.intel.com>

On Mon, May 15, 2017 at 01:24:45PM +0800, Tian, Kevin wrote:
>> From: Gao, Chao
>> Sent: Thursday, May 11, 2017 2:04 PM
>> 
>> Currently, a blocked vCPU is put on its pCPU's PI blocking list. If
>> too many vCPUs are blocked on a given pCPU, the list can grow very
>> long. By a simple worst-case analysis, with 32K domains and 128 vCPUs
>> per domain, about 4M vCPUs may be blocked on one pCPU's PI blocking
>> list. When a wakeup interrupt arrives, the list is traversed to find
>> the specific vCPUs to wake up, and in that case the traversal would
>> consume a lot of time.
>> 
>> To mitigate this issue, this patch limits the number of vCPUs on a
>> given pCPU's list, taking into consideration factors such as
>> common-case performance, the current HVM vCPU count, and the current
>> pCPU count. With this method, the common case stays fast and, in the
>> extreme cases, the list length is kept under control.
>> 
>> The change in vmx_pi_unblock_vcpu() is for the following case:
>> the vcpu is running -> it tries to block (this patch may change NDST
>> to another pCPU), but the notification comes in time, so the vcpu
>> goes back to the running state -> VM-entry (we should set NDST again,
>> reverting the change made to NDST in vmx_vcpu_block()).
>> 
>> Signed-off-by: Chao Gao <chao.gao@intel.com>
>> ---
>>  xen/arch/x86/hvm/vmx/vmx.c | 78 +++++++++++++++++++++++++++++++++++++++++-----
>>  1 file changed, 71 insertions(+), 7 deletions(-)
>> 
>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>> index efff6cd..c0d0b58 100644
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -100,16 +100,70 @@ void vmx_pi_per_cpu_init(unsigned int cpu)
>>      spin_lock_init(&per_cpu(vmx_pi_blocking, cpu).lock);
>>  }
>> 
>> +/*
>> + * Choose an appropriate pcpu to receive the wakeup interrupt.
>> + * By default, the local pcpu is chosen as the destination. But if the
>> + * vcpu count of the local pcpu exceeds a limit, another pcpu is chosen.
>> + *
>> + * Currently, (v_tot / p_tot) + K is chosen as the vcpu limit, where
>> + * v_tot is the total number of vcpus in the system, p_tot is the total
>> + * number of pcpus in the system, and K is a fixed number. Experiments
>> + * show that the maximal time to wake up a vcpu from a 128-entry
>> + * blocking list is acceptable, so 128 is chosen as the fixed number K.
>
>It would be better to provide your experimental data here, so others
>have a gut feeling for why it's acceptable...

Will add this.
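
For illustration in the meantime, below is a hypothetical stand-alone
sketch of the kind of measurement meant above (plain user-space C; the
list length, node layout and timing harness are all assumptions, and
this is not the actual experiment or its data). It times one full
traversal of an N-entry singly-linked list, which is roughly what
pi_wakeup_interrupt() has to do over the per-pCPU blocking list in the
worst case:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct entry {
    struct entry *next;
    unsigned int notification_pending;  /* stand-in for the PI "ON" bit */
};

int main(void)
{
    enum { N = 128 };                   /* assumed blocking-list length */
    struct entry *list = calloc(N, sizeof(*list));
    struct timespec t0, t1;
    unsigned int woken = 0;

    /* Chain the nodes into a singly-linked list. */
    for ( unsigned int i = 0; i + 1 < N; i++ )
        list[i].next = &list[i + 1];

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* Walk the whole list, as the wakeup handler must in the worst case. */
    for ( struct entry *e = list; e; e = e->next )
        woken += e->notification_pending;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("traversing %d entries took %ld ns (%u woken)\n", N,
           (long)(t1.tv_sec - t0.tv_sec) * 1000000000L +
           (t1.tv_nsec - t0.tv_nsec), woken);
    free(list);
    return 0;
}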

>
>> + *
>> + * This policy makes sure:
>> + * 1) in the common case, the limit won't be reached and the local
>> + * pcpu is used, which benefits performance (at least, an IPI is
>> + * avoided when unblocking a vcpu);
>> + * 2) in the worst case, the blocking list length scales with the
>> + * vcpu count divided by the pcpu count.
>> + */
>> +#define PI_LIST_FIXED_NUM 128
>> +#define PI_LIST_LIMIT     (atomic_read(&num_hvm_vcpus) / num_online_cpus() + \
>> +                           PI_LIST_FIXED_NUM)
>> +
>> +static unsigned int vmx_pi_choose_dest_cpu(struct vcpu *v)
>> +{
>> +    int count, limit = PI_LIST_LIMIT;
>> +    unsigned int dest = v->processor;
>> +
>> +    count = atomic_read(&per_cpu(vmx_pi_blocking, dest).counter);
>> +    while ( unlikely(count >= limit) )
>> +    {
>> +        dest = cpumask_cycle(dest, &cpu_online_map);
>> +        count = atomic_read(&per_cpu(vmx_pi_blocking, dest).counter);
>> +    }
>
>is it possible to hit an infinite loop here?
>

Theoretically it will not, because cpumask_cycle() iterates through all
online pcpus, and it is impossible for every online pcpu to have
reached the upper bound: since the limit is v_tot/p_tot + 128, all
online pcpus being at the limit would require more blocked vcpus than
exist in the system.
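
For the record, here is a hypothetical stand-alone sketch of that
pigeonhole argument (plain C, not Xen code; the pcpu/vcpu counts are
made-up numbers, and cpumask_cycle() is replaced by a modular
increment):

#include <assert.h>
#include <stdio.h>

#define P_TOT 4                 /* assumed number of online pcpus */
#define V_TOT 1000              /* assumed number of hvm vcpus */
#define K     128               /* PI_LIST_FIXED_NUM */

int main(void)
{
    unsigned int limit = V_TOT / P_TOT + K;   /* 378 here */
    /*
     * The counters sum to at most V_TOT, while P_TOT * limit is
     * strictly greater than V_TOT, so they cannot all reach the limit.
     */
    unsigned int counter[P_TOT] = { 378, 378, 244, 0 };
    unsigned int dest = 0, steps = 0;

    /* Mirrors the search loop in vmx_pi_choose_dest_cpu(). */
    while ( counter[dest] >= limit )
    {
        dest = (dest + 1) % P_TOT;
        assert(++steps < P_TOT);    /* must terminate within one lap */
    }
    printf("chose pcpu %u after %u steps\n", dest, steps);
    return 0;
}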

Thanks
Chao



Thread overview: 17+ messages
2017-05-11  6:04 [PATCH v2 0/5] mitigate the per-pCPU blocking list may be too long Chao Gao
2017-05-11  6:04 ` [PATCH v2 1/5] xentrace: add TRC_HVM_PI_LIST_ADD Chao Gao
2017-05-15  1:33   ` Tian, Kevin
2017-05-15  8:57     ` Chao Gao
2017-05-15 15:14   ` George Dunlap
2017-05-11  6:04 ` [PATCH v2 2/5] vcpu: track hvm vcpu number on the system Chao Gao
2017-05-11 11:35   ` Wei Liu
2017-05-11 11:37     ` Wei Liu
2017-05-12  8:23       ` Chao Gao
2017-05-11  6:04 ` [PATCH v2 3/5] VT-d PI: restrict the vcpu number on a given pcpu Chao Gao
2017-05-15  5:24   ` Tian, Kevin
2017-05-15  9:27     ` Chao Gao [this message]
2017-05-15 15:48   ` George Dunlap
2017-05-15 16:13     ` Chao Gao
2017-05-11  6:04 ` [PATCH v2 4/5] VT-d PI: Adding reference count to pi_desc Chao Gao
2017-05-15 14:42   ` George Dunlap
2017-05-11  6:04 ` [PATCH v2 5/5] VT-d PI: Don't add vCPU to PI blocking list for a case Chao Gao
