From: Chao Gao <chao.gao@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	xen-devel@lists.xen.org
Subject: Re: [PATCH v4 3/4] VT-d PI: restrict the vcpu number on a given pcpu
Date: Mon, 10 Jul 2017 09:17:22 +0800	[thread overview]
Message-ID: <20170710011722.GC85738@skl-2s3.sh.intel.com> (raw)
In-Reply-To: <595FCB9B0200007800169C5F@prv-mh.provo.novell.com>

On Fri, Jul 07, 2017 at 09:57:47AM -0600, Jan Beulich wrote:
>>>> On 07.07.17 at 08:48, <chao.gao@intel.com> wrote:
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -95,22 +95,91 @@ static DEFINE_PER_CPU(struct vmx_pi_blocking_vcpu, vmx_pi_blocking);
>>  uint8_t __read_mostly posted_intr_vector;
>>  static uint8_t __read_mostly pi_wakeup_vector;
>>  
>> +/*
>> + * Protect critical sections to avoid adding a blocked vcpu to a destroyed
>> + * blocking list.
>> + */
>> +static DEFINE_SPINLOCK(remote_pbl_operation);
>
>What is "pbl" supposed to stand for?

PI blocking list.

>
>> +#define remote_pbl_operation_begin(flags)                   \
>> +({                                                          \
>> +    spin_lock_irqsave(&remote_pbl_operation, flags);        \
>> +})
>> +
>> +#define remote_pbl_operation_done(flags)                    \
>> +({                                                          \
>> +    spin_unlock_irqrestore(&remote_pbl_operation, flags);   \
>> +})
>
>No need for the ({ }) here.
>
>But then I don't understand what this is needed for in the first
>place. If this is once again about CPU offlining, then I can only
>repeat that such happens in stop_machine context. Otherwise

But I don't think vmx_pi_desc_fixup() happens in stop_machine context;
please refer to the cpu_callback() function in hvm.c and the point at
which notifier_call_chain(CPU_DEAD) is called in cpu_down().

Our goal here is to avoid adding an entry to a destroyed list.
To prevent destruction from racing with the addition, we can put the
two operations in critical sections, like:

add:
	remote_pbl_operation_begin()
	add one entry to the list
	remote_pbl_operation_end()

destroy:
	remote_pbl_operation_begin()
	destruction
	remote_pbl_operation_end()

Destruction may also happen before we enter the critical section,
so adding should be:

add:
	remote_pbl_operation_begin()
	check the list is still valid
	add one entry to the list
	remote_pbl_operation_end()

In this patch, we choose an online CPU's list. That list should be
valid, since a list is only destroyed after its CPU goes offline.

>I'm afraid the comment ahead of this code section needs
>adjustment, as I can't interpret it in another way.

Thanks
Chao

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

Thread overview: 18+ messages
2017-07-07  6:48 [PATCH v4 0/4] mitigate the per-pCPU blocking list may be too long Chao Gao
2017-07-07  6:48 ` [PATCH v4 1/4] VT-d PI: track the vcpu number on pi blocking list Chao Gao
2017-07-07 15:41   ` Jan Beulich
2017-07-10  0:50     ` Chao Gao
2017-07-21 15:04   ` George Dunlap
2017-07-07  6:48 ` [PATCH v4 2/4] x86/vcpu: track hvm vcpu number on the system Chao Gao
2017-07-07 15:42   ` Jan Beulich
2017-07-07  6:48 ` [PATCH v4 3/4] VT-d PI: restrict the vcpu number on a given pcpu Chao Gao
2017-07-07 15:57   ` Jan Beulich
2017-07-10  1:17     ` Chao Gao [this message]
2017-07-10  9:36       ` Jan Beulich
2017-07-10 11:42         ` Chao Gao
2017-07-21 15:43   ` George Dunlap
2017-07-07  6:49 ` [PATCH v4 4/4] Xentrace: add support for HVM's PI blocking list operation Chao Gao
2017-07-07 15:37   ` Jan Beulich
2017-07-10  0:45     ` Chao Gao
2017-07-21 16:26   ` George Dunlap
2017-07-28  8:23     ` Chao Gao
