From: Dario Faggioli <dario.faggioli@citrix.com>
To: "Wu, Feng" <feng.wu@intel.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
"keir@xen.org" <keir@xen.org>,
"george.dunlap@eu.citrix.com" <george.dunlap@eu.citrix.com>,
"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
"jbeulich@suse.com" <jbeulich@suse.com>
Subject: Re: [PATCH 3/3] VMX: Remove the vcpu from the per-cpu blocking list after domain termination
Date: Mon, 23 May 2016 16:45:38 +0200
Message-ID: <1464014738.21930.60.camel@citrix.com>
In-Reply-To: <E959C4978C3B6342920538CF579893F0196EBB7E@SHSMSX103.ccr.corp.intel.com>
On Mon, 2016-05-23 at 13:32 +0000, Wu, Feng wrote:
>
> > > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > > @@ -248,6 +248,36 @@ void vmx_pi_hooks_deassign(struct domain *d)
> > >      d->arch.hvm_domain.vmx.pi_switch_to = NULL;
> > >  }
> > >
> > > +static void vmx_pi_blocking_list_cleanup(struct domain *d)
> > > +{
> > > +    unsigned int cpu;
> > > +
> > > +    for_each_online_cpu ( cpu )
> > > +    {
> > > +        struct vcpu *v;
> > > +        unsigned long flags;
> > > +        struct arch_vmx_struct *vmx, *tmp;
> > > +        spinlock_t *lock = &per_cpu(vmx_pi_blocking, cpu).lock;
> > > +        struct list_head *blocked_vcpus = &per_cpu(vmx_pi_blocking, cpu).list;
> > > +
> > > +        spin_lock_irqsave(lock, flags);
> > > +
> > > +        list_for_each_entry_safe(vmx, tmp, blocked_vcpus, pi_blocking.list)
> > > +        {
> > > +            v = container_of(vmx, struct vcpu, arch.hvm_vmx);
> > > +
> > > +            if ( v->domain == d )
> > > +            {
> > > +                list_del(&vmx->pi_blocking.list);
> > > +                ASSERT(vmx->pi_blocking.lock == lock);
> > > +                vmx->pi_blocking.lock = NULL;
> > > +            }
> > > +        }
> > > +
> > > +        spin_unlock_irqrestore(lock, flags);
> > > +    }
> > >
> > So, I'm probably missing something very, very basic, but I don't see
> > what's the reason why we need this loop... can't we arrange for
> > checking
> >
> > list_empty(&v->arch.hvm_vmx.pi_blocking.list)
> Yes, I also cannot find a reason why we can't use this good
> suggestion, except that we need to use list_del_init() instead of
> list_del() in the current code.
>
Yes, I saw that, and it's well worth doing, to get rid of the
loop. :-)
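(Just to spell out why list_del_init() matters here: list_empty() on the
element's own node only returns true if the node points back at itself.
list_del() leaves the node's pointers poisoned, whereas list_del_init()
re-initialises them. Purely as an illustration:

    /* list_del() poisons the node's pointers, so calling list_empty()
     * on it afterwards is unreliable.  list_del_init() makes the node
     * point back at itself instead: */
    list_del_init(&vmx->pi_blocking.list);
    ASSERT(list_empty(&vmx->pi_blocking.list));

So, after the conversion, "this vcpu is not on any blocking list"
becomes directly testable on the vcpu itself.)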
> Or we can just check whether 'vmx->pi_blocking.lock' is NULL?
>
I guess that will work as well. Still, if it were me doing this, I'd go
for the list_del_init()/list_empty() approach.
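To make it concrete, here's a rough and untested sketch of what I have
in mind (the function name is hypothetical, and it assumes the
enqueue/dequeue paths are converted to list_del_init(), with
pi_blocking.lock still being cleared on dequeue):

    /* Per-vcpu cleanup, replacing the for_each_online_cpu() loop:
     * call this for each vcpu of the dying domain. */
    static void vmx_pi_blocking_cleanup(struct vcpu *v)
    {
        unsigned long flags;
        spinlock_t *lock = v->arch.hvm_vmx.pi_blocking.lock;

        /* Never queued, or already dequeued by the wakeup path. */
        if ( !lock )
            return;

        spin_lock_irqsave(lock, flags);
        if ( !list_empty(&v->arch.hvm_vmx.pi_blocking.list) )
            list_del_init(&v->arch.hvm_vmx.pi_blocking.list);
        v->arch.hvm_vmx.pi_blocking.lock = NULL;
        spin_unlock_irqrestore(lock, flags);
    }

(With the caveat that the lock pointer could, in principle, change under
our feet, so the real thing may want to re-check it after taking the
lock.)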
> I totally don't know why I missed it! :)
>
:-)
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)