From: Dario Faggioli <dario.faggioli@citrix.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 1/2] xen: fix a (latent) cpupool-related race during domain destroy
Date: Fri, 15 Jul 2016 16:23:25 +0200 [thread overview]
Message-ID: <1468592605.13039.104.camel@citrix.com> (raw)
In-Reply-To: <5788DC89.4090009@suse.com>
On Fri, 2016-07-15 at 14:52 +0200, Juergen Gross wrote:
> On 15/07/16 13:52, Dario Faggioli wrote:
> >
> > On Fri, 2016-07-15 at 12:36 +0200, Juergen Gross wrote:
> > >
> > > On 15/07/16 12:14, Dario Faggioli wrote:
> > > >
> > > > In particular, I'm probably not fully understanding, from that
> > > > commit changelog, what set of operations/commands I should run
> > > > to check whether or not I reintroduced the issue.
> > > You need to create a domain in a cpupool and destroy it again while
> > > some dom0 process still holds a reference to it (resulting in a
> > > zombie domain). Then try to destroy the cpupool.
> > >
> > Ah, I see. I hadn't gathered from anywhere that it needed to be a
> > zombie domain.
> >
> > >
> > > >
> > > > What am I missing?
> > > The domain being a zombie domain might change the picture. Moving
> > > it to cpupool0 was failing before my patch, and it might do so
> > > again with your patch applied.
> > >
> > Mmmm... I don't immediately see why moving a zombie domain should
> > fail either, but I guess I'll have to try.
> Searching through the history I found commit
> 934e7baa6c12d19cfaf24e8f8e27d6c6a8b8c5e4, which might have removed the
> problematic condition (cpupool->n_dom being non-zero while d->cpupool
> was NULL already).
>
Yeah... And there's also this:
/*
 * unassign a specific cpu from a cpupool
 * we must be sure not to run on the cpu to be unassigned! to achieve this
 * the main functionality is performed via continue_hypercall_on_cpu on a
 * specific cpu.
 * if the cpu to be removed is the last one of the cpupool no active domain
 * must be bound to the cpupool. dying domains are moved to cpupool0 as they
 * might be zombies.
 * possible failures:
 * - last cpu and still active domains in cpupool
 * - cpu just being unplugged
 */
static int cpupool_unassign_cpu(struct cpupool *c, unsigned int cpu)
{
    int work_cpu;
    int ret;
    struct domain *d;

    cpupool_dprintk("cpupool_unassign_cpu(pool=%d,cpu=%d)\n",
                    c->cpupool_id, cpu);
    [...]
    for_each_domain_in_cpupool(d, c)
    {
        if ( !d->is_dying )
        {
            ret = -EBUSY;
            break;
        }
        ret = cpupool_move_domain_locked(d, cpupool0);
        if ( ret )
            break;
    }
    [...]
So it really looks like it ought to be possible to move zombies to
cpupool0 these days. :-)
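For illustration, the policy that loop implements (fail with -EBUSY as soon as a still-active domain is found, otherwise migrate the dying, possibly-zombie, domains to cpupool0) can be sketched as a toy Python model. All the names here are made up for the sketch, and it deliberately mirrors the partial-progress behaviour of the C loop, which may have moved some dying domains before bailing out:

```python
EBUSY = 16  # the errno value behind -EBUSY


class Domain:
    """Toy stand-in for struct domain; only the is_dying flag matters here."""
    def __init__(self, name, is_dying=False):
        self.name = name
        self.is_dying = is_dying


def unassign_last_cpu(pool_domains, cpupool0_domains):
    """Model of the last-cpu check in cpupool_unassign_cpu():
    any non-dying domain left in the pool fails the operation;
    dying domains encountered before that are moved to cpupool0."""
    for d in list(pool_domains):
        if not d.is_dying:
            return -EBUSY
        pool_domains.remove(d)
        cpupool0_domains.append(d)
    return 0


zombie = Domain("zombie", is_dying=True)
active = Domain("active")
pool, pool0 = [zombie, active], []

# An active domain blocks the unassign, but the zombie found before
# it has already been migrated to cpupool0, as in the C code.
assert unassign_last_cpu(pool, pool0) == -EBUSY
assert zombie in pool0 and pool == [active]

# Once every remaining domain is dying, the unassign succeeds.
active.is_dying = True
assert unassign_last_cpu(pool, pool0) == 0
assert pool == [] and len(pool0) == 2
```

The two assertion groups correspond to the two outcomes being discussed: an active domain keeps the last CPU busy, while zombies are simply relocated.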
> > Therefore, I still think this patch is correct, but I'm up for
> > investigating further and finding a way to solve the "zombie in
> > cpupool" issue as well.
> I'm not saying your patch is wrong. I just wanted to give you a hint
> about the history of the stuff you are changing. :-)
>
Sure, and that's much appreciated! :-)
> If it is working I'd really prefer it over the current situation.
>
Right. I've got to leave now. But I'll produce some zombies on Monday,
and will see if they move. :-D
Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel