From: Dario Faggioli <dario.faggioli@citrix.com>
To: Jan Beulich <JBeulich@suse.com>,
Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
xen-devel <xen-devel@lists.xenproject.org>,
osstest service owner <osstest-admin@xenproject.org>,
Juergen Gross <jgross@suse.com>
Subject: RFC/PATCH: xen: race during domain destruction [Re: [xen-4.7-testing test] 105948: regressions - FAIL]
Date: Fri, 24 Feb 2017 17:14:37 +0100 [thread overview]
Message-ID: <1487952877.5548.26.camel@citrix.com> (raw)
In-Reply-To: <58AD5DDA020000780013C9ED@prv-mh.provo.novell.com>
[Adding Juergen]
On Wed, 2017-02-22 at 01:46 -0700, Jan Beulich wrote:
> > > > On 22.02.17 at 01:02, <andrew.cooper3@citrix.com> wrote:
> > (XEN) Xen call trace:
> > (XEN) [<ffff82d080126e70>]
> > sched_credit2.c#vcpu_is_migrateable+0x22/0x9a
> > (XEN) [<ffff82d080129763>]
> > sched_credit2.c#csched2_schedule+0x823/0xb4e
> > (XEN) [<ffff82d08012c17e>] schedule.c#schedule+0x108/0x609
> > (XEN) [<ffff82d08012f8bd>] softirq.c#__do_softirq+0x7f/0x8a
> > (XEN) [<ffff82d08012f912>] do_softirq+0x13/0x15
> > (XEN) [<ffff82d080164b17>] domain.c#idle_loop+0x55/0x62
> > (XEN)
> > (XEN)
> > (XEN) ****************************************
> > (XEN) Panic on CPU 14:
> > (XEN) Assertion 'd->cpupool != NULL' failed at
> > ...5948.build-amd64/xen/xen/include/xen/sched-if.h:200
> > (XEN) ****************************************
> > (XEN)
> > (XEN) Manual reset required ('noreboot' specified)
> >
> > I am guessing the most recent credit2 backports weren't quite so
> > safe?
>
> Well, what I'd say we're facing is the surfacing of a latent bug.
> However, comparing with the staging version of the file
> (which is heavily different), the immediate code involved here isn't
> all that different, so I wonder whether (a) this is a problem on
> staging too or (b) we're missing another backport. Dario?
>
So, according to my investigation, this is a genuine race. It affects
this branch as well as staging, but it manifests less frequently (or, I
should say, very rarely) in the latter.
The problem is that Credit2's load balancer operates not only on
runnable vCPUs, but also on blocked, sleeping and paused ones (and
that's by design).
In this case, the original domain is in the process of being destroyed
after migration has completed, and reaches the point where, within
domain_destroy(), we call cpupool_rm_domain(). This removes the domain
from its cpupool and sets d->cpupool = NULL.
Then, on another pCPU, since the vCPUs of the domain are still around
(until we call sched_destroy_vcpu(), which happens much later) and are
still assigned to a Credit2 runqueue, balance_load() picks one of them
up for moving to another runqueue, and things explode when we realize
that the vCPU is actually out of any pool!
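For reference, the assertion that fires is, if I recall correctly, the
one in cpupool_domain_cpumask() in sched-if.h, which balance_load()
reaches via vcpu_is_migrateable() (see the call trace above). Roughly
(sketch from memory, the exact 4.7 code and line may differ slightly):

  /* xen/include/xen/sched-if.h -- the check that blows up (sketch) */
  static inline cpumask_t* cpupool_domain_cpumask(struct domain *d)
  {
      /*
       * d->cpupool is NULL only for the idle domain, and nothing should
       * ask a scheduler to deal with the idle domain here... but with
       * the race above, a dying domain ends up in the same situation.
       */
      ASSERT(d->cpupool != NULL);
      return d->cpupool->cpu_valid;
  }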
So, I've thought quite a bit about how to solve this. Possibilities are
to act at the Credit2 level, or outside of it.
I drafted a couple of solutions touching only sched_credit2.c, but I
was not satisfied with the results. That's because I ultimately think a
scheduler should be able to safely play with any vCPU it can reach, and
that means the vCPU must be in a pool.
And that's why I came up with the patch below.
This is a draft, and it is on top of staging-4.7. I will properly
submit it against staging if you agree it is a reasonable thing to do.
Basically, I move the call to sched_destroy_vcpu() a bit earlier, so
that it happens before cpupool_rm_domain(). This ensures that vCPUs
have valid cpupool information up to the very last moment at which they
are reachable from a scheduler.
---
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 45273d4..4db7750 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -643,7 +643,10 @@ int domain_kill(struct domain *d)
         if ( cpupool_move_domain(d, cpupool0) )
             return -ERESTART;
         for_each_vcpu ( d, v )
+        {
             unmap_vcpu_info(v);
+            sched_destroy_vcpu(v);
+        }
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings
          * have to be put before we call put_domain. */
@@ -807,7 +810,6 @@ static void complete_domain_destroy(struct rcu_head *head)
             continue;
         tasklet_kill(&v->continue_hypercall_tasklet);
         vcpu_destroy(v);
-        sched_destroy_vcpu(v);
         destroy_waitqueue_vcpu(v);
     }
---
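To make the resulting ordering explicit, this is (roughly) what the
relevant part of domain_kill() looks like with the draft applied; the
comment is mine and just spells out the invariant we are after:

  if ( cpupool_move_domain(d, cpupool0) )
      return -ERESTART;
  for_each_vcpu ( d, v )
  {
      unmap_vcpu_info(v);
      /*
       * The vCPU is detached from its scheduler while d->cpupool is
       * still valid; cpupool_rm_domain() only runs later, from
       * domain_destroy(), by which point no scheduler can reach the
       * vCPU any more.
       */
      sched_destroy_vcpu(v);
  }
  d->is_dying = DOMDYING_dead;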
Let me know.
Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)