From: Julien Grall <julien.grall@arm.com>
To: Dario Faggioli <dario.faggioli@citrix.com>,
osstest service owner <osstest-admin@xenproject.org>,
xen-devel@lists.xensource.com
Cc: Meng Xu <xumengpanda@gmail.com>,
Stefano Stabellini <stefano@stabellini.net>
Subject: Re: Guest start issue on ARM (maybe related to Credit2) [Was: Re: [xen-unstable test] 113807: regressions - FAIL]
Date: Thu, 28 Sep 2017 00:51:11 +0100 [thread overview]
Message-ID: <7fa4f13e-c4fe-cb7e-0438-f2f170c948bb@arm.com> (raw)
In-Reply-To: <1506459110.27663.41.camel@citrix.com>
Hi Dario,
On 09/26/2017 09:51 PM, Dario Faggioli wrote:
> On Tue, 2017-09-26 at 18:28 +0100, Julien Grall wrote:
>> On 09/26/2017 08:33 AM, Dario Faggioli wrote:
>>>>
>>> Here are the logs:
>>> http://logs.test-lab.xenproject.org/osstest/logs/113816/test-armhf-armhf-xl-rtds/info.html
>>
>> It does not seem to be similar: in the credit2 case, the kernel is
>> stuck at very early boot.
>> Here it seems to be running (there are grants set up).
>>
> Yes, I agree, it's not totally similar.
>
>> This seems to be confirmed by the guest console log: I can see the
>> prompt. Interestingly, when the guest job fails, it has been waiting
>> for a long time on disk and hvc0, although it does not time out.
>>
> Ah, I see what you mean, I found it in the guest console log.
>
>> I am actually quite surprised that we start a 4-vCPU guest on a
>> 2-pCPU platform. The total number of vCPUs is 6 (2 for DOM0 + 4 for
>> the DOMU). The processors on it are not the greatest for testing, so
>> I was wondering whether we end up with too many vCPUs running on the
>> platform, making the test unreliable?
>>
> Well, doing that, with this scheduler, is certainly *not* the best
> recipe for determinism and reliability.
>
> In fact, RTDS is a non-work-conserving scheduler. This means that (with
> default parameters) each vCPU gets at most 40% CPU time, even if there
> are idle cycles.
>
> With 6 vCPUs, there's a total demand of 240% of CPU time, while with 2
> pCPUs there's at most 200% available, which means we're in overload
> (well, at least that's the case if/when all the vCPUs try to execute
> for their guaranteed 40%).
>
> Things *should really not* explode (as in Xen crashing) if that
> happens; actually, from a scheduler perspective, it should really not
> be too big of a deal (especially if the overload is transient, as I
> guess it is in this case). However, it's entirely possible that some
> specific vCPUs failing to be scheduled for a certain amount of time
> causes something _inside_ the guest to time out, or get stuck or
> wedged, which may be what happens here.
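To make the arithmetic quoted above concrete, here is a minimal sketch
(in Python, purely illustrative), assuming the RTDS defaults of a 10ms
period and 4ms budget per vCPU, which matches the (10000000, 4000000)
tuples in the dump further down:

# Illustrative sketch of the overload arithmetic with the RTDS defaults.
# The 10ms/4ms values are assumed from the "(10000000, 4000000)" tuples below.

period_ns = 10_000_000          # 10 ms period per vCPU
budget_ns = 4_000_000           # 4 ms budget per vCPU

per_vcpu_util = budget_ns / period_ns       # 0.4 -> each vCPU gets at most 40%
n_vcpus = 6                                 # 2 for dom0 + 4 for the domU
n_pcpus = 2

total_demand = n_vcpus * per_vcpu_util      # 2.4 -> 240% of one CPU
capacity = float(n_pcpus)                   # 2.0 -> 200% available

print(f"demand {total_demand:.0%} vs capacity {capacity:.0%}: "
      f"{'overloaded' if total_demand > capacity else 'fine'}")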
Looking at the log, I don't see any crash of Xen, and it seems to be
responsive.
I don't know much about the scheduler or how to interpret the logs:
Sep 25 22:43:21.495119 (XEN) Domain info:
Sep 25 22:43:21.503073 (XEN) domain: 0
Sep 25 22:43:21.503100 (XEN) [ 0.0 ] cpu 0, (10000000, 4000000), cur_b=3895333 cur_d=1611120000000 last_start=1611116505875
Sep 25 22:43:21.511080 (XEN) onQ=0 runnable=0 flags=0 effective hard_affinity=0-1
Sep 25 22:43:21.519082 (XEN) [ 0.1 ] cpu 1, (10000000, 4000000), cur_b=3946375 cur_d=1611130000000 last_start=1611126446583
Sep 25 22:43:21.527023 (XEN) onQ=0 runnable=1 flags=0 effective hard_affinity=0-1
Sep 25 22:43:21.535063 (XEN) domain: 5
Sep 25 22:43:21.535089 (XEN) [ 5.0 ] cpu 0, (10000000, 4000000), cur_b=3953875 cur_d=1611120000000 last_start=1611110106041
Sep 25 22:43:21.543073 (XEN) onQ=0 runnable=0 flags=0 effective hard_affinity=0-1
Sep 25 22:43:21.551078 (XEN) [ 5.1 ] cpu 1, (10000000, 4000000), cur_b=3938167 cur_d=1611140000000 last_start=1611130169791
Sep 25 22:43:21.559063 (XEN) onQ=0 runnable=0 flags=0 effective hard_affinity=0-1
Sep 25 22:43:21.559096 (XEN) [ 5.2 ] cpu 1, (10000000, 4000000), cur_b=3952500 cur_d=1611140000000 last_start=1611130107958
Sep 25 22:43:21.575067 (XEN) onQ=0 runnable=0 flags=0 effective hard_affinity=0-1
Sep 25 22:43:21.575101 (XEN) [ 5.3 ] cpu 0, (10000000, 4000000), cur_b=3951875 cur_d=1611120000000 last_start=1611110154166
Sep 25 22:43:21.583196 (XEN) onQ=0 runnable=0 flags=0 effective hard_affinity=0-1
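In case it helps, here is a rough sketch of how I read one of those
lines, assuming the tuple is (period, budget), cur_b is the budget
remaining in the current period, and all times are in nanoseconds (these
field meanings are my assumption, not confirmed in this thread):

# Rough sketch: parsing one vCPU line from the dump above.
# Assumptions: the "(x, y)" tuple is (period, budget), cur_b is the budget
# still available in the current period, and all values are in nanoseconds.
import re

line = ("[ 5.1 ] cpu 1, (10000000, 4000000), cur_b=3938167 "
        "cur_d=1611140000000 last_start=1611130169791")

m = re.search(r"\[\s*(\d+)\.(\d+)\s*\] cpu (\d+), \((\d+), (\d+)\), "
              r"cur_b=(\d+) cur_d=(\d+) last_start=(\d+)", line)
dom, vcpu, cpu, period, budget, cur_b, cur_d, last_start = map(int, m.groups())

print(f"d{dom}v{vcpu} on pCPU {cpu}: utilisation {budget / period:.0%}, "
      f"{cur_b / budget:.0%} of the budget left, deadline at {cur_d} ns")

Read that way, the vCPUs above all show 40% utilisation with most of
their budget still available, but I may well be misreading the fields.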
Also, it seems to fail fairly reliably, so it might be possible
to set up a reproducer.
Cheers,
--
Julien Grall