From: Julien Grall <julien.grall@arm.com>
To: Dario Faggioli <dario.faggioli@citrix.com>,
Stefano Stabellini <sstabellini@kernel.org>
Cc: edgar.iglesias@xilinx.com, george.dunlap@eu.citrix.com,
nd@arm.com, Punit Agrawal <punit.agrawal@arm.com>,
xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/arm: introduce vwfi parameter
Date: Sun, 19 Feb 2017 21:27:48 +0000 [thread overview]
Message-ID: <a173bcb3-04f0-4b02-70df-6b9c33a56cbf@arm.com> (raw)
In-Reply-To: <1487382463.6732.146.camel@citrix.com>
Hi Dario,
On 02/18/2017 01:47 AM, Dario Faggioli wrote:
> On Fri, 2017-02-17 at 14:50 -0800, Stefano Stabellini wrote:
>> On Fri, 17 Feb 2017, Julien Grall wrote:
>>> Please explain in which context this will be beneficial. My gut
>>> feeling is it will only make performance worse if multiple vCPUs
>>> of the same guest are running on the same pCPU.
>>
>> I am not a scheduler expert, but I don't think so. Let me explain the
>> difference:
>>
>> - vcpu_block blocks a vcpu until an event occurs, for example until
>> it
>> receives an interrupt
>>
>> - vcpu_yield stops the vcpu from running until the next scheduler
>> slot
>>
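For what it's worth, the distinction could be sketched roughly like this (a toy model for illustration only, not actual Xen code; the function names mirror the primitives being discussed but the types and logic here are made up):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the two primitives, assuming one runqueue per pCPU. */
enum vcpu_state { RUNNABLE, BLOCKED };

struct vcpu {
    enum vcpu_state state;
    bool yielded;   /* hint: deprioritise on upcoming decisions */
};

/* Block: come off the runqueue entirely until an event (e.g. an
 * interrupt) wakes the vcpu. */
void vcpu_block(struct vcpu *v)
{
    v->state = BLOCKED;
}

void vcpu_wake(struct vcpu *v)
{
    v->state = RUNNABLE;
    v->yielded = false;
}

/* Yield: stay runnable, but let other runnable vcpus go first. */
void vcpu_yield(struct vcpu *v)
{
    v->yielded = true;
}
```

The key difference: a blocked vcpu consumes no CPU time at all until it is woken, while a yielded one is still competing for a slot.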
> So, what happens when you yield, depends on how yield is implemented in
> the specific scheduler, and what other vcpus are runnable in the
> system.
>
> Currently, neither Credit1 nor Credit2 (and nor the Linux scheduler,
> AFAICR) really stop the yielding vcpus. Broadly speaking, the following
> two scenarios are possible:
> - vcpu A yields, and there is one or more runnable but not already
> running other vcpus. In this case, A is indeed descheduled and put
> back in a scheduler runqueue in such a way that one or more of the
> runnable but not running other vcpus have a chance to execute,
> before the scheduler would consider A again. This may be
> implemented by putting A on the tail of the runqueue, so all the
> other vcpus will get a chance to run (this is basically what
> happens in Credit1, modulo periodic runq sorting). Or it may be
> implemented by ignoring A for the next <number> scheduling
> decisions after it yielded (this is basically what happens in
> Credit2). Both approaches have pros and cons, but the common bottom
> line is that others are given a chance to run.
>
> - vcpu A yields, and there are no runnable but not running vcpus
> around. In this case, A gets to run again. Full stop.
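The two strategies described above could be sketched like so (a simplified toy runqueue, purely illustrative; this is not the actual Credit1/Credit2 code):

```c
#include <assert.h>
#include <string.h>

/* Credit1-like: move the yielding vcpu to the tail of the runqueue,
 * so every other runnable vcpu gets a turn before it runs again. */
void yield_to_tail(int q[], int n, int who)
{
    for (int i = 0; i < n; i++) {
        if (q[i] == who) {
            memmove(&q[i], &q[i + 1], (n - 1 - i) * sizeof(int));
            q[n - 1] = who;
            return;
        }
    }
}

/* Credit2-like: leave the queue alone, but skip the yielder while it
 * still has "skip credits" left.  If nobody else is runnable, the
 * yielder runs again anyway -- that is the second scenario above. */
int pick_next(const int q[], int n, int yielder, int skip_left)
{
    for (int i = 0; i < n; i++)
        if (q[i] != yielder || skip_left <= 0)
            return q[i];
    return yielder;
}
```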
Which turns out to be the busy looping I was mentioning when one vCPU is
assigned to a pCPU. This defeats the purpose of WFI, and I would be
really surprised if embedded folks were happy with a solution that uses
more power.
> And when a vcpu that has yielded is picked up back for execution
> --either immediately or after a few others-- it can run again. And if
> it yields again (and again, and again), we just go back to option 1 or
> 2 above.
>
>> In both cases the vcpu is not run until the next slot, so I don't
>> think it should make the performance worse in multi-vcpu scenarios.
>> But I can do some tests to double-check.
>>
> All the above being said, I also don't think it will affect
> multi-vcpu VMs' performance much. In fact, even if the yielding vcpu
> is never really stopped, the other ones are indeed given a chance to
> execute if they want to and are able to.
>
> But sure it would not harm verifying with some tests.
>
>>> The main point of using wfi is for power saving. With this change,
>>> you will
>>> end up in a busy loop and as you said consume more power.
>>
>> That's not true: the vcpu is still descheduled until the next slot.
>> There is no busy loop (that would be indeed very bad).
>>
> Well, as a matter of fact there may be busy-looping involved... But
> isn't that the main point of all this? AFAIR, idle=poll in Linux does
> very much the same, and has the same risk of potentially letting tasks
> busy loop.
>
> What will never happen is that a yielding vcpu, by busy looping,
> prevents other runnable (and non-yielding) vcpus from running. And if
> it does, it's a bug. :-)
I didn't say it will prevent another vCPU from running. But it will at
least use a slot that could have been put to good use by another vCPU.
So for a similar workload Xen will perform worse with vwfi=idle, not
even mentioning the power consumption...
>
>>> I don't think this is acceptable even to get a better interrupt
>>> latency. Some
>>> workload will care about interrupt latency and power.
>>>
>>> I think a better approach would be to check whether the scheduler
>>> has another
>>> vCPU to run. If not wait for an interrupt in the trap.
>>>
>>> This would save the context switch to the idle vCPU if we are still
>>> on the
>>> time slice of the vCPU.
>>
>> From my limited understanding of how schedulers work, I think this
>> cannot work reliably. It is the scheduler that needs to tell the
>> arch-specific code to put a pcpu to sleep, not the other way around.
>>
> Yes, that is basically true.
>
> Another way to explain it would be by saying that, if there were other
> vCPUs to run, we wouldn't have gone idle (and entered the idle loop).
>
> In fact, in work-conserving schedulers, if pCPU x becomes idle, it
> means there is _nothing_ that can execute on x itself around. And our
> schedulers are (with the exception of ARINC653, and if not using caps
> in Credit1) work conserving, or at least they want and try to be as
> work conserving as possible.
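A work-conserving pick function is, in essence, just this (illustrative pseudo-C with a made-up interface, not the real scheduler hook):

```c
#include <assert.h>

#define IDLE (-1)   /* placeholder id for the idle vcpu */

/* Work conserving: the idle vcpu is picked only when the runqueue is
 * empty; pending work is never left waiting while the pCPU idles. */
int do_schedule(const int runq[], int len)
{
    return len > 0 ? runq[0] : IDLE;
}
```

So by the time the idle vcpu is entered, the scheduler has already established there is nothing else runnable on that pCPU.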
My knowledge of the scheduler is limited. Does the scheduler take into
account the cost of a context switch when scheduling? When do you
decide to run the idle vCPU? Is it only when no other vCPUs are
runnable, or do you have a heuristic?
Cheers,
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel