From: Dario Faggioli <dario.faggioli@citrix.com>
To: Julien Grall <julien.grall@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: edgar.iglesias@xilinx.com, george.dunlap@eu.citrix.com,
	nd@arm.com, Punit Agrawal <punit.agrawal@arm.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/arm: introduce vwfi parameter
Date: Mon, 20 Feb 2017 23:53:35 +0100	[thread overview]
Message-ID: <1487631215.6732.266.camel@citrix.com> (raw)
In-Reply-To: <1845466e-c6a3-bcb9-0813-80ecdf267f03@arm.com>


On Mon, 2017-02-20 at 19:38 +0000, Julien Grall wrote:
> On 20/02/17 19:20, Dario Faggioli wrote:
> > E.g., if vCPU x of domain A wants to go idle with a WFI/WFE, but
> > the host is overbooked and currently really busy, Xen wants to run
> > some other vCPU (of either the same or another domain).
> > 
> > That's actually the whole point of virtualization, and the reason
> > why overbooking a host with more vCPUs (from multiple guests) than
> > it has pCPUs works at all. If we start letting guests put the
> > host's pCPUs to sleep, not only the scheduler but many other
> > things would break, IMO!
> 
> I am not speaking about the general case, but about when you get
> 1 vCPU pinned to 1 pCPU (I think this is Stefano's use case). No
> other vCPU will run on this pCPU. So it would be fine to let the
> guest do the WFI.
> 
Mmm... ok, yes, in that case it may make sense and work, from a,
let's say, purely functional perspective. But I still struggle to
place this in the bigger picture.

For instance, as you say, executing a WFI from a guest directly on
hardware only makes sense if we have 1:1 static pinning. Which means
it can't just be done by default, or with a boot parameter alone,
because we need to check and enforce that only 1:1 pinning is in
place.

Is it possible to decide online whether to trap and emulate WFI or to
execute it natively, and to change that decision dynamically? And
even if yes, how would the whole thing work? When direct execution is
enabled for a domain, do we automatically enforce 1:1 pinning for
that domain, and kick all the other domains off its pCPUs? What if
those domains have their own pinning, or have 'direct WFI' behavior
enabled as well?
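
Just to make these questions concrete, this is the kind of check I
have in mind; a purely illustrative sketch, not existing Xen code (in
particular, pcpu_exclusive_to() and the direct_wfi field are made-up
names):

/* Illustrative only: enable 'direct WFI' for a domain just if every
 * one of its vCPUs is pinned to exactly one pCPU, and has that pCPU
 * all for itself. */
static int domain_enable_direct_wfi(struct domain *d)
{
    struct vcpu *v;

    for_each_vcpu ( d, v )
    {
        /* Hard affinity must be a single pCPU... */
        if ( cpumask_weight(v->cpu_hard_affinity) != 1 )
            return -EINVAL;

        /* ...and no other vCPU, of any domain, may run there
         * (pcpu_exclusive_to() is a made-up helper). */
        if ( !pcpu_exclusive_to(v) )
            return -EINVAL;
    }

    d->arch.direct_wfi = 1;

    return 0;
}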

If it is not possible to change all this online and on a per-domain
basis, what do we do? When booted with the 'direct WFI' flag, do we
only accept 1:1 pinning? And who should enforce that, the
setvcpuaffinity hypercall?
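
If it were the hypercall, I guess the guard would have to look
something like this (hypothetical again, the direct_wfi flag does not
exist):

/* Hypothetical check in the vCPU affinity setting path: refuse any
 * affinity change that would break 1:1 pinning while 'direct WFI' is
 * enabled for the domain. */
if ( v->domain->arch.direct_wfi && cpumask_weight(new_affinity) != 1 )
    return -EBUSY;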

These are just examples, my point being that, in theory, if we
consider a very specific use case, or set of use cases, there's a lot
we can do. But when you say "why don't you let the guest directly
execute WFI", in response to a patch and a discussion like this,
people may think that you are actually proposing it as a solution,
which is not possible without answering all the open questions above
(and probably more), and without introducing a lot of cross-subsystem
policing inside Xen, which is something we usually want to avoid.

But, if you let me say this again, it looks to me like we are trying
to solve too many problems all at once in this thread; should we try
slowing down and refocusing? :-)

> If you run multiple vCPUs on the same pCPU you would have a bigger
> interrupt latency. And blocking the vCPU or yielding will likely
> give the same numbers, unless you know the interrupt will come
> right now.
>
Maybe. At least on x86, that would depend on the actual load. If all
your pCPUs are more than 100% loaded, yes. If the load is less than
that, you may still see improvements.

> But in that case, using WFI in the guest may not have been the
> right thing to do.
> 
But if the guest is, let's say, Linux, does it use WFI or not? And is
that the right thing for it to do, or not?
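
FWIW, my understanding is that the default arm64 idle path in Linux
does end up in a WFI, i.e., roughly something like the following
(just an illustrative sketch, not verbatim kernel source):

/* Roughly what a guest's default idle does on arm64 when there is
 * nothing to run (illustrative sketch, not actual Linux code). */
static inline void guest_idle_wait_for_interrupt(void)
{
    asm volatile("dsb sy" ::: "memory"); /* complete prior accesses */
    asm volatile("wfi" ::: "memory");    /* wait for an interrupt */
}

So, unless the guest is configured for a polling idle, I'd expect it
to hit WFI pretty often.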

Again, the fact that you're saying this probably means there's
something about ARM that I am either missing or ignoring.

> I have heard of use cases where people want to disable the
> scheduler (e.g. a nop scheduler) because they know only 1 vCPU will
> ever run on the pCPU. This is exactly the use case I am thinking
> about.
> 
Sure! Except that, in Xen, we don't know whether we have, and always
will have, just 1 vCPU running on each pCPU. Nor do we have a way to
enforce that, either in the toolstack or in the hypervisor. :-P

> > So, I'm not sure what we're talking about, but what I'm quite
> > sure of is that we don't want a guest to be able to decide when,
> > and until what time/event, a pCPU goes idle.
> 
> Well, if the guest is not using the WFI/WFE at all you would need an 
> interrupt from the scheduler to get it running. 
>
If the guest is not using WFI, it's busy looping, isn't it?

> So here it is similar: the scheduler would have set up a timer, and
> the processor will wake up when receiving the timer interrupt and
> enter the hypervisor.
> 
> So, yes, in the end the guest will waste its slot.
>
Did I mention already that this concept of "slots" does not apply
here? :-D

> Cheers,
> 
Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
