From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@citrix.com>,
xen-devel <xen-devel@lists.xenproject.org>
Cc: Jonathan Davies <Jonathan.Davies@citrix.com>,
Julien Grall <julien.grall@arm.com>,
Stefano Stabellini <sstabellini@kernel.org>,
Marcus Granado <marcus.granado@citrix.com>
Subject: Re: [PATCH 1/3] xen: sched: introduce the 'null' semi-static scheduler
Date: Thu, 6 Apr 2017 16:43:26 +0200
Message-ID: <1491489806.18721.18.camel@citrix.com>
In-Reply-To: <CAFLBxZbFwgoMkNicsqXeBan7SvtE+Y0odyPPXmD6fws=ACCThA@mail.gmail.com>
On Mon, 2017-03-27 at 11:48 +0100, George Dunlap wrote:
> On Mon, Mar 27, 2017 at 11:31 AM, George Dunlap
> <george.dunlap@citrix.com> wrote:
> >
> > Would it be possible instead to have domain assignment, vcpu-add /
> > remove, pcpu remove, &c just fail (perhaps with -ENOSPC and/or
> > -EBUSY)
> > if we ever reach a situation where |vcpus| > |pcpus|?
> >
> > Or, to fail as many operations *as possible* which would bring us
> > to
> > that state, use the `waitqueue` idea as a backup for situations
> > where we
> > can't really avoid it?
>
> I suppose one reason is that it looks like a lot of the operations
> can't really fail -- insert_vcpu and deinit_pdata both return void,
> and the scheduler isn't even directly involved in setting the hard
> affinity, so doesn't get a chance to object that with the new hard
> affinity there is nowhere to run the vcpu.
>
This is exactly how it is.
The waitqueue handling is the most complicated thing to deal with in
this scheduler, and I expect it to be completely useless, at least as
long as the scheduler is used the way we think it should be used.
*But* assuming that will be the case 100% of the time feels
unrealistic, and a waitqueue was the best fallback I could come up with.
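Just to make the idea more concrete, the logic is roughly the
following (an illustrative sketch with made-up names and helpers, not
the actual code from the patch):

    /* Illustrative sketch only; names and helpers are made up. */
    struct null_vcpu {
        struct list_head waitq_elem;   /* link in the waitqueue */
        struct vcpu *vcpu;
    };

    struct null_private {
        spinlock_t waitq_lock;
        struct list_head waitq;        /* vCPUs with no pCPU assigned */
    };

    static void vcpu_assign_or_park(struct null_private *prv, struct vcpu *v)
    {
        unsigned int cpu = pick_free_pcpu(prv, v);    /* made-up helper */

        if ( cpu < nr_cpu_ids )
            vcpu_assign(prv, v, cpu);                 /* made-up helper */
        else
        {
            /* No suitable free pCPU: park the vCPU on the waitqueue. */
            spin_lock(&prv->waitq_lock);
            list_add_tail(&get_null_vcpu(v)->waitq_elem, &prv->waitq);
            spin_unlock(&prv->waitq_lock);
        }
    }

When a pCPU becomes free (because a vCPU leaves it, or because a pCPU
is added to the pool), the first vCPU on that list, if any, gets
assigned to it.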
As you say, there are a whole bunch of operations that the scheduler
just can't force to fail. E.g., I won't be able to forbid removing a
pCPU from a sched_null pool just because it still has a vCPU assigned
to it, nor adding a domain (and hence its vCPUs) to such a pool when
there are not enough free pCPUs. :-/
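For reference, this is more or less what those hooks look like in
xen/include/xen/sched-if.h (quoting from memory, so the exact
signatures may differ slightly):

    struct scheduler {
        /* ... */
        /* Both return void: there is no way to report failure back. */
        void (*insert_vcpu)  (const struct scheduler *ops, struct vcpu *v);
        void (*deinit_pdata) (const struct scheduler *ops, void *pcpu_data,
                              int cpu);
        /* ... */
    };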
> I don't want to wait to re-write the interfaces to get this scheduler
> in, so I suppose the waitqueue thing will have to do for now. :-)
>
Yep. :-D
Let me add that, FWIW, I've tested a situation where a (Linux) VM with
4 vCPUs was in a null pool with 4 pCPUs and, with all the vCPUs
running, I removed and re-added 3 of the 4 pCPUs of the pool. While I
agree that this should not be done, and that it is at high risk of
confusing, stalling or deadlocking the guest kernel, nothing exploded.
Doing the same thing to dom0, for instance, proved to be a lot less
safe. :-)
What I certainly can do is add a warning when a vCPU hits the
waitqueue. Chatty indeed, but we _do_ want to be a bit nasty to those
who misuse the scheduler... it's for their own good! :-P
Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)