From: Dario Faggioli <dario.faggioli@citrix.com>
To: Meng Xu <mengxu@cis.upenn.edu>
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Design RFC] Towards work-conserving RTDS scheduler
Date: Thu, 18 Aug 2016 12:22:06 +0200	[thread overview]
Message-ID: <1471515726.6806.72.camel@citrix.com> (raw)
In-Reply-To: <CAENZ-+mWfR49UipuF9AaYQhACYHW8rEwG+kVUQVC+W=9Os3n4w@mail.gmail.com>



On Tue, 2016-08-09 at 09:57 -0400, Meng Xu wrote:
> On Mon, Aug 8, 2016 at 5:38 AM, Dario Faggioli
> <dario.faggioli@citrix.com> wrote:
> > 
> > I'm just thinking out loud and wondering:
> >  - could it be useful to have a scheduling analysis in place for the
> >    scheduler in work-conserving mode (one, of course, that takes into
> >    account and gives guarantees on the otherwise idle bandwidth... I
> >    know that the existing one holds! :-P) ?
> >  - if yes, do you already have one --or do you think it will be
> >    possible to develop one-- for your priority-index based model?
> I think I could potentially develop one such analysis.
> 
Great. Let me know if you need any help writing the paper! :-P

> > Actually, it's quite likely that you either have already noticed
> > this and done the analysis, or that someone else in the literature
> > has done something similar --maybe with other schedulers-- before.
> Yes, I noticed this but I don't have the analysis yet. ;-) I will work
> out some math to model this situation.
> I'm thinking the desired design will be:
> 1) a work-conserving scheduler;
> 2) a *tight* schedulability analysis. If we cannot get a tight
> analysis, we should at least reduce the abstraction overhead, i.e.,
> num_cores minus the total utilization of all VCPUs. (In order to
> achieve a better analysis, we may need to change the scheduling
> policy a bit. I'm not very clear about how to do it yet, but I will
> think about it.)
> 
Err... I'm not sure I got what exactly you mean here, but this is your
field, so just go ahead with it without bothering to explain everything
to me. :-)

> > Anyway, the idea itself looks fair enough to me. I'd like to hear,
> > if that's fine with you, how you plan to actually implement it, as
> > there of course are multiple different ways to do it, and there are,
> > IMO, a couple of things that should be kept in mind.
> How about letting me think about the analysis first? If we can have
> both the work-conserving algorithm and the analysis, that will be
> better. If we finally decide not to have the analysis, we can fall
> back to discussing the current design.
> 
Sure.

> > Finally, about the work-conserving-ness on-off switch: what added
> > difficulty or increase in code complexity prevents us from, instead
> > of this:
> > 
> > "2) Priority index: It indicates the current  priority level of the
> >     VCPU. When a VCPU’s budget is depleted in the current period,
> > its
> >     priority index will increase by 1 and its budget will be
> >     replenished."
> > 
> > do something like this:
> > 
> > "2) Priority index: It indicates the current  priority level of the
> >     VCPU. When a VCPU's budget is depleted in the current period:
> >      2a) if the VCPU has the work conserving flag set, its priority
> >          index will be increased by 1, and its budget replenished;
> >      2b) if the VCPU has the work conserving flag cleat, it's
> > blocked
> >          until next period."
> > 
> > ?
> Agree. We can have the per-VCPU work-conserving flag.
> 
Glad you see it useful/doable too.
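For the archives, the flag-based rule quoted above could be sketched
roughly as follows. This is only an illustrative model, not actual
sched_rt.c code: the struct layout, field names and function names
(rt_vcpu, budget_depleted, new_period, etc.) are all hypothetical, and
real code would of course deal with locking, time accounting, and
runqueue manipulation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified model of a work-conserving RTDS VCPU.
 * Field names are illustrative, not the real Xen ones. */
struct rt_vcpu {
    int64_t cur_budget;      /* budget left in the current period */
    int64_t budget;          /* full budget, granted each period */
    unsigned prio_index;     /* 0 = guaranteed level; higher = lower prio */
    bool work_conserving;    /* per-VCPU on/off switch */
    bool runnable;           /* eligible to be picked by the scheduler */
};

/* Invoked when a VCPU exhausts its budget within the current period. */
static void budget_depleted(struct rt_vcpu *v)
{
    if (v->work_conserving) {
        /* 2a) demote to the next priority level and replenish, so the
         * VCPU can keep running on otherwise-idle bandwidth. */
        v->prio_index += 1;
        v->cur_budget = v->budget;
    } else {
        /* 2b) classic RTDS behaviour: block until the next period. */
        v->runnable = false;
    }
}

/* At the start of a new period, the VCPU goes back to its
 * guaranteed priority level with a fresh budget. */
static void new_period(struct rt_vcpu *v)
{
    v->prio_index = 0;
    v->cur_budget = v->budget;
    v->runnable = true;
}
```

The scheduler would then always prefer VCPUs with a lower prio_index,
so VCPUs still within their guaranteed budget are never delayed by
work-conserving ones running at a demoted level.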

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)




Thread overview: 5+ messages
2016-08-04  5:15 [Design RFC] Towards work-conserving RTDS scheduler Meng Xu
2016-08-08  9:38 ` Dario Faggioli
2016-08-09 13:57   ` Meng Xu
2016-08-18 10:22     ` Dario Faggioli [this message]
2016-08-18 15:07       ` Meng Xu
