From: Dario Faggioli <dario.faggioli@citrix.com>
To: Meng Xu <mengxu@cis.upenn.edu>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
Tianyang Chen <tiche@seas.upenn.edu>,
Meng Xu <xumengpanda@gmail.com>,
George Dunlap <george.dunlap@citrix.com>,
Dagaen Golomb <dgolomb@seas.upenn.edu>
Subject: Re: [PATCH v7]xen: sched: convert RTDS from time to event driven model
Date: Thu, 10 Mar 2016 17:43:18 +0100
Message-ID: <1457628198.3102.525.camel@citrix.com>
In-Reply-To: <CAENZ-+nqqP3aPF6Xhg61-ahLqtj60Nsr7b5bp+T0nAso_O_Bsw@mail.gmail.com>
On Thu, 2016-03-10 at 10:28 -0500, Meng Xu wrote:
> On Thu, Mar 10, 2016 at 5:38 AM, Dario Faggioli
> <dario.faggioli@citrix.com> wrote:
> >
> > I don't think we really need to count anything. In fact, what I had
> > in mind and tried to put down in pseudocode is that we traverse the
> > list of replenishment events twice. During the first traversal, we do
> > not remove the elements that we replenish (i.e., the ones that we call
> > rt_update_deadline() on). Therefore, we can just do the second
> > traversal, find them all in there, handle the tickling, and --in this
> > case-- remove and re-insert them. Wouldn't this work?
> My concern is that:
> Once we run rt_update_deadline() in the first traversal of the list,
> we have updated the cur_deadline and cur_budget already.
> Since the replenish queue is sorted by cur_deadline, how can we know
> which vcpus have been updated in the first traversal and need to be
> reinserted? We shouldn't have to traverse the whole replq to reinsert
> all vcpus, since some of them haven't been replenished yet.
>
Ah, you're right: if we do all the rt_update_deadline() calls in the
first loop, we break the stop condition of the second loop.
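To put it in code: the second loop would presumably reuse the same stop
condition as the first one (rough sketch below), and it would then bail
out at the very first vcpu, because we have already pushed its
cur_deadline past now:

    list_for_each_safe( iter, tmp, replq )
    {
        svc = replq_elem(iter);

        /*
         * Every vcpu we replenished in the first loop now has
         * cur_deadline > now, so we break here immediately and never
         * get to tickle or re-insert any of them.
         */
        if ( now < svc->cur_deadline )
            break;

        /* < tickling, removal and re-insertion > */
    }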
I still don't like counting, it looks fragile. :-/
What you propose here...
> If we want to avoid the counting, we can add a flag like
> #define __RTDS_delayed_reinsert_replq 4
> #define RTDS_delayed_reinsert_replq (1<<__RTDS_delayed_reinsert_replq)
> so that we know when we should stop at the second traversal.
>
...seems like it could work, but I'm also not super happy about it, as
it doesn't look to me like we should need such a generic piece of
information as a flag for this very specific purpose.
I mean, I know we have plenty of free bits in flags, but this is
something that happens *all* *inside* one function (the replenishment
timer handler).
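Just to be sure we're talking about the same thing, I guess the flag
variant would look more or less like this (rough sketch, reusing the
flags field and set_bit()/test_and_clear_bit(), as we do for the other
RTDS_* flags):

    /* First traversal: replenish and mark. */
    list_for_each_safe( iter, tmp, replq )
    {
        svc = replq_elem(iter);

        if ( now < svc->cur_deadline )
            break;

        rt_update_deadline(now, svc);
        set_bit(__RTDS_delayed_reinsert_replq, &svc->flags);
    }

    /* Second traversal: stop at the first vcpu we did not mark. */
    list_for_each_safe( iter, tmp, replq )
    {
        svc = replq_elem(iter);

        if ( !test_and_clear_bit(__RTDS_delayed_reinsert_replq,
                                 &svc->flags) )
            break;

        /* < tickling, removal and re-insertion of svc > */
    }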
What about an internal (i.e., local to the replenishment timer
function), temporary list? Something along the lines of:
...
    LIST_HEAD(tmp_replq);

    /* First loop: replenish every vcpu whose replenishment time has
     * passed, moving it from the replenishment queue to a temp list. */
    list_for_each_safe( iter, tmp, replq )
    {
        svc = replq_elem(iter);

        if ( now < svc->cur_deadline )
            break;

        list_del(&svc->replq_elem);
        rt_update_deadline(now, svc);
        list_add(&svc->replq_elem, &tmp_replq);
    }

    /* Second loop: for each replenished vcpu, do the tickling, then put
     * it back in the replenishment queue, at its new position. */
    list_for_each_safe( iter, tmp, &tmp_replq )
    {
        svc = replq_elem(iter);

        /* < tickling logic > */

        list_del(&svc->replq_elem);
        deadline_queue_insert(&replq_elem, svc, &svc->replq_elem, replq);
    }
...
So, basically, the idea is:
- first, we fetch all the vcpus that need a replenishment, remove
  them from the replenishment queue, do the replenishment and stash
  them in a temp list;
- second, for all the vcpus that we replenished (and we know exactly
  which ones they are: all the ones in the temp list!), we apply the
  proper tickling logic, remove them from the temp list and queue
  their new replenishment event.
It may look a bit convoluted, all this moving between lists, but I do
like the fact that it is super self-contained.
How does that sound / What did I forget this time? :-)
BTW, I hope I got the code snippet right but, please, let's focus on
discussing the idea.
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
Thread overview: 10+ messages
2016-03-05 1:39 [PATCH v7]xen: sched: convert RTDS from time to event driven model Tianyang Chen
2016-03-09 4:33 ` Meng Xu
2016-03-09 15:46 ` Dario Faggioli
2016-03-10 4:00 ` Meng Xu
2016-03-10 10:38 ` Dario Faggioli
2016-03-10 15:28 ` Meng Xu
2016-03-10 16:43 ` Dario Faggioli [this message]
2016-03-10 18:08 ` Meng Xu
2016-03-10 23:53 ` Dario Faggioli
2016-03-11 0:33 ` Meng Xu