From: Michal Soltys <soltys@ziu.info>
To: "John A. Sullivan III" <jsullivan@opensourcedevel.com>
Cc: netdev@vger.kernel.org
Subject: Re: Latency guarantees in HFSC rt service curves
Date: Sat, 17 Dec 2011 19:40:36 +0100 [thread overview]
Message-ID: <4EECE224.90800@ziu.info> (raw)
In-Reply-To: <1323934716.8451.272.camel@denise.theartistscloset.com>
Sorry for the late reply.
On 15.12.2011 08:38, John A. Sullivan III wrote:
> Yes, granted. What I'm trying to do in the documentation is translate
> the model's mathematical concepts into concepts more germane to system
> administrators. A sys admin is not very likely to think in terms of
> deadline times but will think, "I've got a momentary boost in
> bandwidth to allow me to ensure proper latency for time sensitive
> traffic." Of course, that could be where I'm getting in trouble :)
Tough to say. I'd say you can't completely avoid math here (though I'm
not saying you can't trim it a bit). In the same way, one can't really
get through TBF without understanding what a token bucket is. Granted,
the two are on different levels of complexity, but you see my point.
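As an aside, the token-bucket idea behind TBF fits in a few lines. A toy
simulation (not the kernel's TBF implementation; the function name and
byte-based units are my own):

```python
def token_bucket_conforms(burst, rate, packets):
    """Simulate a token bucket: tokens accrue at `rate` bytes/s up to
    `burst` bytes; a packet conforms if enough tokens are available.
    `packets` is a list of (arrival_time_s, size_bytes), time-ordered."""
    tokens = burst          # bucket starts full
    last = 0.0
    results = []
    for t, size in packets:
        tokens = min(burst, tokens + (t - last) * rate)
        last = t
        if size <= tokens:
            tokens -= size
            results.append(True)   # conforming: passed at once
        else:
            results.append(False)  # non-conforming: delayed/dropped
    return results
```

With burst=1500 and rate=10000, two back-to-back 1000-byte packets at
t=0 give one conforming and one non-conforming packet; after 0.1 s the
bucket has refilled and a third packet conforms again.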
>> Whole RT design - with eligible/deadline split - is to allow convex
>> curves to send "earlier", pushing the deadlines to the "right" -
>> which in turn allows newly backlogged class to have brief priority.
>> But it all remains under interface limit and over-time fulfills
>> guarantees (even if locally they are violated).
> To the right? I would have thought to the left on the x axis, i.e.,
> the deadline time becomes sooner? Ah, unless you are referring to the
> other queue's deadline times and mean not literally changing the
> deadline time but jumping in front of the ones on the right of the new
> queue's deadline time.
I mean - for convex curves - deadlines are further to the right, as such
a class is eligible earlier (for convex curves, the eligible curve is
just linear m2 without the m1 part), thus it receives more service, so
its deadline projection is shifted to the right. When some other concave
curve becomes active, its deadlines will naturally be preferred while
both curves are eligible.
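The geometry above can be sketched with a two-piece curve and its
inverse. This is a toy model of the curve arithmetic only, not HFSC's
actual rtsc update logic in the kernel; names and numbers are
illustrative (m1 < m2, i.e. a convex curve with m1 > 0):

```python
def curve_value(t, m1, d, m2):
    """Cumulative service (bytes) granted by time t under a two-piece
    curve: slope m1 for the first d seconds, slope m2 afterwards."""
    if t <= d:
        return m1 * t
    return m1 * d + m2 * (t - d)

def finish_time(work, m1, d, m2):
    """Inverse of the curve: time needed to accumulate `work` bytes
    (assumes m1 > 0 and d > 0)."""
    if work <= m1 * d:
        return work / m1
    return d + (work - m1 * d) / m2

# Convex rt curve: early service is slow, so the deadline for a given
# amount of work lands further to the right than the pure-m2 line
# that governs eligibility.
deadline = finish_time(200, 5000, 0.01, 10000)  # full convex curve
eligible = 200 / 10000                          # linear m2 only
```

Here deadline (0.025 s) lies to the right of eligible (0.02 s): the
class becomes eligible before its deadline arrives, which is exactly
the window in which a newly backlogged concave class can be preferred.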
> The thought behind oversubscribing m1 . . . well . . . not
> intentionally oversubscribing - just not being very careful about
> setting m1 to make sure it is not oversubscribed (perhaps I wasn't
> clear that the oversubscription is not intentional) - is that it is
> not likely that all queues are continually backlogged thus I can get
> away with an accidental over-allocation in most cases as it will
> quickly sort itself out as soon as a queue goes idle. As a result, I
> can calculate m1 solely based upon the latency requirements of the
> traffic not accounting for the impact of the bandwidth momentarily
> required to do that, i.e., not being too concerned if I have
> accidentally oversubscribed m1.
Well, that's one way to do it. If your aim is that, say, a certain n% of
leaves are active at the same time (with rare exceptions), you could set
the RT curves (the m1 parts) as if you had fewer leaves. If you leave m2
alone and don't care about the mentioned exceptions, it should work.
But IMHO that's more of a corner case (with alternative solutions
available) than a "cookbook" recommendation.
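The arithmetic of "set m1 as if you had fewer leaves" is just a
division. A back-of-the-envelope helper (the function name, kbit units,
and numbers are my own, not anything from tc):

```python
def m1_budget(link_kbit, n_leaves, expected_active_frac):
    """Per-leaf m1 burst rate, sized as if only the expected fraction
    of leaves is ever backlogged at once. Nominally oversubscribed if
    all n_leaves burst together."""
    expected_active = max(1, round(n_leaves * expected_active_frac))
    return link_kbit / expected_active
```

E.g. on a 100 Mbit link with 20 leaves, assuming at most half are ever
backlogged together, each leaf can get a 10 Mbit m1 burst; the nominal
sum (200 Mbit) exceeds the link, which is exactly the accidental
oversubscription being discussed.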
> The advantages of doing it via the m1 portion of the rt curve rather
> than the ls curve are:
>
> 1) It is guaranteed whereas the ls will only work when there is
> available bandwidth. Granted, my assumption that it is rare for all
> classes to be continually backlogged implies there is always some
> extra bandwidth available. And granted that it is not guaranteed if
> too many oversubscribed m1's kick in at the same time.
>
> 2) It seems less complicated than trying to figure out what my
> possibly available ls ratios should be to meet my latency requirements
> (which then also recouples bandwidth and latency). m1 is much more
> direct and reliable.
As mentioned above - it's one way to do things and you're right. But
I think you might be underestimating LS a bit. By its nature, it
schedules at a speed normalized to the interface's capacity (or UL
limits, if applicable) - so the fewer classes are actually active, the
more each of them gets from LS. You mentioned earlier that you saw LS
being more aggressive in allocating bandwidth - maybe that was the
effect you were seeing?
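That normalization can be illustrated with a simplified share
computation (this leaves out HFSC's virtual-time machinery entirely;
names and weights are illustrative):

```python
def ls_shares(link_kbit, weights, active):
    """Link-share in a nutshell: the currently active classes split
    the full link in proportion to their ls weights, so an idle
    class's share automatically flows to the active ones."""
    total = sum(weights[c] for c in active)
    return {c: link_kbit * weights[c] / total for c in active}
```

With weights {a:1, b:1, c:2} on a 100 Mbit link, all three active gives
c 50 Mbit; with only a active, a gets the whole 100 Mbit - which is why
LS can look "more aggressive" than the configured ratios suggest when
few classes are backlogged.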
Thread overview: 12+ messages
2011-12-09 21:51 Latency guarantees in HFSC rt service curves John A. Sullivan III
2011-12-10 13:03 ` Michal Soltys
2011-12-10 15:35 ` John A. Sullivan III
2011-12-10 17:57 ` Michal Soltys
2011-12-10 18:35 ` John A. Sullivan III
2011-12-13 23:46 ` Michal Soltys
2011-12-14 5:16 ` John A. Sullivan III
2011-12-14 5:24 ` John A. Sullivan III
2011-12-15 1:48 ` Michal Soltys
2011-12-15 7:38 ` John A. Sullivan III
2011-12-17 18:40 ` Michal Soltys [this message]
2011-12-17 20:41 ` John A. Sullivan III