netdev.vger.kernel.org archive mirror
* Latency guarantees in HFSC rt service curves
@ 2011-12-09 21:51 John A. Sullivan III
  2011-12-10 13:03 ` Michal Soltys
  0 siblings, 1 reply; 12+ messages in thread
From: John A. Sullivan III @ 2011-12-09 21:51 UTC (permalink / raw)
  To: netdev

Hello, all.  Sorry to be SPAMming the list but I think I must have a
fundamental conceptual misunderstanding of HFSC's decoupling of latency
and bandwidth guarantees.  I understand how it is done.  I understand in
theory why it is critically important.  Where I'm failing is seeing how
the current implementation makes any real world difference.  I'm sure it
does, I just don't see it.

Here's how I've explained it at the end of the 14 pages of documentation
I've created to try to explain IFB and HFSC from a system
administrator's perspective.  As you can see, I'm relying upon the
excellent illustration of this part of HFSC in the SIGCOMM97 paper on
page 4:

"To illustrate the advantage of decoupling delay and bandwidth
allocation with non-linear service curves, consider the example in
Figure 2, where a video and a FTP session share a 10 Mbps link . . . .
Let the video source send 30 8KB frames per second, which corresponds
to a required bandwidth of 2 Mbps. The remaining 8 Mbps is reserved by a
continuously backlogged FTP session. For simplicity, let all packets be
of size 8 KB. Thus, it takes roughly 6.5 ms to transmit a packet."

"As can be seen, the deadlines of the video packets occur every 33 ms,
while the deadlines of the FTP packets occur every 8.2 ms. This results
in a delay of approximately 26 ms for a video packet."

Let's work through the math to make that more understandable.  HFSC is
committed to deliver 2 Mbps to video and each packet is 8KB long.  Thus,
HFSC's commitment to deliver that packet is within (8192 * 8) bits /
2,000,000 b/s = 32.8ms, which the paper rounds to 33 (my figure of 32
came from using 8,000 rather than 8,192 bytes per KB).  In other words,
to meet the deadline
based solely upon the rate, the bandwidth part of the rt service curve,
the packet needs to be finished dequeueing at 33ms.  Since it only takes
6.5ms to send the packet, HFSC can sit on the packet it received for 33
- 6.5 = 26.5ms.  This adds unnecessary latency to the video stream.

In the second scenario, we introduce an initial, elevated bandwidth
guarantee for the first 10ms.  The bandwidth for the first 10ms is now
6.6 Mbps instead of 2 Mbps.  We do the math again and HFSC's commitment
to video to maintain 6.6 Mbps is to finish dequeueing the packet within
(8192 * 8)bits / 6,600,000(b/s) = roughly 10ms.  Since it takes 6.5 ms to send
the packet, HFSC can sit on the packet for no more than 10 - 6.5 = 3.5
ms.  Quite a difference!
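
The arithmetic in both scenarios can be sanity-checked with a short
script (a sketch of my own; it uses 8 KB = 8192 bytes as in the paper,
so the slack figures land a little under the rounded 26.5 and 3.5 ms
above):

```python
# Slack an HFSC rt service curve allows for one 8 KB packet: the
# rate-based deadline minus the time the packet spends on the wire.
PACKET_BITS = 8192 * 8                   # one 8 KB frame
LINK_BPS = 10_000_000                    # the shared 10 Mbps link
TX_MS = PACKET_BITS / LINK_BPS * 1000    # wire time, about 6.55 ms

def slack_ms(rate_bps):
    """How long HFSC may sit on the packet and still meet its deadline."""
    deadline_ms = PACKET_BITS / rate_bps * 1000
    return deadline_ms - TX_MS

print(round(slack_ms(2_000_000), 1))     # linear 2 Mbps curve -> 26.2
print(round(slack_ms(6_600_000), 1))     # concave m1 of 6.6 Mbps -> 3.4
```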

That's our documentation so I think I get it but here's my problem.
Practically speaking, as long as it's not extreme, latency at the
beginning of a video stream (I'm using video because that is the example
given) is not an issue.  The problem is if I introduce latency in the
video stream once it has started.  So what is the advantage of starting
the stream in 3.5ms versus 26.5ms? The subsequent packets where latency
really matters are all governed by the m2 curve at 2 Mbps in this
example.

Moreover, let's say I have three video streams which start
simultaneously.  Only the first packet of the first stream receives the
6.6Mbps bandwidth guarantee of the first 10ms so the other videos
receive no practical benefit whatsoever from this m1 curve.

I'm sure that's not the case but what am I missing? Thanks - John

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Latency guarantees in HFSC rt service curves
  2011-12-09 21:51 Latency guarantees in HFSC rt service curves John A. Sullivan III
@ 2011-12-10 13:03 ` Michal Soltys
  2011-12-10 15:35   ` John A. Sullivan III
  0 siblings, 1 reply; 12+ messages in thread
From: Michal Soltys @ 2011-12-10 13:03 UTC (permalink / raw)
  To: John A. Sullivan III; +Cc: netdev

On 11-12-09 22:51, John A. Sullivan III wrote:
> 
> Let's work through the math to make that more understandable. HFSC is
> committed to deliver 2 Mbps to video and each packet is 8KB long.
> Thus, HFSC's commitment to deliver that packet is within (8000 *
> 8)bits / 2,000,000(b/s) = 32ms.  I'm not quite sure why I come up with
> 32 and they say 33 but we'll use 33.  In other words, to meet the
> deadline based solely upon the rate, the bandwidth part of the rt
> service curve, the packet needs to be finished dequeueing at 33ms.
> Since it only takes 6.5ms to send the packet, HFSC can sit on the
> packet it received for 33 - 6.5 = 26.5ms.  This adds unnecessary
> latency to the video stream.

For the record, HFSC will only sit on anything if respective classes
(subtrees) are limited by UL. And RT curves ignore those. If you have
one leaf active, then it will just dequeue asap (modulo UL). If you
have more, then RT and afterwards LS will arbitrate the packets
w.r.t. the curves you set (and if applicable, UL will stall LS
to match the specified speeds).

> That's our documentation so I think I get it but here's my problem.
> Practically speaking, as long as it's not extreme, latency at the
> beginning of a video stream (I'm using video because that is the
> example given) is not an issue. 

> The problem is if I introduce latency in the video stream once it has
> started.  So what is the advantage of starting the stream in 3.5ms
> versus 26.5ms?

> The subsequent packets where latency really matters are all governed
> by the m2 curve at 2 Mbps in this example.

That 2mbit is the worst-case scenario (congested link). Remember
about LS which will govern the remaining bandwidth as soon as all RT
requirements are fulfilled.

If you do need a bigger /guarantee/ no matter what, you need steeper m2.
Or a different approach. HFSC won't give more than it's asked for -
if the RT curve's m2 is set to 2mbit, then packets enqueued in the leaf
with such curve will get 2mbit (modulo cpu/network/uplink
capability/etc.).

> Moreover, let's say I have three video streams which start
> simultaneously.  Only the first packet of the first stream receives
> the 6.6Mbps bandwidth guarantee of the first 10ms so the other videos
> receive no practical benefit whatsoever from this m1 curve.

If you want guarantees per flow, you have to setup for that (same
applies to other classful qdiscs). Rough simplistic scheme would be:

For N flows, you need N classes (+ appropriate filter setup to direct
them to respective leafs) - or - more elaborate qdisc at the leaf that
will go over packets (by quantity or their length) in e.g. round robin
fashion from different flows it can distinguish (and longer period of m1
that will be sufficient to cover more packets). Or something in between.
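
The round-robin idea can be sketched in a few lines (a toy model only,
not sfq's or drr's actual algorithm):

```python
# Toy round-robin interleaver over per-flow FIFO queues - the kind of
# per-flow fairness a leaf qdisc (e.g. sfq or drr) provides.
from collections import deque

def round_robin(flows):
    """flows: dict of name -> deque of packets; yields packets fairly."""
    queues = deque(flows.items())
    while queues:
        name, q = queues.popleft()
        if q:                        # flow still has packets: serve one
            yield name, q.popleft()
            queues.append((name, q))
        # empty flows simply drop out of the rotation

flows = {"a": deque([1, 2]), "b": deque([3]), "c": deque([4, 5])}
order = [name for name, _ in round_robin(flows)]
print(order)  # ['a', 'b', 'c', 'a', 'c']
```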

You have lots of tools to choose from (not all listed of course) -
filters such as fw, u32, flow; qdiscs (meant to attach to leaf classes)
such as choke, red, sfq, drr; iptables targets (mark, classify). And
more.


* Re: Latency guarantees in HFSC rt service curves
  2011-12-10 13:03 ` Michal Soltys
@ 2011-12-10 15:35   ` John A. Sullivan III
  2011-12-10 17:57     ` Michal Soltys
  0 siblings, 1 reply; 12+ messages in thread
From: John A. Sullivan III @ 2011-12-10 15:35 UTC (permalink / raw)
  To: Michal Soltys; +Cc: netdev

Thanks again for your help, Michal.  I'll answer in line - John

On Sat, 2011-12-10 at 14:03 +0100, Michal Soltys wrote:
> On 11-12-09 22:51, John A. Sullivan III wrote:
> > 
> > Let's work through the math to make that more understandable. HFSC is
> > committed to deliver 2 Mbps to video and each packet is 8KB long.
> > Thus, HFSC's commitment to deliver that packet is within (8000 *
> > 8)bits / 2,000,000(b/s) = 32ms.  I'm not quite sure why I come up with
> > 32 and they say 33 but we'll use 33.  In other words, to meet the
> > deadline based solely upon the rate, the bandwidth part of the rt
> > service curve, the packet needs to be finished dequeueing at 33ms.
> > Since it only takes 6.5ms to send the packet, HFSC can sit on the
> > packet it received for 33 - 6.5 = 26.5ms.  This adds unnecessary
> > latency to the video stream.
> 
> For the record, HFSC will only sit on anything if respective classes
> (subtrees) are limited by UL. And RT curves ignore those. If you have
> one leaf active, then it will just dequeue asap (modulo UL). If you
> have more, then RT and afterwards LS will arbitrate the packets
> w.r.t. to the curves you set (and if applicable - UL will stall LS
> to match specified speeds).
Yes, that's what I thought.  I think the example given is that the FTP
queue is constantly backlogged at which point, as you point out, rt will
arbitrate.
> 
> > That's our documentation so I think I get it but here's my problem.
> > Practically speaking, as long as it's not extreme, latency at the
> > beginning of a video stream (I'm using video because that is the
> > example given) is not an issue. 
> 
> > The problem is if I introduce latency in the video stream once it has
> > started.  So what is the advantage of starting the stream in 3.5ms
> > versus 26.5ms?
> 
> > The subsequent packets where latency really matters are all governed
> > by the m2 curve at 2 Mbps in this example.
> 
> That 2mbit is worst-case scenario (congested link). Remember
> about LS which will govern the remaining bandwidth as soon as all RT
> requirements are fulfilled.
Yes, with your help I think I've got that.
> 
> If you do need a bigger /guarantee/ no matter what, you need steeper m2.
> Or different approach. HFSC won't give more that it's asked to do for -
> if the RT curve's m2 is set to 2mbit, then packets enqueued in the leaf
> with such curve will get 2mbit (modulo cpu/network/uplink
> capability/etc.).
Yes, makes perfect sense.  But I suppose I'm focusing on m1 and its
practical purpose.
> 
> > Moreover, let's say I have three video streams which start
> > simultaneously.  Only the first packet of the first stream receives
> > the 6.6Mbps bandwidth guarantee of the first 10ms so the other videos
> > receive no practical benefit whatsoever from this m1 curve.
> 
> If you want guarantees per flow, you have to setup for that (same
> applies to other classful qdiscs). Rough simplistic scheme would be:
> 
> For N flows, you need N classes (+ appropriate filter setup to direct
> them to respective leafs) - or - more elaborate qdisc at the leaf that
> will go over packets (by quantity or their length) in e.g. round robin
> fashion from different flows it can distinguish (and longer period of m1
> that will be sufficient to cover more packets). Or something in between.
<snip>
Makes perfect sense but seems to confirm what I was thinking.  There
seems to be little practical use for the m1 curve.  Assuming the queues
are often backlogged (or we would not be using traffic shaping), m1 only
applies for a typically very short period of time, perhaps one packet;
after that, the latency is determined exclusively by m2.  So, unless
I've missed something (which is not unlikely), m1 is very interesting in
theory but not very useful in the real world.  Am I missing something?

Thus, the biggest advantage of HFSC over something like HTB is that we
have separate controls for the bandwidth guarantees and the ratio for
sharing available excess bandwidth.  The decoupling of latency and
bandwidth guarantees, which is a remarkable accomplishment, seems to
fall into the category of technical fact but not practically useful.
I'd very much like to be wrong :) Thanks - John


* Re: Latency guarantees in HFSC rt service curves
  2011-12-10 15:35   ` John A. Sullivan III
@ 2011-12-10 17:57     ` Michal Soltys
  2011-12-10 18:35       ` John A. Sullivan III
  0 siblings, 1 reply; 12+ messages in thread
From: Michal Soltys @ 2011-12-10 17:57 UTC (permalink / raw)
  To: John A. Sullivan III; +Cc: netdev

On 11-12-10 16:35, John A. Sullivan III wrote:
> Makes perfect sense but seems to confirm what I was thinking.  There
> seems to be little practical use for the m1 curve.  Assuming the
> queues are often backlogged (or we would not be using traffic
> shaping), m1 only applies for a typically very short period of time,
> perhaps one packet, after that, the latency is determined exclusively
> by m2.  So, unless I've missed something (which is not unlikely), m1
> is very interesting in theory but not very useful in the real world.
> Am I missing something?

You forgot about how curves get updated on fresh backlog periods.

If your important traffic designated to some leaf is not permanently
backlogged, it will be constantly switching between active/inactive
states. Any switch to active state will update its curves (minimum of
previous one vs. fresh one anchored at current (time,service)), during
which it will regain some/all of the m1 time.

For simplicity, say you have uplink 10mbit, divided into two chunks
(A and B) with convex/concave curves. On A there's a 24/7 torrent
daemon, on B there's some low bandwidth latency sensitive voip/game/etc.
B will send 1 packet, maybe a few, and go inactive - possibly for
tens/hundreds of milliseconds. Next time the class becomes backlogged,
the curves will be updated, and almost for sure the whole new one will
be chosen as the minimum one - and m1 will be used. In a sort of way,
m1 will be (for the most part) responsible for "activating"
latency-sensitive bandwidth, and m2 will be more responsible for the
bursts. The difference between m1 and m2, and the 'd' duration of m1,
will skew the role.

Perhaps easier example: setup as above, but put a ping on B with 100ms
delay between sends. Every single one of those will go at m1 speed
(crazy curve setups aside).

Similarly, if you consider A's RT set to say 5mbit, and B to 4mbit/2mbit
(and LS fifty/fifty), with some video in B now that doesn't push itself
more than 2mbit - each packet of B will use m1.
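
A rough model of that update, with made-up units (bits per millisecond
here) and the eligible/deadline split ignored:

```python
# A concave service curve anchored at (t0, s0): slope m1 for d, then m2.
def concave(t0, s0, m1, d, m2):
    def sc(t):
        if t <= t0:
            return s0
        if t <= t0 + d:
            return s0 + m1 * (t - t0)
        return s0 + m1 * d + m2 * (t - t0 - d)
    return sc

def update(old_sc, fresh_sc):
    """On a new backlog period the class follows the minimum of both."""
    return lambda t: min(old_sc(t), fresh_sc(t))

old = concave(0, 0, m1=6.6, d=10, m2=2)   # curve from time 0
# The class idles, then becomes backlogged again at t=50 ms having
# received only 40 units of service; a fresh curve is anchored there.
fresh = concave(50, 40, m1=6.6, d=10, m2=2)
cur = update(old, fresh)
# The fresh curve lies below the old one, so m1 applies again:
print(cur(55) == fresh(55))   # True
```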


* Re: Latency guarantees in HFSC rt service curves
  2011-12-10 17:57     ` Michal Soltys
@ 2011-12-10 18:35       ` John A. Sullivan III
  2011-12-13 23:46         ` Michal Soltys
  0 siblings, 1 reply; 12+ messages in thread
From: John A. Sullivan III @ 2011-12-10 18:35 UTC (permalink / raw)
  To: Michal Soltys; +Cc: netdev

On Sat, 2011-12-10 at 18:57 +0100, Michal Soltys wrote:
> On 11-12-10 16:35, John A. Sullivan III wrote:
> > Makes perfect sense but seems to confirm what I was thinking.  There
> > seems to be little practical use for the m1 curve.  Assuming the
> > queues are often backlogged (or we would not be using traffic
> > shaping), m1 only applies for a typically very short period of time,
> > perhaps one packet, after that, the latency is determined exclusively
> > by m2.  So, unless I've missed something (which is not unlikely), m1
> > is very interesting in theory but not very useful in the real world.
> > Am I missing something?
> 
> You forgot about how curves get updated on fresh backlog periods.

I was wondering if that was the key!
> 
> If your important traffic designated to some leaf is not permanently
> backlogged, it will be constantly switching between active/inactive
> states. Any switch to active state will update its curves (minimum of
> previous one vs. fresh one anchored at current (time,service)), during
> which it will regain some/all of the m1 time.
> 
> For simplicity, say you have uplink 10mbit, divided into two chunks
> (A and B) with convex/concave curves. On A there's a 24/7 torrent
> daemon, on B there's some low bandwidth latency sensitive voip/game/etc.
> B will send 1 packet, maybe a few, and go inactive - possibly for
> tens/hundreds of milliseconds. Next time the class becomes backlogged,
> the curves will be updated, and almost for sure the whole new one will
> be chosen as the minimum one - and m1 will be used. In a sort of way,
> m1 will be (for the most part) responsible for "activating"
> latency-sensitive bandwidth, and m2 will be more responsible for the
> bursts. The difference between m1 and m2, and the 'd' duration of m1,
> will skew the role.
> 
> Perhaps easier example: setup as above, but put a ping on B with 100ms
> delay between sends. Every single one of those will go at m1 speed
> (crazy curve setups aside).
> 
> Similarly, if you consider A's RT set to say 5mbit, and B to 4mbit/2mbit
> (and LS fifty/fifty), with some video in B now that doesn't push itself
> more than 2mbit - each packet of B will use m1.
<snip>
So, again, trying to wear the eminently practical hat of a sys admin,
for periodic traffic, i.e., protocols like VoIP and video that are
sending packets at regular intervals and thus likely to reset the curve
after each packet, m1 helps reduce latency while m2 is a way of reducing
the chance of overrunning the circuit under heavy load, i.e., where the
concave queue is backlogged.

When we start multiplexing streams of periodic traffic, we are still
fine as long as we are not backlogged.  Once we are backlogged, we drop
down to the m2 curve which prevents us from overrunning the circuit
(assuming the sum of our rt m2 curves <= circuit size) and hopefully
still provides adequate latency.  If we are badly backlogged, we have a
problem with which HFSC can't help us :) (and I suppose where short
queues are helpful).
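
To put rough numbers on the two regimes (my own toy model: per-packet
deadlines only, ignoring eligible times and wire time, with rates from
the earlier example):

```python
PKT = 1500 * 8                   # a full-size Ethernet packet, in bits
M1, M2 = 6_600_000, 2_000_000    # concave rt curve, bits per second
D = 0.010                        # duration of the m1 segment, seconds

# Periodic flow: the class idles between packets, the curve is updated
# each time, so each packet's deadline comes from the m1 slope
# (valid while PKT <= M1 * D, i.e. the packet fits in the m1 segment).
assert PKT <= M1 * D
periodic_ms = PKT * 1000 / M1

# Continuously backlogged flow: after the first D seconds deadlines
# advance at the m2 slope, one packet every PKT / M2 seconds.
backlogged_ms = PKT * 1000 / M2

print(round(periodic_ms, 2), round(backlogged_ms, 2))
```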

Thus concave curves seem very helpful for periodic traffic (probably the
type of traffic in mind when it was created) but less so for bursting
traffic.  Specifically, I'm thinking of the use case someone proposed to
accelerate text delivery from web sites.  As long as each web site
connection is isolated, it will work fine but, for a web server with any
kind of continuous load, they will be living on the m2 curve.  Then
again, maybe the queues empty more often than I expect :)

Thanks again.  A week ago, I didn't even know HFSC existed until I was
working on integrating Endian products (http://www.endian.com) with
Firepipes (http://iscs.sf.net) and noticed they were using hfsc.

If you have a chance, could you look at the email I sent entitled "An
error in my HFSC sysadmin documentation".  That's the last piece I need
to fall in place before I can share my documentation of this week's
research.  Thanks - John


* Re: Latency guarantees in HFSC rt service curves
  2011-12-10 18:35       ` John A. Sullivan III
@ 2011-12-13 23:46         ` Michal Soltys
  2011-12-14  5:16           ` John A. Sullivan III
  0 siblings, 1 reply; 12+ messages in thread
From: Michal Soltys @ 2011-12-13 23:46 UTC (permalink / raw)
  To: John A. Sullivan III; +Cc: netdev

On 10.12.2011 19:35, John A. Sullivan III wrote:
> On Sat, 2011-12-10 at 18:57 +0100, Michal Soltys wrote: <snip> So,
> again, trying to wear the eminently practical hat of a sys admin, for
> periodic traffic, i.e., protocols like VoiP and video that are sending
> packets are regular intervals and thus likely to reset the curve after
> each packet, m1 helps reduce latency while m2 is a way of reducing the
> chance of overrunning the circuit under heavy load, i.e., where the
> concave queue is backlogged.
>

Well, updated - not "reset". The latter [kind of] implies a return to the
original form, and just so we're on the same page - the fresh curve would have
to be completely under the old one for that to happen. There's also no
chance of overloading anything - except if a machine is shaping on
behalf of something upstream with smaller uplink, and doing so without
UL keeping LS in check.

> When we start multiplexing streams of periodic traffic, we are still
> fine as long as we are not backlogged.  Once we are backlogged, we
> drop down to the m2 curve which prevents us from overrunning the
> circuit (assuming the sum of our rt m2 curves<= circuit size) and
> hopefully still provides adequate latency.  If we are badly
> backlogged, we have a problem with which HFSC can't help us :) (and I
> suppose where short queues are helpful).
>

Keep in mind that classes that are not backlogged are not hurt by
backlogged ones. And you can't overrun - be it m1 or m2, regardless of
when any class activates (providing the curves make sense). If everything is
backlogged, you still get nicely multiplexed traffic as defined by
your hierarchy.

If anything briefly stops being backlogged, it will get a bonus on the
next backlog period - while making sure the new curve doesn't go above
the previous backlog period's one (and assuming its actual rate is not
above the defined RT curve; in such a case it will get a bonus from the
excess bandwidth available as specified by LS (or none, if LS is not
defined)).

> If you have a chance, could you look at the email I sent entitled "An
> error in my HFSC sysadmin documentation".  That's the last piece I
> need to fall in place before I can share my documentation of this
> week's research.  Thanks - John

Yes, will do.

Btw, have you checked the manpages of recent iproute2 w.r.t. hfsc?
Lots of this should be explained, or at least mentioned there.


* Re: Latency guarantees in HFSC rt service curves
  2011-12-13 23:46         ` Michal Soltys
@ 2011-12-14  5:16           ` John A. Sullivan III
  2011-12-14  5:24             ` John A. Sullivan III
  2011-12-15  1:48             ` Michal Soltys
  0 siblings, 2 replies; 12+ messages in thread
From: John A. Sullivan III @ 2011-12-14  5:16 UTC (permalink / raw)
  To: Michal Soltys; +Cc: netdev

Thanks again, Michal.  I'll respond in the text - John

On Wed, 2011-12-14 at 00:46 +0100, Michal Soltys wrote:
> On 10.12.2011 19:35, John A. Sullivan III wrote:
> > On Sat, 2011-12-10 at 18:57 +0100, Michal Soltys wrote: <snip> So,
> > again, trying to wear the eminently practical hat of a sys admin, for
> > periodic traffic, i.e., protocols like VoiP and video that are sending
> > packets are regular intervals and thus likely to reset the curve after
> > each packet, m1 helps reduce latency while m2 is a way of reducing the
> > chance of overrunning the circuit under heavy load, i.e., where the
> > concave queue is backlogged.
> >
> 
> Well, updated, - not "reset". The latter [kind of] implies the return to
> original form, and just so we're on the same page - the curve would have
> to be completely under the old one for that to happen. There's also no
> chance of overloading anything - except if a machine is shaping on
> behalf of something upstream with smaller uplink, and doing so without
> UL keeping LS in check.
> 
> > When we start multiplexing streams of periodic traffic, we are still
> > fine as long as we are not backlogged.  Once we are backlogged, we
> > drop down to the m2 curve which prevents us from overrunning the
> > circuit (assuming the sum of our rt m2 curves<= circuit size) and
> > hopefully still provides adequate latency.  If we are badly
> > backlogged, we have a problem with which HFSC can't help us :) (and I
> > suppose where short queues are helpful).
> >
> 
> Keep in mind that not backlogged classes are not hurt by backlogged
> ones. And you can't overrun - be it m1 or m2, or at which point any
> class activates (providing curves make sense). If everything is
> backlogged, you still get nicely multiplexed traffic as defined by
> your hierarchy.
I think I understand the technology (although I confess I did not take
the time to fully digest how the curves are updated) but I'm grappling
more with the practical application and I may be making a false
assumption here.

Once again, with my practical, "how do I use this" hat on, my
instinctive thought is that I would ensure that the sum of the rt m2
curves does not exceed ul - well, not ul literally, as rt disregards ul,
but, more accurately, circuit capacity - so that my circuit is not
oversubscribed and my guarantees are sustainable.  But, since m1 on a
concave curve is a momentary "super charge", I don't need to worry about
oversubscribing since the rate is not sustained.  Thus, if all the
traffic were to be backlogged at m1 rates, we would overrun the circuit.
That's what I meant by m2 working as a circuit breaker.

As I think it through further, I realize that is not true in theory and
may be the source of the error I reference later in this email.  I
suppose it may also be problematic in practice so let me walk through it
a bit further with you if I may.

Let's imagine we have five classes of service.  We don't need to
identify them for this exercise but what we do know is that the sum of
their m2 curves <= circuit capacity and the sum of their m1 curves >
circuit capacity because, operating under my false assumption, this is
only for that temporary boost.  If all five classes receive their first
packet at the identical moment, we will fail to meet the guarantees.

More practically, let's assume the first four are continuously
backlogged and the fifth is receiving a heavy but not backlogged stream
of periodic (as in at regular intervals) traffic such that the curve is
recalculated so that each packet is sent on the m1 curve.  I think it is
possible that we once again exceed the capacity of the link.  Ah, no.  I
see it.  Now that I've interrupted writing this email to take the time
to digest how the curve is updated, I see it more clearly.

Since the update always takes the minimum of what the total service
would have been had we kept transmitting at the old curve rate, and what
it would be if we reset the curve and started over at m1 (my practical
translation of the algorithm), the sustained rate will never exceed the
volume of traffic the rt curve would have transmitted under continuous
backlog (not taking into account any linksharing bandwidth), even if the
queue constantly goes idle and receives a new packet the moment it does.

I realize I am babbling in my own context so let me state in another way
in case I'm not being clear.  My concern was that, if the four classes
are continuously backlogged and the fifth class with a concave rt
service curve constantly goes idle and then immediately receives a
packet, it will always be transmitting at the m1 rate and thus exceed
the capacity of the circuit (since the sum of the five m2 curves =
capacity and the m1 of the fifth class is greater than the m2 of the
fifth class).  However, the curves do not recalculate to m1
automatically.  They calculate to whichever would be less - the total
data transmitted as if we had continued to transmit on the rt curve as
if it was continually backlogged or the amount of data transmitted if we
were starting the curve over again.  Thus the total bits dequeued if the
circuit cycles between active and passive (backlogged and not
backlogged) will never exceed the total number of bits dequeued if the
queue was continuously backlogged even for concave curves.

Here is ascii art from http://www.sonycsl.co.jp/~kjc/software/TIPS.txt
depicting what I am trying to say:
                                                     ________
                                     ________--------
                                    /                  ______
                                   /   ________--------
                       ________---+----        
                      /          /             
                     /          /              
    total    ->     /          + new coordinate                
                   /                           
                  /                            
           service curve       |
                 of            +
          previous period   current
                             time

			Update Operation

                                                       ______
                                       ________--------
                                  +----        
                                 /             
                                /              
    total    ->                + new coordinate                
                                               
                                               
                               |
                               +
                             current
                             time

			New Service Curve

So now I'll put my practical hat back on where practice will violate
theory but with no practical deleterious effect except in the most
extreme cases.  I propose that, from a practical, system administrator
perspective, it is feasible and probably ideal to define rt service
curves so that the sum of all m2 curves <= circuit capacity and m1
curves should be specified to meet latency requirements REGARDLESS OF
EXCEEDING CIRCUIT CAPACITY.

In theory, we can exceed circuit capacity and violate our guarantees as
in the example I gave where all five queues go active at exactly the
same time.  In practicality, the backlog will never exceed the sum of
the m1 curves times their duration minus the circuit capacity over that
same time period.  In other words, let's say we have a 1.5 Mbps link and
we have five rt curves all with m1 of 1 Mbps for 10 ms.  In 10 ms, we
will have queued 5 * 1 * .01 = 50kbits of traffic and dequeued 1.5 * .01
= 15 kbits of traffic thus, our backlog will never exceed 35kbits.  This
is likely to be quickly drained at the first idle time unless we truly
have all five queues continuously backlogged.
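
Restating that arithmetic as a trivial check:

```python
# Worst case from the example above: five rt curves, each with
# m1 = 1 Mbps for d = 10 ms, all going active at once on a 1.5 Mbps link.
link_bps = 1_500_000
n_classes = 5
m1_bps = 1_000_000
d_ms = 10

promised_bits = n_classes * m1_bps * d_ms // 1000  # what the m1 slopes promise
sendable_bits = link_bps * d_ms // 1000            # what the wire carries in d
backlog_bits = promised_bits - sendable_bits
print(promised_bits, sendable_bits, backlog_bits)  # 50000 15000 35000
```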

So, my approach of ensuring the sum of m2 curves <= capacity but m1
curves are specified based solely on latency requirements even if they
overload the circuit works in practice except under the most extreme
conditions (where ALL queues are continuously backlogged and even then
the damage is limited) even though it violates the theory.

Am I accurate here?

> 
> If anything briefly stops being backlogged, it will get bonus on next
> backlog period
I assume this presupposes a concave curve and has to do with the
recalculation of the curve, and that this would not be true if m1=m2?

>  - while making sure, it wouldn't go with new curve above
> previous backlog period's one (and assuming its actual rate is not above
> the defined RT curve, as in such case it will get bonus from excess
> bandwidth available as specified by LS (or will get none, if LS is not
> defined)).
When I first read this, I had no clue what you meant :) Now that I've
taken time to digest the curve recalculation, I think you are saying the
exact same thing I said above only in far fewer words ;)
> 
> > If you have a chance, could you look at the email I sent entitled "An
> > error in my HFSC sysadmin documentation".  That's the last piece I
> > need to fall in place before I can share my documentation of this
> > week's research.  Thanks - John
> 
> Yes, will do.
I'm guessing I slightly violated the rt guarantee precisely because of
what I described above.
> 
> Btw, have you checked the manpages of recent iproute2 w.r.t. hfsc?
> Lots of this should be explained, or at least mentioned there.
Yes, you were so kind as to give me links to them. I found them very
helpful - much more mathematical and theoretical than I am used to in a
man page but I'm not sure it would be possible to explain HFSC any other
way.  Thanks again - John


* Re: Latency guarantees in HFSC rt service curves
  2011-12-14  5:16           ` John A. Sullivan III
@ 2011-12-14  5:24             ` John A. Sullivan III
  2011-12-15  1:48             ` Michal Soltys
  1 sibling, 0 replies; 12+ messages in thread
From: John A. Sullivan III @ 2011-12-14  5:24 UTC (permalink / raw)
  To: Michal Soltys; +Cc: netdev

On Wed, 2011-12-14 at 00:16 -0500, John A. Sullivan III wrote:
> <snip>> 
> > > If you have a chance, could you look at the email I sent entitled "An
> > > error in my HFSC sysadmin documentation".  That's the last piece I
> > > need to fall in place before I can share my documentation of this
> > > week's research.  Thanks - John
> > 
> > Yes, will do.
> I'm guessing I slightly violated the rt guarantee precisely because of
> what I described above.
<snip>
Alas, no.  I just reviewed what I sent, and it was calculating deadline
time with monolinear (m1 = m2) service curves, so I'm still making an
error somewhere.  Thanks - John


* Re: Latency guarantees in HFSC rt service curves
  2011-12-14  5:16           ` John A. Sullivan III
  2011-12-14  5:24             ` John A. Sullivan III
@ 2011-12-15  1:48             ` Michal Soltys
  2011-12-15  7:38               ` John A. Sullivan III
  1 sibling, 1 reply; 12+ messages in thread
From: Michal Soltys @ 2011-12-15  1:48 UTC (permalink / raw)
  To: John A. Sullivan III; +Cc: netdev

On 11-12-14 06:16, John A. Sullivan III wrote:
> I think I understand the technology (although I confess I did not take
> the time to fully digest how the curves are updated) but I'm grappling
> more with the practical application and I may be making a false
> assumption here.

Hmm, IMHO you can't really skip that part - or the part about "why
split RT into eligible/deadline" - if you want to see the mechanics
behind RT curves.

>
> Once again, with my practical, "how do I use this" hat on, my
> instinctive thought is that I would ensure that the sum of the rt m2
> curves do not exceed ul, well, not ul literally as rt disregards ul
> but, more accurately, circuit capacity so that my circuit is not
> oversubscribed and my guarantees are sustainable.

> But, since m1 on a concave curve is a momentary "super charge", I
> don't need to worry about oversubscribing since the rate is not
> sustained.

It's just a slope, from which you derive the time(s) used in deciding
"what to send next":

> Thus, if all the traffic were to be backlogged at m1 rates, we would
> overrun the circuit.

Curves don't really change that in any way.

If you did the above for m1 (or m2) of the RT curves, eligibility would
lose its point (leaves would always be eligible), and the whole thing
would kind of degenerate into a more complex version of LS. There's a
reason why the BSDs won't even let you define RT curves that sum, at
any point, to more than 80% (aside from keeping LS useful, it was
perhaps also related to their timer resolution [years ago]). Unless
they changed it in recent versions - I haven't really had any modern
BSD under my hands for a while.

> and the sum of their m1 curves > circuit capacity because, operating
> under my false assumption, this is only for that temporary boost. If
> all five classes receive their first packet at the identical moment,
> we will fail to meet the guarantees.

I'd say more aggressive eligibility and an earlier deadline - rather
than any boost. Which will briefly (if enough leaves are backlogged)
turn the whole carefully designed RT mechanics into a dimensionless
LS-like thingy. So - while that is possible - why do it? Set up RT to
fit the achievable speed, and leave the excess bandwidth to LS.

The whole RT design - with the eligible/deadline split - is to allow
convex curves to send "earlier", pushing the deadlines to the "right" -
which in turn allows a newly backlogged class to have brief priority.
But it all remains under the interface limit and, over time, fulfills
the guarantees (even if locally they are violated).
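To make the two-piece curve mechanics concrete, here is a minimal Python sketch. All numbers are hypothetical (8 Mb/s for the first 10 ms, then 2 Mb/s); it illustrates how a concave curve pulls in the first packet's deadline without raising the sustained rate, not the kernel's actual implementation:

```python
# A two-piece HFSC service curve: slope m1 (bits/s) until time d (s),
# slope m2 afterwards. Concave when m1 > m2. All numbers hypothetical.

def service(t, m1, d, m2):
    """Cumulative service S(t) in bits guaranteed by time t."""
    if t <= d:
        return m1 * t
    return m1 * d + m2 * (t - d)

def deadline(w, m1, d, m2):
    """Invert S: time by which cumulative work w (bits) must be sent."""
    if w <= m1 * d:
        return w / m1
    return d + (w - m1 * d) / m2

# Hypothetical concave curve: 8 Mb/s for the first 10 ms, then 2 Mb/s.
m1, d, m2 = 8e6, 0.010, 2e6
pkt = 8000 * 8                      # one 8 KB packet in bits

flat_deadline = pkt / m2            # plain 2 Mb/s curve: 0.032 s
burst_deadline = deadline(pkt, m1, d, m2)   # concave curve: 0.008 s
print(flat_deadline, burst_deadline)
```

The same m2 guarantees the sustained rate in both cases; only the deadline of the early work changes.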

> I realize I am babbling in my own context so let me state in another
> way in case I'm not being clear.  My concern was that, if the four
> classes are continuously backlogged and the fifth class with a concave
> rt service curve constantly goes idle and then immediately receives a
> packet, it will always be transmitting at the m1 rate and thus exceed
> the capacity of the circuit (since the sum of the five m2 curves =
> capacity and the m1 of the fifth class is greater than the m2 of the
> fifth class).

Actually it won't. See what I wrote about the eligible/deadline split
above. The 4 classes (suppose they are under convex RT) will be
eligible earlier than they normally would be (and send more). When the
5th class becomes active with its steeper m1, it literally "has a green
light" to send whenever it's eligible, as the deadlines of the 4
classes are further to the right. If all classes remain backlogged from
now on, it all evens out properly. If the 5th goes idle, the other
classes will "buffer up" again (for lack of a better term).

> So now I'll put my practical hat back on where practice will violate
> theory but with no practical deleterious effect except in the most
> extreme cases.  I propose that, from a practical, system administrator
> perspective, it is feasible and probably ideal to define rt service
> curves so that the sum of all m2 curves<= circuit

Going above would really make LS pointless, and turn RT into LS-like
thing (see above).

> curves should be specified to meet latency requirements REGARDLESS OF
> EXCEEDING CIRCUIT CAPACITY.

Again (see above) - that will not get you anywhere. m1 or m2 (look at
the curve as a whole, not just its parts) - it just makes RT lose its
properties and the actual meaning behind its numbers. Briefly, but
still.

I can think of scenarios where such setups would work fine (aside if it
would be really needed), but let's leave that for different occasion :)

>>
>>  If anything briefly stops being backlogged, it will get bonus on next
>>  backlog period
> I am assuming this assumes a concave curve and has to do with the
> recalculation of the curve and that this would not be true if m1=m2?

yes


ps.
Going through other mails will take me far more time I think.


* Re: Latency guarantees in HFSC rt service curves
  2011-12-15  1:48             ` Michal Soltys
@ 2011-12-15  7:38               ` John A. Sullivan III
  2011-12-17 18:40                 ` Michal Soltys
  0 siblings, 1 reply; 12+ messages in thread
From: John A. Sullivan III @ 2011-12-15  7:38 UTC (permalink / raw)
  To: Michal Soltys; +Cc: netdev

On Thu, 2011-12-15 at 02:48 +0100, Michal Soltys wrote:
> On 11-12-14 06:16, John A. Sullivan III wrote:
> > I think I understand the technology (although I confess I did not take
> > the time to fully digest how the curves are updated) <snip>
> 
> Hmm, IMHO you can't really skip that part - or the part about "why
> split RT into eligible/deadline" - if you want to see the mechanics
> behind RT curves.
<grin> I did eventually realize that which is why I stopped the email in
mid-stream and took several hours to work through it and then deleted
half of what I wrote :)
> 
> >
> > <snip>
> 
> > Thus, if all the traffic were to be backlogged at m1 rates, we would
> > overrun the circuit.
> 
> Curves don't really change that in any way.
Yes, I suppose that was my point just phrased a different way - in other
words, if we are careless or ignorant about curves, we can get ourselves
in trouble!
> 
> If you did the above for m1 (or m2) of the RT curves, eligibility would
> lose its point (leaves would always be eligible), and the whole thing
> would kind of degenerate into a more complex version of LS. There's a
> reason why the BSDs won't even let you define RT curves that sum, at
> any point, to more than 80% (aside from keeping LS useful, it was
> perhaps also related to their timer resolution [years ago]). Unless
> they changed it in recent versions - I haven't really had any modern
> BSD under my hands for a while.
> 
> > and the sum of their m1 curves > circuit capacity because, operating
> > under my false assumption, this is only for that temporary boost. If
> > all five classes receive their first packet at the identical moment,
> > we will fail to meet the guarantees.
> 
> I'd say more aggressive eligibility and an earlier deadline - rather
> than any boost.
Yes, granted.  What I'm trying to do in the documentation is translate
the model's mathematical concepts into concepts more germane to system
administrators.  A sys admin is not very likely to think in terms of
deadline times but will think, "I've got a momentary boost in bandwidth
to allow me to ensure proper latency for time sensitive traffic."  Of
course, that could be where I'm getting in trouble :)

> Which will briefly (if enough leaves are backlogged) turn the whole
> carefully designed RT mechanics into a dimensionless LS-like thingy.
> So - while that is possible - why do it? Set up RT to fit the
> achievable speed, and leave the excess bandwidth to LS.
Because doing so would recouple bandwidth and latency, I think - that's
what I'm trying to sort through and will address more fully toward the
end of this reply.
> 
> The whole RT design - with the eligible/deadline split - is to allow
> convex curves to send "earlier", pushing the deadlines to the "right" -
> which in turn allows a newly backlogged class to have brief priority.
> But it all remains under the interface limit and, over time, fulfills
> the guarantees (even if locally they are violated).
To the right? I would have thought to the left on the x axis, i.e., the
deadline time becomes sooner? Ah, unless you are referring to the other
queue's deadline times and mean not literally changing the deadline time
but jumping in front of the ones on the right of the new queue's
deadline time.
> 
> > I realize I am babbling in my own context so let me state in another
> > way in case I'm not being clear.  My concern was that, if the four
> > classes are continuously backlogged and the fifth class with a concave
> > rt service curve constantly goes idle and then immediately receives a
> > packet, it will always be transmitting at the m1 rate and thus exceed
> > the capacity of the circuit (since the sum of the five m2 curves =
> > capacity and the m1 of the fifth class is greater than the m2 of the
> > fifth class).
> 
> Actually it won't. See what I wrote about the eligible/deadline split
> above. The 4 classes (suppose they are under convex RT) will be
> eligible earlier than they normally would be (and send more). When the
> 5th class becomes active with its steeper m1, it literally "has a green
> light" to send whenever it's eligible, as the deadlines of the 4
> classes are further to the right. If all classes remain backlogged from
> now on, it all evens out properly. If the 5th goes idle, the other
> classes will "buffer up" again (for lack of a better term).
Right - that was why I put my concern in the past tense.  Once I
realized how the curves updated, I realized my concern was not valid.  I
left it there to give context to the rest of the email.
> 
> > So now I'll put my practical hat back on where practice will violate
> > theory but with no practical deleterious effect except in the most
> > extreme cases.  I propose that, from a practical, system administrator
> > perspective, it is feasible and probably ideal to define rt service
> > curves so that the sum of all m2 curves<= circuit
> 
> Going above would really make LS pointless, and turn RT into LS-like
> thing (see above).
> 
> > curves should be specified to meet latency requirements REGARDLESS OF
> > EXCEEDING CIRCUIT CAPACITY.
> 
> Again (see above) - that will not get you anywhere. m1 or m2 (look at
> the curve as a whole, not just its parts) - it just makes RT lose its
> properties and the actual meaning behind its numbers. Briefly, but
> still.
Perhaps, but here is my thinking (which may be quite wrong) and why I
said earlier that doing what you propose seems to recouple bandwidth
and latency (although I do realize they are ultimately linked, and the
m1/m2 split is really manipulating early bandwidth to achieve
independent latency guarantees).

Where I am focused (and perhaps stuck) is on the idea of decoupling
latency and bandwidth guarantees.  ls curves, like the m2 portion of
the rt curves, seem to have more to do with sustained bandwidth when
excess bandwidth is available.  I'm assuming, in contrast, that the m1
portion of the rt curve is for momentarily increasing bandwidth to help
move time-sensitive traffic to the head of the queues when the class
first becomes active.  On the other hand, I do see that, in almost all
my examples, when I have allocated ls ratios differently from the m2
portion of the rt curve, it has been to allocate bandwidth more
aggressively to the time-sensitive applications.  Perhaps this is what
you are telling me when you say use ls instead of oversubscribing m1 -
just using different words :)

The thought behind oversubscribing m1 . . . well . . . not intentionally
oversubscribing - just not being very careful about setting m1 to make
sure it is not oversubscribed (perhaps I wasn't clear that the
oversubscription is not intentional) - is that it is not likely that all
queues are continually backlogged thus I can get away with an accidental
over-allocation in most cases as it will quickly sort itself out as soon
as a queue goes idle.  As a result, I can calculate m1 solely based upon
the latency requirements of the traffic not accounting for the impact of
the bandwidth momentarily required to do that, i.e., not being too
concerned if I have accidentally oversubscribed m1.
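As a numeric sketch of that kind of calculation (entirely hypothetical numbers, and not a recommendation to oversubscribe): deriving m1 purely from a latency target, then checking how quickly the resulting sum can exceed the circuit. The `m1_for_latency` helper is illustrative only; it mirrors the umax/dmax way of stating a curve:

```python
# Sizing m1 purely from a latency target: if a burst of umax bits must
# clear within dmax seconds, the needed first slope is m1 = umax / dmax,
# independent of the sustained rate m2. Hypothetical numbers throughout.

def m1_for_latency(umax_bits, dmax_s):
    return umax_bits / dmax_s

capacity = 10e6                          # 10 Mb/s circuit
m2_rates = [2e6, 2e6, 2e6, 2e6, 2e6]     # five m2 curves summing to capacity

# One 8 KB packet that must leave within 5 ms needs m1 = 12.8 Mb/s,
# so this single m1 plus the other classes' m2 already exceeds the link.
m1 = m1_for_latency(8000 * 8, 0.005)
print(m1)                                # 12800000.0
print(m1 + sum(m2_rates[1:]) > capacity) # True: accidental oversubscription
```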

The advantages of doing it via the m1 portion of the rt curve rather
than the ls curve are:

1) It is guaranteed whereas the ls will only work when there is
available bandwidth.  Granted, my assumption that it is rare for all
classes to be continually backlogged implies there is always some extra
bandwidth available.  And granted that it is not guaranteed if too many
oversubscribed m1's kick in at the same time.

2) It seems less complicated than trying to figure out what my possibly
available ls ratios should be to meet my latency requirements (which
then also recouples bandwidth and latency).  m1 is much more direct and
reliable.

This is where you and I seem to differ (and I gladly defer to the fact
that you know 10,000 times more about this than I do!).  Is that because
we are prioritizing differently ("how can I do this simply with clear
divisions between latency and bandwidth even if it momentarily violates
the model" versus "how do we preserve the model from momentarily
degrading") or because I have missed the point somewhere and focusing so
obsessively on separating latency and bandwidth guarantees is obscuring
my ability to see how doing this via ls ratios is more advantageous?
> 
<snip>
> 
> ps.
> Going through other mails will take me far more time I think.
> 
No problem; I greatly appreciate all the time you've taken to help me
understand this and hope I am coming across as exploring rather than
being argumentative.  I hope the resulting documentation will, in a
small way, help return the favor.

By far the most important other email is the "An error in my HFSC
sysadmin documentation" one, which is fairly short.  The long ones are
just the resulting documentation, which I should eventually submit to
you in a graphics-supporting format anyway.

Thank you very much as always - John


* Re: Latency guarantees in HFSC rt service curves
  2011-12-15  7:38               ` John A. Sullivan III
@ 2011-12-17 18:40                 ` Michal Soltys
  2011-12-17 20:41                   ` John A. Sullivan III
  0 siblings, 1 reply; 12+ messages in thread
From: Michal Soltys @ 2011-12-17 18:40 UTC (permalink / raw)
  To: John A. Sullivan III; +Cc: netdev

Sorry for late reply.

On 15.12.2011 08:38, John A. Sullivan III wrote:

> Yes, granted.  What I'm trying to do in the documentation is translate
> the model's mathematical concepts into concepts more germane to system
> administrators.  A sys admin is not very likely to think in terms of
> deadline times but will think, "I've got a momentary boost in bandwidth
> to allow me to ensure proper latency for time sensitive traffic."  Of
> course, that could be where I'm getting in trouble :)

Tough to say. I'd say you can't completely avoid the math here (I'm not
saying not to trim it a bit, though). In the same way, one can't really
get through TBF without understanding what a token bucket is. Well, the
complexity of the two is on a different level, but you see my point.

>> Whole RT design - with eligible/deadline split - is to allow convex
>> curves to send "earlier", pushing the deadlines to the "right" -
>> which in turn allows newly backlogged class to have brief priority.
>> But it all remains under interface limit and over-time fulfills
>> guarantees (even if locally they are violated).
> To the right? I would have thought to the left on the x axis, i.e.,
> the deadline time becomes sooner? Ah, unless you are referring to the
> other queue's deadline times and mean not literally changing the
> deadline time but jumping in front of the ones on the right of the new
> queue's deadline time.

I mean - for convex curves - the deadlines are further to the right:
such a class is eligible earlier (for convex curves, eligible is just
the linear m2 without the m1 part), thus it receives more service, and
its deadline projection is shifted to the right. When some other
concave curve becomes active, its deadlines will naturally be preferred
whenever both curves are eligible.
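A small numeric sketch of that point (hypothetical rates; treating the eligible curve of a convex class as the plain linear m2 through the origin is my reading of the description above, not a statement of the exact in-kernel formula):

```python
# Convex RT curve (m1 < m2): the eligible curve drops the m1 part and is
# just the m2 line, so the class becomes eligible well before the deadline
# its full two-piece curve would give it. Hypothetical numbers.

def deadline_convex(w, m1, d, m2):
    """Deadline from the full two-piece curve (slope m1 first, then m2)."""
    if w <= m1 * d:
        return w / m1
    return d + (w - m1 * d) / m2

def eligible_convex(w, m2):
    """Eligible time: the linear m2 curve through the origin."""
    return w / m2

m1, d, m2 = 1e6, 0.020, 5e6    # 1 Mb/s for 20 ms, then 5 Mb/s (convex)
w = 64000                      # one 8 KB packet in bits

# Eligible (12.8 ms) long before the deadline (28.8 ms): the class may
# send early, pushing its subsequent deadlines further to the right.
print(eligible_convex(w, m2), deadline_convex(w, m1, d, m2))
```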

> The thought behind oversubscribing m1 . . . well . . . not
> intentionally oversubscribing - just not being very careful about
> setting m1 to make sure it is not oversubscribed (perhaps I wasn't
> clear that the oversubscription is not intentional) - is that it is
> not likely that all queues are continually backlogged thus I can get
> away with an accidental over-allocation in most cases as it will
> quickly sort itself out as soon as a queue goes idle.  As a result, I
> can calculate m1 solely based upon the latency requirements of the
> traffic not accounting for the impact of the bandwidth momentarily
> required to do that, i.e., not being too concerned if I have
> accidentally oversubscribed m1.

Well, that's one way to do it. If your aim is that, say, a certain n%
of leaves are used at the same time (with rare exceptions), you could
set the RT curves (m1 parts) as if you had fewer leaves. If you leave
m2 alone and don't care about the mentioned exceptions, it should work.

But IMHO that's more of a corner case (with alternative solutions
available) than a "cookbook" recommendation.

> The advantages of doing it via the m1 portion of the rt curve rather
> than the ls curve are:
>
> 1) It is guaranteed whereas the ls will only work when there is
> available bandwidth.  Granted, my assumption that it is rare for all
> classes to be continually backlogged implies there is always some
> extra bandwidth available.  And granted that it is not guaranteed if
> too many oversubscribed m1's kick in at the same time.
>
> 2) It seems less complicated than trying to figure out what my
> possibly available ls ratios should be to meet my latency requirements
> (which then also recouples bandwidth and latency).  m1 is much more
> direct and reliable.

As mentioned above - it's one way to do things, and you're right.  But
I think you might be underestimating LS a bit. By its nature, it
schedules at a speed normalized to the interface's capacity after all
(or the UL limits, if applicable) - so the fewer of the classes are
actually active, the more they get from LS. You mentioned earlier that
you saw LS being more aggressive in allocating bandwidth - maybe that
was the effect you were seeing?
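A minimal sketch of that normalization (hypothetical class names and ls weights): the shares of idle classes are simply redistributed among whatever is active, in proportion to the ls weights:

```python
# LS schedules at rates normalized to the set of active classes: each
# backlogged class gets capacity * weight / sum(active weights).
# Class names and weights are illustrative only.

def ls_rates(capacity, weights, active):
    """Per-class rate (bits/s) when only `active` classes are backlogged."""
    total = sum(weights[c] for c in active)
    return {c: capacity * weights[c] / total for c in active}

capacity = 10e6
weights = {"voip": 1, "web": 3, "bulk": 6}

# All three active: 1 / 3 / 6 Mb/s. With "bulk" idle, the same weights
# now split the whole link: 2.5 / 7.5 Mb/s.
print(ls_rates(capacity, weights, {"voip", "web", "bulk"}))
print(ls_rates(capacity, weights, {"voip", "web"}))
```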


* Re: Latency guarantees in HFSC rt service curves
  2011-12-17 18:40                 ` Michal Soltys
@ 2011-12-17 20:41                   ` John A. Sullivan III
  0 siblings, 0 replies; 12+ messages in thread
From: John A. Sullivan III @ 2011-12-17 20:41 UTC (permalink / raw)
  To: Michal Soltys; +Cc: netdev

On Sat, 2011-12-17 at 19:40 +0100, Michal Soltys wrote:
> Sorry for late reply.
No problem - you're my lifeline on this so any response is greatly
appreciated!
> 
> On 15.12.2011 08:38, John A. Sullivan III wrote:
> 
<snip>
> > The advantages of doing it via the m1 portion of the rt curve rather
> > than the ls curve are:
> >
> > 1) It is guaranteed whereas the ls will only work when there is
> > available bandwidth.  Granted, my assumption that it is rare for all
> > classes to be continually backlogged implies there is always some
> > extra bandwidth available.  And granted that it is not guaranteed if
> > too many oversubscribed m1's kick in at the same time.
> >
> > 2) It seems less complicated than trying to figure out what my
> > possibly available ls ratios should be to meet my latency requirements
> > (which then also recouples bandwidth and latency).  m1 is much more
> > direct and reliable.
> 
> As mentioned above - it's one way to do things, and you're right.  But
> I think you might be underestimating LS a bit. By its nature, it
> schedules at a speed normalized to the interface's capacity after all
> (or the UL limits, if applicable) - so the fewer of the classes are
> actually active, the more they get from LS. You mentioned earlier that
> you saw LS being more aggressive in allocating bandwidth - maybe that
> was the effect you were seeing?
I think this is the crux right here.  As I think it through, I'm
probably just raising a tempest in a teapot out of ignorance, so I
think I'll stop here and just implement what you have advised.  I'll
outline the way I was thinking below just in case it does have merit
and should not be discarded.

I think LS can be used to do it but here is how I instinctively (and
perhaps erroneously) viewed rt m1, rt m2 and ls.  I thought, rt m2 are
my sustained guarantees so, what I'll do is divide all my available
bandwidth across all my various traffic flows via rt m2.  This reflects
that I was planning to use ls in a different way.  My thinking was that
rt m2 reflects how I want the bandwidth allocated if all queues are
backlogged and hence why I matched the sum of the rt m2 curves to the
total available bandwidth.

However, constant backlog is not likely to be the case most of the time.
When it is not, that's where I used ls curves.  So, when they are all
backlogged, I have tight control over how I have allocated all the
bandwidth. When they are not, I have a separate control (the ls curves)
which can be used to allocate that extra bandwidth in ratios completely
different from the rt m2 ratios if that is appropriate (as it is likely
to be).

Notice at this point, I have only considered bandwidth and have used the
rt m2 curves and the ls curves solely as bandwidth control mechanisms.
This is where I decided I might want to bend the rules and, if m2 does
not provide sufficiently low latency, I would add an m1 curve to
guarantee that latency.  Since the m2 curves add up to the total
available bandwidth, by definition my higher m1 curves are going to
exceed it but that's OK because the queues are not all backlogged most
of the time.

Thus rt m1 and ls are doing the same thing in the sense that they are
allocated with the assumption that not all queues are backlogged
(because, in my scenario, if all queues are backlogged, we will
eventually be using exclusively rt m2 curves), but they are tuned
differently.  rt m1 is tuned for latency and takes precedence whereas ls
is tuned for bandwidth as long as I don't need the extra bandwidth to
meet the rt m1 guarantees.

So, that was my newbie thinking.  It sounds like what I should really do
is target my rt m2 curves for truly minimum needed bandwidth, tune my rt
m1 curves for latency, ensure the sum of the greater of the m1 or m2 rt
curves does not exceed total bandwidth, and then use ls to allocate
anything left over.
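That last guideline can be expressed as a one-line sanity check. This is a sketch with hypothetical numbers; `rt_plan_ok` is an illustrative helper, not a tc feature:

```python
# Checking the summary guideline above: tune m1 for latency, m2 for
# minimum sustained bandwidth, and verify that the sum of max(m1, m2)
# over all rt classes stays within the circuit. Hypothetical numbers.

def rt_plan_ok(classes, capacity):
    """classes: list of (m1, m2) pairs in bits/s."""
    return sum(max(m1, m2) for m1, m2 in classes) <= capacity

capacity = 10e6
plan = [(4e6, 1e6),    # latency-tuned: steep m1, modest m2
        (2e6, 2e6),    # linear curve: m1 == m2
        (2e6, 2e6)]

print(rt_plan_ok(plan, capacity))                  # 4 + 2 + 2 = 8 Mb/s: OK
print(rt_plan_ok(plan + [(3e6, 1e6)], capacity))   # 11 Mb/s: oversubscribed
```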

So, sorry for all the gyrations.  Unless anyone tells me differently,
I'll use that last paragraph as my summary guidelines.  Thanks - John


end of thread, other threads:[~2011-12-17 20:41 UTC | newest]

Thread overview: 12+ messages
2011-12-09 21:51 Latency guarantees in HFSC rt service curves John A. Sullivan III
2011-12-10 13:03 ` Michal Soltys
2011-12-10 15:35   ` John A. Sullivan III
2011-12-10 17:57     ` Michal Soltys
2011-12-10 18:35       ` John A. Sullivan III
2011-12-13 23:46         ` Michal Soltys
2011-12-14  5:16           ` John A. Sullivan III
2011-12-14  5:24             ` John A. Sullivan III
2011-12-15  1:48             ` Michal Soltys
2011-12-15  7:38               ` John A. Sullivan III
2011-12-17 18:40                 ` Michal Soltys
2011-12-17 20:41                   ` John A. Sullivan III
