* [PATCH][NET_SCHED] Update htb rate when stats are polled.

From: Ranjit Manomohan @ 2007-05-25 17:11 UTC
To: netdev; +Cc: ranjitm

Currently the HTB rate for a class is updated very slowly (once
every 16 seconds). This patch updates the rate whenever the stats
are requested from user space. This enables more accurate rate
monitoring.

Signed-off-by: Ranjit Manomohan <ranjitm@google.com>

diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index 035788c..1d6ec60 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -92,6 +92,7 @@ struct htb_class {
 	struct gnet_stats_basic bstats;
 	struct gnet_stats_queue qstats;
 	struct gnet_stats_rate_est rate_est;
+	unsigned long rate_est_when;	/* last time rate was calculated */
 	struct tc_htb_xstats xstats;	/* our special stats */
 	int refcnt;	/* usage count of this class */

@@ -679,6 +680,23 @@ static int htb_requeue(struct sk_buff *s

 #ifdef HTB_RATECM
 #define RT_GEN(D,R) R+=D-(R/HTB_EWMAC);D=0
+
+/* Update packet/byte rate for a class. */
+static void calc_rate(struct htb_class *cl)
+{
+	unsigned long now = jiffies;
+	if (time_after(now, (cl->rate_est_when + HZ))) {
+		unsigned int elapsed_secs =
+			(now - cl->rate_est_when)/HZ;
+		cl->sum_bytes /= elapsed_secs;
+		cl->sum_packets /= elapsed_secs;
+		RT_GEN (cl->sum_bytes,cl->rate_bytes);
+		RT_GEN (cl->sum_packets,cl->rate_packets);
+		cl->rate_est_when = now;
+	} else if time_before(now, cl->rate_est_when)
+		cl->rate_est_when = now; /* Wraparound */
+}
+
 static void htb_rate_timer(unsigned long arg)
 {
 	struct Qdisc *sch = (struct Qdisc *)arg;
@@ -697,10 +715,8 @@ static void htb_rate_timer(unsigned long
 	if (++q->recmp_bucket >= HTB_HSIZE)
 		q->recmp_bucket = 0;

-	hlist_for_each_entry(cl,p, q->hash + q->recmp_bucket, hlist) {
-		RT_GEN(cl->sum_bytes, cl->rate_bytes);
-		RT_GEN(cl->sum_packets, cl->rate_packets);
-	}
+	hlist_for_each_entry(cl,p, q->hash + q->recmp_bucket, hlist)
+		calc_rate(cl);
 	spin_unlock_bh(&sch->dev->queue_lock);
 }
 #endif
@@ -1176,8 +1192,9 @@ htb_dump_class_stats(struct Qdisc *sch,
 	struct htb_class *cl = (struct htb_class *)arg;

 #ifdef HTB_RATECM
-	cl->rate_est.bps = cl->rate_bytes / (HTB_EWMAC * HTB_HSIZE);
-	cl->rate_est.pps = cl->rate_packets / (HTB_EWMAC * HTB_HSIZE);
+	calc_rate(cl);
+	cl->rate_est.bps = cl->rate_bytes / HTB_EWMAC;
+	cl->rate_est.pps = cl->rate_packets / HTB_EWMAC;
 #endif

 	if (!cl->level && cl->un.leaf.q)
@@ -1464,6 +1481,7 @@ static int htb_change_class(struct Qdisc
 	cl->mbuffer = 60 * PSCHED_TICKS_PER_SEC;	/* 1min */
 	cl->t_c = psched_get_time();
 	cl->cmode = HTB_CAN_SEND;
+	cl->rate_est_when = jiffies;

 	/* attach to the hash list and parent's family */
 	hlist_add_head(&cl->hlist, q->hash + htb_hash(classid));
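A note on the numbers above: the "once every 16 seconds" figure follows
from HTB_HSIZE being 16 while htb_rate_timer() recomputes only one hash
bucket per one-second tick, so each class is revisited every 16 seconds.
The RT_GEN(D,R) macro itself is a small exponentially weighted moving
average. A minimal standalone sketch of the same update rule (the
function name, driver loop and traffic numbers are illustrative, not
from the kernel source):

#include <stdio.h>

#define HTB_EWMAC 2	/* decay constant, as in sch_htb.c */

/*
 * Same update rule as RT_GEN(D,R): fold the bytes accumulated since the
 * last update (delta) into the running estimate, then reset the
 * accumulator. The estimate converges toward delta * HTB_EWMAC, which is
 * why readers divide by HTB_EWMAC to recover a per-interval rate.
 */
static void rt_gen(unsigned long *delta, unsigned long *rate)
{
	*rate += *delta - *rate / HTB_EWMAC;
	*delta = 0;
}

int main(void)
{
	unsigned long rate = 0;

	/* pretend a class moves 1000 bytes in each one-second interval */
	for (int i = 1; i <= 10; i++) {
		unsigned long delta = 1000;

		rt_gen(&delta, &rate);
		printf("estimate after %2d s: %lu bytes/s\n",
		       i, rate / HTB_EWMAC);
	}
	return 0;	/* the printed estimate converges toward 1000 */
}

This also explains the divisor change in htb_dump_class_stats(): before
the patch each class accumulated HTB_HSIZE seconds of traffic between
timer visits, hence the division by (HTB_EWMAC * HTB_HSIZE); calc_rate()
normalizes by elapsed_secs itself, so only HTB_EWMAC remains.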
* Re: [PATCH][NET_SCHED] Update htb rate when stats are polled.

From: Ilpo Järvinen @ 2007-05-26  4:10 UTC
To: Ranjit Manomohan; +Cc: Netdev

Hi,

...a couple of formatting-related things, see the inline comments...

On Fri, 25 May 2007, Ranjit Manomohan wrote:

> Currently the HTB rate for a class is updated very slowly (once
> every 16 seconds). This patch updates the rate whenever the stats
> are requested from user space. This enables more accurate rate
> monitoring.
>
> Signed-off-by: Ranjit Manomohan <ranjitm@google.com>
>
> diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
> index 035788c..1d6ec60 100644
> --- a/net/sched/sch_htb.c
> +++ b/net/sched/sch_htb.c
> @@ -92,6 +92,7 @@ struct htb_class {
> 	struct gnet_stats_basic bstats;
> 	struct gnet_stats_queue qstats;
> 	struct gnet_stats_rate_est rate_est;
> +	unsigned long rate_est_when;	/* last time rate was calculated */
> 	struct tc_htb_xstats xstats;	/* our special stats */
> 	int refcnt;	/* usage count of this class */
>
> @@ -679,6 +680,23 @@ static int htb_requeue(struct sk_buff *s
>
> #ifdef HTB_RATECM
> #define RT_GEN(D,R) R+=D-(R/HTB_EWMAC);D=0
> +
> +/* Update packet/byte rate for a class. */
> +static void calc_rate(struct htb_class *cl)
> +{
> +	unsigned long now = jiffies;
> +	if (time_after(now, (cl->rate_est_when + HZ))) {
> +		unsigned int elapsed_secs =
> +			(now - cl->rate_est_when)/HZ;

...add whitespace around the operator please, fix the indentation, and
remove the unnecessary newline; it should fit on a single line...

> +		cl->sum_bytes /= elapsed_secs;
> +		cl->sum_packets /= elapsed_secs;
> +		RT_GEN (cl->sum_bytes,cl->rate_bytes);
> +		RT_GEN (cl->sum_packets,cl->rate_packets);

Could you please fix these two back to the original formatting, which
was compliant with the kernel Documentation/CodingStyle guidelines...

> +		cl->rate_est_when = now;
> +	} else if time_before(now, cl->rate_est_when)
> +		cl->rate_est_when = now; /* Wraparound */
> +}
> +
> static void htb_rate_timer(unsigned long arg)
> {
> 	struct Qdisc *sch = (struct Qdisc *)arg;
> @@ -697,10 +715,8 @@ static void htb_rate_timer(unsigned long
> 	if (++q->recmp_bucket >= HTB_HSIZE)
> 		q->recmp_bucket = 0;
>
> -	hlist_for_each_entry(cl,p, q->hash + q->recmp_bucket, hlist) {
> -		RT_GEN(cl->sum_bytes, cl->rate_bytes);
> -		RT_GEN(cl->sum_packets, cl->rate_packets);
> -	}
> +	hlist_for_each_entry(cl,p, q->hash + q->recmp_bucket, hlist)

...I guess nobody would complain if you fixed the whitespace here too
(since you're modifying the line), but that's not mandatory since the
problem is in the original.

> +		calc_rate(cl);
> 	spin_unlock_bh(&sch->dev->queue_lock);
> }

[...snip...]

--
 i.
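For reference, the calc_rate() hunk with the review comments above
applied might look like this (an untested sketch; note that the
`else if` condition also needs parentheses, which the posted version
omits):

/* Update packet/byte rate for a class. */
static void calc_rate(struct htb_class *cl)
{
	unsigned long now = jiffies;

	if (time_after(now, cl->rate_est_when + HZ)) {
		unsigned int elapsed_secs = (now - cl->rate_est_when) / HZ;

		cl->sum_bytes /= elapsed_secs;
		cl->sum_packets /= elapsed_secs;
		RT_GEN(cl->sum_bytes, cl->rate_bytes);
		RT_GEN(cl->sum_packets, cl->rate_packets);
		cl->rate_est_when = now;
	} else if (time_before(now, cl->rate_est_when))
		cl->rate_est_when = now; /* Wraparound */
}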
* Re: [PATCH][NET_SCHED] Update htb rate when stats are polled.

From: Patrick McHardy @ 2007-05-26  7:58 UTC
To: Ranjit Manomohan; +Cc: netdev

Ranjit Manomohan wrote:
> Currently the HTB rate for a class is updated very slowly (once
> every 16 seconds). This patch updates the rate whenever the stats
> are requested from user space. This enables more accurate rate
> monitoring.
>
> +/* Update packet/byte rate for a class. */
> +static void calc_rate(struct htb_class *cl)
> +{
> +	unsigned long now = jiffies;
> +	if (time_after(now, (cl->rate_est_when + HZ))) {
> +		unsigned int elapsed_secs =
> +			(now - cl->rate_est_when)/HZ;
> +		cl->sum_bytes /= elapsed_secs;
> +		cl->sum_packets /= elapsed_secs;
> +		RT_GEN (cl->sum_bytes,cl->rate_bytes);
> +		RT_GEN (cl->sum_packets,cl->rate_packets);
> +		cl->rate_est_when = now;
> +	} else if time_before(now, cl->rate_est_when)
> +		cl->rate_est_when = now; /* Wraparound */
> +}

We have a generic rate estimator; I think we should convert HTB over
to use it and then maybe add this feature to the generic estimator.
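For readers unfamiliar with it: the generic estimator is configured
through a gnet_estimator struct carried in the TCA_RATE netlink
attribute, and managed with gen_new_estimator(), gen_replace_estimator()
and gen_kill_estimator() — all of which appear in the patch in the next
message. The struct below is reproduced from memory from
include/linux/gen_stats.h, so verify the details against the header:

/*
 * Userspace-visible estimator parameters, carried in TCA_RATE.
 * Sketch from memory of include/linux/gen_stats.h.
 */
struct gnet_estimator {
	signed char	interval;	/* sampling period is 2^interval
					 * seconds; roughly -2 (250 ms)
					 * up to 3 (8 s) */
	unsigned char	ewma_log;	/* new samples are folded into the
					 * average with weight 2^-ewma_log */
};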
* Re: [PATCH][NET_SCHED] Update htb rate when stats are polled.

From: Patrick McHardy @ 2007-05-26 12:55 UTC
To: Ranjit Manomohan; +Cc: netdev

Patrick McHardy wrote:
> Ranjit Manomohan wrote:
>
>> Currently the HTB rate for a class is updated very slowly (once
>> every 16 seconds). This patch updates the rate whenever the stats
>> are requested from user space. This enables more accurate rate
>> monitoring.
>>
>> +/* Update packet/byte rate for a class. */
>> [..]
>
> We have a generic rate estimator; I think we should convert HTB over
> to use it and then maybe add this feature to the generic estimator.

You can use this patch as a base. It needs a bit more work (for example
the CONFIG_NET_SCHED_ESTIMATOR ifdefs are unnecessary, but I've added
them to remind me to clean this up in all schedulers), but it works
well enough for testing.

Actually .. do you still need your changes with this patch? You can now
replace the default estimator by one with a shorter interval.

[NET_SCHED]: sch_htb: use generic estimator

Remove the internal rate estimator and use the generic one.

Signed-off-by: Patrick McHardy <kaber@trash.net>

---
commit add81ec9b23f1fbe973093d5999a3d70e9d4c48b
tree d5d119cceed54f14c39cbe6bc75c270a3e89f316
parent b208cb31ea86bc6296e5b08e53bc765c81306286
author Patrick McHardy <kaber@trash.net> Sat, 26 May 2007 14:52:02 +0200
committer Patrick McHardy <kaber@trash.net> Sat, 26 May 2007 14:52:02 +0200

 net/sched/sch_htb.c |  91 +++++++++++++++++----------------------------------
 1 files changed, 30 insertions(+), 61 deletions(-)

diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index 035788c..b29ff8f 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -69,8 +69,6 @@
  */

 #define HTB_HSIZE 16	/* classid hash size */
-#define HTB_EWMAC 2	/* rate average over HTB_EWMAC*HTB_HSIZE sec */
-#define HTB_RATECM 1	/* whether to use rate computer */
 #define HTB_HYSTERESIS 1	/* whether to use mode hysteresis for speedup */
 #define HTB_VER 0x30011	/* major must be matched with number suplied by TC as version */

@@ -95,12 +93,6 @@ struct htb_class {
 	struct tc_htb_xstats xstats;	/* our special stats */
 	int refcnt;	/* usage count of this class */

-#ifdef HTB_RATECM
-	/* rate measurement counters */
-	unsigned long rate_bytes, sum_bytes;
-	unsigned long rate_packets, sum_packets;
-#endif
-
 	/* topology */
 	int level;	/* our level (see above) */
 	struct htb_class *parent;	/* parent class */
@@ -194,10 +186,6 @@ struct htb_sched {
 	int rate2quantum;	/* quant = rate / rate2quantum */
 	psched_time_t now;	/* cached dequeue time */
 	struct qdisc_watchdog watchdog;
-#ifdef HTB_RATECM
-	struct timer_list rttim;	/* rate computer timer */
-	int recmp_bucket;	/* which hash bucket to recompute next */
-#endif

 	/* non shaped skbs; let them go directly thru */
 	struct sk_buff_head direct_queue;
@@ -677,34 +665,6 @@ static int htb_requeue(struct sk_buff *skb, struct Qdisc *sch)
 	return NET_XMIT_SUCCESS;
 }

-#ifdef HTB_RATECM
-#define RT_GEN(D,R) R+=D-(R/HTB_EWMAC);D=0
-static void htb_rate_timer(unsigned long arg)
-{
-	struct Qdisc *sch = (struct Qdisc *)arg;
-	struct htb_sched *q = qdisc_priv(sch);
-	struct hlist_node *p;
-	struct htb_class *cl;
-
-
-	/* lock queue so that we can muck with it */
-	spin_lock_bh(&sch->dev->queue_lock);
-
-	q->rttim.expires = jiffies + HZ;
-	add_timer(&q->rttim);
-
-	/* scan and recompute one bucket at time */
-	if (++q->recmp_bucket >= HTB_HSIZE)
-		q->recmp_bucket = 0;
-
-	hlist_for_each_entry(cl,p, q->hash + q->recmp_bucket, hlist) {
-		RT_GEN(cl->sum_bytes, cl->rate_bytes);
-		RT_GEN(cl->sum_packets, cl->rate_packets);
-	}
-	spin_unlock_bh(&sch->dev->queue_lock);
-}
-#endif
-
 /**
  * htb_charge_class - charges amount "bytes" to leaf and ancestors
  *
@@ -750,11 +710,6 @@ static void htb_charge_class(struct htb_sched *q, struct htb_class *cl,
 		if (cl->cmode != HTB_CAN_SEND)
 			htb_add_to_wait_tree(q, cl, diff);
 	}
-#ifdef HTB_RATECM
-	/* update rate counters */
-	cl->sum_bytes += bytes;
-	cl->sum_packets++;
-#endif

 	/* update byte stats except for leaves which are already updated */
 	if (cl->level) {
@@ -1095,13 +1050,6 @@ static int htb_init(struct Qdisc *sch, struct rtattr *opt)
 	if (q->direct_qlen < 2)	/* some devices have zero tx_queue_len */
 		q->direct_qlen = 2;

-#ifdef HTB_RATECM
-	init_timer(&q->rttim);
-	q->rttim.function = htb_rate_timer;
-	q->rttim.data = (unsigned long)sch;
-	q->rttim.expires = jiffies + HZ;
-	add_timer(&q->rttim);
-#endif
 	if ((q->rate2quantum = gopt->rate2quantum) < 1)
 		q->rate2quantum = 1;
 	q->defcls = gopt->defcls;
@@ -1175,18 +1123,15 @@ htb_dump_class_stats(struct Qdisc *sch, unsigned long arg, struct gnet_dump *d)
 {
 	struct htb_class *cl = (struct htb_class *)arg;

-#ifdef HTB_RATECM
-	cl->rate_est.bps = cl->rate_bytes / (HTB_EWMAC * HTB_HSIZE);
-	cl->rate_est.pps = cl->rate_packets / (HTB_EWMAC * HTB_HSIZE);
-#endif
-
 	if (!cl->level && cl->un.leaf.q)
 		cl->qstats.qlen = cl->un.leaf.q->q.qlen;
 	cl->xstats.tokens = cl->tokens;
 	cl->xstats.ctokens = cl->ctokens;

 	if (gnet_stats_copy_basic(d, &cl->bstats) < 0 ||
+#ifdef CONFIG_NET_ESTIMATOR
 	    gnet_stats_copy_rate_est(d, &cl->rate_est) < 0 ||
+#endif
 	    gnet_stats_copy_queue(d, &cl->qstats) < 0)
 		return -1;

@@ -1277,6 +1222,9 @@ static void htb_destroy_class(struct Qdisc *sch, struct htb_class *cl)
 		BUG_TRAP(cl->un.leaf.q);
 		qdisc_destroy(cl->un.leaf.q);
 	}
+#ifdef CONFIG_NET_ESTIMATOR
+	gen_kill_estimator(&cl->bstats, &cl->rate_est);
+#endif
 	qdisc_put_rtab(cl->rate);
 	qdisc_put_rtab(cl->ceil);

@@ -1305,9 +1253,6 @@ static void htb_destroy(struct Qdisc *sch)
 	struct htb_sched *q = qdisc_priv(sch);

 	qdisc_watchdog_cancel(&q->watchdog);
-#ifdef HTB_RATECM
-	del_timer_sync(&q->rttim);
-#endif
 	/* This line used to be after htb_destroy_class call below
 	   and surprisingly it worked in 2.4. But it must precede it
 	   because filter need its target class alive to be able to call
@@ -1403,6 +1348,19 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 	if (!cl) {	/* new class */
 		struct Qdisc *new_q;
 		int prio;
+		struct {
+			struct rtattr		rta;
+			struct gnet_estimator	opt;
+		} est __maybe_unused = {
+			.rta = {
+				.rta_len	= RTA_LENGTH(sizeof(est.opt)),
+				.rta_type	= TCA_RATE,
+			},
+			.opt = {
+				.interval	= 2,
+				.ewma_log	= 2,
+			},
+		};

 		/* check for valid classid */
 		if (!classid || TC_H_MAJ(classid ^ sch->handle)
@@ -1418,6 +1376,10 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 		if ((cl = kzalloc(sizeof(*cl), GFP_KERNEL)) == NULL)
 			goto failure;

+#ifdef CONFIG_NET_ESTIMATOR
+		gen_new_estimator(&cl->bstats, &cl->rate_est,
+				  &sch->dev->queue_lock, &est.rta);
+#endif
 		cl->refcnt = 1;
 		INIT_LIST_HEAD(&cl->sibling);
 		INIT_HLIST_NODE(&cl->hlist);
@@ -1469,8 +1431,15 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 		hlist_add_head(&cl->hlist, q->hash + htb_hash(classid));
 		list_add_tail(&cl->sibling,
 			      parent ? &parent->children : &q->root);
-	} else
+	} else {
+#ifdef CONFIG_NET_ESTIMATOR
+		if (tca[TCA_RATE-1])
+			gen_replace_estimator(&cl->bstats, &cl->rate_est,
+					      &sch->dev->queue_lock,
+					      tca[TCA_RATE-1]);
+#endif
 		sch_tree_lock(sch);
+	}

 	/* it used to be a nasty bug here, we have to check that node
 	   is really leaf before changing cl->un.leaf ! */
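A usage note on the point above: once HTB uses the generic estimator,
the estimation interval becomes selectable per class from userspace via
tc's "est" parameter. A sketch of what that could look like (device,
classid and rates are illustrative, and the exact "est" spelling should
be checked against the tc man page):

# create a class whose rate estimator samples every 1 s and averages
# over roughly 4 s; shorter intervals down to 250 ms should also be
# accepted by the kernel
tc class add dev eth0 parent 1: classid 1:1 est 1sec 4sec \
	htb rate 10mbit ceil 10mbit

# poll the estimated rate
tc -s class show dev eth0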
* Re: [PATCH][NET_SCHED] Update htb rate when stats are polled.

From: Ranjit Manomohan @ 2007-05-29 18:39 UTC
To: Patrick McHardy; +Cc: netdev

On 5/26/07, Patrick McHardy <kaber@trash.net> wrote:
> Patrick McHardy wrote:
> > Ranjit Manomohan wrote:
> >
> >> Currently the HTB rate for a class is updated very slowly (once
> >> every 16 seconds). This patch updates the rate whenever the stats
> >> are requested from user space. This enables more accurate rate
> >> monitoring.
> >>
> >> +/* Update packet/byte rate for a class. */
> >> [..]
> >
> > We have a generic rate estimator; I think we should convert HTB over
> > to use it and then maybe add this feature to the generic estimator.
>
> You can use this patch as a base. It needs a bit more work
> (for example the CONFIG_NET_SCHED_ESTIMATOR ifdefs are
> unnecessary, but I've added them to remind me to clean this
> up in all schedulers), but it works well enough for testing.
>
> Actually .. do you still need your changes with this patch?
> You can now replace the default estimator by one with a
> shorter interval.

I applied the patch and tried this out and it works fine with short
intervals. My changes are no longer necessary. Thanks!

-Ranjit

> [NET_SCHED]: sch_htb: use generic estimator
>
> [...snip quoted patch, identical to the one in the previous message...]
* Re: [PATCH][NET_SCHED] Update htb rate when stats are polled.

From: Patrick McHardy @ 2007-05-30  7:33 UTC
To: Ranjit Manomohan; +Cc: netdev

Ranjit Manomohan wrote:
> On 5/26/07, Patrick McHardy <kaber@trash.net> wrote:
>
>> You can use this patch as a base. It needs a bit more work
>> (for example the CONFIG_NET_SCHED_ESTIMATOR ifdefs are
>> unnecessary, but I've added them to remind me to clean this
>> up in all schedulers), but it works well enough for testing.
>>
>> Actually .. do you still need your changes with this patch?
>> You can now replace the default estimator by one with a
>> shorter interval.
>
> I applied the patch and tried this out and it works fine with short
> intervals. My changes are no longer necessary. Thanks!

OK, thanks. I'll queue it for 2.6.23.