* [PATCH 2/2]: [NET_SCHED]: Making rate table lookups more flexible.
@ 2007-08-31 12:22 Jesper Dangaard Brouer
2007-09-01 7:10 ` Patrick McHardy
0 siblings, 1 reply; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2007-08-31 12:22 UTC (permalink / raw)
To: netdev@vger.kernel.org
Cc: David S. Miller, Patrick McHardy, Jesper Dangaard Brouer
commit ac093f5c2f1160ece72a6fef5c779c1892fc3152
Author: Jesper Dangaard Brouer <hawk@comx.dk>
Date: Fri Aug 31 11:53:35 2007 +0200
    [NET_SCHED]: Making rate table lookups more flexible.

    Extend the tc_ratespec struct with two parameters: 1) "cell_align",
    which allows adjusting the alignment of the rate table. 2) "overhead",
    which allows adding a packet overhead before the lookup.
Signed-off-by: Jesper Dangaard Brouer <hawk@comx.dk>
diff --git a/include/linux/pkt_sched.h b/include/linux/pkt_sched.h
index 268c515..a127d63 100644
--- a/include/linux/pkt_sched.h
+++ b/include/linux/pkt_sched.h
@@ -78,7 +78,8 @@ struct tc_ratespec
unsigned char cell_log;
unsigned char __reserved;
unsigned short feature;
- short addend;
+ char cell_align;
+ unsigned char overhead;
unsigned short mpu;
__u32 rate;
};
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 4ebd615..a02ec9e 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -307,7 +307,9 @@ drop:
*/
static inline u32 qdisc_l2t(struct qdisc_rate_table* rtab, unsigned int pktlen)
{
- int slot = pktlen;
+ int slot = pktlen + rtab->rate.cell_align + rtab->rate.overhead;
+ if (slot < 0)
+ slot = 0;
slot >>= rtab->rate.cell_log;
if (slot > 255)
return (rtab->data[255]*(slot >> 8) + rtab->data[slot & 0xFF]);
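For readers following the arithmetic, here is a stand-alone model of the patched lookup. It is a sketch, not kernel code: `struct ratespec_model` and `l2t_model` are hypothetical names, and `rtab` is a caller-supplied 256-entry table standing in for `qdisc_rate_table::data`.

```c
#include <stdint.h>

/* Hypothetical model of the patched qdisc_l2t(): cell_align and overhead
 * are applied to the packet length before the cell_log shift, and the
 * slot is clamped at zero so a negative cell_align cannot underflow. */
struct ratespec_model {
	unsigned char cell_log;   /* log2 of bytes covered per table cell */
	signed char   cell_align; /* table alignment fixup, e.g. -1       */
	unsigned char overhead;   /* per-packet overhead in bytes         */
};

static uint32_t l2t_model(const uint32_t rtab[256],
			  struct ratespec_model r, unsigned int pktlen)
{
	int slot = (int)pktlen + r.cell_align + r.overhead;

	if (slot < 0)
		slot = 0;
	slot >>= r.cell_log;
	if (slot > 255) /* oversized packet: whole-table multiples + rest */
		return rtab[255] * (slot >> 8) + rtab[slot & 0xFF];
	return rtab[slot];
}
```

With cell_log=3 and cell_align=-1, a 16-byte packet computes slot = 15 >> 3 = 1, so it no longer spills into the next cell.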
* Re: [PATCH 2/2]: [NET_SCHED]: Making rate table lookups more flexible.
  2007-08-31 12:22 [PATCH 2/2]: [NET_SCHED]: Making rate table lookups more flexible Jesper Dangaard Brouer
@ 2007-09-01  7:10 ` Patrick McHardy
  2007-09-01 21:56   ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 9+ messages in thread
From: Patrick McHardy @ 2007-09-01 7:10 UTC (permalink / raw)
To: jdb; +Cc: netdev@vger.kernel.org, David S. Miller

Jesper Dangaard Brouer wrote:
> commit ac093f5c2f1160ece72a6fef5c779c1892fc3152
> Author: Jesper Dangaard Brouer <hawk@comx.dk>
> Date: Fri Aug 31 11:53:35 2007 +0200
>
>     [NET_SCHED]: Making rate table lookups more flexible.
>
>     Extend the tc_ratespec struct with two parameters: 1) "cell_align",
>     which allows adjusting the alignment of the rate table. 2) "overhead",
>     which allows adding a packet overhead before the lookup.

Am I guessing right that the intention is to resurrect the ATM patch?
* Re: [PATCH 2/2]: [NET_SCHED]: Making rate table lookups more flexible.
  2007-09-01  7:10 ` Patrick McHardy
@ 2007-09-01 21:56 ` Jesper Dangaard Brouer
  2007-09-02 14:35   ` Patrick McHardy
  0 siblings, 1 reply; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2007-09-01 21:56 UTC (permalink / raw)
To: Patrick McHardy; +Cc: jdb, netdev@vger.kernel.org, David S. Miller

On Sat, 1 Sep 2007, Patrick McHardy wrote:

> Jesper Dangaard Brouer wrote:
>>
>>     [NET_SCHED]: Making rate table lookups more flexible.
>>
>>     Extend the tc_ratespec struct with two parameters: 1) "cell_align",
>>     which allows adjusting the alignment of the rate table. 2) "overhead",
>>     which allows adding a packet overhead before the lookup.
>
> Am I guessing right that the intention is to resurrect the ATM patch?

Yes, you are right. Remember, Jamal ACKed the patch, and you withdrew
your NAK.

This is not an ATM/ADSL-only patch. This patch simply adds more
flexibility to the rate tables. Afterwards we can start the discussion
about how to use this new flexibility in tc/iproute2.

BTW, the reason you could not measure any improvements on your SHDSL
line is that it uses HDLC, and not ATM, as its data link layer.

See you,
Jesper Brouer

--
-------------------------------------------------------------------
MSc. Master of Computer Science
Dept. of Computer Science, University of Copenhagen
Author of http://www.adsl-optimizer.dk
-------------------------------------------------------------------
* Re: [PATCH 2/2]: [NET_SCHED]: Making rate table lookups more flexible.
  2007-09-01 21:56 ` Jesper Dangaard Brouer
@ 2007-09-02 14:35   ` Patrick McHardy
  2007-09-02 18:56     ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 9+ messages in thread
From: Patrick McHardy @ 2007-09-02 14:35 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: jdb, netdev@vger.kernel.org, David S. Miller

Jesper Dangaard Brouer wrote:
>
> On Sat, 1 Sep 2007, Patrick McHardy wrote:
>
>> Am I guessing right that the intention is to resurrect the ATM patch?
>
> Yes, you are right.
> Remember, Jamal ACKed the patch, and you withdrew your NAK.

Mainly out of frustration/boredom with the discussion. I withdrew that
again later, and even Russell agreed that this should be done
differently.

> This is not an ATM/ADSL-only patch. This patch simply adds more
> flexibility to the rate tables. Afterwards we can start the discussion
> about how to use this new flexibility in tc/iproute2.

I know, but that discussion should happen *before* merging any changes
to the kernel. It's pointless to add functionality that won't be used
afterwards or may need to be done differently.
* Re: [PATCH 2/2]: [NET_SCHED]: Making rate table lookups more flexible.
  2007-09-02 14:35 ` Patrick McHardy
@ 2007-09-02 18:56   ` Jesper Dangaard Brouer
  2007-09-02 21:16     ` Patrick McHardy
  0 siblings, 1 reply; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2007-09-02 18:56 UTC (permalink / raw)
To: Patrick McHardy
Cc: Jesper Dangaard Brouer, netdev@vger.kernel.org, David S. Miller

On Sun, 2 Sep 2007, Patrick McHardy wrote:

> Jesper Dangaard Brouer wrote:
>>
>> On Sat, 1 Sep 2007, Patrick McHardy wrote:
>>
>> This is not an ATM/ADSL-only patch. This patch simply adds more
>> flexibility to the rate tables. Afterwards we can start the discussion
>> about how to use this new flexibility in tc/iproute2.
>
> I know, but that discussion should happen *before* merging any
> changes to the kernel.

Let's not try to solve too many things at once; we need to do this in
small steps. Please, let's not start the long and boring discussion
again, where we try to solve everything at once.

> It's pointless to add functionality that won't be used afterwards or
> may need to be done differently.

I believe that the functionality _will_ be used, also in the general
case.

Let's focus on the general case, where the functionality is actually
needed right away.

In the general case:

- The rate table needs to be aligned (cell_align=-1).
  (Currently, we miscalculate by up to 7 bytes on every lookup.)

- The existing tc overhead calculation can be made more accurate
  (by adding the overhead before doing the lookup, instead of the
  current solution, where the rate table is modified with its
  limited resolution).

Patrick, note that your STAB solution will _not_ work without the rate
table alignment.

See you!
Jesper Brouer

--
-------------------------------------------------------------------
MSc. Master of Computer Science
Dept. of Computer Science, University of Copenhagen
Author of http://www.adsl-optimizer.dk
-------------------------------------------------------------------
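The round-down Jesper refers to can be shown in a few lines. This is an illustrative sketch only (the function names are ours): with cell_log=3 the existing lookup truncates the packet size to the cell's lower boundary, while a cell_align of -1 combined with an upper-bound table would round it up instead.

```c
/* Old behaviour: index = len >> cell_log, with table entry i built for
 * size i << cell_log, so the charge is the size rounded DOWN to a cell. */
static unsigned old_charge(unsigned pktlen, int cell_log)
{
	return (pktlen >> cell_log) << cell_log;
}

/* Sketch of the proposed behaviour: cell_align = -1 plus a table whose
 * entry i is built for (i + 1) << cell_log, i.e. the charge is the size
 * rounded UP to the next cell boundary. */
static unsigned aligned_charge(unsigned pktlen, int cell_log)
{
	int slot = (int)pktlen - 1;	/* cell_align = -1 */

	if (slot < 0)
		slot = 0;
	return (unsigned)((slot >> cell_log) + 1) << cell_log;
}
```

A 15-byte packet is charged as 8 bytes by the old scheme and 16 bytes by the aligned one; a 16-byte packet is charged exactly 16 bytes in both.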
* Re: [PATCH 2/2]: [NET_SCHED]: Making rate table lookups more flexible.
  2007-09-02 18:56 ` Jesper Dangaard Brouer
@ 2007-09-02 21:16   ` Patrick McHardy
  2007-09-03 14:19     ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 9+ messages in thread
From: Patrick McHardy @ 2007-09-02 21:16 UTC (permalink / raw)
To: Jesper Dangaard Brouer
Cc: Jesper Dangaard Brouer, netdev@vger.kernel.org, David S. Miller

Jesper Dangaard Brouer wrote:
> On Sun, 2 Sep 2007, Patrick McHardy wrote:
>>
>>> This is not an ATM/ADSL-only patch. This patch simply adds more
>>> flexibility to the rate tables. Afterwards we can start the discussion
>>> about how to use this new flexibility in tc/iproute2.
>>
>> I know, but that discussion should happen *before* merging any
>> changes to the kernel.
>
> Let's not try to solve too many things at once; we need to do this in
> small steps. Please, let's not start the long and boring discussion
> again, where we try to solve everything at once.

We don't need many, but we do need *one* thing that actually uses this
and isn't controversial before merging.

>> It's pointless to add functionality that
>> won't be used afterwards or may need to be done differently.
>
> I believe that the functionality _will_ be used, also in the general
> case.
>
> Let's focus on the general case, where the functionality is actually
> needed right away.
>
> In the general case:
>
> - The rate table needs to be aligned (cell_align=-1).
>   (Currently, we miscalculate by up to 7 bytes on every lookup.)

We will always do that; that's a consequence of storing the
transmission times for multiples of 8b.

> - The existing tc overhead calculation can be made more accurate
>   (by adding the overhead before doing the lookup, instead of the
>   current solution, where the rate table is modified with its
>   limited resolution).

Please demonstrate this with patches (one for the overhead calculation,
one for the cell_align thing); then we can continue this discussion.

> Patrick, note that your STAB solution will _not_ work without the rate
> table alignment.

I can't argue about this without looking into it again first, but it
shouldn't really matter for now, since we don't have a patch to
actually implement it.
* Re: [PATCH 2/2]: [NET_SCHED]: Making rate table lookups more flexible.
  2007-09-02 21:16 ` Patrick McHardy
@ 2007-09-03 14:19   ` Jesper Dangaard Brouer
  2007-09-04 16:25     ` Patrick McHardy
  0 siblings, 1 reply; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2007-09-03 14:19 UTC (permalink / raw)
To: Patrick McHardy; +Cc: Jesper Dangaard Brouer, netdev@vger.kernel.org

[-- Attachment #1: Type: text/plain, Size: 1296 bytes --]

On Sun, 2007-09-02 at 23:16 +0200, Patrick McHardy wrote:
> Jesper Dangaard Brouer wrote:
>> On Sun, 2 Sep 2007, Patrick McHardy wrote:
>>
>> Let's focus on the general case, where the functionality is actually
>> needed right away.
>>
>> In the general case:
>>
>> - The rate table needs to be aligned (cell_align=-1).
>>   (Currently, we miscalculate by up to 7 bytes on every lookup.)
>
> We will always do that; that's a consequence of storing the
> transmission times for multiples of 8b.

The issue is that we use the lower boundary for calculating the
transmit cost. Thus, a 15-byte packet only has a transmit cost of
8 bytes.

>> - The existing tc overhead calculation can be made more accurate
>>   (by adding the overhead before doing the lookup, instead of the
>>   current solution, where the rate table is modified with its
>>   limited resolution).
>
> Please demonstrate this with patches (one for the overhead
> calculation, one for the cell_align thing); then we can
> continue this discussion.

I have attached a patch for the overhead calculation. I'll look into
"the cell_align thing" tomorrow.

--
Med venlig hilsen / Best regards
  Jesper Brouer
  ComX Networks A/S
  Linux Network developer
  Cand. Scient Datalog / MSc.
  Author of http://adsl-optimizer.dk

[-- Attachment #2: pkt_sched.h.patch --]
[-- Type: text/x-patch, Size: 721 bytes --]

commit a29d43b78d5ddb20a808d9270c2de358556ba5ee
Author: Jesper Dangaard Brouer <hawk@comx.dk>
Date: Mon Sep 3 14:35:19 2007 +0200

    [IPROUTE2]: Update pkt_sched.h

    In struct tc_ratespec, replace 'addend' with 'cell_align' and
    'overhead'.

Signed-off-by: Jesper Dangaard Brouer <hawk@comx.dk>

diff --git a/include/linux/pkt_sched.h b/include/linux/pkt_sched.h
index 268c515..a127d63 100644
--- a/include/linux/pkt_sched.h
+++ b/include/linux/pkt_sched.h
@@ -78,7 +78,8 @@ struct tc_ratespec
 	unsigned char cell_log;
 	unsigned char __reserved;
 	unsigned short feature;
-	short addend;
+	char cell_align;
+	unsigned char overhead;
 	unsigned short mpu;
 	__u32 rate;
 };

[-- Attachment #3: overhead_to_kernel.patch --]
[-- Type: text/x-patch, Size: 1754 bytes --]

commit 1e89442468dca983e0e3d24b89093370896f2d6a
Author: Jesper Dangaard Brouer <hawk@comx.dk>
Date: Mon Sep 3 15:49:41 2007 +0200

    [IPROUTE2]: Overhead calculation is now done in the kernel.

    The only current user is HTB. The HTB overhead argument is now
    passed on to the kernel (in the struct tc_ratespec).

Signed-off-by: Jesper Dangaard Brouer <hawk@comx.dk>

diff --git a/tc/q_htb.c b/tc/q_htb.c
index 53e3f78..b579ebe 100644
--- a/tc/q_htb.c
+++ b/tc/q_htb.c
@@ -206,9 +206,11 @@ static int htb_parse_class_opt(struct qdisc_util *qu, int argc, char **argv, str
 	if (!buffer) buffer = opt.rate.rate / get_hz() + mtu;
 	if (!cbuffer) cbuffer = opt.ceil.rate / get_hz() + mtu;

-/* encode overhead and mpu, 8 bits each, into lower 16 bits */
-	mpu = (unsigned)mpu8 | (unsigned)overhead << 8;
-	opt.ceil.mpu = mpu; opt.rate.mpu = mpu;
+	opt.ceil.overhead = overhead;
+	opt.rate.overhead = overhead;
+
+	opt.ceil.mpu = mpu;
+	opt.rate.mpu = mpu;

 	if ((cell_log = tc_calc_rtable(opt.rate.rate, rtab, cell_log, mtu, mpu)) < 0) {
 		fprintf(stderr, "htb: failed to calculate rate table.\n");
diff --git a/tc/tc_core.c b/tc/tc_core.c
index 58155fb..1ab0ba0 100644
--- a/tc/tc_core.c
+++ b/tc/tc_core.c
@@ -73,8 +73,6 @@ int tc_calc_rtable(unsigned bps, __u32 *rtab, int cell_log, unsigned mtu,
 		   unsigned mpu)
 {
 	int i;
-	unsigned overhead = (mpu >> 8) & 0xFF;
-	mpu = mpu & 0xFF;

 	if (mtu == 0)
 		mtu = 2047;
@@ -86,8 +84,6 @@ int tc_calc_rtable(unsigned bps, __u32 *rtab, int cell_log, unsigned mtu,
 	}
 	for (i=0; i<256; i++) {
 		unsigned sz = (i<<cell_log);
-		if (overhead)
-			sz += overhead;
 		if (sz < mpu)
 			sz = mpu;
 		rtab[i] = tc_calc_xmittime(bps, sz);
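The q_htb.c hunk above removes a packing trick: overhead and mpu used to share one 16-bit field, 8 bits each, which capped both values at 255. A small sketch of that old encoding (helper names are ours, for illustration only):

```c
/* Old iproute2 encoding removed by the patch: overhead in the high
 * byte, mpu in the low byte of the 16-bit mpu field. */
static unsigned short pack_mpu_overhead(unsigned char mpu8,
					unsigned char overhead)
{
	return (unsigned short)((unsigned)mpu8 | (unsigned)overhead << 8);
}

static unsigned char unpack_overhead(unsigned short mpu_field)
{
	return (mpu_field >> 8) & 0xFF;	/* as old tc_calc_rtable() did */
}

static unsigned char unpack_mpu(unsigned short mpu_field)
{
	return mpu_field & 0xFF;
}
```

Carrying overhead in its own tc_ratespec field gives mpu its full 16 bits back and moves the overhead addition to lookup time in the kernel.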
* Re: [PATCH 2/2]: [NET_SCHED]: Making rate table lookups more flexible.
  2007-09-03 14:19 ` Jesper Dangaard Brouer
@ 2007-09-04 16:25   ` Patrick McHardy
  2007-09-05 13:58     ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 9+ messages in thread
From: Patrick McHardy @ 2007-09-04 16:25 UTC (permalink / raw)
To: jdb; +Cc: Jesper Dangaard Brouer, netdev@vger.kernel.org

Jesper Dangaard Brouer wrote:
> On Sun, 2007-09-02 at 23:16 +0200, Patrick McHardy wrote:
>
>> Jesper Dangaard Brouer wrote:
>>
>>> On Sun, 2 Sep 2007, Patrick McHardy wrote:
>>>
>>> Let's focus on the general case, where the functionality is actually
>>> needed right away.
>>>
>>> In the general case:
>>>
>>> - The rate table needs to be aligned (cell_align=-1).
>>>   (Currently, we miscalculate by up to 7 bytes on every lookup.)
>>
>> We will always do that; that's a consequence of storing the
>> transmission times for multiples of 8b.
>
> The issue is that we use the lower boundary for calculating the
> transmit cost. Thus, a 15-byte packet only has a transmit cost of
> 8 bytes.

I believe this is something that should be fixed anyway; it's better
to overestimate than underestimate, to stay in control of the queue.
We could additionally make the rate tables more fine-grained
(optionally).

>>> - The existing tc overhead calculation can be made more accurate
>>>   (by adding the overhead before doing the lookup, instead of the
>>>   current solution, where the rate table is modified with its
>>>   limited resolution).
>>
>> Please demonstrate this with patches (one for the overhead
>> calculation, one for the cell_align thing); then we can
>> continue this discussion.
>
> I have attached a patch for the overhead calculation.

Thanks, I probably won't get to looking into this until after the
netfilter workshop next week.
* Re: [PATCH 2/2]: [NET_SCHED]: Making rate table lookups more flexible.
  2007-09-04 16:25 ` Patrick McHardy
@ 2007-09-05 13:58   ` Jesper Dangaard Brouer
  0 siblings, 0 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2007-09-05 13:58 UTC (permalink / raw)
To: Patrick McHardy
Cc: Jesper Dangaard Brouer, netdev@vger.kernel.org, Stephen Hemminger

[-- Attachment #1: Type: text/plain, Size: 3532 bytes --]

On Tue, 2007-09-04 at 18:25 +0200, Patrick McHardy wrote:
> Jesper Dangaard Brouer wrote:
>> On Sun, 2007-09-02 at 23:16 +0200, Patrick McHardy wrote:
>>> Jesper Dangaard Brouer wrote:
>>>> On Sun, 2 Sep 2007, Patrick McHardy wrote:
>>>>
>>>> Let's focus on the general case, where the functionality is actually
>>>> needed right away.
>>>>
>>>> In the general case:
>>>>
>>>> - The rate table needs to be aligned (cell_align=-1).
>>>>   (Currently, we miscalculate by up to 7 bytes on every lookup.)
>>>
>>> We will always do that; that's a consequence of storing the
>>> transmission times for multiples of 8b.
>>
>> The issue is that we use the lower boundary for calculating the
>> transmit cost. Thus, a 15-byte packet only has a transmit cost of
>> 8 bytes.
>
> I believe this is something that should be fixed anyway; it's better
> to overestimate than underestimate, to stay in control of the queue.

Well, I have attached a patch that uses the upper boundary instead.
The patch uses the cell_align feature.

The patch itself is very simple, but figuring out what happens to the
rtab array requires a little illustration.

Illustrating the rate table array:

Legend description:
 rtab[x]   : Array index x of rtab[x]
 xmit_sz   : Transmit size contained in rtab[x] (normally transmit time)
 maps[a-b] : Packet sizes from a to b will map into rtab[x]

Current/old rate table mapping (cell_log:3):
 rtab[0]:=xmit_sz:0    maps[0-7]
 rtab[1]:=xmit_sz:8    maps[8-15]
 rtab[2]:=xmit_sz:16   maps[16-23]
 rtab[3]:=xmit_sz:24   maps[24-31]
 rtab[4]:=xmit_sz:32   maps[32-39]
 rtab[5]:=xmit_sz:40   maps[40-47]
 rtab[6]:=xmit_sz:48   maps[48-55]

New rate table mapping, with kernel cell_align support:
 rtab[0]:=xmit_sz:8    maps[0-8]
 rtab[1]:=xmit_sz:16   maps[9-16]
 rtab[2]:=xmit_sz:24   maps[17-24]
 rtab[3]:=xmit_sz:32   maps[25-32]
 rtab[4]:=xmit_sz:40   maps[33-40]
 rtab[5]:=xmit_sz:48   maps[41-48]
 rtab[6]:=xmit_sz:56   maps[49-56]

New TC util on a kernel WITHOUT support for cell_align:
 rtab[0]:=xmit_sz:8    maps[0-7]
 rtab[1]:=xmit_sz:16   maps[8-15]
 rtab[2]:=xmit_sz:24   maps[16-23]
 rtab[3]:=xmit_sz:32   maps[24-31]
 rtab[4]:=xmit_sz:40   maps[32-39]
 rtab[5]:=xmit_sz:48   maps[40-47]
 rtab[6]:=xmit_sz:56   maps[48-55]

Notice that without the kernel cell_align feature, we are only off by
one byte. That should be acceptable when somebody uses a new TC util
on an old kernel.

> We could additionally make the rate tables more fine-grained
> (optionally).

That is actually already possible, with the approach used to handle
overflow of the rate table ("TSO" large packet support). By setting
cell_log=0 and letting the overflow code handle the rest, we get a
very fine-grained lookup.

>>>> - The existing tc overhead calculation can be made more accurate
>>>>   (by adding the overhead before doing the lookup, instead of the
>>>>   current solution, where the rate table is modified with its
>>>>   limited resolution).
>>>
>>> Please demonstrate this with patches (one for the overhead
>>> calculation, one for the cell_align thing); then we can
>>> continue this discussion.
>>
>> I have attached a patch for the overhead calculation.

Attached is a patch that uses "the cell_align thing".

> Thanks, I probably won't get to looking into this until after the
> netfilter workshop next week.

Okay, but I'll see you at the workshop, so I might bug you there ;-)

--
Med venlig hilsen / Best regards
  Jesper Brouer
  ComX Networks A/S
  Linux Network developer
  Cand. Scient Datalog / MSc.
  Author of http://adsl-optimizer.dk

[-- Attachment #2: upperbound_rate_table_aligned.patch --]
[-- Type: text/x-patch, Size: 2156 bytes --]

commit 9a21e8bd56a5f057fc9f605e061c22d264ec27ef
Author: Jesper Dangaard Brouer <hawk@comx.dk>
Date: Wed Sep 5 15:24:51 2007 +0200

    [IPROUTE2]: Change the rate table calc of transmit cost to use the
    upper bound value.

    Patrick McHardy, cite: "its better to overestimate than underestimate
    to stay in control of the queue".

    Illustrating the rate table array:

    Legend description:
     rtab[x]   : Array index x of rtab[x]
     xmit_sz   : Transmit size contained in rtab[x] (normally transmit time)
     maps[a-b] : Packet sizes from a to b will map into rtab[x]

    Current/old rate table mapping (cell_log:3):
     rtab[0]:=xmit_sz:0    maps[0-7]
     rtab[1]:=xmit_sz:8    maps[8-15]
     rtab[2]:=xmit_sz:16   maps[16-23]
     rtab[3]:=xmit_sz:24   maps[24-31]
     rtab[4]:=xmit_sz:32   maps[32-39]
     rtab[5]:=xmit_sz:40   maps[40-47]
     rtab[6]:=xmit_sz:48   maps[48-55]

    New rate table mapping, with kernel cell_align support:
     rtab[0]:=xmit_sz:8    maps[0-8]
     rtab[1]:=xmit_sz:16   maps[9-16]
     rtab[2]:=xmit_sz:24   maps[17-24]
     rtab[3]:=xmit_sz:32   maps[25-32]
     rtab[4]:=xmit_sz:40   maps[33-40]
     rtab[5]:=xmit_sz:48   maps[41-48]
     rtab[6]:=xmit_sz:56   maps[49-56]

    New TC util on a kernel WITHOUT support for cell_align:
     rtab[0]:=xmit_sz:8    maps[0-7]
     rtab[1]:=xmit_sz:16   maps[8-15]
     rtab[2]:=xmit_sz:24   maps[16-23]
     rtab[3]:=xmit_sz:32   maps[24-31]
     rtab[4]:=xmit_sz:40   maps[32-39]
     rtab[5]:=xmit_sz:48   maps[40-47]
     rtab[6]:=xmit_sz:56   maps[48-55]

Signed-off-by: Jesper Dangaard Brouer <hawk@comx.dk>

diff --git a/tc/tc_core.c b/tc/tc_core.c
index c713a18..752b07c 100644
--- a/tc/tc_core.c
+++ b/tc/tc_core.c
@@ -84,11 +84,12 @@ int tc_calc_rtable(struct tc_ratespec *r, __u32 *rtab, int cell_log, unsigned mt
 		cell_log++;
 	}
 	for (i=0; i<256; i++) {
-		unsigned sz = (i<<cell_log);
+		unsigned sz = ((i+1)<<cell_log);
 		if (sz < mpu)
 			sz = mpu;
 		rtab[i] = tc_calc_xmittime(bps, sz);
 	}
+	r->cell_align=-1; // Due to the sz calc
 	r->cell_log=cell_log;
 	return cell_log;
 }

[-- Attachment #3: cleanup_tc_calc_rtable_git.patch --]
[-- Type: text/x-patch, Size: 6028 bytes --]

commit 29044ac37e30d9662ad1bb83290a007c492ad7b2
Author: Jesper Dangaard Brouer <hawk@comx.dk>
Date: Wed Sep 5 10:47:47 2007 +0200

    [IPROUTE2]: Cleanup: tc_calc_rtable().

    Change tc_calc_rtable() to take a tc_ratespec struct as an argument.
    (cell_log still needs to be passed on as a parameter, because -1
    indicates that the cell_log needs to be computed by the function.)

Signed-off-by: Jesper Dangaard Brouer <hawk@comx.dk>

diff --git a/tc/m_police.c b/tc/m_police.c
index 5d2528b..acdfd22 100644
--- a/tc/m_police.c
+++ b/tc/m_police.c
@@ -263,22 +263,20 @@ int act_parse_police(struct action_util *a,int *argc_p, char ***argv_p, int tca_
 	}

 	if (p.rate.rate) {
-		if ((Rcell_log = tc_calc_rtable(p.rate.rate, rtab, Rcell_log, mtu, mpu)) < 0) {
+		p.rate.mpu = mpu;
+		if (tc_calc_rtable(&p.rate, rtab, Rcell_log, mtu) < 0) {
 			fprintf(stderr, "TBF: failed to calculate rate table.\n");
 			return -1;
 		}
 		p.burst = tc_calc_xmittime(p.rate.rate, buffer);
-		p.rate.cell_log = Rcell_log;
-		p.rate.mpu = mpu;
 	}
 	p.mtu = mtu;
 	if (p.peakrate.rate) {
-		if ((Pcell_log = tc_calc_rtable(p.peakrate.rate, ptab, Pcell_log, mtu, mpu)) < 0) {
+		p.peakrate.mpu = mpu;
+		if (tc_calc_rtable(&p.peakrate, ptab, Pcell_log, mtu) < 0) {
 			fprintf(stderr, "POLICE: failed to calculate peak rate table.\n");
 			return -1;
 		}
-		p.peakrate.cell_log = Pcell_log;
-		p.peakrate.mpu = mpu;
 	}

 	tail = NLMSG_TAIL(n);
diff --git a/tc/q_cbq.c b/tc/q_cbq.c
index f2b4ce8..df98312 100644
--- a/tc/q_cbq.c
+++ b/tc/q_cbq.c
@@ -137,12 +137,11 @@ static int cbq_parse_opt(struct qdisc_util *qu, int argc, char **argv, struct nl
 	if (allot < (avpkt*3)/2)
 		allot = (avpkt*3)/2;

-	if ((cell_log = tc_calc_rtable(r.rate, rtab, cell_log, allot, mpu)) < 0) {
+	r.mpu = mpu;
+	if (tc_calc_rtable(&r, rtab, cell_log, allot) < 0) {
 		fprintf(stderr, "CBQ: failed to calculate rate table.\n");
 		return -1;
 	}
-	r.cell_log = cell_log;
-	r.mpu = mpu;

 	if (ewma_log < 0)
 		ewma_log = TC_CBQ_DEF_EWMA;
@@ -336,12 +335,11 @@ static int cbq_parse_class_opt(struct qdisc_util *qu, int argc, char **argv, str
 		unsigned pktsize = wrr.allot;
 		if (wrr.allot < (lss.avpkt*3)/2)
 			wrr.allot = (lss.avpkt*3)/2;

-		if ((cell_log = tc_calc_rtable(r.rate, rtab, cell_log, pktsize, mpu)) < 0) {
+		r.mpu = mpu;
+		if (tc_calc_rtable(&r, rtab, cell_log, pktsize) < 0) {
 			fprintf(stderr, "CBQ: failed to calculate rate table.\n");
 			return -1;
 		}
-		r.cell_log = cell_log;
-		r.mpu = mpu;
 	}

 	if (ewma_log < 0)
 		ewma_log = TC_CBQ_DEF_EWMA;
diff --git a/tc/q_htb.c b/tc/q_htb.c
index b579ebe..cca77fa 100644
--- a/tc/q_htb.c
+++ b/tc/q_htb.c
@@ -212,19 +212,17 @@ static int htb_parse_class_opt(struct qdisc_util *qu, int argc, char **argv, str
 	opt.ceil.mpu = mpu;
 	opt.rate.mpu = mpu;

-	if ((cell_log = tc_calc_rtable(opt.rate.rate, rtab, cell_log, mtu, mpu)) < 0) {
+	if (tc_calc_rtable(&opt.rate, rtab, cell_log, mtu) < 0) {
 		fprintf(stderr, "htb: failed to calculate rate table.\n");
 		return -1;
 	}
 	opt.buffer = tc_calc_xmittime(opt.rate.rate, buffer);
-	opt.rate.cell_log = cell_log;

-	if ((ccell_log = tc_calc_rtable(opt.ceil.rate, ctab, cell_log, mtu, mpu)) < 0) {
+	if (tc_calc_rtable(&opt.ceil, ctab, ccell_log, mtu) < 0) {
 		fprintf(stderr, "htb: failed to calculate ceil rate table.\n");
 		return -1;
 	}
 	opt.cbuffer = tc_calc_xmittime(opt.ceil.rate, cbuffer);
-	opt.ceil.cell_log = ccell_log;

 	tail = NLMSG_TAIL(n);
 	addattr_l(n, 1024, TCA_OPTIONS, NULL, 0);
diff --git a/tc/q_tbf.c b/tc/q_tbf.c
index 1fc05f4..c7b4f0f 100644
--- a/tc/q_tbf.c
+++ b/tc/q_tbf.c
@@ -170,21 +170,20 @@ static int tbf_parse_opt(struct qdisc_util *qu, int argc, char **argv, struct nl
 		opt.limit = lim;
 	}

-	if ((Rcell_log = tc_calc_rtable(opt.rate.rate, rtab, Rcell_log, mtu, mpu)) < 0) {
+	opt.rate.mpu = mpu;
+	if (tc_calc_rtable(&opt.rate, rtab, Rcell_log, mtu) < 0) {
 		fprintf(stderr, "TBF: failed to calculate rate table.\n");
 		return -1;
 	}
 	opt.buffer = tc_calc_xmittime(opt.rate.rate, buffer);
-	opt.rate.cell_log = Rcell_log;
-	opt.rate.mpu = mpu;
+
 	if (opt.peakrate.rate) {
-		if ((Pcell_log = tc_calc_rtable(opt.peakrate.rate, ptab, Pcell_log, mtu, mpu)) < 0) {
+		opt.peakrate.mpu = mpu;
+		if (tc_calc_rtable(&opt.peakrate, ptab, Pcell_log, mtu) < 0) {
 			fprintf(stderr, "TBF: failed to calculate peak rate table.\n");
 			return -1;
 		}
 		opt.mtu = tc_calc_xmittime(opt.peakrate.rate, mtu);
-		opt.peakrate.cell_log = Pcell_log;
-		opt.peakrate.mpu = mpu;
 	}

 	tail = NLMSG_TAIL(n);
diff --git a/tc/tc_core.c b/tc/tc_core.c
index 1ab0ba0..c713a18 100644
--- a/tc/tc_core.c
+++ b/tc/tc_core.c
@@ -69,10 +69,11 @@ unsigned tc_calc_xmitsize(unsigned rate, unsigned ticks)
    rtab[pkt_len>>cell_log] = pkt_xmit_time
 */

-int tc_calc_rtable(unsigned bps, __u32 *rtab, int cell_log, unsigned mtu,
-		   unsigned mpu)
+int tc_calc_rtable(struct tc_ratespec *r, __u32 *rtab, int cell_log, unsigned mtu)
 {
 	int i;
+	unsigned bps = r->rate;
+	unsigned mpu = r->mpu;

 	if (mtu == 0)
 		mtu = 2047;
@@ -88,6 +89,7 @@ int tc_calc_rtable(struct tc_ratespec *r, __u32 *rtab, int cell_log, unsigned mt
 			sz = mpu;
 		rtab[i] = tc_calc_xmittime(bps, sz);
 	}
+	r->cell_log=cell_log;
 	return cell_log;
 }
diff --git a/tc/tc_core.h b/tc/tc_core.h
index a139da6..e98a7b4 100644
--- a/tc/tc_core.h
+++ b/tc/tc_core.h
@@ -13,7 +13,7 @@ long tc_core_time2ktime(long time);
 long tc_core_ktime2time(long ktime);
 unsigned tc_calc_xmittime(unsigned rate, unsigned size);
 unsigned tc_calc_xmitsize(unsigned rate, unsigned ticks);
-int tc_calc_rtable(unsigned bps, __u32 *rtab, int cell_log, unsigned mtu, unsigned mpu);
+int tc_calc_rtable(struct tc_ratespec *r, __u32 *rtab, int cell_log, unsigned mtu);
 int tc_setup_estimator(unsigned A, unsigned time_const, struct tc_estimator *est);
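The effect of the upper-bound change in tc_calc_rtable() can be checked against the mapping tables in the commit message. In this sketch, tc_calc_xmittime() is mocked as the identity function so the result shows the size each entry was built for rather than real tick values; that mock, and all names here, are assumptions for illustration only.

```c
/* Build a 256-entry rate table the upper-bound way: entry i is built
 * for packet size (i + 1) << cell_log, clamped up to mpu. */
enum { RTAB_SIZE = 256 };

static unsigned mock_xmittime(unsigned bps, unsigned size)
{
	(void)bps;
	return size;	/* real code returns a transmit time in ticks */
}

static void calc_rtab_upper(unsigned bps, unsigned rtab[RTAB_SIZE],
			    int cell_log, unsigned mpu)
{
	int i;

	for (i = 0; i < RTAB_SIZE; i++) {
		unsigned sz = (unsigned)(i + 1) << cell_log; /* upper bound */

		if (sz < mpu)
			sz = mpu;
		rtab[i] = mock_xmittime(bps, sz);
	}
}
```

With cell_log=3 this reproduces the "new rate table mapping" above: rtab[0] is built for 8 bytes, rtab[1] for 16, and so on, which is why the kernel side must look up with cell_align=-1.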