* [PATCH net-next 1/2] netem: rate-latency extension
@ 2011-11-24 17:39 Hagen Paul Pfeifer
2011-11-24 17:39 ` [PATCH net-next 2/2] netem: add cell concept to simulate special MAC behavior Hagen Paul Pfeifer
` (3 more replies)
0 siblings, 4 replies; 18+ messages in thread
From: Hagen Paul Pfeifer @ 2011-11-24 17:39 UTC (permalink / raw)
To: netdev; +Cc: Stephen Hemminger, Hagen Paul Pfeifer
Currently netem is not able to emulate channel bandwidth; only a static
delay (and optional random jitter) can be configured.
To emulate the channel rate the token bucket filter (sch_tbf) can be used,
but TBF has some major emulation flaws. The buffer (token bucket depth/rate)
cannot be 0. Also, the idea behind TBF is that the credit (tokens in the
bucket) fills up while no packet is transmitted, so there is always a
"positive" credit for new packets. In real life this behavior contradicts the
laws of nature: nothing can travel faster than the speed of light. E.g. on an
emulated 1000 byte/s link a small IPv4/TCP SYN packet of ~50 bytes requires
~0.05 seconds - not 0 seconds.
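For reference, the delay being modeled is simply packet length divided by the
configured byte rate; a minimal sketch (the helper name is illustrative only,
not part of this patch):

    /* serialization delay of len bytes on a link of rate byte/s */
    static u64 tx_delay_ns(unsigned int len, u32 rate)
    {
        return (u64)len * NSEC_PER_SEC / rate; /* 50 * 10^9 / 1000 ns = 50 ms */
    }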
Netem is an excellent place to implement a rate limiting feature: static
delay is already implemented, tfifo already has time information and the
user can skip TBF configuration completely.
This patch implements the rate-latency feature, which can be configured via
tc, e.g.:
tc qdisc add dev eth0 root netem ratelatency 10kbit
To emulate a link of 5000byte/s and add an additional static delay of 10ms:
tc qdisc add dev eth0 root netem delay 10ms ratelatency 5KBps
Note: similar to TBF, the rate-latency extension is bound to the kernel timing
system. Depending on the architecture's timer granularity, higher rates (e.g.
10mbit/s and higher) tend to produce transmission bursts. Also note that
further queues live in the network adapters; see ethtool(8).
Signed-off-by: Hagen Paul Pfeifer <hagen@jauu.net>
---
include/linux/pkt_sched.h | 5 +++++
net/sched/sch_netem.c | 40 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 45 insertions(+), 0 deletions(-)
diff --git a/include/linux/pkt_sched.h b/include/linux/pkt_sched.h
index c533670..cf826d3 100644
--- a/include/linux/pkt_sched.h
+++ b/include/linux/pkt_sched.h
@@ -465,6 +465,7 @@ enum {
TCA_NETEM_REORDER,
TCA_NETEM_CORRUPT,
TCA_NETEM_LOSS,
+ TCA_NETEM_RATELATENCY,
__TCA_NETEM_MAX,
};
@@ -495,6 +496,10 @@ struct tc_netem_corrupt {
__u32 correlation;
};
+struct tc_netem_ratelatency {
+ __u32 ratelatency; /* byte/s */
+};
+
enum {
NETEM_LOSS_UNSPEC,
NETEM_LOSS_GI, /* General Intuitive - 4 state model */
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index eb3b9a8..3ae1cdd 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -79,6 +79,7 @@ struct netem_sched_data {
u32 duplicate;
u32 reorder;
u32 corrupt;
+ u32 ratelatency;
struct crndstate {
u32 last;
@@ -298,6 +299,11 @@ static psched_tdiff_t tabledist(psched_tdiff_t mu, psched_tdiff_t sigma,
return x / NETEM_DIST_SCALE + (sigma / NETEM_DIST_SCALE) * t + mu;
}
+static psched_time_t packet_len_2_sched_time(unsigned int len, u32 rate)
+{
+ return PSCHED_NS2TICKS((u64)len * NSEC_PER_SEC / rate);
+}
+
/*
* Insert one skb into qdisc.
* Note: parent depends on return value to account for queue length.
@@ -371,6 +377,24 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch)
&q->delay_cor, q->delay_dist);
now = psched_get_time();
+
+ if (q->ratelatency) {
+ struct sk_buff_head *list = &q->qdisc->q;
+
+ delay += packet_len_2_sched_time(skb->len, q->ratelatency);
+
+ if (!skb_queue_empty(list)) {
+ /*
+ * Last packet in queue is reference point (now).
+ * First packet in queue is already in flight,
+ * calculate this time bonus and subtract
+ * from delay.
+ */
+ delay -= now - netem_skb_cb(skb_peek(list))->time_to_send;
+ now = netem_skb_cb(skb_peek_tail(list))->time_to_send;
+ }
+ }
+
cb->time_to_send = now + delay;
++q->counter;
ret = qdisc_enqueue(skb, q->qdisc);
@@ -535,6 +559,14 @@ static void get_corrupt(struct Qdisc *sch, const struct nlattr *attr)
init_crandom(&q->corrupt_cor, r->correlation);
}
+static void get_ratelatency(struct Qdisc *sch, const struct nlattr *attr)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+ const struct tc_netem_ratelatency *r = nla_data(attr);
+
+ q->ratelatency = r->ratelatency;
+}
+
static int get_loss_clg(struct Qdisc *sch, const struct nlattr *attr)
{
struct netem_sched_data *q = qdisc_priv(sch);
@@ -594,6 +626,7 @@ static const struct nla_policy netem_policy[TCA_NETEM_MAX + 1] = {
[TCA_NETEM_CORR] = { .len = sizeof(struct tc_netem_corr) },
[TCA_NETEM_REORDER] = { .len = sizeof(struct tc_netem_reorder) },
[TCA_NETEM_CORRUPT] = { .len = sizeof(struct tc_netem_corrupt) },
+ [TCA_NETEM_RATELATENCY] = { .len = sizeof(struct tc_netem_ratelatency) },
[TCA_NETEM_LOSS] = { .type = NLA_NESTED },
};
@@ -666,6 +699,9 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt)
if (tb[TCA_NETEM_CORRUPT])
get_corrupt(sch, tb[TCA_NETEM_CORRUPT]);
+ if (tb[TCA_NETEM_RATELATENCY])
+ get_ratelatency(sch, tb[TCA_NETEM_RATELATENCY]);
+
q->loss_model = CLG_RANDOM;
if (tb[TCA_NETEM_LOSS])
ret = get_loss_clg(sch, tb[TCA_NETEM_LOSS]);
@@ -846,6 +882,7 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
struct tc_netem_corr cor;
struct tc_netem_reorder reorder;
struct tc_netem_corrupt corrupt;
+ struct tc_netem_ratelatency ratelatency;
qopt.latency = q->latency;
qopt.jitter = q->jitter;
@@ -868,6 +905,9 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
corrupt.correlation = q->corrupt_cor.rho;
NLA_PUT(skb, TCA_NETEM_CORRUPT, sizeof(corrupt), &corrupt);
+ ratelatency.ratelatency = q->ratelatency;
+ NLA_PUT(skb, TCA_NETEM_RATELATENCY, sizeof(ratelatency), &ratelatency);
+
if (dump_loss_model(q, skb) != 0)
goto nla_put_failure;
--
1.7.7
* [PATCH net-next 2/2] netem: add cell concept to simulate special MAC behavior
2011-11-24 17:39 [PATCH net-next 1/2] netem: rate-latency extension Hagen Paul Pfeifer
@ 2011-11-24 17:39 ` Hagen Paul Pfeifer
2011-11-24 22:14 ` [PATCH net-next 1/2] netem: rate-latency extension Eric Dumazet
` (2 subsequent siblings)
3 siblings, 0 replies; 18+ messages in thread
From: Hagen Paul Pfeifer @ 2011-11-24 17:39 UTC (permalink / raw)
To: netdev; +Cc: Stephen Hemminger, Hagen Paul Pfeifer
This extension can be used to simulate special link layer
characteristics. "Simulate" because packet data is not modified; only the
calculation base is changed to delay a packet based on the original
packet size and artificial cell information.
packet_overhead can be used to simulate a link layer header compression
scheme (e.g. set packet_overhead to -20) or with a positive
packet_overhead value an additional MAC header can be simulated. It is
also possible to "replace" the 14 byte Ethernet header with something
else.
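For example, modeling a hypothetical 30 byte radio MAC header in place of the
14 byte Ethernet header means a packet_overhead of 16 (30 - 14), e.g.:

    tc qdisc add dev eth0 root netem ratelatency 5kbit 16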
cell_size and cell_overhead can be used to simulate link layer schemes
based on cells, like some TDMA schemes. Another application area is MAC
schemes that use link layer fragmentation with a (small) header each.
Cell size is the maximum number of data bytes within one cell. Cell
overhead is an additional variable to change the per-cell overhead (e.g.
a 5 byte header per fragment).
Example (5 kbit/s, 20 byte per packet overhead, cellsize 100 byte, per
cell overhead 5 byte):
tc qdisc add dev eth0 root netem ratelatency 5kbit 20 100 5
Signed-off-by: Hagen Paul Pfeifer <hagen@jauu.net>
---
include/linux/pkt_sched.h | 3 +++
net/sched/sch_netem.c | 30 +++++++++++++++++++++++++++---
2 files changed, 30 insertions(+), 3 deletions(-)
diff --git a/include/linux/pkt_sched.h b/include/linux/pkt_sched.h
index cf826d3..5ad3858 100644
--- a/include/linux/pkt_sched.h
+++ b/include/linux/pkt_sched.h
@@ -498,6 +498,9 @@ struct tc_netem_corrupt {
struct tc_netem_ratelatency {
__u32 ratelatency; /* byte/s */
+ __s32 packet_overhead;
+ __u32 cell_size;
+ __s32 cell_overhead;
};
enum {
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 3ae1cdd..40ad634 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -80,6 +80,9 @@ struct netem_sched_data {
u32 reorder;
u32 corrupt;
u32 ratelatency;
+ s32 packet_overhead;
+ u32 cell_size;
+ s32 cell_overhead;
struct crndstate {
u32 last;
@@ -299,9 +302,24 @@ static psched_tdiff_t tabledist(psched_tdiff_t mu, psched_tdiff_t sigma,
return x / NETEM_DIST_SCALE + (sigma / NETEM_DIST_SCALE) * t + mu;
}
-static psched_time_t packet_len_2_sched_time(unsigned int len, u32 rate)
+static psched_time_t packet_len_2_sched_time(unsigned int len,
+ struct netem_sched_data *q)
{
- return PSCHED_NS2TICKS((u64)len * NSEC_PER_SEC / rate);
+ len += q->packet_overhead;
+
+ if (q->cell_size) {
+ u32 carry = len % q->cell_size;
+ len += carry;
+
+ if (q->cell_overhead) {
+ u32 cells = len / q->cell_size;
+ if (carry)
+ cells += 1;
+ len += cells * q->cell_overhead;
+ }
+ }
+
+ return PSCHED_NS2TICKS((u64)len * NSEC_PER_SEC / q->ratelatency);
}
/*
@@ -381,7 +399,7 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch)
if (q->ratelatency) {
struct sk_buff_head *list = &q->qdisc->q;
- delay += packet_len_2_sched_time(skb->len, q->ratelatency);
+ delay += packet_len_2_sched_time(skb->len, q);
if (!skb_queue_empty(list)) {
/*
@@ -565,6 +583,9 @@ static void get_ratelatency(struct Qdisc *sch, const struct nlattr *attr)
const struct tc_netem_ratelatency *r = nla_data(attr);
q->ratelatency = r->ratelatency;
+ q->packet_overhead = r->packet_overhead;
+ q->cell_size = r->cell_size;
+ q->cell_overhead = r->cell_overhead;
}
static int get_loss_clg(struct Qdisc *sch, const struct nlattr *attr)
@@ -906,6 +927,9 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
NLA_PUT(skb, TCA_NETEM_CORRUPT, sizeof(corrupt), &corrupt);
ratelatency.ratelatency = q->ratelatency;
+ ratelatency.packet_overhead = q->packet_overhead;
+ ratelatency.cell_size = q->cell_size;
+ ratelatency.cell_overhead = q->cell_overhead;
NLA_PUT(skb, TCA_NETEM_RATELATENCY, sizeof(ratelatency), &ratelatency);
if (dump_loss_model(q, skb) != 0)
--
1.7.7
* Re: [PATCH net-next 1/2] netem: rate-latency extension
2011-11-24 17:39 [PATCH net-next 1/2] netem: rate-latency extension Hagen Paul Pfeifer
2011-11-24 17:39 ` [PATCH net-next 2/2] netem: add cell concept to simulate special MAC behavior Hagen Paul Pfeifer
@ 2011-11-24 22:14 ` Eric Dumazet
2011-11-24 22:31 ` Hagen Paul Pfeifer
2011-11-25 5:09 ` Stephen Hemminger
2011-11-25 2:22 ` [PATCH v2 net-next 1/2] netem: rate extension Hagen Paul Pfeifer
2011-11-25 2:23 ` [PATCH v2 iproute2 1/2] utils: add s32 parser Hagen Paul Pfeifer
3 siblings, 2 replies; 18+ messages in thread
From: Eric Dumazet @ 2011-11-24 22:14 UTC (permalink / raw)
To: Hagen Paul Pfeifer; +Cc: netdev, Stephen Hemminger
On Thursday, 24 November 2011 at 18:39 +0100, Hagen Paul Pfeifer wrote:
> Currently netem is not able to emulate channel bandwidth; only a static
> delay (and optional random jitter) can be configured.
>
> To emulate the channel rate the token bucket filter (sch_tbf) can be used,
> but TBF has some major emulation flaws. The buffer (token bucket depth/rate)
> cannot be 0. Also, the idea behind TBF is that the credit (tokens in the
> bucket) fills up while no packet is transmitted, so there is always a
> "positive" credit for new packets. In real life this behavior contradicts
> the laws of nature: nothing can travel faster than the speed of light.
> E.g. on an emulated 1000 byte/s link a small IPv4/TCP SYN packet of
> ~50 bytes requires ~0.05 seconds - not 0 seconds.
>
> Netem is an excellent place to implement a rate limiting feature: static
> delay is already implemented, tfifo already has time information and the
> user can skip TBF configuration completely.
>
> This patch implements the rate-latency feature, which can be configured via
> tc, e.g.:
>
> tc qdisc add dev eth0 root netem ratelatency 10kbit
>
> To emulate a link of 5000byte/s and add an additional static delay of 10ms:
>
> tc qdisc add dev eth0 root netem delay 10ms ratelatency 5KBps
>
> Note: similar to TBF, the rate-latency extension is bound to the kernel timing
> system. Depending on the architecture's timer granularity, higher rates (e.g.
> 10mbit/s and higher) tend to produce transmission bursts. Also note that
> further queues live in the network adapters; see ethtool(8).
>
> Signed-off-by: Hagen Paul Pfeifer <hagen@jauu.net>
> ---
> include/linux/pkt_sched.h | 5 +++++
> net/sched/sch_netem.c | 40 ++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 45 insertions(+), 0 deletions(-)
I like this patch, this is a useful extension.
Only point is: why did you choose ratelatency instead of rate?
We want to emulate a real link, and yes, a 1000 byte packet must be
delayed _before_ we deliver it to the device, but it's a detail of how
netem works.
The usual word we use to describe a 1Mbps link is "1Mbps rate" ;)
* Re: [PATCH net-next 1/2] netem: rate-latency extension
2011-11-24 22:14 ` [PATCH net-next 1/2] netem: rate-latency extension Eric Dumazet
@ 2011-11-24 22:31 ` Hagen Paul Pfeifer
2011-11-25 1:06 ` Bill Fink
2011-11-25 5:09 ` Stephen Hemminger
1 sibling, 1 reply; 18+ messages in thread
From: Hagen Paul Pfeifer @ 2011-11-24 22:31 UTC (permalink / raw)
To: Eric Dumazet; +Cc: netdev, Stephen Hemminger
* Eric Dumazet | 2011-11-24 23:14:58 [+0100]:
>Only point is why you chose ratelatency instead of rate ?
Not sure why; it was called rate in v1, then somebody suggested ratelatency and I
found it more descriptive. So in v2 it became ratelatency. I have no strong
opinion here - should I generate a v3?
Hagen
* Re: [PATCH net-next 1/2] netem: rate-latency extension
2011-11-24 22:31 ` Hagen Paul Pfeifer
@ 2011-11-25 1:06 ` Bill Fink
2011-11-25 1:23 ` Hagen Paul Pfeifer
0 siblings, 1 reply; 18+ messages in thread
From: Bill Fink @ 2011-11-25 1:06 UTC (permalink / raw)
To: Hagen Paul Pfeifer; +Cc: Eric Dumazet, netdev, Stephen Hemminger
On Thu, 24 Nov 2011, Hagen Paul Pfeifer wrote:
> * Eric Dumazet | 2011-11-24 23:14:58 [+0100]:
>
> >Only point is why you chose ratelatency instead of rate ?
>
>
> Not sure why; it was called rate in v1, then somebody suggested ratelatency and I
> found it more descriptive. So in v2 it became ratelatency. I have no strong
> opinion here - should I generate a v3?
From the user perspective, I also find rate much more natural.
No need to add further to tc obscurity.
I would ask for an update to the netem man page, but I guess
there isn't a netem man page. :-(
-Bill
* Re: [PATCH net-next 1/2] netem: rate-latency extension
2011-11-25 1:06 ` Bill Fink
@ 2011-11-25 1:23 ` Hagen Paul Pfeifer
0 siblings, 0 replies; 18+ messages in thread
From: Hagen Paul Pfeifer @ 2011-11-25 1:23 UTC (permalink / raw)
To: Bill Fink; +Cc: Eric Dumazet, netdev, Stephen Hemminger
* Bill Fink | 2011-11-24 20:06:50 [-0500]:
>From the user perspective, I also find rate much more natural.
>No need to add further to tc obscurity.
ok, then I will respin the patch.
>I would ask for an update to the netem man page, but I guess
>there isn't a netem man page. :-(
Someone wrote a man page, but it was never committed to iproute2. I will have a
look.
Hagen
* [PATCH v2 net-next 1/2] netem: rate extension
2011-11-24 17:39 [PATCH net-next 1/2] netem: rate-latency extension Hagen Paul Pfeifer
2011-11-24 17:39 ` [PATCH net-next 2/2] netem: add cell concept to simulate special MAC behavior Hagen Paul Pfeifer
2011-11-24 22:14 ` [PATCH net-next 1/2] netem: rate-latency extension Eric Dumazet
@ 2011-11-25 2:22 ` Hagen Paul Pfeifer
2011-11-25 2:22 ` [PATCH v2 net-next 2/2] netem: add cell concept to simulate special MAC behavior Hagen Paul Pfeifer
2011-11-26 11:00 ` [PATCH v2 net-next 1/2] netem: rate extension Eric Dumazet
2011-11-25 2:23 ` [PATCH v2 iproute2 1/2] utils: add s32 parser Hagen Paul Pfeifer
3 siblings, 2 replies; 18+ messages in thread
From: Hagen Paul Pfeifer @ 2011-11-25 2:22 UTC (permalink / raw)
To: netdev; +Cc: Stephen Hemminger, Hagen Paul Pfeifer
Currently netem is not able to emulate channel bandwidth; only a static
delay (and optional random jitter) can be configured.
To emulate the channel rate the token bucket filter (sch_tbf) can be used,
but TBF has some major emulation flaws. The buffer (token bucket depth/rate)
cannot be 0. Also, the idea behind TBF is that the credit (tokens in the
bucket) fills up while no packet is transmitted, so there is always a
"positive" credit for new packets. In real life this behavior contradicts the
laws of nature: nothing can travel faster than the speed of light. E.g. on an
emulated 1000 byte/s link a small IPv4/TCP SYN packet of ~50 bytes requires
~0.05 seconds - not 0 seconds.
Netem is an excellent place to implement a rate limiting feature: static
delay is already implemented, tfifo already has time information and the
user can skip TBF configuration completely.
This patch implements the rate feature, which can be configured via tc, e.g.:
tc qdisc add dev eth0 root netem rate 10kbit
To emulate a link of 5000byte/s and add an additional static delay of 10ms:
tc qdisc add dev eth0 root netem delay 10ms rate 5KBps
Note: similar to TBF, the rate extension is bound to the kernel timing
system. Depending on the architecture's timer granularity, higher rates (e.g.
10mbit/s and higher) tend to produce transmission bursts. Also note that
further queues live in the network adapters; see ethtool(8).
Signed-off-by: Hagen Paul Pfeifer <hagen@jauu.net>
---
include/linux/pkt_sched.h | 5 +++++
net/sched/sch_netem.c | 40 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 45 insertions(+), 0 deletions(-)
diff --git a/include/linux/pkt_sched.h b/include/linux/pkt_sched.h
index c533670..26c37ca 100644
--- a/include/linux/pkt_sched.h
+++ b/include/linux/pkt_sched.h
@@ -465,6 +465,7 @@ enum {
TCA_NETEM_REORDER,
TCA_NETEM_CORRUPT,
TCA_NETEM_LOSS,
+ TCA_NETEM_RATE,
__TCA_NETEM_MAX,
};
@@ -495,6 +496,10 @@ struct tc_netem_corrupt {
__u32 correlation;
};
+struct tc_netem_rate {
+ __u32 rate; /* byte/s */
+};
+
enum {
NETEM_LOSS_UNSPEC,
NETEM_LOSS_GI, /* General Intuitive - 4 state model */
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index eb3b9a8..9b7af9f 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -79,6 +79,7 @@ struct netem_sched_data {
u32 duplicate;
u32 reorder;
u32 corrupt;
+ u32 rate;
struct crndstate {
u32 last;
@@ -298,6 +299,11 @@ static psched_tdiff_t tabledist(psched_tdiff_t mu, psched_tdiff_t sigma,
return x / NETEM_DIST_SCALE + (sigma / NETEM_DIST_SCALE) * t + mu;
}
+static psched_time_t packet_len_2_sched_time(unsigned int len, u32 rate)
+{
+ return PSCHED_NS2TICKS((u64)len * NSEC_PER_SEC / rate);
+}
+
/*
* Insert one skb into qdisc.
* Note: parent depends on return value to account for queue length.
@@ -371,6 +377,24 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch)
&q->delay_cor, q->delay_dist);
now = psched_get_time();
+
+ if (q->rate) {
+ struct sk_buff_head *list = &q->qdisc->q;
+
+ delay += packet_len_2_sched_time(skb->len, q->rate);
+
+ if (!skb_queue_empty(list)) {
+ /*
+ * Last packet in queue is reference point (now).
+ * First packet in queue is already in flight,
+ * calculate this time bonus and subtract
+ * from delay.
+ */
+ delay -= now - netem_skb_cb(skb_peek(list))->time_to_send;
+ now = netem_skb_cb(skb_peek_tail(list))->time_to_send;
+ }
+ }
+
cb->time_to_send = now + delay;
++q->counter;
ret = qdisc_enqueue(skb, q->qdisc);
@@ -535,6 +559,14 @@ static void get_corrupt(struct Qdisc *sch, const struct nlattr *attr)
init_crandom(&q->corrupt_cor, r->correlation);
}
+static void get_rate(struct Qdisc *sch, const struct nlattr *attr)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+ const struct tc_netem_rate *r = nla_data(attr);
+
+ q->rate = r->rate;
+}
+
static int get_loss_clg(struct Qdisc *sch, const struct nlattr *attr)
{
struct netem_sched_data *q = qdisc_priv(sch);
@@ -594,6 +626,7 @@ static const struct nla_policy netem_policy[TCA_NETEM_MAX + 1] = {
[TCA_NETEM_CORR] = { .len = sizeof(struct tc_netem_corr) },
[TCA_NETEM_REORDER] = { .len = sizeof(struct tc_netem_reorder) },
[TCA_NETEM_CORRUPT] = { .len = sizeof(struct tc_netem_corrupt) },
+ [TCA_NETEM_RATE] = { .len = sizeof(struct tc_netem_rate) },
[TCA_NETEM_LOSS] = { .type = NLA_NESTED },
};
@@ -666,6 +699,9 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt)
if (tb[TCA_NETEM_CORRUPT])
get_corrupt(sch, tb[TCA_NETEM_CORRUPT]);
+ if (tb[TCA_NETEM_RATE])
+ get_rate(sch, tb[TCA_NETEM_RATE]);
+
q->loss_model = CLG_RANDOM;
if (tb[TCA_NETEM_LOSS])
ret = get_loss_clg(sch, tb[TCA_NETEM_LOSS]);
@@ -846,6 +882,7 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
struct tc_netem_corr cor;
struct tc_netem_reorder reorder;
struct tc_netem_corrupt corrupt;
+ struct tc_netem_rate rate;
qopt.latency = q->latency;
qopt.jitter = q->jitter;
@@ -868,6 +905,9 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
corrupt.correlation = q->corrupt_cor.rho;
NLA_PUT(skb, TCA_NETEM_CORRUPT, sizeof(corrupt), &corrupt);
+ rate.rate = q->rate;
+ NLA_PUT(skb, TCA_NETEM_RATE, sizeof(rate), &rate);
+
if (dump_loss_model(q, skb) != 0)
goto nla_put_failure;
--
1.7.7
* [PATCH v2 net-next 2/2] netem: add cell concept to simulate special MAC behavior
2011-11-25 2:22 ` [PATCH v2 net-next 1/2] netem: rate extension Hagen Paul Pfeifer
@ 2011-11-25 2:22 ` Hagen Paul Pfeifer
2011-11-28 23:01 ` Eric Dumazet
2011-11-26 11:00 ` [PATCH v2 net-next 1/2] netem: rate extension Eric Dumazet
1 sibling, 1 reply; 18+ messages in thread
From: Hagen Paul Pfeifer @ 2011-11-25 2:22 UTC (permalink / raw)
To: netdev; +Cc: Stephen Hemminger, Hagen Paul Pfeifer
This extension can be used to simulate special link layer
characteristics. "Simulate" because packet data is not modified; only the
calculation base is changed to delay a packet based on the original
packet size and artificial cell information.
packet_overhead can be used to simulate a link layer header compression
scheme (e.g. set packet_overhead to -20) or with a positive
packet_overhead value an additional MAC header can be simulated. It is
also possible to "replace" the 14 byte Ethernet header with something
else.
cell_size and cell_overhead can be used to simulate link layer schemes
based on cells, like some TDMA schemes. Another application area is MAC
schemes that use link layer fragmentation with a (small) header each.
Cell size is the maximum number of data bytes within one cell. Cell
overhead is an additional variable to change the per-cell overhead (e.g.
a 5 byte header per fragment).
Example (5 kbit/s, 20 byte per packet overhead, cellsize 100 byte, per
cell overhead 5 byte):
tc qdisc add dev eth0 root netem rate 5kbit 20 100 5
Signed-off-by: Hagen Paul Pfeifer <hagen@jauu.net>
---
include/linux/pkt_sched.h | 3 +++
net/sched/sch_netem.c | 30 +++++++++++++++++++++++++++---
2 files changed, 30 insertions(+), 3 deletions(-)
diff --git a/include/linux/pkt_sched.h b/include/linux/pkt_sched.h
index 26c37ca..63845cf 100644
--- a/include/linux/pkt_sched.h
+++ b/include/linux/pkt_sched.h
@@ -498,6 +498,9 @@ struct tc_netem_corrupt {
struct tc_netem_rate {
__u32 rate; /* byte/s */
+ __s32 packet_overhead;
+ __u32 cell_size;
+ __s32 cell_overhead;
};
enum {
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 9b7af9f..11ca527 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -80,6 +80,9 @@ struct netem_sched_data {
u32 reorder;
u32 corrupt;
u32 rate;
+ s32 packet_overhead;
+ u32 cell_size;
+ s32 cell_overhead;
struct crndstate {
u32 last;
@@ -299,9 +302,24 @@ static psched_tdiff_t tabledist(psched_tdiff_t mu, psched_tdiff_t sigma,
return x / NETEM_DIST_SCALE + (sigma / NETEM_DIST_SCALE) * t + mu;
}
-static psched_time_t packet_len_2_sched_time(unsigned int len, u32 rate)
+static psched_time_t packet_len_2_sched_time(unsigned int len,
+ struct netem_sched_data *q)
{
- return PSCHED_NS2TICKS((u64)len * NSEC_PER_SEC / rate);
+ len += q->packet_overhead;
+
+ if (q->cell_size) {
+ u32 carry = len % q->cell_size;
+ len += carry;
+
+ if (q->cell_overhead) {
+ u32 cells = len / q->cell_size;
+ if (carry)
+ cells += 1;
+ len += cells * q->cell_overhead;
+ }
+ }
+
+ return PSCHED_NS2TICKS((u64)len * NSEC_PER_SEC / q->rate);
}
/*
@@ -381,7 +399,7 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch)
if (q->rate) {
struct sk_buff_head *list = &q->qdisc->q;
- delay += packet_len_2_sched_time(skb->len, q->rate);
+ delay += packet_len_2_sched_time(skb->len, q);
if (!skb_queue_empty(list)) {
/*
@@ -565,6 +583,9 @@ static void get_rate(struct Qdisc *sch, const struct nlattr *attr)
const struct tc_netem_rate *r = nla_data(attr);
q->rate = r->rate;
+ q->packet_overhead = r->packet_overhead;
+ q->cell_size = r->cell_size;
+ q->cell_overhead = r->cell_overhead;
}
static int get_loss_clg(struct Qdisc *sch, const struct nlattr *attr)
@@ -906,6 +927,9 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
NLA_PUT(skb, TCA_NETEM_CORRUPT, sizeof(corrupt), &corrupt);
rate.rate = q->rate;
+ rate.packet_overhead = q->packet_overhead;
+ rate.cell_size = q->cell_size;
+ rate.cell_overhead = q->cell_overhead;
NLA_PUT(skb, TCA_NETEM_RATE, sizeof(rate), &rate);
if (dump_loss_model(q, skb) != 0)
--
1.7.7
* [PATCH v2 iproute2 1/2] utils: add s32 parser
2011-11-24 17:39 [PATCH net-next 1/2] netem: rate-latency extension Hagen Paul Pfeifer
` (2 preceding siblings ...)
2011-11-25 2:22 ` [PATCH v2 net-next 1/2] netem: rate extension Hagen Paul Pfeifer
@ 2011-11-25 2:23 ` Hagen Paul Pfeifer
2011-11-25 2:23 ` [PATCH v2 iproute2 2/2] tc: netem rate shaping and cell extension Hagen Paul Pfeifer
3 siblings, 1 reply; 18+ messages in thread
From: Hagen Paul Pfeifer @ 2011-11-25 2:23 UTC (permalink / raw)
To: netdev; +Cc: Stephen Hemminger, Hagen Paul Pfeifer
Signed-off-by: Hagen Paul Pfeifer <hagen@jauu.net>
---
include/utils.h | 1 +
lib/utils.c | 14 ++++++++++++++
2 files changed, 15 insertions(+), 0 deletions(-)
diff --git a/include/utils.h b/include/utils.h
index 47f8e07..496db68 100644
--- a/include/utils.h
+++ b/include/utils.h
@@ -85,6 +85,7 @@ extern int get_time_rtt(unsigned *val, const char *arg, int *raw);
#define get_short get_s16
extern int get_u64(__u64 *val, const char *arg, int base);
extern int get_u32(__u32 *val, const char *arg, int base);
+extern int get_s32(__s32 *val, const char *arg, int base);
extern int get_u16(__u16 *val, const char *arg, int base);
extern int get_s16(__s16 *val, const char *arg, int base);
extern int get_u8(__u8 *val, const char *arg, int base);
diff --git a/lib/utils.c b/lib/utils.c
index efaf377..6788dd9 100644
--- a/lib/utils.c
+++ b/lib/utils.c
@@ -198,6 +198,20 @@ int get_u8(__u8 *val, const char *arg, int base)
return 0;
}
+int get_s32(__s32 *val, const char *arg, int base)
+{
+ long res;
+ char *ptr;
+
+ if (!arg || !*arg)
+ return -1;
+ res = strtol(arg, &ptr, base);
+ if (!ptr || ptr == arg || *ptr || res > INT32_MAX || res < INT32_MIN)
+ return -1;
+ *val = res;
+ return 0;
+}
+
int get_s16(__s16 *val, const char *arg, int base)
{
long res;
--
1.7.7
* [PATCH v2 iproute2 2/2] tc: netem rate shaping and cell extension
2011-11-25 2:23 ` [PATCH v2 iproute2 1/2] utils: add s32 parser Hagen Paul Pfeifer
@ 2011-11-25 2:23 ` Hagen Paul Pfeifer
0 siblings, 0 replies; 18+ messages in thread
From: Hagen Paul Pfeifer @ 2011-11-25 2:23 UTC (permalink / raw)
To: netdev; +Cc: Stephen Hemminger, Hagen Paul Pfeifer
This patch adds rate shaping as well as cell support. The link rate can be
specified via the rate option. Three optional arguments control the cell
knobs: packet-overhead, cell-size, cell-overhead. To rate limit the eth0 root
queue to 5kbit/s, with a 20 byte packet overhead, 100 byte cell size and
a 5 byte per-cell overhead:
tc qdisc add dev eth0 root netem rate 5kbit 20 100 5
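A negative packet overhead is accepted as well (this is what the s32 parser in
patch 1/2 is for), e.g. to approximate a hypothetical header compression scheme
saving 20 bytes per packet:

    tc qdisc add dev eth0 root netem rate 5kbit -20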
Signed-off-by: Hagen Paul Pfeifer <hagen@jauu.net>
---
include/linux/pkt_sched.h | 8 ++++++
tc/q_netem.c | 53 ++++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 60 insertions(+), 1 deletions(-)
diff --git a/include/linux/pkt_sched.h b/include/linux/pkt_sched.h
index c533670..eaf4e9e 100644
--- a/include/linux/pkt_sched.h
+++ b/include/linux/pkt_sched.h
@@ -465,6 +465,7 @@ enum {
TCA_NETEM_REORDER,
TCA_NETEM_CORRUPT,
TCA_NETEM_LOSS,
+ TCA_NETEM_RATE,
__TCA_NETEM_MAX,
};
@@ -495,6 +496,13 @@ struct tc_netem_corrupt {
__u32 correlation;
};
+struct tc_netem_rate {
+ __u32 rate; /* byte/s */
+ __s32 packet_overhead;
+ __u32 cell_size;
+ __s32 cell_overhead;
+};
+
enum {
NETEM_LOSS_UNSPEC,
NETEM_LOSS_GI, /* General Intuitive - 4 state model */
diff --git a/tc/q_netem.c b/tc/q_netem.c
index 6dc40bd..1fdfa44 100644
--- a/tc/q_netem.c
+++ b/tc/q_netem.c
@@ -34,7 +34,8 @@ static void explain(void)
" [ drop PERCENT [CORRELATION]] \n" \
" [ corrupt PERCENT [CORRELATION]] \n" \
" [ duplicate PERCENT [CORRELATION]]\n" \
-" [ reorder PRECENT [CORRELATION] [ gap DISTANCE ]]\n");
+" [ reorder PRECENT [CORRELATION] [ gap DISTANCE ]]\n" \
+" [ rate RATE [PACKETOVERHEAD] [CELLSIZE] [CELLOVERHEAD]]\n");
}
static void explain1(const char *arg)
@@ -131,6 +132,7 @@ static int netem_parse_opt(struct qdisc_util *qu, int argc, char **argv,
struct tc_netem_corr cor;
struct tc_netem_reorder reorder;
struct tc_netem_corrupt corrupt;
+ struct tc_netem_rate rate;
__s16 *dist_data = NULL;
int present[__TCA_NETEM_MAX];
@@ -139,6 +141,7 @@ static int netem_parse_opt(struct qdisc_util *qu, int argc, char **argv,
memset(&cor, 0, sizeof(cor));
memset(&reorder, 0, sizeof(reorder));
memset(&corrupt, 0, sizeof(corrupt));
+ memset(&rate, 0, sizeof(rate));
memset(present, 0, sizeof(present));
while (argc > 0) {
@@ -244,6 +247,34 @@ static int netem_parse_opt(struct qdisc_util *qu, int argc, char **argv,
free(dist_data);
return -1;
}
+ } else if (matches(*argv, "rate") == 0) {
+ ++present[TCA_NETEM_RATE];
+ NEXT_ARG();
+ if (get_rate(&rate.rate, *argv)) {
+ explain1("rate");
+ return -1;
+ }
+ if (NEXT_IS_NUMBER()) {
+ NEXT_ARG();
+ if (get_s32(&rate.packet_overhead, *argv, 0)) {
+ explain1("rate");
+ return -1;
+ }
+ }
+ if (NEXT_IS_NUMBER()) {
+ NEXT_ARG();
+ if (get_u32(&rate.cell_size, *argv, 0)) {
+ explain1("rate");
+ return -1;
+ }
+ }
+ if (NEXT_IS_NUMBER()) {
+ NEXT_ARG();
+ if (get_s32(&rate.cell_overhead, *argv, 0)) {
+ explain1("rate");
+ return -1;
+ }
+ }
} else if (strcmp(*argv, "help") == 0) {
explain();
return -1;
@@ -290,6 +321,10 @@ static int netem_parse_opt(struct qdisc_util *qu, int argc, char **argv,
addattr_l(n, 1024, TCA_NETEM_CORRUPT, &corrupt, sizeof(corrupt)) < 0)
return -1;
+ if (present[TCA_NETEM_RATE] &&
+ addattr_l(n, 1024, TCA_NETEM_RATE, &rate, sizeof(rate)) < 0)
+ return -1;
+
if (dist_data) {
if (addattr_l(n, MAX_DIST * sizeof(dist_data[0]),
TCA_NETEM_DELAY_DIST,
@@ -306,6 +341,7 @@ static int netem_print_opt(struct qdisc_util *qu, FILE *f, struct rtattr *opt)
const struct tc_netem_corr *cor = NULL;
const struct tc_netem_reorder *reorder = NULL;
const struct tc_netem_corrupt *corrupt = NULL;
+ const struct tc_netem_rate *rate = NULL;
struct tc_netem_qopt qopt;
int len = RTA_PAYLOAD(opt) - sizeof(qopt);
SPRINT_BUF(b1);
@@ -339,6 +375,11 @@ static int netem_print_opt(struct qdisc_util *qu, FILE *f, struct rtattr *opt)
return -1;
corrupt = RTA_DATA(tb[TCA_NETEM_CORRUPT]);
}
+ if (tb[TCA_NETEM_RATE]) {
+ if (RTA_PAYLOAD(tb[TCA_NETEM_RATE]) < sizeof(*rate))
+ return -1;
+ rate = RTA_DATA(tb[TCA_NETEM_RATE]);
+ }
}
fprintf(f, "limit %d", qopt.limit);
@@ -382,6 +423,16 @@ static int netem_print_opt(struct qdisc_util *qu, FILE *f, struct rtattr *opt)
sprint_percent(corrupt->correlation, b1));
}
+ if (rate && rate->rate) {
+ fprintf(f, " rate %s", sprint_rate(rate->rate, b1));
+ if (rate->packet_overhead)
+ fprintf(f, " packetoverhead %d", rate->packet_overhead);
+ if (rate->cell_size)
+ fprintf(f, " cellsize %u", rate->cell_size);
+ if (rate->cell_overhead)
+ fprintf(f, " celloverhead %d", rate->cell_overhead);
+ }
+
if (qopt.gap)
fprintf(f, " gap %lu", (unsigned long)qopt.gap);
--
1.7.7
* Re: [PATCH net-next 1/2] netem: rate-latency extension
2011-11-24 22:14 ` [PATCH net-next 1/2] netem: rate-latency extension Eric Dumazet
2011-11-24 22:31 ` Hagen Paul Pfeifer
@ 2011-11-25 5:09 ` Stephen Hemminger
2011-11-25 6:13 ` Eric Dumazet
1 sibling, 1 reply; 18+ messages in thread
From: Stephen Hemminger @ 2011-11-25 5:09 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Hagen Paul Pfeifer, netdev
On Thu, 24 Nov 2011 23:14:58 +0100
Eric Dumazet <eric.dumazet@gmail.com> wrote:
> I like this patch, this is a useful extension.
>
> Only point is: why did you choose ratelatency instead of rate?
>
> We want to emulate a real link, and yes, a 1000 byte packet must be
> delayed _before_ we deliver it to the device, but it's a detail of how
> netem works.
>
> The usual word we use to describe a 1Mbps link is "1Mbps rate" ;)
I would rather have a new qdisc than add more features to the already complex
netem. Initially, there was a rate control built into netem, but
the consensus was to use stacking to do it.
* Re: [PATCH net-next 1/2] netem: rate-latency extension
2011-11-25 5:09 ` Stephen Hemminger
@ 2011-11-25 6:13 ` Eric Dumazet
2011-11-25 12:02 ` Hagen Paul Pfeifer
0 siblings, 1 reply; 18+ messages in thread
From: Eric Dumazet @ 2011-11-25 6:13 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Hagen Paul Pfeifer, netdev
On Thursday, 24 November 2011 at 21:09 -0800, Stephen Hemminger wrote:
> I would rather have a new qdisc than add more features to the already complex
> netem. Initially, there was a rate control built into netem, but
> the consensus was to use stacking to do it.
Yes, but Hagen's change adds a few lines to netem, and netem already
handles throttling. This is why I believe it's a nice enhancement.
Being able to simulate a rate limit (in bits per second by the way, the
usual bandwidth unit, not bytes per second...) in a very easy way seems like a
good thing, even if it handles only the egress side.
As Hagen mentioned, a standard qdisc is able to rate limit, but the
first packet sent has a null delay, even if it is a 64 Kbyte packet. It
doesn't mimic a true link.
* Re: [PATCH net-next 1/2] netem: rate-latency extension
2011-11-25 6:13 ` Eric Dumazet
@ 2011-11-25 12:02 ` Hagen Paul Pfeifer
0 siblings, 0 replies; 18+ messages in thread
From: Hagen Paul Pfeifer @ 2011-11-25 12:02 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Stephen Hemminger, netdev
* Eric Dumazet | 2011-11-25 07:13:20 [+0100]:
>Yes, but Hagen's change adds a few lines to netem, and netem already
>handles throttling. This is why I believe it's a nice enhancement.
We first modified TBF, but TBF addresses a slightly different task. So the patch
was a little bit awkward and complex (more awkward than the two additional
netem enqueue() lines). So in the end: yes, netem is the right place for this;
only a few lines in netem are required. Additionally, setting up a qdisc chain
with TBF, netem, ... is also error prone. Students of mine repeatedly make
mistakes here. This change makes a complete emulation setup even easier. But
this is only a side note.
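For comparison, the stacked setup looks roughly like this (handles, rate and
buffer values are placeholders only):

    tc qdisc add dev eth0 root handle 1:0 netem delay 10ms
    tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 5kbit buffer 1600 limit 3000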
Hagen
* Re: [PATCH v2 net-next 1/2] netem: rate extension
2011-11-25 2:22 ` [PATCH v2 net-next 1/2] netem: rate extension Hagen Paul Pfeifer
2011-11-25 2:22 ` [PATCH v2 net-next 2/2] netem: add cell concept to simulate special MAC behavior Hagen Paul Pfeifer
@ 2011-11-26 11:00 ` Eric Dumazet
1 sibling, 0 replies; 18+ messages in thread
From: Eric Dumazet @ 2011-11-26 11:00 UTC (permalink / raw)
To: Hagen Paul Pfeifer; +Cc: netdev, Stephen Hemminger
On Friday, 25 November 2011 at 03:22 +0100, Hagen Paul Pfeifer wrote:
> Currently netem is not able to emulate channel bandwidth; only a static
> delay (and optional random jitter) can be configured.
>
> To emulate the channel rate the token bucket filter (sch_tbf) can be used,
> but TBF has some major emulation flaws. The buffer (token bucket depth/rate)
> cannot be 0. Also, the idea behind TBF is that the credit (tokens in the
> bucket) fills up while no packet is transmitted, so there is always a
> "positive" credit for new packets. In real life this behavior contradicts
> the laws of nature: nothing can travel faster than the speed of light.
> E.g. on an emulated 1000 byte/s link a small IPv4/TCP SYN packet of
> ~50 bytes requires ~0.05 seconds - not 0 seconds.
>
> Netem is an excellent place to implement a rate limiting feature: static
> delay is already implemented, tfifo already has time information and the
> user can skip TBF configuration completely.
>
> This patch implements the rate feature, which can be configured via tc, e.g.:
>
> tc qdisc add dev eth0 root netem rate 10kbit
>
> To emulate a link of 5000byte/s and add an additional static delay of 10ms:
>
> tc qdisc add dev eth0 root netem delay 10ms rate 5KBps
>
> Note: similar to TBF, the rate extension is bound to the kernel timing
> system. Depending on the architecture's timer granularity, higher rates (e.g.
> 10mbit/s and higher) tend to produce transmission bursts. Also note that
> further queues live in the network adapters; see ethtool(8).
>
> Signed-off-by: Hagen Paul Pfeifer <hagen@jauu.net>
> ---
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
* Re: [PATCH v2 net-next 2/2] netem: add cell concept to simulate special MAC behavior
2011-11-25 2:22 ` [PATCH v2 net-next 2/2] netem: add cell concept to simulate special MAC behavior Hagen Paul Pfeifer
@ 2011-11-28 23:01 ` Eric Dumazet
2011-11-28 23:30 ` Hagen Paul Pfeifer
0 siblings, 1 reply; 18+ messages in thread
From: Eric Dumazet @ 2011-11-28 23:01 UTC (permalink / raw)
To: Hagen Paul Pfeifer; +Cc: netdev, Stephen Hemminger
On Friday, 25 November 2011 at 03:22 +0100, Hagen Paul Pfeifer wrote:
> This extension can be used to simulate special link layer
> characteristics. "Simulate" because packet data is not modified; only the
> calculation base is changed to delay a packet based on the original
> packet size and artificial cell information.
>
> packet_overhead can be used to simulate a link layer header compression
> scheme (e.g. set packet_overhead to -20) or with a positive
> packet_overhead value an additional MAC header can be simulated. It is
> also possible to "replace" the 14 byte Ethernet header with something
> else.
>
> cell_size and cell_overhead can be used to simulate link layer schemes
> based on cells, like some TDMA schemes. Another application area is MAC
> schemes that use link layer fragmentation with a (small) header each.
> Cell size is the maximum number of data bytes within one cell. Cell
> overhead is an additional variable to change the per-cell overhead (e.g.
> a 5 byte header per fragment).
>
> Example (5 kbit/s, 20 byte per packet overhead, cellsize 100 byte, per
> cell overhead 5 byte):
>
> tc qdisc add dev eth0 root netem rate 5kbit 20 100 5
>
> Signed-off-by: Hagen Paul Pfeifer <hagen@jauu.net>
> ---
> include/linux/pkt_sched.h | 3 +++
> net/sched/sch_netem.c | 30 +++++++++++++++++++++++++++---
> 2 files changed, 30 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/pkt_sched.h b/include/linux/pkt_sched.h
> index 26c37ca..63845cf 100644
> --- a/include/linux/pkt_sched.h
> +++ b/include/linux/pkt_sched.h
> @@ -498,6 +498,9 @@ struct tc_netem_corrupt {
>
> struct tc_netem_rate {
> __u32 rate; /* byte/s */
> + __s32 packet_overhead;
> + __u32 cell_size;
> + __s32 cell_overhead;
> };
>
> enum {
> diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
> index 9b7af9f..11ca527 100644
> --- a/net/sched/sch_netem.c
> +++ b/net/sched/sch_netem.c
> @@ -80,6 +80,9 @@ struct netem_sched_data {
> u32 reorder;
> u32 corrupt;
> u32 rate;
> + s32 packet_overhead;
> + u32 cell_size;
> + s32 cell_overhead;
>
> struct crndstate {
> u32 last;
> @@ -299,9 +302,24 @@ static psched_tdiff_t tabledist(psched_tdiff_t mu, psched_tdiff_t sigma,
> return x / NETEM_DIST_SCALE + (sigma / NETEM_DIST_SCALE) * t + mu;
> }
>
> -static psched_time_t packet_len_2_sched_time(unsigned int len, u32 rate)
> +static psched_time_t packet_len_2_sched_time(unsigned int len,
> + struct netem_sched_data *q)
> {
> - return PSCHED_NS2TICKS((u64)len * NSEC_PER_SEC / rate);
> + len += q->packet_overhead;
> +
> + if (q->cell_size) {
> + u32 carry = len % q->cell_size;
> + len += carry;
I don't understand this part (len += carry;)
Also you use a lot of divides... Probably OK for netem...
> +
> + if (q->cell_overhead) {
> + u32 cells = len / q->cell_size;
> + if (carry)
> + cells += 1;
> + len += cells * q->cell_overhead;
> + }
> + }
> +
> + return PSCHED_NS2TICKS((u64)len * NSEC_PER_SEC / q->rate);
> }
* Re: [PATCH v2 net-next 2/2] netem: add cell concept to simulate special MAC behavior
2011-11-28 23:01 ` Eric Dumazet
@ 2011-11-28 23:30 ` Hagen Paul Pfeifer
2011-11-28 23:48 ` Eric Dumazet
0 siblings, 1 reply; 18+ messages in thread
From: Hagen Paul Pfeifer @ 2011-11-28 23:30 UTC (permalink / raw)
To: Eric Dumazet; +Cc: netdev, Stephen Hemminger
* Eric Dumazet | 2011-11-29 00:01:07 [+0100]:
>> -static psched_time_t packet_len_2_sched_time(unsigned int len, u32 rate)
>> +static psched_time_t packet_len_2_sched_time(unsigned int len,
>> + struct netem_sched_data *q)
>> {
>> - return PSCHED_NS2TICKS((u64)len * NSEC_PER_SEC / rate);
>> + len += q->packet_overhead;
>> +
>> + if (q->cell_size) {
>> + u32 carry = len % q->cell_size;
>> + len += carry;
>
>I don't understand this part (len += carry;)
Say the original packet is 100 bytes and the cell size is 40 bytes: three
full-size link layer frames are required: 40 + 40 + 40 == 100 + 20. This is
used for TDMA, ATM or slot schemes where the remainder cannot be used.
Later in the code, carry is reused if a cell overhead is configured.
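A sketch of that intended computation (rounding the length up to whole cells;
illustrative only, not the code as posted):

    cells = DIV_ROUND_UP(len, q->cell_size); /* 100 / 40 -> 3 cells */
    len = cells * q->cell_size;              /* 3 * 40 = 120 byte on the link */
    len += cells * q->cell_overhead;         /* plus a per-cell header, if any */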
>Also you use a lot of divides... Probably OK for netem...
I know, but
1) the branch is not hot (not taken at all if rate is not configured)
2) cell_size cannot be restricted to a power of two - so I saw no real
optimization potential.
Hagen
* Re: [PATCH v2 net-next 2/2] netem: add cell concept to simulate special MAC behavior
2011-11-28 23:30 ` Hagen Paul Pfeifer
@ 2011-11-28 23:48 ` Eric Dumazet
2011-11-29 0:07 ` Hagen Paul Pfeifer
0 siblings, 1 reply; 18+ messages in thread
From: Eric Dumazet @ 2011-11-28 23:48 UTC (permalink / raw)
To: Hagen Paul Pfeifer; +Cc: netdev, Stephen Hemminger
On Tuesday, 29 November 2011 at 00:30 +0100, Hagen Paul Pfeifer wrote:
> * Eric Dumazet | 2011-11-29 00:01:07 [+0100]:
>
> >> -static psched_time_t packet_len_2_sched_time(unsigned int len, u32 rate)
> >> +static psched_time_t packet_len_2_sched_time(unsigned int len,
> >> + struct netem_sched_data *q)
> >> {
> >> - return PSCHED_NS2TICKS((u64)len * NSEC_PER_SEC / rate);
> >> + len += q->packet_overhead;
> >> +
> >> + if (q->cell_size) {
> >> + u32 carry = len % q->cell_size;
> >> + len += carry;
> >
> >I don't understand this part (len += carry;)
>
> Say the original packet is 100 bytes and the cell size is 40 bytes: three
> full-size link layer frames are required: 40 + 40 + 40 == 100 + 20. This is
> used for TDMA, ATM or slot schemes where the remainder cannot be used.
>
I still don't understand.
Say you send 119 bytes
119 % 40 = 39
119 + 39 = 158
Is it what is really needed?
> Later in the code, carry is reused if a cell overhead is configured.
>
In this example, cells will be :
158 / 40 = 3 + one (because carry is not 0)
len += 4 * cell_overhead
* Re: [PATCH v2 net-next 2/2] netem: add cell concept to simulate special MAC behavior
2011-11-28 23:48 ` Eric Dumazet
@ 2011-11-29 0:07 ` Hagen Paul Pfeifer
0 siblings, 0 replies; 18+ messages in thread
From: Hagen Paul Pfeifer @ 2011-11-29 0:07 UTC (permalink / raw)
To: Eric Dumazet; +Cc: netdev, Stephen Hemminger
* Eric Dumazet | 2011-11-29 00:48:45 [+0100]:
>Is it what is really needed?
argl ... I will repost a final version - thank you Eric!