* tbf/htb qdisc limitations
From: Steven Brudenell @ 2010-10-08 20:58 UTC
To: netdev

hi folks,

i was disappointed recently to find that i can't set the "burst" parameter very high on the tbf or htb qdiscs. the actual limit on the burst parameter varies with the rate parameter. at the relatively low rate i want to set, i want the burst parameter to be several gigabytes, but i'm actually limited to only a few megabytes.

(motivation: a fully automated way to stay inside the monthly transfer limits imposed by many ISPs these days, without resorting to a constant rate limit. for example, comcast limits its customers to 250GB/month, which is about 101KB/s; many cellular data plans in the US limit to 5GB/month =~ 2KB/s.)

i'll gladly code a patch, but i'd like the list's advice on whether this is necessary, and a little bit about how to proceed:

1) what is the purpose of the "rate tables" used in these qdiscs -- why use them instead of dividing bytes by time to compute a rate? i assume the answer has something to do with restrictions on using floating-point math (maybe even integer division?) in different places / interruptibility states in the kernel. maybe this is documented on kernelnewbies somewhere, but i couldn't find it.

2) is there an established procedure for versioning a netlink interface? today the netlink interface for tbf and htb is horribly implementation-coupled (the "burst" parameter needs to be munged according to the "rate" parameter and the kernel tick rate). i think i would need to change these interfaces in order to change the accounting implementation in the corresponding qdisc. however, i probably want to remain compatible with old userspace.

~steve
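For reference, the quoted figures follow from simple arithmetic over a 30-day month (assuming GB means 2^30 bytes; an ISP counting 10^9 shifts the numbers slightly):

    250 GB/month = 250 * 2^30 B / (30 * 86400 s) =~ 103,500 B/s =~ 101 KB/s
      5 GB/month =   5 * 2^30 B / (30 * 86400 s) =~   2,070 B/s =~   2 KB/s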
* Re: tbf/htb qdisc limitations
From: Jarek Poplawski @ 2010-10-10 11:23 UTC
To: Steven Brudenell; +Cc: netdev

Steven Brudenell wrote:
> hi folks,
>
> i was disappointed recently to find that i can't set the "burst"
> parameter very high on the tbf or htb qdiscs. [...]
>
> (motivation: a fully automated way to stay inside the monthly
> transfer limits imposed by many ISPs these days, without resorting
> to a constant rate limit. [...])

I'm not sure you checked how the "burst" works, and I doubt it could help you here. Anyway, do you think a config of rate 2KB/s with burst 5GB would be useful for you?

> i'll gladly code a patch, but i'd like the list's advice on whether
> this is necessary, and a little bit about how to proceed:
>
> 1) what is the purpose of the "rate tables" used in these qdiscs [...]
>
> 2) is there an established procedure for versioning a netlink
> interface? [...]

My proposal is that you don't bother with 1) and 2), but first do the hack in tbf or htb directly, using or omitting rate tables as you like, and test the idea. But it seems the right way is to collect monthly stats with some userspace tool and change the qdisc config dynamically. You might look on network admins' lists for small ISPs' exemplary scripts doing such nasty things to their users, or have a look at ppp accounting tools.

Jarek P.
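To make the userspace approach concrete, a minimal sketch might poll the interface byte counter and clamp the rate once most of the monthly budget is gone. Everything here (device, budget, threshold, fallback rate) is a made-up placeholder, and it assumes a tbf is already installed at the root; it only shows the shape of such a script:

    #!/bin/sh
    # run periodically (e.g. from cron); counters reset on reboot, so a
    # real script would persist a baseline and handle month rollover
    DEV=eth0
    BUDGET=$((250 * 1024 * 1024 * 1024))    # assumed 250 GiB monthly cap
    USED=$(cat /sys/class/net/$DEV/statistics/tx_bytes)
    if [ "$USED" -gt $((BUDGET * 9 / 10)) ]; then
        tc qdisc change dev $DEV root tbf rate 800kbit burst 10kb limit 30kb
    fi

As the rest of the thread argues, this trades away exactly the fine-grained burst behavior that tbf itself would provide.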
* Re: tbf/htb qdisc limitations
From: Steven Brudenell @ 2010-10-11 22:27 UTC
To: Jarek Poplawski; +Cc: netdev

> I'm not sure you checked how the "burst" works, and I doubt it could
> help you here. Anyway, do you think a config of rate 2KB/s with
> burst 5GB would be useful for you?

i actually really do want something like 2KB/s with 5GB burst (modifying the parameters so that burst + rate * 30 days <= 5GB, but you get the idea). but this isn't possible given the implementation:

i see that overall, virtual "tokens" map to "scheduler ticks", where a "scheduler tick" is 64ns (net/sched/sch_{tbf,htb}.c, include/net/pkt_sched.h -- these 64ns units are called "ticks" despite being unrelated to HZ). the "burst" parameter is also stored and passed from userspace as a u32. so the maximum configurable burst in both cases is rate * 275s, since we can only track 275s worth of "scheduler ticks" in a u32 ( (1<<32) * 64 / NSEC_PER_SEC =~ 275s ).

> My proposal is that you don't bother with 1) and 2), but first do the
> hack in tbf or htb directly, using or omitting rate tables as you
> like, and test the idea.

i'll give it a shot, though given that i hate writing the same code twice, i would prefer to know the right way to change netlink before i write a functional test.

due to the implementation coupling, i don't see any way to make a permanent change *without* changing the netlink interface -- even changing that u32 to a u64, which would only need to be a u64 in userspace, because userspace does the munging today!

(what's worse, today userspace has to specify the full rate table over netlink, instead of just specifying the rate and having the kernel driver compute the table or whatever other data structure it deems necessary. i think decoupling interface from implementation is a worthy goal by itself. if they were decoupled, i could have just coded a patch and not bothered y'all in the first place....)

> But it seems the right way is to collect monthly stats with some
> userspace tool and change the qdisc config dynamically. [...]

<non technical sidetrack>
i disagree outright that a userspace tool is the "right" way to solve my constraints.

my constraints are:
1) i need to guarantee i never, ever go over the monthly transfer limit (bad experiences with Comcast... you can check out of Red Tape Hotel any time you like, but you can never leave).
2) i want to be able to transfer short bursts at top speed whenever possible (that's what i'm paying for in the first place).
3) i need to ration transfer usage so i am never stuck being limited to snail speeds until the end of the month (on a Comcast connection in my area, i can reasonably sustain 2MB/sec downstream, which eats 250GB in ~36 hours, so this constraint becomes important).

tbf with a large burst size seems ideal for my constraints. i can't quantify this, but it seems like no simpler strategy satisfies the constraints well, and no more complex strategy is necessary. i think any userspace solution i could write would end up trying to emulate tbf with a large burst.

a userspace tool updating qdisc parameters, even if run in an infinite loop, would always have big, chunky time resolution compared to an inline packet shaper (which is important for #2, and for #1 to a degree). i could write a packet shaper in userspace, but this does not make sense given that kernel qos already exists, and already has a tbf implementation that just needs a little love.
</non technical sidetrack>

given all that, i'd just like to know:

1) whether it's forbidden or bad to do floating point math in a packet scheduler, and

2) the best way to go about making breaking changes to netlink.
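The 275s figure checks out with a standalone back-of-the-envelope sketch (PSCHED_SHIFT was 6 in kernels of this era, making one psched tick 2^6 = 64ns; the constants here are for illustration only):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const unsigned shift = 6;                  /* PSCHED_SHIFT: 1 tick = 64 ns */
        const uint64_t max_ticks = 0xffffffffULL;  /* burst is tracked in a u32 */
        double horizon_s = (double)(max_ticks << shift) / 1e9;

        printf("max horizon: %.1f s\n", horizon_s); /* ~274.9 s */
        /* max burst = rate * horizon; at 2 KB/s that is only ~550 KB,
         * nowhere near the 5 GB monthly budget discussed above */
        printf("max burst at 2 KB/s: %.0f KB\n", 2.0 * horizon_s);
        return 0;
    }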
* Re: tbf/htb qdisc limitations
From: Jarek Poplawski @ 2010-10-12 10:10 UTC
To: Steven Brudenell; +Cc: netdev

On Mon, Oct 11, 2010 at 06:27:25PM -0400, Steven Brudenell wrote:
> i actually really do want something like 2KB/s with 5GB burst
> [...] so the maximum configurable burst in both cases is
> rate * 275s, since we can only track 275s worth of "scheduler
> ticks" in a u32 ( (1<<32) * 64 / NSEC_PER_SEC =~ 275s ).

Right. It was a compromise to allow higher rates for "common" use without u64 changes. It can still be tuned with PSCHED_SHIFT to give you more burst, but I doubt the tbf/htb authors expected monthly values here.

> [...]
> given all that, i'd just like to know:
>
> 1) whether it's forbidden or bad to do floating point math in a
> packet scheduler, and

Yes, it's not allowed, according to Documentation/HOWTO. Btw, as you can see e.g. in the sch_hfsc comments, 64-bit division is avoided too.

> 2) the best way to go about making breaking changes to netlink.

I can only say there is no versioning, but backward compatibility is crucial, so you need to do some tricks or data duplication. You could probably try to get opinions about it with an RFC on moving the tbf and htb schedulers to 64 bits, if you're interested (decoupling it from your specific burst problem).

Jarek P.
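The usual backward-compatibility trick is data duplication via a new, optional attribute: keep sending the old structure exactly as before, and carry the wider value in a new attribute that old kernels never parse and old tools never send. A hypothetical sketch of the userspace side (TCA_TBF_BURST64 is an invented attribute name, not an existing one; addattr_l() is iproute2's netlink helper):

    struct tc_tbf_qopt opt;      /* legacy options, filled exactly as today */
    __u64 burst64 = 5ULL << 30;  /* 5GB in bytes: a value the u32 of ticks
                                    can never express */

    /* old attribute first, so old kernels keep working unchanged */
    addattr_l(n, 1024, TCA_TBF_PARMS, &opt, sizeof(opt));
    /* invented new attribute; a new kernel would prefer it when present */
    addattr_l(n, 1024, TCA_TBF_BURST64, &burst64, sizeof(burst64));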
* Re: tbf/htb qdisc limitations
From: Steven Brudenell @ 2010-10-12 19:31 UTC
To: Jarek Poplawski; +Cc: netdev

> Yes, it's not allowed, according to Documentation/HOWTO. Btw, as you
> can see e.g. in the sch_hfsc comments, 64-bit division is avoided too.

i see sch_hfsc avoids do_div in critical areas for performance reasons, but uses it in other places. it should still be alright to do_div in tbf_change and htb_change_class, right? it would be nice to compute the rtabs in those functions instead of having userspace do it.

> I can only say there is no versioning, but backward compatibility
> is crucial [...]

my burst problem is the only semi-legitimate motivation i can think of. the only other possible motivations i can imagine are setting "limit" to buffer more than 4GB of packets, and setting "rate" to something more than 32 gigabit; both of these seem kind of dubious. is there something else you had in mind?

looking more at the netlink tc interface: why does the interface for so many qdiscs consist of passing one big options struct as a single netlink attr, instead of a bunch of individual attrs? this seems contrary to the extensibility / flexibility spirit of netlink, and seems to be getting in the way of changing the interface. maybe i should RFC about this instead ;)
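What Steven proposes would look roughly like the loop below, with the u64 division done via do_div() since this only runs on the configuration slow path. The function and variable names are hypothetical, loosely mirroring what tc_calc_rtable() does in userspace today:

    /* hypothetical slow-path rtab fill, e.g. called from tbf_change() */
    static void fill_rtab(u32 *rtab, u32 rate_Bps, int cell_log)
    {
        int i;

        for (i = 0; i < 256; i++) {
            unsigned int sz = (i + 1) << cell_log; /* packet size of cell i */
            u64 t = (u64)sz * NSEC_PER_SEC;

            do_div(t, rate_Bps);          /* ns needed to send sz bytes */
            rtab[i] = t >> PSCHED_SHIFT;  /* ns -> psched ticks */
        }
    }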
* Re: tbf/htb qdisc limitations
From: Jarek Poplawski @ 2010-10-12 21:59 UTC
To: Steven Brudenell; +Cc: netdev

On Tue, Oct 12, 2010 at 03:31:48PM -0400, Steven Brudenell wrote:
> i see sch_hfsc avoids do_div in critical areas for performance
> reasons, but uses it in other places. it should still be alright to
> do_div in tbf_change and htb_change_class, right? it would be nice to
> compute the rtabs in those functions instead of having userspace do it.

Right, tbf_change and htb_change_class are on the "slow path". But to compute these rtabs you need to pass more parameters than just the rate. And userspace would still do most of it for backward compatibility.

> my burst problem is the only semi-legitimate motivation i can think
> of. [...] is there something else you had in mind?

No, mainly 10 gigabit rates, and additionally 64-bit stats.

> looking more at the netlink tc interface: why does the interface
> for so many qdiscs consist of passing one big options struct as a
> single netlink attr, instead of a bunch of individual attrs? [...]
> maybe i should RFC about this instead ;)

Sure, you can (I'm not the netlink expert).

Jarek P.
* Re: tbf/htb qdisc limitations
From: Rick Jones @ 2010-10-12 22:17 UTC
To: Jarek Poplawski; +Cc: Steven Brudenell, netdev

>> my burst problem is the only semi-legitimate motivation i can think
>> of. [...] is there something else you had in mind?
>
> No, mainly 10 gigabit rates, and additionally 64-bit stats.

Any issue for bonded 10 GbE interfaces? And now that the IEEE has ratified the standard (June), how far out are 40 GbE interfaces? Or 100 GbE, for that matter?

rick jones
* Re: tbf/htb qdisc limitations
From: Jarek Poplawski @ 2010-10-13 6:26 UTC
To: Rick Jones; +Cc: Steven Brudenell, netdev

On Tue, Oct 12, 2010 at 03:17:18PM -0700, Rick Jones wrote:
> Any issue for bonded 10 GbE interfaces? And now that the IEEE has
> ratified the standard (June), how far out are 40 GbE interfaces?
> Or 100 GbE, for that matter?

Alas, packet schedulers using rate tables are still stuck around 1G: above 2G they get less and less accurate, so hfsc is recommended.

Jarek P.
* Re: tbf/htb qdisc limitations
From: Bill Fink @ 2010-10-14 3:36 UTC
To: Jarek Poplawski; +Cc: Rick Jones, Steven Brudenell, netdev

On Wed, 13 Oct 2010, Jarek Poplawski wrote:
> Alas, packet schedulers using rate tables are still stuck around 1G:
> above 2G they get less and less accurate, so hfsc is recommended.

I was just trying to do an 8 Gbps rate limit on a 10-GigE path, and couldn't get it to work with either htb or tbf. Are you saying this currently isn't possible? Or are you saying to use this hfsc mechanism, for which there doesn't seem to be a man page?

-Bill
* Re: tbf/htb qdisc limitations
From: Eric Dumazet @ 2010-10-14 4:01 UTC
To: Bill Fink; +Cc: Jarek Poplawski, Rick Jones, Steven Brudenell, netdev

On Wednesday 13 October 2010 at 23:36 -0400, Bill Fink wrote:
> I was just trying to do an 8 Gbps rate limit on a 10-GigE path,
> and couldn't get it to work with either htb or tbf. [...]

man pages? Oh well...

An 8 Gbps rate limit sounds very optimistic with a central lock and one queue...

Maybe it's possible to split this into 8 x 1 Gbps, using 8 queues... or 16 x 500 Mbps.
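On kernels recent enough to have sch_mq (2.6.32+), Eric's suggestion maps naturally onto the mq qdisc, which exposes one class per hardware tx queue. A hypothetical split of the budget across 8 tx queues might look like this (the device name and queue count are assumptions about the NIC):

    tc qdisc add dev eth2 root handle 1: mq
    for i in 1 2 3 4 5 6 7 8; do
        tc qdisc add dev eth2 parent 1:$i handle 1$i: \
            tbf rate 1000mbit burst 20000 limit 4000000 mtu 9000
    done

As Bill points out next, this only helps when traffic actually spreads across queues; a single flow hashes to one tx queue and sees only that queue's share.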
* Re: tbf/htb qdisc limitations
From: Bill Fink @ 2010-10-14 6:34 UTC
To: Eric Dumazet; +Cc: Jarek Poplawski, Rick Jones, Steven Brudenell, netdev

On Thu, 14 Oct 2010, Eric Dumazet wrote:
> An 8 Gbps rate limit sounds very optimistic with a central lock and
> one queue...
>
> Maybe it's possible to split this into 8 x 1 Gbps, using 8 queues...
> or 16 x 500 Mbps.

Not when I'm trying to rate limit a single flow.

-Bill
* Re: tbf/htb qdisc limitations
From: Jarek Poplawski @ 2010-10-14 6:44 UTC
To: Bill Fink; +Cc: Rick Jones, Steven Brudenell, netdev

On Wed, Oct 13, 2010 at 11:36:53PM -0400, Bill Fink wrote:
> I was just trying to do an 8 Gbps rate limit on a 10-GigE path,
> and couldn't get it to work with either htb or tbf. Are you
> saying this currently isn't possible?

Let's start by recalling that no precise packet scheduling should be expected with gso/tso etc. turned on. I don't know the current hardware limits for such non-gso traffic, but at an 8 Gbit rate, htb or tbf would definitely have wrong rate table values (overflowed) for packet sizes below 1500 bytes.

> Or are you saying to use this hfsc mechanism, for which there
> doesn't seem to be a man page?

There was a try:
http://lists.openwall.net/netdev/2009/02/26/138

Jarek P.
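For a rough feel of why small packets go wrong at these speeds, here is some illustrative arithmetic using the 64 ns psched tick discussed earlier (exact behavior depends on how the rate table rounds):

    at 8 Gbit/s = 10^9 bytes/s, 64 bytes of wire time = one 64 ns tick:

        1500-byte packet: 1500 ns =~ 23.4 ticks -> stored as 23 or 24 (~2% error)
         100-byte packet:  100 ns =~  1.6 ticks -> stored as 1 or 2 (30%+ error)

    each small packet is therefore billed noticeably too few or too many
    tokens, and the achieved rate drifts further from the configured one
    as the average packet size shrinks.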
* Re: tbf/htb qdisc limitations
From: Bill Fink @ 2010-10-14 7:13 UTC
To: Jarek Poplawski; +Cc: Rick Jones, Steven Brudenell, netdev

On Thu, 14 Oct, Jarek Poplawski wrote:
> Let's start by recalling that no precise packet scheduling should be
> expected with gso/tso etc. turned on. [...]

TSO/GSO was disabled, and I was using 9000-byte jumbo frames (and specified mtu 9000 to the tc command).

Here was one attempt I made using tbf:

tc qdisc add dev eth2 root handle 1: prio
tc qdisc add dev eth2 parent 1:1 handle 10: tbf rate 8900mbit buffer 1112500 limit 10000 mtu 9000
tc filter add dev eth2 protocol ip parent 1: prio 1 u32 match ip dst 192.168.1.23 flowid 10:1

I tried many variations of the above, all without success.

> There was a try:
> http://lists.openwall.net/netdev/2009/02/26/138

Thanks for the pointer. I will check it out later in detail, but I'm already having difficulty deciding whether I have the tc commands right for tbf and htb, and hfsc looks even more involved.

-Bill
* Re: tbf/htb qdisc limitations
From: Jarek Poplawski @ 2010-10-14 8:09 UTC
To: Bill Fink; +Cc: Rick Jones, Steven Brudenell, netdev

On Thu, Oct 14, 2010 at 03:13:54AM -0400, Bill Fink wrote:
> TSO/GSO was disabled, and I was using 9000-byte jumbo frames
> (and specified mtu 9000 to the tc command).
>
> Here was one attempt I made using tbf:
>
> tc qdisc add dev eth2 root handle 1: prio
> tc qdisc add dev eth2 parent 1:1 handle 10: tbf rate 8900mbit buffer 1112500 limit 10000 mtu 9000
> tc filter add dev eth2 protocol ip parent 1: prio 1 u32 match ip dst 192.168.1.23 flowid 10:1
>
> I tried many variations of the above, all without success.

The main problem is smaller packets. If you had (almost) only 9000b frames, this would probably work. But smaller packets (I don't remember the exact limits) with wrong rate table values might go almost unaccounted.

> Thanks for the pointer. I will check it out later in detail, [...]
> and hfsc looks even more involved.

I don't know much about hfsc either, but it seems that with the simplest configs (second slope only) it shouldn't be much different from htb or tbf.

Jarek P.
* Re: tbf/htb qdisc limitations
From: Jarek Poplawski @ 2010-10-14 8:50 UTC
To: Bill Fink; +Cc: Rick Jones, Steven Brudenell, netdev

On Thu, Oct 14, 2010 at 08:09:39AM +0000, Jarek Poplawski wrote:
> The main problem is smaller packets. If you had (almost) only 9000b
> frames, this would probably work. [...]

On the other hand, e.g. the limit above seems too low wrt the mtu & rate.

Jarek P.
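For scale, an illustrative calculation: tbf's "limit" bounds how many bytes may sit queued waiting for tokens, so it caps queuing delay at limit / rate. A limit of 10000 bytes barely holds one 9000-byte frame:

    queue depth = 10000 B / 1112.5e6 B/s =~ 9 usec of traffic at 8900mbit

    a TCP stream on a path with an RTT in the tens of milliseconds wants a
    queue sized in that same ballpark, e.g. 10^9 B/s * 35 ms = 35,000,000 B
    -- which is exactly the limit that turns out to work later in the thread.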
* Re: tbf/htb qdisc limitations
From: Bill Fink @ 2010-10-15 6:37 UTC
To: Jarek Poplawski; +Cc: Rick Jones, Steven Brudenell, netdev

On Thu, 14 Oct 2010, Jarek Poplawski wrote:
> The main problem is smaller packets. If you had (almost) only 9000b
> frames, this would probably work. [...]
>
> On the other hand, e.g. the limit above seems too low wrt the mtu & rate.

Actually, I discovered my commands above work just fine on a 2.6.35 box:

i7test7% nuttcp -T10 -i1 192.168.1.17
 1045.3125 MB /  1.00 sec = 8768.3573 Mbps  0 retrans
 1045.6875 MB /  1.00 sec = 8772.0292 Mbps  0 retrans
 1049.5625 MB /  1.00 sec = 8804.2627 Mbps  0 retrans
 1043.1875 MB /  1.00 sec = 8750.9960 Mbps  0 retrans
 1048.6875 MB /  1.00 sec = 8796.3246 Mbps  0 retrans
 1033.4375 MB /  1.00 sec = 8669.3188 Mbps  0 retrans
 1040.7500 MB /  1.00 sec = 8730.7057 Mbps  0 retrans
 1047.0000 MB /  1.00 sec = 8783.2063 Mbps  0 retrans
 1040.0000 MB /  1.00 sec = 8724.0564 Mbps  0 retrans
 1037.4375 MB /  1.00 sec = 8702.5434 Mbps  0 retrans

10431.5608 MB / 10.00 sec = 8749.7542 Mbps 25 %TX 35 %RX 0 retrans 0.11 msRTT

The problems I encountered were on a field system running 2.6.30.10. I will investigate upgrading the field system to 2.6.35.

-Bill
* Re: tbf/htb qdisc limitations
From: Eric Dumazet @ 2010-10-15 6:44 UTC
To: Bill Fink; +Cc: Jarek Poplawski, Rick Jones, Steven Brudenell, netdev

On Friday 15 October 2010 at 02:37 -0400, Bill Fink wrote:
> Actually, I discovered my commands above work just fine on
> a 2.6.35 box: [...]
>
> The problems I encountered were on a field system running
> 2.6.30.10. I will investigate upgrading the field system
> to 2.6.35.

Yes, I noticed the same thing on net-next-2.6.

Please report:

tc -s -d qdisc
* Re: tbf/htb qdisc limitations
From: Bill Fink @ 2010-10-15 21:37 UTC
To: Eric Dumazet; +Cc: Jarek Poplawski, Rick Jones, Steven Brudenell, netdev

On Fri, 15 Oct 2010, Eric Dumazet wrote:
> Yes, I noticed the same thing on net-next-2.6.
>
> Please report:
>
> tc -s -d qdisc

Not sure why you want this on the older 2.6.30.10 kernel, but here it is:

i7test6% nuttcp -T10 -i1 192.168.1.14
 1169.1875 MB /  1.00 sec = 9807.2868 Mbps  0 retrans
 1181.1875 MB /  1.00 sec = 9908.9054 Mbps  0 retrans
 1181.1250 MB /  1.00 sec = 9907.9253 Mbps  0 retrans
 1181.1875 MB /  1.00 sec = 9908.4991 Mbps  0 retrans
 1180.6875 MB /  1.00 sec = 9904.3345 Mbps  0 retrans
 1181.1250 MB /  1.00 sec = 9908.0838 Mbps  0 retrans
 1181.1875 MB /  1.00 sec = 9908.4099 Mbps  0 retrans
 1181.0625 MB /  1.00 sec = 9907.3911 Mbps  0 retrans
 1181.3750 MB /  1.00 sec = 9910.2801 Mbps  0 retrans
 1181.1875 MB /  1.00 sec = 9908.2118 Mbps  0 retrans

11801.1382 MB / 10.04 sec = 9858.7159 Mbps 24 %TX 40 %RX 0 retrans 0.11 msRTT

i7test6% tc -s -d qdisc show dev eth2
qdisc prio 1: root refcnt 32 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 12448974085 bytes 1381173 pkt (dropped 266, overlimits 0 requeues 12)
 rate 0bit 0pps backlog 0b 0p requeues 12
qdisc tbf 10: parent 1:1 rate 8900Mbit burst 1111387b/64 mpu 0b lat 4295.0s
 Sent 12448974043 bytes 1381172 pkt (dropped 266, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0

I'm guessing this is probably related to the scheduler time resolution issue that Jarek mentioned.

And for completeness, here's the info for the working 2.6.35 case:

i7test7% nuttcp -T10 -i1 192.168.1.17
 1045.5625 MB /  1.00 sec = 8770.6210 Mbps  0 retrans
 1032.1875 MB /  1.00 sec = 8658.3825 Mbps  0 retrans
 1039.8125 MB /  1.00 sec = 8722.7801 Mbps  0 retrans
 1050.2500 MB /  1.00 sec = 8810.0739 Mbps  0 retrans
 1050.6875 MB /  1.00 sec = 8813.9378 Mbps  0 retrans
 1048.8125 MB /  1.00 sec = 8798.0857 Mbps  0 retrans
 1046.1875 MB /  1.00 sec = 8775.9954 Mbps  0 retrans
 1045.7500 MB /  1.00 sec = 8771.9307 Mbps  0 retrans
 1051.1250 MB /  1.00 sec = 8817.8900 Mbps  0 retrans
 1044.0625 MB /  1.00 sec = 8757.8019 Mbps  0 retrans

10454.7500 MB / 10.00 sec = 8769.2206 Mbps 26 %TX 35 %RX 0 retrans 0.11 msRTT

i7test7% tc -s -d qdisc show dev eth2
qdisc prio 1: root refcnt 33 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 11028687119 bytes 1223828 pkt (dropped 293, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc tbf 10: parent 1:1 rate 8900Mbit burst 1112500b/64 mpu 0b lat 4295.0s
 Sent 11028687077 bytes 1223827 pkt (dropped 293, overlimits 593 requeues 0)
 backlog 0b 0p requeues 0

I'm not sure how you can have so many dropped but not have any TCP retransmissions (or have them not show up as requeues). But there's probably something basic I just don't understand about how all this stuff works.

-Bill
* Re: tbf/htb qdisc limitations
From: Jarek Poplawski @ 2010-10-15 22:05 UTC
To: Bill Fink; +Cc: Eric Dumazet, Rick Jones, Steven Brudenell, netdev

On Fri, Oct 15, 2010 at 05:37:46PM -0400, Bill Fink wrote:
...
> i7test7% tc -s -d qdisc show dev eth2
> qdisc prio 1: root refcnt 33 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
>  Sent 11028687119 bytes 1223828 pkt (dropped 293, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
> qdisc tbf 10: parent 1:1 rate 8900Mbit burst 1112500b/64 mpu 0b lat 4295.0s
>  Sent 11028687077 bytes 1223827 pkt (dropped 293, overlimits 593 requeues 0)
>  backlog 0b 0p requeues 0
>
> I'm not sure how you can have so many dropped but not have
> any TCP retransmissions (or have them not show up as requeues). But
> there's probably something basic I just don't understand
> about how all this stuff works.

Me either, but it seems a higher "limit" might help with these drops.

Jarek P.
* Re: tbf/htb qdisc limitations
From: Bill Fink @ 2010-10-16 4:51 UTC
To: Jarek Poplawski; +Cc: Eric Dumazet, Rick Jones, Steven Brudenell, netdev

On Sat, 16 Oct 2010, Jarek Poplawski wrote:
> Me either, but it seems a higher "limit" might help with these drops.

You were of course correct about the higher limit helping. I finally upgraded the field system to 2.6.35, and did some testing on the real data path of interest, which has an RTT of about 29 ms. I set up a rate limit of 8 Gbps using the following commands:

tc qdisc add dev eth2 root handle 1: prio
tc qdisc add dev eth2 parent 1:1 handle 10: tbf rate 8000mbit limit 35000000 burst 20000 mtu 9000
tc filter add dev eth2 protocol ip parent 1: prio 1 u32 match ip protocol 6 0xff match ip dst 192.168.1.23 flowid 10:1

hecn-i7sl1% nuttcp -T10 -i1 -w50m 192.168.1.23
  676.3750 MB /  1.00 sec = 5673.4646 Mbps  0 retrans
  948.5625 MB /  1.00 sec = 7957.1508 Mbps  0 retrans
  948.8125 MB /  1.00 sec = 7959.5902 Mbps  0 retrans
  948.3750 MB /  1.00 sec = 7955.5382 Mbps  0 retrans
  949.0000 MB /  1.00 sec = 7960.6696 Mbps  0 retrans
  948.7500 MB /  1.00 sec = 7958.7873 Mbps  0 retrans
  948.6875 MB /  1.00 sec = 7958.0959 Mbps  0 retrans
  948.6250 MB /  1.00 sec = 7957.4205 Mbps  0 retrans
  948.7500 MB /  1.00 sec = 7958.7237 Mbps  0 retrans
  948.4375 MB /  1.00 sec = 7956.3648 Mbps  0 retrans

 9270.5625 MB / 10.09 sec = 7707.7457 Mbps 24 %TX 36 %RX 0 retrans 29.38 msRTT

hecn-i7sl1% tc -s -d qdisc show dev eth2
qdisc prio 1: root refcnt 33 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 9779476756 bytes 1084943 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc tbf 10: parent 1:1 rate 8000Mbit burst 19000b/64 mpu 0b lat 35.0ms
 Sent 9779476756 bytes 1084943 pkt (dropped 0, overlimits 1831360 requeues 0)
 backlog 0b 0p requeues 0

No drops!

BTW, the effective rate limit seems to be a very coarse adjustment at these speeds. I was seeing some data path issues at 8.9 Gbps, so I tried setting slightly lower rates such as 8.8 Gbps, 8.7 Gbps, etc., but they still gave me an effective rate limit of about 8.9 Gbps. It wasn't until I got down to a setting of 8 Gbps that I actually got an effective rate limit of 8 Gbps.

Also, the man page for tbf seems to be wrong/misleading about the burst parameter. It states:

"If your buffer is too small, packets may be dropped because more tokens arrive per timer tick than fit in your bucket. The minimum buffer size can be calculated by dividing the rate by HZ."

According to that, with a rate of 8 Gbps and HZ=1000, the minimum burst should be 1000000 bytes. But my testing shows that a burst of just 20000 works just fine. That's only two 9000-byte packets, or about 20 usec of traffic at the 8 Gbps rate. Using too large a value for burst can actually be harmful, as it allows the traffic to temporarily exceed the desired rate limit.

-Thanks

-Bill
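The man page figure and Bill's observation can be reconciled with some illustrative arithmetic; the man page text likely predates the qdisc watchdog's move to high-resolution timers:

    minimum burst per man page = rate / HZ = 10^9 B/s / 1000 = 1,000,000 B
        (assumes the shaper can only wake up once per 1 ms jiffy)

    with hrtimers the watchdog can rearm after microseconds, so the bucket
    only needs to cover the gap actually seen between dequeues:

        20000 B / 10^9 B/s = 20 usec -- consistent with burst 20000 working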
* Re: tbf/htb qdisc limitations
From: Jarek Poplawski @ 2010-10-16 20:58 UTC
To: Bill Fink; +Cc: Eric Dumazet, Rick Jones, Steven Brudenell, netdev

On Sat, Oct 16, 2010 at 12:51:06AM -0400, Bill Fink wrote:
> You were of course correct about the higher limit helping. [...]
>
> According to that, with a rate of 8 Gbps and HZ=1000, the minimum
> burst should be 1000000 bytes. But my testing shows that a burst
> of just 20000 works just fine. That's only two 9000-byte packets,
> or about 20 usec of traffic at the 8 Gbps rate. Using too large
> a value for burst can actually be harmful, as it allows the traffic
> to temporarily exceed the desired rate limit.

As I mentioned before, it could work, but your config is really on the edge. Anyway, if a lower-than-minimum buffer size is needed, something else is definitely wrong. (Btw, this size can matter less with high-resolution timers.) You could try whether my iproute patch "tc_core: Use double in tc_core_time2tick()" (not merged) helps here. While googling for this patch I found this page, which might be interesting to you (besides the link at the end to the thread with the patch; take 1 or 2, it shouldn't matter):

http://code.google.com/p/pspacer/wiki/HTBon10GbE

If it doesn't help, reconsider hfsc.

Thanks,
Jarek P.
* Re: tbf/htb qdisc limitations
From: Bill Fink @ 2010-10-17 1:24 UTC
To: Jarek Poplawski; +Cc: Eric Dumazet, Rick Jones, Steven Brudenell, netdev

On Sat, 16 Oct 2010, Jarek Poplawski wrote:
> As I mentioned before, it could work, but your config is really on
> the edge. [...]
>
> http://code.google.com/p/pspacer/wiki/HTBon10GbE
>
> If it doesn't help, reconsider hfsc.

Thanks for the link. From his results, it appears you can get better accuracy by keeping TSO/GSO enabled and upping the tc mtu parameter to 64000. I will have to try that out.

For the very high bandwidth cases I tend to deal with, would there be any advantage to further reducing PSCHED_SHIFT from its current value of 6?

-Bill
* Re: tbf/htb qdisc limitations
From: Jarek Poplawski @ 2010-10-17 20:36 UTC
To: Bill Fink; +Cc: Eric Dumazet, Rick Jones, Steven Brudenell, netdev

On Sat, Oct 16, 2010 at 09:24:34PM -0400, Bill Fink wrote:
> Thanks for the link. From his results, it appears you can
> get better accuracy by keeping TSO/GSO enabled and upping
> the tc mtu parameter to 64000. I will have to try that out.

Sure, but you have to remember that the scheduler doesn't know real packet sizes, and rate tables are less accurate especially for smaller packets, so it depends on conditions.

> For the very high bandwidth cases I tend to deal with, would
> there be any advantage to further reducing PSCHED_SHIFT
> from its current value of 6?

If you don't use low rates and/or large buffers it might be a good idea, especially on x64 (with 32-bit longs, htb needs some change for values below 5).

Jarek P.
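The trade-off behind this answer, spelled out (illustrative: one tick = 2^PSCHED_SHIFT ns, and a u32 holds 2^32 ticks):

    PSCHED_SHIFT   tick resolution   u32 horizon (max burst window)
         6              64 ns              ~275 s
         5              32 ns              ~137 s
         4              16 ns               ~69 s
         0               1 ns              ~4.3 s

    finer ticks mean less rounding error at multi-gigabit rates, but the
    same u32 then spans a shorter window, shrinking the maximum
    representable buffer/burst -- the exact opposite of what the
    monthly-quota use case at the top of the thread wants.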
* Re: tbf/htb qdisc limitations
From: Bill Fink @ 2010-10-19 7:37 UTC
To: Jarek Poplawski; +Cc: Eric Dumazet, Rick Jones, Steven Brudenell, netdev

On Sun, 17 Oct 2010, Jarek Poplawski wrote:
> Sure, but you have to remember that the scheduler doesn't know real
> packet sizes, and rate tables are less accurate especially for
> smaller packets, so it depends on conditions.

In my testing on the real data path, TSO/GSO enabled did seem to give more accurate results for a single stream. But when I tried multiple 10-GigE paths simultaneously, each with a single stream across it, non-TSO/GSO seemed to fare better overall.

-Bill
* Re: tbf/htb qdisc limitations 2010-10-19 7:37 ` Bill Fink @ 2010-10-20 11:06 ` Jarek Poplawski 2010-10-27 4:51 ` Bill Fink 0 siblings, 1 reply; 28+ messages in thread From: Jarek Poplawski @ 2010-10-20 11:06 UTC (permalink / raw) To: Bill Fink; +Cc: Eric Dumazet, Rick Jones, Steven Brudenell, netdev On Tue, Oct 19, 2010 at 03:37:24AM -0400, Bill Fink wrote: > On Sun, 17 Oct 2010, Jarek Poplawski wrote: > > > On Sat, Oct 16, 2010 at 09:24:34PM -0400, Bill Fink wrote: > > > On Sat, 16 Oct 2010, Jarek Poplawski wrote: > > ... > > > > http://code.google.com/p/pspacer/wiki/HTBon10GbE > > > > > > > > If it doesn't help, reconsider hfsc. > > > > > > Thanks for the link. From his results, it appears you can > > > get better accuracy by keeping TSO/GSO enabled and upping > > > the tc mtu parameter to 64000. I will have to try that out. > > > > Sure, but you have to remember that the scheduler doesn't know the real > > packet sizes, and rate tables are less accurate, especially for smaller > > packets, so it depends on conditions. > > In my testing on the real data path, TSO/GSO enabled did seem > to give more accurate results for a single stream. But when > I tried multiple 10-GigE paths simultaneously, each with a > single stream across it, non-TSO/GSO seemed to fare better > overall. Btw, if you find time, I would be interested in checking the opposite concept: a lower-than-real mtu (256), so that the rate tables are used in a different way (other tbf parameters unchanged). The patch below is needed for this to work. Thanks, Jarek P. ---
diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
index 641a30d..9ac3460 100644
--- a/net/sched/sch_tbf.c
+++ b/net/sched/sch_tbf.c
@@ -123,9 +123,6 @@ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc* sch)
 	struct tbf_sched_data *q = qdisc_priv(sch);
 	int ret;
 
-	if (qdisc_pkt_len(skb) > q->max_size)
-		return qdisc_reshape_fail(skb, sch);
-
 	ret = qdisc_enqueue(skb, q->qdisc);
 	if (ret != NET_XMIT_SUCCESS) {
 		if (net_xmit_drop_count(ret))
^ permalink raw reply related [flat|nested] 28+ messages in thread
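Some context on why a smaller-than-real mtu changes how the rate tables are used: tc builds a 256-slot table of transmit times indexed by packet size shifted right by cell_log, and sizes past the last slot are extrapolated. Below is a simplified sketch of the lookup, modeled loosely on qdisc_l2t() from include/net/sch_generic.h of that era; the struct and field names are abbreviated and the table contents are a toy example, so treat it as a model rather than verbatim kernel source:

#include <stdio.h>

/* Simplified model of the kernel's length-to-time rate table lookup:
 * how many scheduler ticks it takes to send pktlen bytes. */
struct rtab {
	unsigned int data[256];	/* ticks to send (slot << cell_log) bytes */
	int cell_log;		/* bucket width is (1 << cell_log) bytes */
};

static unsigned int l2t(const struct rtab *r, unsigned int pktlen)
{
	unsigned int slot = pktlen >> r->cell_log;

	if (slot > 255)
		/* Past the end of the table (e.g. a 9000-byte frame with
		 * mtu 256): extrapolate linearly from the table instead of
		 * lumping all large sizes into one coarse cell. */
		return r->data[255] * (slot >> 8) + r->data[slot & 0xFF];
	return r->data[slot];
}

int main(void)
{
	/* With mtu 256, tc would pick cell_log = 1 (2-byte cells). Fill a
	 * perfectly linear toy table: one tick per 2-byte cell. */
	struct rtab r = { .cell_log = 1 };
	int i;

	for (i = 0; i < 256; i++)
		r.data[i] = i;

	printf("ticks for 9000 bytes: %u\n", l2t(&r, 9000));	/* 4483 */
	return 0;
}

With a small mtu every packet, even a huge GSO one, is costed at the fine per-byte granularity of small cells; the patch above is needed because tbf would otherwise drop any packet larger than the max_size it derives from the configured mtu.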
* Re: tbf/htb qdisc limitations 2010-10-20 11:06 ` Jarek Poplawski @ 2010-10-27 4:51 ` Bill Fink 2010-10-27 9:48 ` Jarek Poplawski 0 siblings, 1 reply; 28+ messages in thread From: Bill Fink @ 2010-10-27 4:51 UTC (permalink / raw) To: Jarek Poplawski; +Cc: Eric Dumazet, Rick Jones, Steven Brudenell, netdev On Wed, 20 Oct 2010, Jarek Poplawski wrote: > ... > Btw, if you find time, I would be interested in checking the opposite > concept: a lower-than-real mtu (256), so that the rate tables are used > in a different way (other tbf parameters unchanged). The patch below > is needed for this to work. Sorry. I'm totally swamped at work currently and won't be able to investigate that. -Bill ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: tbf/htb qdisc limitations 2010-10-27 4:51 ` Bill Fink @ 2010-10-27 9:48 ` Jarek Poplawski 0 siblings, 0 replies; 28+ messages in thread From: Jarek Poplawski @ 2010-10-27 9:48 UTC (permalink / raw) To: Bill Fink; +Cc: Eric Dumazet, Rick Jones, Steven Brudenell, netdev On Wed, Oct 27, 2010 at 12:51:43AM -0400, Bill Fink wrote: > On Wed, 20 Oct 2010, Jarek Poplawski wrote: > > Btw, if you find time, I would be interested in checking the opposite > > concept: a lower-than-real mtu (256), so that the rate tables are used > > in a different way (other tbf parameters unchanged). The patch below > > is needed for this to work. > > Sorry. I'm totally swamped at work currently and won't be able > to investigate that. No problem, especially since the current solution works for you. Jarek P. ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: tbf/htb qdisc limitations 2010-10-15 6:37 ` Bill Fink 2010-10-15 6:44 ` Eric Dumazet @ 2010-10-15 8:18 ` Jarek Poplawski 1 sibling, 0 replies; 28+ messages in thread From: Jarek Poplawski @ 2010-10-15 8:18 UTC (permalink / raw) To: Bill Fink; +Cc: Rick Jones, Steven Brudenell, netdev On Fri, Oct 15, 2010 at 02:37:49AM -0400, Bill Fink wrote: > On Thu, 14 Oct 2010, Jarek Poplawski wrote: > > > On Thu, Oct 14, 2010 at 08:09:39AM +0000, Jarek Poplawski wrote: > > > On Thu, Oct 14, 2010 at 03:13:54AM -0400, Bill Fink wrote: > > > > TSO/GSO was disabled and I was using 9000-byte jumbo frames > > > > (and specified mtu 9000 to the tc command). > > > > > > > > Here was one attempt I made using tbf: > > > > > > > > tc qdisc add dev eth2 root handle 1: prio > > > > tc qdisc add dev eth2 parent 1:1 handle 10: tbf rate 8900mbit buffer 1112500 limit 10000 mtu 9000 > > > > tc filter add dev eth2 protocol ip parent 1: prio 1 u32 match ip dst 192.168.1.23 flowid 10:1 > > > > > > > > I tried many variations of the above, all without success. > > > > > > The main problem is smaller packets. If you had (almost) only 9000b > > > frames, this could probably work. [...] > > > > On the other hand, e.g., the limit above seems too low wrt the mtu & rate. > > Actually, I discovered my commands above work just fine on > a 2.6.35 box: > > i7test7% nuttcp -T10 -i1 192.168.1.17 > 1045.3125 MB / 1.00 sec = 8768.3573 Mbps 0 retrans > 1045.6875 MB / 1.00 sec = 8772.0292 Mbps 0 retrans > 1049.5625 MB / 1.00 sec = 8804.2627 Mbps 0 retrans > 1043.1875 MB / 1.00 sec = 8750.9960 Mbps 0 retrans > 1048.6875 MB / 1.00 sec = 8796.3246 Mbps 0 retrans > 1033.4375 MB / 1.00 sec = 8669.3188 Mbps 0 retrans > 1040.7500 MB / 1.00 sec = 8730.7057 Mbps 0 retrans > 1047.0000 MB / 1.00 sec = 8783.2063 Mbps 0 retrans > 1040.0000 MB / 1.00 sec = 8724.0564 Mbps 0 retrans > 1037.4375 MB / 1.00 sec = 8702.5434 Mbps 0 retrans > > 10431.5608 MB / 10.00 sec = 8749.7542 Mbps 25 %TX 35 %RX 0 retrans 0.11 msRTT > > The problems I encountered were on a field system running > 2.6.30.10. I will investigate upgrading the field system > to 2.6.35. This change from 2.6.31 should matter here: http://git.kernel.org/?p=linux/kernel/git/stable/linux-2.6.35.y.git;a=commit;h=a4a710c4a7490587406462bf1d54504b7783d7d7 Jarek P. ^ permalink raw reply [flat|nested] 28+ messages in thread
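Note that the buffer value in the quoted tc command matches the rate/HZ minimum discussed earlier in the thread exactly: 8900 Mbit/s / 8 = 1112500000 bytes/s, and with HZ=1000 that gives 1112500000 / 1000 = 1112500 bytes, the precise buffer used above.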
end of thread, newest: ~2010-10-27 9:48 UTC

Thread overview: 28+ messages:

2010-10-08 20:58 tbf/htb qdisc limitations Steven Brudenell
2010-10-10 11:23 ` Jarek Poplawski
2010-10-11 22:27 ` Steven Brudenell
2010-10-12 10:10 ` Jarek Poplawski
2010-10-12 19:31 ` Steven Brudenell
2010-10-12 21:59 ` Jarek Poplawski
2010-10-12 22:17 ` Rick Jones
2010-10-13 6:26 ` Jarek Poplawski
2010-10-14 3:36 ` Bill Fink
2010-10-14 4:01 ` Eric Dumazet
2010-10-14 6:34 ` Bill Fink
2010-10-14 6:44 ` Jarek Poplawski
2010-10-14 7:13 ` Bill Fink
2010-10-14 8:09 ` Jarek Poplawski
2010-10-14 8:50 ` Jarek Poplawski
2010-10-15 6:37 ` Bill Fink
2010-10-15 6:44 ` Eric Dumazet
2010-10-15 21:37 ` Bill Fink
2010-10-15 22:05 ` Jarek Poplawski
2010-10-16 4:51 ` Bill Fink
2010-10-16 20:58 ` Jarek Poplawski
2010-10-17 1:24 ` Bill Fink
2010-10-17 20:36 ` Jarek Poplawski
2010-10-19 7:37 ` Bill Fink
2010-10-20 11:06 ` Jarek Poplawski
2010-10-27 4:51 ` Bill Fink
2010-10-27 9:48 ` Jarek Poplawski
2010-10-15 8:18 ` Jarek Poplawski