* Re: NET_SCHED cbq dropping too many packets on a bonding interface
[not found] <B28DA1911F61434C804723B7EE8A5C67@uglypunk>
@ 2008-05-15 3:56 ` Andrew Morton
2008-05-15 5:21 ` Eric Dumazet
0 siblings, 1 reply; 19+ messages in thread
From: Andrew Morton @ 2008-05-15 3:56 UTC (permalink / raw)
To: Kingsley Foreman; +Cc: linux-kernel, netdev
(cc netdev)
On Thu, 15 May 2008 12:25:19 +0930 "Kingsley Foreman" <kingsley@internode.com.au> wrote:
> I've been using qdisc for quite a while without any problems.
> I just rebuilt a Gentoo box with a 2.6.25 kernel that does a lot of traffic
> (~1000 Mbit/s+) over a bonded interface.
>
> However, I'm seeing a big problem with a CBQ class dropping packets for no
> reason I can see, and it is causing a lot of speed issues.
>
> The basic config is this:
> _____________________________________________________________________
> /sbin/tc qdisc del dev bond0 root
> /sbin/tc qdisc add dev bond0 root handle 1 cbq bandwidth 2000Mbit avpkt 1000 cell 8
> /sbin/tc class change dev bond0 root cbq weight 200Mbit allot 1514
>
> /sbin/tc class add dev bond0 parent 1: classid 1:1280 cbq bandwidth 2000Mbit rate 1200Mbit weight 120Mbit prio 1 allot 1514 cell 8 maxburst 120 minburst 1 minidle 0 avpkt 1000 bounded
> /sbin/tc filter add dev bond0 parent 1:0 protocol ip prio 300 route to 5
> classid 1:1280
>
> /sbin/tc class add dev bond0 parent 1: classid 1:1281 cbq bandwidth 2000Mbit rate 400Mbit weight 40Mbit prio 6 allot 1514 cell 8 maxburst 120 minburst 1 minidle 0 avpkt 1000 bounded
> /sbin/tc filter add dev bond0 parent 1:0 protocol ip prio 300 route to 6
> classid 1:1281
> ____________________________________________________________________
>
> So there is a lot of bandwidth handed out, but it is still dropping a lot of
> packets for very small amounts of traffic, e.g. 300 Mbit/s.
>
> However, the biggest problem I'm seeing is if I just do this:
>
> __________________________________________________________________
> /sbin/tc qdisc del dev bond0 root
> /sbin/tc qdisc add dev bond0 root handle 1 cbq bandwidth 2000Mbit avpkt 1000 cell 8
> /sbin/tc class change dev bond0 root cbq weight 200Mbit allot 1514
> ___________________________________________________________________
>
> After 30 seconds I get results like:
> ___________________________________________________________________
>
> ### bond0: queueing disciplines
>
> qdisc cbq 1: root rate 2000Mbit (bounded,isolated) prio no-transmit
> Sent 574230043 bytes 407156 pkt (dropped 2524, overlimits 0 requeues 0)
> rate 0bit 0pps backlog 0b 0p requeues 0
> borrowed 0 overactions 0 avgidle 3 undertime 0
>
> ### bond0: traffic classes
>
> class cbq 1: root rate 2000Mbit (bounded,isolated) prio no-transmit
> Sent 574330783 bytes 407225 pkt (dropped 2525, overlimits 0 requeues 0)
> rate 0bit 0pps backlog 0b 0p requeues 0
> borrowed 0 overactions 0 avgidle 3 undertime 0
> __________________________________________________________________
>
> I can't for the life of me work out why it is dropping so many packets
> while doing so little traffic. When I enable CBQ the transfer rate drops by
> approx. 30%. Any help would be great, and any improvements to my command
> lines would be welcome too.
>
>
> Kingsley Foreman
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 3:56 ` NET_SCHED cbq dropping too many packets on a bonding interface Andrew Morton
@ 2008-05-15 5:21 ` Eric Dumazet
2008-05-15 6:16 ` Kingsley Foreman
0 siblings, 1 reply; 19+ messages in thread
From: Eric Dumazet @ 2008-05-15 5:21 UTC (permalink / raw)
To: Kingsley Foreman; +Cc: Andrew Morton, linux-kernel, netdev
Andrew Morton wrote:
> (cc netdev)
>
> On Thu, 15 May 2008 12:25:19 +0930 "Kingsley Foreman" <kingsley@internode.com.au> wrote:
>
>> I've been using qdisc for quite a while without any problems.
>> I just rebuilt a Gentoo box with a 2.6.25 kernel that does a lot of traffic
>> (~1000 Mbit/s+) over a bonded interface.
...
Could you provide your linux-2.6.25 .config file, please?
What kind of hardware is your box (CPU, network interfaces)?
CBQ and other packet schedulers depend on a fast ktime_get() interface,
so maybe the slowdown you notice has its roots in core kernel facilities
(CONFIG_HZ, SCHED_HRTICK, HIGH_RES_TIMERS, ...).
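For example, something like this should show those options (a sketch that
assumes the kernel was built with CONFIG_IKCONFIG_PROC; otherwise grep the
.config in your build tree):
# requires CONFIG_IKCONFIG_PROC; otherwise grep your build tree's .config
zgrep -E 'CONFIG_HZ=|CONFIG_SCHED_HRTICK|CONFIG_HIGH_RES_TIMERS' /proc/config.gz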
If you remove CBQ, do you get the same slowdown when you have a tcpdump
running on your machine?
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 5:21 ` Eric Dumazet
@ 2008-05-15 6:16 ` Kingsley Foreman
2008-05-15 9:12 ` Jarek Poplawski
0 siblings, 1 reply; 19+ messages in thread
From: Kingsley Foreman @ 2008-05-15 6:16 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Andrew Morton, linux-kernel, netdev
--------------------------------------------------
From: "Eric Dumazet" <dada1@cosmosbay.com>
Sent: Thursday, May 15, 2008 2:51 PM
To: "Kingsley Foreman" <kingsley@internode.com.au>
Cc: "Andrew Morton" <akpm@linux-foundation.org>;
<linux-kernel@vger.kernel.org>; <netdev@vger.kernel.org>
Subject: Re: NET_SCHED cbq dropping too many packets on a bonding interface
> ...
> Could you provide your linux-2.6.25 .config file, please?
>
> What kind of hardware is your box (CPU, network interfaces)?
>
> CBQ and other packet schedulers depend on a fast ktime_get() interface,
> so maybe the slowdown you notice has its roots in core kernel facilities
> (CONFIG_HZ, SCHED_HRTICK, HIGH_RES_TIMERS, ...).
>
> If you remove CBQ, do you get the same slowdown when you have a tcpdump
> running on your machine?
>
I'm seeing it on a Sun v20z and a Sun x220m2, both multicore Opteron, both
with tg3 NICs.
I'm not seeing a slowdown if tcpdump is running on the same machine without
CBQ.
You can grab the config here:
http://games.internode.on.net/ker-config.txt
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 6:16 ` Kingsley Foreman
@ 2008-05-15 9:12 ` Jarek Poplawski
2008-05-15 10:06 ` Kingsley Foreman
0 siblings, 1 reply; 19+ messages in thread
From: Jarek Poplawski @ 2008-05-15 9:12 UTC (permalink / raw)
To: Kingsley Foreman; +Cc: Eric Dumazet, Andrew Morton, linux-kernel, netdev
On 15-05-2008 08:16, Kingsley Foreman wrote:
...
>>>> I've been using qdisc for quite a while without any problems.
>>>> I just rebuilt a Gentoo box with a 2.6.25 kernel that does a lot of
>>>> traffic
>>>> (~1000 Mbit/s+) over a bonded interface.
>>>>
>>>> However, I'm seeing a big problem with a CBQ class dropping packets for no
>>>> reason I can see, and it is causing a lot of speed issues.
...
Do you mean the same box with the same config, but the previous kernel
(2.6.24) doesn't show this problem? BTW, what is the txqueuelen on this
bond0?
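For example (just a sketch; ifconfig reports it as txqueuelen too):
# current tx queue length, via sysfs
cat /sys/class/net/bond0/tx_queue_len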
Regards,
Jarek P.
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 9:12 ` Jarek Poplawski
@ 2008-05-15 10:06 ` Kingsley Foreman
2008-05-15 10:29 ` Jarek Poplawski
` (2 more replies)
0 siblings, 3 replies; 19+ messages in thread
From: Kingsley Foreman @ 2008-05-15 10:06 UTC (permalink / raw)
To: Jarek Poplawski; +Cc: Eric Dumazet, Andrew Morton, linux-kernel, netdev
I just rolled back the kernel to 2.6.24 and I'm seeing the same thing.
I was using 2.6.22 before and didn't see the problem. txqueuelen on the
bond0 interface is 0 (the default).
--------------------------------------------------
From: "Jarek Poplawski" <jarkao2@gmail.com>
Sent: Thursday, May 15, 2008 6:42 PM
To: "Kingsley Foreman" <kingsley@internode.com.au>
Cc: "Eric Dumazet" <dada1@cosmosbay.com>; "Andrew Morton"
<akpm@linux-foundation.org>; <linux-kernel@vger.kernel.org>;
<netdev@vger.kernel.org>
Subject: Re: NET_SCHED cbq dropping too many packets on a bonding interface
> On 15-05-2008 08:16, Kingsley Foreman wrote:
> ...
>
> Do you mean the same box with the same config, but the previous kernel
> (2.6.24) doesn't show this problem? BTW, what is the txqueuelen on this
> bond0?
>
> Regards,
> Jarek P.
>
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 10:06 ` Kingsley Foreman
@ 2008-05-15 10:29 ` Jarek Poplawski
2008-05-15 15:59 ` Patrick McHardy
2008-05-15 16:09 ` Patrick McHardy
2 siblings, 0 replies; 19+ messages in thread
From: Jarek Poplawski @ 2008-05-15 10:29 UTC (permalink / raw)
To: Kingsley Foreman; +Cc: Eric Dumazet, Andrew Morton, linux-kernel, netdev
On Thu, May 15, 2008 at 07:36:54PM +0930, Kingsley Foreman wrote:
...
> I just rolled back the kernel to 2.6.24 and I'm seeing the same thing.
>
> I was using 2.6.22 before and didn't see the problem. txqueuelen on the
> bond0 interface is 0 (the default).
So, could you check whether the size matters here...?
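E.g. (a sketch using iproute2; ifconfig can set it as well):
# raise the queue length, then watch the drop counters again
/sbin/ip link set dev bond0 txqueuelen 1000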
Jarek P.
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 10:06 ` Kingsley Foreman
2008-05-15 10:29 ` Jarek Poplawski
@ 2008-05-15 15:59 ` Patrick McHardy
2008-05-15 16:09 ` Patrick McHardy
2 siblings, 0 replies; 19+ messages in thread
From: Patrick McHardy @ 2008-05-15 15:59 UTC (permalink / raw)
To: Kingsley Foreman
Cc: Jarek Poplawski, Eric Dumazet, Andrew Morton, linux-kernel,
netdev
Kingsley Foreman wrote:
> I just rolled back the kernel to 2.6.24 and I'm seeing the same thing.
>
> I was using 2.6.22 before and didn't see the problem. txqueuelen on the
> bond0 interface is 0 (the default).
Did you also update iproute when moving to a newer kernel?
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 10:06 ` Kingsley Foreman
2008-05-15 10:29 ` Jarek Poplawski
2008-05-15 15:59 ` Patrick McHardy
@ 2008-05-15 16:09 ` Patrick McHardy
2008-05-15 18:09 ` Jarek Poplawski
2008-05-15 18:25 ` Jarek Poplawski
2 siblings, 2 replies; 19+ messages in thread
From: Patrick McHardy @ 2008-05-15 16:09 UTC (permalink / raw)
To: Kingsley Foreman
Cc: Jarek Poplawski, Eric Dumazet, Andrew Morton, linux-kernel,
netdev
Kingsley Foreman wrote:
> I just rolled back the kernel to 2.6.24 and I'm seeing the same thing.
>
> I was using 2.6.22 before and didn't see the problem. txqueuelen on the
> bond0 interface is 0 (the default).
That might explain things, although it shouldn't have worked before
either.
CBQ creates default pfifo qdiscs for its leaves; these use a limit
of txqueuelen, or 1 if it is zero. So even small bursts will cause
drops. Do things improve if you set txqueuelen to a larger value
*before* configuring the qdiscs?
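For example (a sketch reusing your numbers; the point is only the ordering):
# set txqueuelen first, so the default pfifo leaves inherit a sane limit
/sbin/ip link set dev bond0 txqueuelen 1000
/sbin/tc qdisc del dev bond0 root
/sbin/tc qdisc add dev bond0 root handle 1 cbq bandwidth 2000Mbit avpkt 1000 cell 8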
Another thing is that CBQ on a bond device will probably not work properly
at all; it needs a real device, since it measures the timing between
dequeue events for idle-time estimation. On software devices this
doesn't work.
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 16:09 ` Patrick McHardy
@ 2008-05-15 18:09 ` Jarek Poplawski
2008-05-15 18:14 ` Patrick McHardy
2008-05-15 18:25 ` Jarek Poplawski
1 sibling, 1 reply; 19+ messages in thread
From: Jarek Poplawski @ 2008-05-15 18:09 UTC (permalink / raw)
To: Patrick McHardy
Cc: Kingsley Foreman, Eric Dumazet, Andrew Morton, linux-kernel,
netdev
On Thu, May 15, 2008 at 06:09:36PM +0200, Patrick McHardy wrote:
> Kingsley Foreman wrote:
>> I just rolled back the kernel to 2.6.24 and I'm seeing the same thing.
>>
>> I was using 2.6.22 before and didn't see the problem. txqueuelen on the
>> bond0 interface is 0 (the default).
>
> That might explain things, although it shouldn't have worked before
> either.
>
> CBQ creates default pfifo qdiscs for its leaves; these use a limit
> of txqueuelen, or 1 if it is zero. So even small bursts will cause
> drops. Do things improve if you set txqueuelen to a larger value
> *before* configuring the qdiscs?
Kingsley wrote to me that even after changing txqueuelen to 1000 the
"dropped" number didn't change much. A debugging patch with printks
around every "sch->qstats.drops++" showed only the one at the end of
cbq_enqueue(). I've asked him to check "pfifo limit 1000" for these drops
tomorrow, too.
> Another thing is that CBQ on a bond device will probably not work properly
> at all; it needs a real device, since it measures the timing between
> dequeue events for idle-time estimation. On software devices this
> doesn't work.
Right, but these drops without any sign of overactions or overlimits
seem to show it's not about shaping (or it's not counted/documented
well enough).
Regards,
Jarek P.
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 18:09 ` Jarek Poplawski
@ 2008-05-15 18:14 ` Patrick McHardy
0 siblings, 0 replies; 19+ messages in thread
From: Patrick McHardy @ 2008-05-15 18:14 UTC (permalink / raw)
To: Jarek Poplawski
Cc: Kingsley Foreman, Eric Dumazet, Andrew Morton, linux-kernel,
netdev
Jarek Poplawski wrote:
> On Thu, May 15, 2008 at 06:09:36PM +0200, Patrick McHardy wrote:
>> Kingsley Foreman wrote:
>>> I just rolled back the kernel to 2.6.24 and I'm seeing the same thing.
>>>
>>> I was using 2.6.22 before and didn't see the problem. txqueuelen on the
>>> bond0 interface is 0 (the default).
>> That might explain things, although it shouldn't have worked before
>> either.
>>
>> CBQ creates default pfifo qdiscs for its leaves; these use a limit
>> of txqueuelen, or 1 if it is zero. So even small bursts will cause
>> drops. Do things improve if you set txqueuelen to a larger value
>> *before* configuring the qdiscs?
>
> Kingsley wrote to me that even after changing txqueuelen to 1000 the
> "dropped" number didn't change much. A debugging patch with printks
> around every "sch->qstats.drops++" showed only the one at the end of
> cbq_enqueue().
That's where packets dropped by the default pfifo would be accounted.
Did you change txqueuelen before or after setting up the qdiscs?
> I've asked him to check "pfifo limit 1000" for these drops tomorrow, too.
That will clear it up.
>> Another thing is that CBQ on a bond device will probably not work properly
>> at all; it needs a real device, since it measures the timing between
>> dequeue events for idle-time estimation. On software devices this
>> doesn't work.
>
> Right, but these drops without any sign of overactions or overlimits
> seem to show it's not about shaping (or it's not counted/documented
> well enough).
Yes, these drops are probably unrelated; I just thought I'd mention it.
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 16:09 ` Patrick McHardy
2008-05-15 18:09 ` Jarek Poplawski
@ 2008-05-15 18:25 ` Jarek Poplawski
2008-05-15 18:32 ` Patrick McHardy
1 sibling, 1 reply; 19+ messages in thread
From: Jarek Poplawski @ 2008-05-15 18:25 UTC (permalink / raw)
To: Patrick McHardy
Cc: Kingsley Foreman, Eric Dumazet, Andrew Morton, linux-kernel,
netdev
On Thu, May 15, 2008 at 06:09:36PM +0200, Patrick McHardy wrote:
...
> Do things improve if you set txqueuelen to a larger value
> *before* configuring the qdiscs?
BTW, I hope it was *before*, but since pfifo_fast_enqueue() uses
"qdisc->dev->tx_queue_len" directly, does it really matter? (As long as
it's changed before the test, of course...)
Jarek P.
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 18:25 ` Jarek Poplawski
@ 2008-05-15 18:32 ` Patrick McHardy
2008-05-15 18:46 ` Jarek Poplawski
0 siblings, 1 reply; 19+ messages in thread
From: Patrick McHardy @ 2008-05-15 18:32 UTC (permalink / raw)
To: Jarek Poplawski
Cc: Kingsley Foreman, Eric Dumazet, Andrew Morton, linux-kernel,
netdev
Jarek Poplawski wrote:
> On Thu, May 15, 2008 at 06:09:36PM +0200, Patrick McHardy wrote:
> ...
>> Do things improve if you set txqueuelen to a larger value
>> *before* configuring the qdiscs?
>
> BTW, I hope it was *before*, but since pfifo_fast_enqueue() uses
> "qdisc->dev->tx_queue_len" directly, does it really matter? (As long as
> it's changed before the test, of course...)
Yes, CBQ uses pfifo, not pfifo_fast. pfifo uses txqueuelen to
initialize q->limit, and it is q->limit that is used during ->enqueue().
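So raising txqueuelen afterwards doesn't help an already-installed leaf;
you'd attach an explicit pfifo with a bigger limit instead, e.g. (a sketch):
# override the inherited limit on the CBQ leaf directly
/sbin/tc qdisc add dev bond0 parent 1: pfifo limit 1000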
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 18:32 ` Patrick McHardy
@ 2008-05-15 18:46 ` Jarek Poplawski
2008-05-15 21:27 ` Kingsley Foreman
0 siblings, 1 reply; 19+ messages in thread
From: Jarek Poplawski @ 2008-05-15 18:46 UTC (permalink / raw)
To: Patrick McHardy
Cc: Kingsley Foreman, Eric Dumazet, Andrew Morton, linux-kernel,
netdev
On Thu, May 15, 2008 at 08:32:44PM +0200, Patrick McHardy wrote:
> Jarek Poplawski wrote:
>> On Thu, May 15, 2008 at 06:09:36PM +0200, Patrick McHardy wrote:
>> ...
>>> Do things improve if you set txqueuelen to a larger value
>>> *before* configuring the qdiscs?
>>
>> BTW, I hope it was *before*, but since pfifo_fast_enqueue() uses
>> "qdisc->dev->tx_queue_len" directly, does it really matter? (As long as
>> it's changed before the test, of course...)
>
>
> Yes, CBQ uses pfifo, not pfifo_fast. pfifo uses txqueuelen to
> initialize q->limit, and it is q->limit that is used during ->enqueue().
...My bad! I missed this, and this (alone!?) seems to explain the
puzzle. So I hope it really was a matter of *not before* (and that not
only size matters...)
Thanks,
Jarek P.
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 18:46 ` Jarek Poplawski
@ 2008-05-15 21:27 ` Kingsley Foreman
2008-05-16 5:49 ` Jarek Poplawski
0 siblings, 1 reply; 19+ messages in thread
From: Kingsley Foreman @ 2008-05-15 21:27 UTC (permalink / raw)
To: Jarek Poplawski, Patrick McHardy
Cc: Eric Dumazet, Andrew Morton, linux-kernel, netdev
--------------------------------------------------
From: "Jarek Poplawski" <jarkao2@gmail.com>
Sent: Friday, May 16, 2008 4:16 AM
To: "Patrick McHardy" <kaber@trash.net>
Cc: "Kingsley Foreman" <kingsley@internode.com.au>; "Eric Dumazet"
<dada1@cosmosbay.com>; "Andrew Morton" <akpm@linux-foundation.org>;
<linux-kernel@vger.kernel.org>; <netdev@vger.kernel.org>
Subject: Re: NET_SCHED cbq dropping too many packets on a bonding interface
> ...
> ...My bad! I missed this, and this (alone!?) seems to explain the
> puzzle. So I hope it really was a matter of *not before* (and that not
> only size matters...)
>
> Thanks,
> Jarek P.
>
Running
tc qdisc add dev bond0 root pfifo limit 1000
or
tc qdisc add dev bond0 root handle 1: cbq bandwidth 2000Mbit avpkt 1000 cell 0
tc qdisc add dev bond0 parent 1: pfifo limit 1000
doesn't appear to be dropping packets.
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-15 21:27 ` Kingsley Foreman
@ 2008-05-16 5:49 ` Jarek Poplawski
2008-05-16 6:12 ` Kingsley Foreman
0 siblings, 1 reply; 19+ messages in thread
From: Jarek Poplawski @ 2008-05-16 5:49 UTC (permalink / raw)
To: Kingsley Foreman
Cc: Patrick McHardy, Eric Dumazet, Andrew Morton, linux-kernel,
netdev
On Fri, May 16, 2008 at 06:57:23AM +0930, Kingsley Foreman wrote:
...
> Running
>
> tc qdisc add dev bond0 root pfifo limit 1000
>
> or
>
> tc qdisc add dev bond0 root handle 1: cbq bandwidth 2000Mbit avpkt 1000 cell 0
> tc qdisc add dev bond0 parent 1: pfifo limit 1000
>
>
> doesn't appear to be dropping packets.
>
Great! So it looks like there is no error here, unless significantly
bigger queues are needed to stop this dropping compared to 2.6.22.
You could try lowering this limit now to something like 10 to find where
drops start to appear. Why 2.6.22 doesn't need this at all is a mystery
anyway (the old scheduler?), and it would really take some work (like a
git bisection) to find the reason for more than a 5 or 10 packet
difference.
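E.g. (same form as before, just stepping the limit down):
# shrink the leaf's queue until the drops reappear
tc qdisc change dev bond0 parent 1: pfifo limit 10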
Thanks,
Jarek P.
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-16 5:49 ` Jarek Poplawski
@ 2008-05-16 6:12 ` Kingsley Foreman
2008-05-16 7:01 ` Jarek Poplawski
0 siblings, 1 reply; 19+ messages in thread
From: Kingsley Foreman @ 2008-05-16 6:12 UTC (permalink / raw)
To: Jarek Poplawski
Cc: Patrick McHardy, Eric Dumazet, Andrew Morton, linux-kernel,
netdev
OK, after playing around a bit: if I use
tc qdisc change dev bond0 parent 1: pfifo limit 30
the dropped packets go away. I'm not sure if that is considered normal or
not; however, any number under 30 gives me issues.
Kingsley
----- Original Message -----
From: "Jarek Poplawski" <jarkao2@gmail.com>
To: "Kingsley Foreman" <kingsley@internode.com.au>
Cc: "Patrick McHardy" <kaber@trash.net>; "Eric Dumazet"
<dada1@cosmosbay.com>; "Andrew Morton" <akpm@linux-foundation.org>;
<linux-kernel@vger.kernel.org>; <netdev@vger.kernel.org>
Sent: Friday, May 16, 2008 3:19 PM
Subject: Re: NET_SCHED cbq dropping too many packets on a bonding interface
> ...
> You could try lowering this limit now to something like 10 to find where
> drops start to appear.
> ...
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-16 6:12 ` Kingsley Foreman
@ 2008-05-16 7:01 ` Jarek Poplawski
2008-05-16 7:22 ` Jarek Poplawski
0 siblings, 1 reply; 19+ messages in thread
From: Jarek Poplawski @ 2008-05-16 7:01 UTC (permalink / raw)
To: Kingsley Foreman
Cc: Patrick McHardy, Eric Dumazet, Andrew Morton, linux-kernel,
netdev
On Fri, May 16, 2008 at 03:42:18PM +0930, Kingsley Foreman wrote:
...
> OK, after playing around a bit: if I use
>
> tc qdisc change dev bond0 parent 1: pfifo limit 30
>
> the dropped packets go away. I'm not sure if that is considered normal or
> not; however, any number under 30 gives me issues.
If there are no significant differences in the configs between 2.6.22
and 2.6.24/25 (e.g. the things mentioned earlier by Eric), IMHO it's "more
than normal", but as I've written, it would take a lot of your time and
work to check the reason.
Jarek P.
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-16 7:01 ` Jarek Poplawski
@ 2008-05-16 7:22 ` Jarek Poplawski
2008-05-16 11:56 ` Patrick McHardy
0 siblings, 1 reply; 19+ messages in thread
From: Jarek Poplawski @ 2008-05-16 7:22 UTC (permalink / raw)
To: Kingsley Foreman
Cc: Patrick McHardy, Eric Dumazet, Andrew Morton, linux-kernel,
netdev
On Fri, May 16, 2008 at 07:01:02AM +0000, Jarek Poplawski wrote:
> On Fri, May 16, 2008 at 03:42:18PM +0930, Kingsley Foreman wrote:
> ...
> > OK, after playing around a bit: if I use
> >
> > tc qdisc change dev bond0 parent 1: pfifo limit 30
> >
> > the dropped packets go away. I'm not sure if that is considered normal or
> > not; however, any number under 30 gives me issues.
>
> If there are no significant differences in the configs between 2.6.22
> and 2.6.24/25 (e.g. the things mentioned earlier by Eric), IMHO it's "more
> than normal", but as I've written, it would take a lot of your time and
> work to check the reason.
BTW, it still doesn't have to mean an error: e.g. it could happen if
kernel throughput got better while the NIC tx speed stayed the same. So it
probably shouldn't bother you too much as long as there is no visible
impact on latency or rates.
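You can keep an eye on the counters with something like:
# -s shows per-qdisc byte/packet/drop statistics
/sbin/tc -s qdisc show dev bond0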
Jarek P.
* Re: NET_SCHED cbq dropping too many packets on a bonding interface
2008-05-16 7:22 ` Jarek Poplawski
@ 2008-05-16 11:56 ` Patrick McHardy
0 siblings, 0 replies; 19+ messages in thread
From: Patrick McHardy @ 2008-05-16 11:56 UTC (permalink / raw)
To: Jarek Poplawski
Cc: Kingsley Foreman, Eric Dumazet, Andrew Morton, linux-kernel,
netdev
Jarek Poplawski wrote:
> On Fri, May 16, 2008 at 07:01:02AM +0000, Jarek Poplawski wrote:
>> On Fri, May 16, 2008 at 03:42:18PM +0930, Kingsley Foreman wrote:
>> ...
>>> OK, after playing around a bit: if I use
>>>
>>> tc qdisc change dev bond0 parent 1: pfifo limit 30
>>>
>>> the dropped packets go away. I'm not sure if that is considered normal or
>>> not; however, any number under 30 gives me issues.
>> If there are no significant differences in the configs between 2.6.22
>> and 2.6.24/25 (e.g. the things mentioned earlier by Eric), IMHO it's "more
>> than normal", but as I've written, it would take a lot of your time and
>> work to check the reason.
>
> BTW, it still doesn't have to mean an error: e.g. it could happen if
> kernel throughput got better while the NIC tx speed stayed the same. So it
> probably shouldn't bother you too much as long as there is no visible
> impact on latency or rates.
Yes. I don't think this is an error; the configuration
was simply broken.