netdev.vger.kernel.org archive mirror
* Questions on kernel skb send / netdev queue monitoring
@ 2008-11-05 17:53 Andre Schwarz
  2008-11-05 18:30 ` Eric Dumazet
From: Andre Schwarz @ 2008-11-05 17:53 UTC (permalink / raw)
  To: netdev

Hi,

we're running 2.6.27 on an MPC8343-based board.
The board works as a camera and is supposed to stream image data
over 1000M Ethernet.

Ethernet is connected via two Vitesse VSC8601 RGMII PHYs, i.e. "eth0" and
"eth1" are present.

Basically the system has been running fine for quite some time, starting
with kernel 2.6.19.
Lately I've been having some trouble with performance and errors.

Obviously I'm doing something wrong ... hopefully someone can enlighten me.


How the system works:

- The kernel driver allocates a static list of skbs that holds a complete
image. This can be up to 4k skbs, depending on the MTU.
- The imaging device (an FPGA on PCI) initiates DMA into the skbs.
- The driver sends the skbs out.


1. Sending

This is my "inner loop" send function; it is called for every skb in the
list.

static inline int gevss_send_get_ehdr(TGevStream *gevs, struct sk_buff *skb)
{
        int result;
        struct sk_buff *slow_skb = skb_clone(skb, GFP_ATOMIC);

        atomic_inc(&slow_skb->users);
        result = gevs->rt->u.dst.output(slow_skb);
        kfree_skb(slow_skb);

        return result;
}

Is there really any need for cloning each skb before sending?
I'd really like to send the static skb without consuming it. How can
this be done?

Is "gevs->rt->u.dst.output(slow_skb)" reasonable?
What about "hard_start_xmit" and/or "dev_queue_xmit" inside netdev?
Are these functions supposed to be used by other drivers?
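For what it's worth, the reference-count arithmetic can be sketched in plain userspace C (a toy model with invented names, not kernel code): the output path consumes exactly one reference, so either the clone above or an extra reference taken with skb_get() on the original keeps the buffer alive. It also suggests the atomic_inc()/kfree_skb() pair around the clone in the posted function cancels out.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-in for struct sk_buff: just a reference count and a
 * "freed" flag so the buffer's fate can be observed. */
struct fake_skb {
    int users;
    int freed;
};

static struct fake_skb *fake_alloc_skb(void)
{
    struct fake_skb *skb = calloc(1, sizeof(*skb));
    skb->users = 1;
    return skb;
}

/* Model of skb_get(): take an extra reference. */
static struct fake_skb *fake_skb_get(struct fake_skb *skb)
{
    skb->users++;
    return skb;
}

/* Model of kfree_skb(): drop a reference, "free" at zero. */
static void fake_kfree_skb(struct fake_skb *skb)
{
    if (--skb->users == 0)
        skb->freed = 1;
}

/* Model of dst.output()/dev_queue_xmit(): the stack consumes
 * exactly one reference when it transmits the skb. */
static void fake_output(struct fake_skb *skb)
{
    fake_kfree_skb(skb);
}
```

Taking an extra reference before handing the skb to the (modeled) stack leaves the original alive afterwards, ready to be refilled by DMA and resent.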

What result can I expect if there's a failure, i.e. if the HW queue is full?
How should this be handled? Retry, i.e. send again after a while?
Can I query the xmit queue size/usage?

Currently I'm checking for NETDEV_TX_OK and NETDEV_TX_BUSY.
Is this reasonable?
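If it helps, the contract behind those return codes can be modeled in userspace C (toy ring and function names invented for illustration): NETDEV_TX_BUSY means the packet was not consumed, so the caller may resubmit the same skb once the ring drains. A real driver would wait for the TX-completion interrupt or check netif_queue_stopped() rather than retrying blindly.

```c
#include <assert.h>

#define NETDEV_TX_OK   0
#define NETDEV_TX_BUSY 1

/* Toy TX ring: fake_xmit() returns NETDEV_TX_BUSY while the ring
 * is full; fake_tx_complete() models the completion interrupt. */
#define RING_SIZE 4
static int ring_used;

static int fake_xmit(void)
{
    if (ring_used == RING_SIZE)
        return NETDEV_TX_BUSY;   /* packet NOT consumed: caller may retry */
    ring_used++;
    return NETDEV_TX_OK;
}

static void fake_tx_complete(void)
{
    if (ring_used > 0)
        ring_used--;
}

/* Send with bounded retries: on BUSY, let the "hardware" drain one
 * slot and try again.  Returns NETDEV_TX_OK, or NETDEV_TX_BUSY if
 * the retry budget is exhausted. */
static int send_with_retry(int max_retries)
{
    int i, rc = NETDEV_TX_BUSY;

    for (i = 0; i <= max_retries; i++) {
        rc = fake_xmit();
        if (rc == NETDEV_TX_OK)
            return rc;
        fake_tx_complete();  /* in reality: wait for the completion IRQ */
    }
    return rc;
}
```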


2. "overruns"

I've never seen this before. The overrun counter increments quite fast
even during proper operation.
It looks like this is also a symptom of not throttling the sender when the
xmit queue is full ...  :-(
How can I avoid this?

eth0      Link encap:Ethernet  HWaddr 00:0C:8D:30:40:25
          inet addr:192.168.65.55  Bcast:192.168.65.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:929 errors:0 dropped:0 overruns:0 frame:0
          TX packets:180937 errors:0 dropped:0 overruns:54002 carrier:0   
          collisions:0 txqueuelen:1000
          RX bytes:65212 (63.6 KiB)  TX bytes:262068658 (249.9 MiB)
          Base address:0xa000




Any help is welcome.

regards,
Andre




* Re: Questions on kernel skb send / netdev queue monitoring
  2008-11-05 17:53 Questions on kernel skb send / netdev queue monitoring Andre Schwarz
@ 2008-11-05 18:30 ` Eric Dumazet
  2008-11-07  9:39   ` Andre Schwarz
From: Eric Dumazet @ 2008-11-05 18:30 UTC (permalink / raw)
  To: Andre Schwarz; +Cc: netdev

Andre Schwarz wrote:
> Hi,
> 
> we're running 2.6.27 on a MPC8343 based board.
> The board is working as a camera and is supposed to stream image data
> over 1000M Ethernet.
> 
> Ethernet is connected via 2x Vitesse VSC8601 RGMII PHY, i.e. "eth0" and
> "eth1" present.
> 
> Basically the system is running fine for quite some time - starting with
> kernel 2.6.19.
> Lately I have some trouble regarding performance and errors.
> 
> Obviously I'm doing something wrong ... hopefully someone can enlighten me.
> 
> 
> How the system works :
> 
> - Kernel driver allocates static list of skb to hold a complete image.
> This can be up to 4k skb depending on mtu.
> - Imaging device (FPGA @ PCI) initiates DMA into skb.
> - driver sends the skb out.
> 
> 
> 1. Sending
> 
> This is my "inner loop" send function and is called for every skb in the
> list.
> 
> static inline int gevss_send_get_ehdr(TGevStream *gevs, struct sk_buff *skb)
> {
>         int result;
>         struct sk_buff *slow_skb = skb_clone(skb, GFP_ATOMIC);
> 
>         atomic_inc(&slow_skb->users);
>         result = gevs->rt->u.dst.output(slow_skb);
>         kfree_skb(slow_skb);
> 
>         return result;
> }
> 
> Is there really any need for cloning each skb before sending ?
> I'd really like to send the static skb without consuming it. How can
> this be done ?
> 

You have answered your own question: you clone the skb because you want to keep the skb.

> Is "gevs->rt->u.dst.output(slow_skb)" reasonable ?
> What about "hard_start_xmit" and/or "dev_queue_xmit" inside netdev ?
> Are these functions supposed to be used by other drivers ?
> 
> What result can I expect if there's a failure, i.e. the HW-queue is full ?
> How should this be handled ? retry,i.e. send again after a while ?
> Can I query the xmit queue size/usage ?
> 
> Actually I'm checking for NETDEV_TX_OK and NETDEV_TX_BUSY.
> Is this reasonable ?
> 
> 
> 2. "overruns"
> 
> I've never seen that before. The overrun counter is incrementing quite
> fast even during proper operation.
> Looks like this is also an issue with not throttling the sender when the
> xmit queue is full ...  :-( 
> How can I avoid this ?
> 
> eth0      Link encap:Ethernet  HWaddr 00:0C:8D:30:40:25
>           inet addr:192.168.65.55  Bcast:192.168.65.255  Mask:255.255.255.0
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:929 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:180937 errors:0 dropped:0 overruns:54002 carrier:0   
>           collisions:0 txqueuelen:1000
>           RX bytes:65212 (63.6 KiB)  TX bytes:262068658 (249.9 MiB)
>           Base address:0xa000

If your driver has to push 4096 skbs at once, and you don't want to handle
overruns, you might need to change the eth0 settings:

ifconfig eth0 txqueuelen 5000




* Re: Questions on kernel skb send / netdev queue monitoring
  2008-11-05 18:30 ` Eric Dumazet
@ 2008-11-07  9:39   ` Andre Schwarz
From: Andre Schwarz @ 2008-11-07  9:39 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

Eric,

thanks for your reply, but there are still open questions.


Eric Dumazet wrote:
> Andre Schwarz wrote:
>> Hi,
>>
>> we're running 2.6.27 on a MPC8343 based board.
>> The board is working as a camera and is supposed to stream image data
>> over 1000M Ethernet.
>>
>> Ethernet is connected via 2x Vitesse VSC8601 RGMII PHY, i.e. "eth0" and
>> "eth1" present.
>>
>> Basically the system is running fine for quite some time - starting with
>> kernel 2.6.19.
>> Lately I have some trouble regarding performance and errors.
>>
>> Obviously I'm doing something wrong ... hopefully someone can
>> enlighten me.
>>
>>
>> How the system works :
>>
>> - Kernel driver allocates static list of skb to hold a complete image.
>> This can be up to 4k skb depending on mtu.
>> - Imaging device (FPGA @ PCI) initiates DMA into skb.
>> - driver sends the skb out.
>>
>>
>> 1. Sending
>>
>> This is my "inner loop" send function and is called for every skb in the
>> list.
>>
>> static inline int gevss_send_get_ehdr(TGevStream *gevs, struct
>> sk_buff *skb)
>> {
>>         int result;
>>         struct sk_buff *slow_skb = skb_clone(skb, GFP_ATOMIC);
>>
>>         atomic_inc(&slow_skb->users);
>>         result = gevs->rt->u.dst.output(slow_skb);
>>         kfree_skb(slow_skb);
>>
>>         return result;
>> }
>>
>> Is there really any need for cloning each skb before sending ?
>> I'd really like to send the static skb without consuming it. How can
>> this be done ?
>>
>
> You have replied to yourself... you clone skb because you want to keep
> skb.
>
As long as this is the fastest way, fine. But wouldn't a simple use-count
increment be better?
Why doesn't incrementing the user count before sending work?
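A toy userspace model (invented names, not kernel code) of why a bare reference-count increment on the original skb is not a general substitute for cloning: the skb struct carries queue linkage and per-transmit state, so the same struct cannot sit on a transmit queue twice, whereas a clone gets a fresh struct that merely shares the data.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy skb: besides a refcount, it has queue-linkage state, as the
 * real struct sk_buff does (next/prev pointers, etc.). */
struct toy_skb {
    int users;
    int queued;   /* 1 while sitting on a transmit queue */
    char *data;   /* payload, shared between clones */
};

static struct toy_skb *toy_alloc(char *data)
{
    struct toy_skb *skb = calloc(1, sizeof(*skb));
    skb->users = 1;
    skb->data = data;
    return skb;
}

/* Model of skb_clone(): fresh struct, shared data. */
static struct toy_skb *toy_clone(struct toy_skb *skb)
{
    return toy_alloc(skb->data);
}

/* Model of enqueueing for transmit: the same struct must not be
 * queued twice, no matter how many references it has. */
static int toy_enqueue(struct toy_skb *skb)
{
    if (skb->queued)
        return -1;   /* would corrupt the queue's linked list */
    skb->queued = 1;
    return 0;
}
```

So a use-count increment alone only helps when the previous transmit of that skb has already completed; while a copy is still in flight, a clone (fresh struct, shared pages) is what makes resending safe.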
>> Is "gevs->rt->u.dst.output(slow_skb)" reasonable ?
>> What about "hard_start_xmit" and/or "dev_queue_xmit" inside netdev ?
>> Are these functions supposed to be used by other drivers ?
>>
>> What result can I expect if there's a failure, i.e. the HW-queue is
>> full ?
>> How should this be handled ? retry,i.e. send again after a while ?
>> Can I query the xmit queue size/usage ?
>>
>> Actually I'm checking for NETDEV_TX_OK and NETDEV_TX_BUSY.
>> Is this reasonable ?
>>
>>
>> 2. "overruns"
>>
>> I've never seen that before. The overrun counter is incrementing quite
>> fast even during proper operation.
>> Looks like this is also an issue with not throttling the sender when the
>> xmit queue is full ...  :-( How can I avoid this ?
>>
>> eth0      Link encap:Ethernet  HWaddr 00:0C:8D:30:40:25
>>           inet addr:192.168.65.55  Bcast:192.168.65.255 
>> Mask:255.255.255.0
>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>           RX packets:929 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:180937 errors:0 dropped:0 overruns:54002
>> carrier:0             collisions:0 txqueuelen:1000
>>           RX bytes:65212 (63.6 KiB)  TX bytes:262068658 (249.9 MiB)
>>           Base address:0xa000
>
> If your driver has to push 4096 skb at once, and you dont want to
> handle overruns,
> you might need to change eth0 settings
>
> ifconfig eth0 txqueuelen 5000
>
OK - but I definitely want to handle overruns, or at least get notified
when the queue is almost full.
Querying the queue status would be fine, too.

Since I have data sources capable of delivering much more than 125 MB/s,
_any_ queue will overflow sooner or later without some kind of flow control.
Can you tell me how the tx queue can be monitored?
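Not an authoritative answer, but from userspace one crude way to watch the TX side is to poll the per-device counters in /proc/net/dev (the TX "fifo" column is what ifconfig reports as "overruns"). A sketch that parses one /proc/net/dev-style line; the helper name is invented, the column layout is the usual 2.6-era one, and the sample numbers in the test come from the ifconfig output above:

```c
#include <assert.h>
#include <stdio.h>

/* Parsed TX counters from one /proc/net/dev line. */
struct tx_stats {
    unsigned long bytes, packets, errs, drops, fifo;
};

/* /proc/net/dev line layout (2.6-era): "iface:" followed by 8 RX
 * fields (bytes packets errs drop fifo frame compressed multicast)
 * and then the TX fields (bytes packets errs drop fifo colls
 * carrier compressed).  We stop after the TX fifo column. */
static int parse_net_dev_line(const char *line, struct tx_stats *tx)
{
    unsigned long rx[8];
    char name[32];

    if (sscanf(line,
               " %31[^:]: %lu %lu %lu %lu %lu %lu %lu %lu"
               " %lu %lu %lu %lu %lu",
               name, &rx[0], &rx[1], &rx[2], &rx[3], &rx[4],
               &rx[5], &rx[6], &rx[7],
               &tx->bytes, &tx->packets, &tx->errs, &tx->drops,
               &tx->fifo) != 14)
        return -1;
    return 0;
}
```

Inside the kernel, netif_queue_stopped(dev) tells you whether the driver has stopped the queue, though polling either of these is no substitute for real flow control toward the data source.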


regards,
Andre



MATRIX VISION GmbH, Talstraße 16, DE-71570 Oppenweiler - Commercial register: Amtsgericht Stuttgart, HRB 271090
Managing directors: Gerhard Thullner, Werner Armingeon, Uwe Furtner
