From: Andre Schwarz <andre.schwarz@matrix-vision.de>
To: netdev@vger.kernel.org
Subject: Questions on kernel skb send / netdev queue monitoring
Date: Wed, 05 Nov 2008 18:53:23 +0100
Message-ID: <2009170231@web.de>

Hi,

we're running 2.6.27 on an MPC8343-based board.
The board works as a camera and is supposed to stream image data
over gigabit Ethernet.

Ethernet is connected via two Vitesse VSC8601 RGMII PHYs, i.e. "eth0" and
"eth1" are both present.

The system has basically been running fine for quite some time, starting
with kernel 2.6.19.
Lately, however, I've been seeing performance problems and errors.

Obviously I'm doing something wrong ... hopefully someone can enlighten me.


How the system works:

- The kernel driver allocates a static list of skbs to hold a complete
image; this can be up to 4k skbs, depending on the MTU (a rough
allocation sketch follows below).
- The imaging device (FPGA on PCI) initiates DMA into the skbs.
- The driver sends the skbs out.
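
For concreteness, the allocation looks roughly like this (a sketch only;
MAX_SKBS, nr_skbs and frag_len are placeholders derived from image size
and MTU):

#include <linux/skbuff.h>

#define MAX_SKBS 4096   /* placeholder upper bound for this sketch */

static struct sk_buff *image_skbs[MAX_SKBS];

/* Preallocate one skb per image fragment; the FPGA DMAs into these later. */
static int alloc_image_skbs(unsigned int nr_skbs, unsigned int frag_len)
{
        unsigned int i;

        for (i = 0; i < nr_skbs; i++) {
                image_skbs[i] = dev_alloc_skb(frag_len);
                if (!image_skbs[i])
                        return -ENOMEM; /* caller unwinds the partial list */
        }
        return 0;
}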


1. Sending

This is my "inner loop" send function; it is called for every skb in the
list.

static inline int gevss_send_get_ehdr(TGevStream *gevs, struct sk_buff *skb)
{
        int result;
        struct sk_buff *slow_skb = skb_clone(skb, GFP_ATOMIC);

        if (!slow_skb)
                return -ENOMEM;

        /* extra reference so the kfree_skb() below doesn't free the
         * clone while the output path still owns it */
        atomic_inc(&slow_skb->users);
        result = gevs->rt->u.dst.output(slow_skb);
        kfree_skb(slow_skb);

        return result;
}

Is there really any need to clone each skb before sending?
I'd really like to send the static skb without consuming it. How can
this be done?

Is "gevs->rt->u.dst.output(slow_skb)" reasonable ?
What about "hard_start_xmit" and/or "dev_queue_xmit" inside netdev ?
Are these functions supposed to be used by other drivers ?
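
What I have in mind is roughly the following (a sketch only; I'm assuming
skb_get() plus dev_queue_xmit() is the right way to keep ownership of the
buffer):

/* Take an extra reference before handing the skb to the stack, so the
 * stack's kfree_skb() only drops its own reference and our static skb
 * survives for the next image.  An assumption, not verified. */
static int send_keep_skb(struct sk_buff *skb)
{
        skb_get(skb);                /* bump skb->users */
        return dev_queue_xmit(skb);  /* returns NET_XMIT_* codes */
}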

What result can I expect if there is a failure, i.e. the HW queue is full?
How should this be handled? Retry, i.e. send again after a while?
Can I query the xmit queue size/usage?

Currently I'm checking for NETDEV_TX_OK and NETDEV_TX_BUSY.
Is this reasonable?
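
The retry policy I could imagine looks like this (again just a sketch,
assuming netif_queue_stopped() reflects a full HW queue):

#include <linux/netdevice.h>
#include <linux/delay.h>

static int send_with_backoff(struct net_device *dev, struct sk_buff *skb)
{
        int tries = 10;                 /* arbitrary cap for this sketch */

        /* wait for the driver to wake its TX queue again */
        while (netif_queue_stopped(dev) && --tries)
                udelay(100);

        if (!tries)
                return -EBUSY;

        return dev_queue_xmit(skb);     /* NET_XMIT_SUCCESS, NET_XMIT_DROP, ... */
}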


2. "overruns"

I've never seen this before: the TX overrun counter increments quite
fast even during otherwise proper operation.
It looks like this is also caused by not throttling the sender when the
xmit queue is full ...  :-(
How can I avoid this? (A sketch of how I would try to read the queue
backlog follows after the ifconfig output.)

eth0      Link encap:Ethernet  HWaddr 00:0C:8D:30:40:25
          inet addr:192.168.65.55  Bcast:192.168.65.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:929 errors:0 dropped:0 overruns:0 frame:0
          TX packets:180937 errors:0 dropped:0 overruns:54002 carrier:0   
          collisions:0 txqueuelen:1000
          RX bytes:65212 (63.6 KiB)  TX bytes:262068658 (249.9 MiB)
          Base address:0xa000
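
If it helps, this is how I would try to read the xmit queue backlog (a
sketch, assuming the 2.6.27 netdev_queue API is the right place to look;
I'm not sure this is a supported way to monitor the queue):

#include <linux/netdevice.h>
#include <net/sch_generic.h>

/* Read the root qdisc backlog of the first (and only) TX queue. */
static unsigned int xmit_backlog(struct net_device *dev)
{
        struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

        return txq->qdisc ? txq->qdisc->q.qlen : 0;
}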

Any help is welcome.

regards,
Andre
