From: Arnd Bergmann <arnd@arndb.de>
To: Stefan Wahren <stefan.wahren@i2se.com>
Cc: davem@davemloft.net, robh+dt@kernel.org, pawel.moll@arm.com,
mark.rutland@arm.com, ijc+devicetree@hellion.org.uk,
galak@codeaurora.org, f.fainelli@gmail.com,
netdev@vger.kernel.org, devicetree@vger.kernel.org
Subject: Re: [PATCH RFC 2/2] net: qualcomm: new Ethernet over SPI driver for QCA7000
Date: Tue, 29 Apr 2014 20:14:05 +0200 [thread overview]
Message-ID: <4693879.NBTVQb29rJ@wuerfel> (raw)
In-Reply-To: <535FCB24.2010507@i2se.com>
On Tuesday 29 April 2014 17:54:12 Stefan Wahren wrote:
> > As far as I know it's also not mandatory.
> >
> > If the hardware interfaces require calling sleeping functions, it
> > may not actually be possible, but if you can use it, it normally
> > provides better performance.
>
> As I understood it, NAPI is good for high load on 1000 Mbit Ethernet,
> but the QCA7000 has in the best case only a 10 Mbit powerline
> connection. Additionally, these packets must be transferred over a
> half-duplex SPI bus. So I think the current driver implementation
> isn't a bottleneck.
Ok, makes sense. What is the slowest speed you might see then?
You already have a relatively small queue of at most 10 frames,
but if the link speed drops well below 10 Mbit/s, that can still
cause noticeable bufferbloat.
Try adding calls to netdev_sent_queue, netdev_completed_queue and
netdev_reset_queue to let the network stack know how much data
is currently queued up for the tx thread.
On a related note, there is one part I don't understand:
+netdev_tx_t
+qcaspi_netdev_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ u32 frame_len;
+ u8 *ptmp;
+ struct qcaspi *qca = netdev_priv(dev);
+ u32 new_tail;
+ struct sk_buff *tskb;
+ u8 pad_len = 0;
+
+ if (skb->len < QCAFRM_ETHMINLEN)
+ pad_len = QCAFRM_ETHMINLEN - skb->len;
+
+ if (qca->txq.skb[qca->txq.tail]) {
+ netdev_warn(qca->net_dev, "queue was unexpectedly full!\n");
+ netif_stop_queue(qca->net_dev);
+ qca->stats.queue_full++;
+ return NETDEV_TX_BUSY;
+ }
You print a 'netdev_warn' message here when the queue is full, expecting
this to be rare. If the device is so slow, why doesn't this happen
all the time?
Arnd
Thread overview: 16+ messages
2014-04-28 17:54 [PATCH RFC 0/2] add Qualcomm QCA7000 ethernet driver Stefan Wahren
2014-04-28 17:54 ` [PATCH RFC 1/2] Documentation: add Device tree bindings for QCA7000 Stefan Wahren
2014-04-28 19:57 ` Arnd Bergmann
2014-04-29 6:30 ` Stefan Wahren
2014-04-29 7:57 ` Arnd Bergmann
2014-04-29 22:36 ` Mark Rutland
2014-04-30 7:30 ` Stefan Wahren
2014-04-28 17:54 ` [PATCH RFC 2/2] net: qualcomm: new Ethernet over SPI driver " Stefan Wahren
2014-04-28 20:09 ` Arnd Bergmann
2014-04-29 6:51 ` Stefan Wahren
2014-04-29 8:14 ` Arnd Bergmann
2014-04-29 15:54 ` Stefan Wahren
2014-04-29 18:14 ` Arnd Bergmann [this message]
2014-04-30 8:09 ` Stefan Wahren
2014-04-30 9:32 ` Arnd Bergmann
2014-04-30 15:36 ` Stefan Wahren