From: "George Spelvin" <linux@horizon.com>
To: grundler@parisc-linux.org, netdev@vger.kernel.org
Cc: linux@horizon.com
Subject: [RFC] tulip: Support for byte queue limits
Date: 12 Jul 2013 09:20:54 -0400
Message-ID: <20130712132054.16269.qmail@science.horizon.com>
The New Hotness of fq_codel wants BQL (byte queue limit) support, but my
WAN link is on my Old And Busted tulip card, which lacks it.
It's just a few lines, but the important thing is knowing where to put
them, and I've somewhat guessed. In particular, it seems the
netdev_sent_queue call could go almost anywhere in tulip_start_xmit, and
I'm not sure whether there are reasons to prefer any particular position.
(You may have my S-o-b on copyright grounds, but I'd like to test it
some more before declaring this patch ready to be merged.)
diff --git a/drivers/net/ethernet/dec/tulip/interrupt.c b/drivers/net/ethernet/dec/tulip/interrupt.c
index 92306b3..d74426e 100644
--- a/drivers/net/ethernet/dec/tulip/interrupt.c
+++ b/drivers/net/ethernet/dec/tulip/interrupt.c
@@ -532,6 +532,7 @@ irqreturn_t tulip_interrupt(int irq, void *dev_instance)
#endif
unsigned int work_count = tulip_max_interrupt_work;
unsigned int handled = 0;
+ unsigned int bytes_compl = 0;
/* Let's see whether the interrupt really is for us */
csr5 = ioread32(ioaddr + CSR5);
@@ -634,6 +635,7 @@ irqreturn_t tulip_interrupt(int irq, void *dev_instance)
PCI_DMA_TODEVICE);
/* Free the original skb. */
+ bytes_compl += tp->tx_buffers[entry].skb->len;
dev_kfree_skb_irq(tp->tx_buffers[entry].skb);
tp->tx_buffers[entry].skb = NULL;
tp->tx_buffers[entry].mapping = 0;
@@ -802,6 +804,7 @@ irqreturn_t tulip_interrupt(int irq, void *dev_instance)
}
#endif /* CONFIG_TULIP_NAPI */
+ netdev_completed_queue(dev, tx, bytes_compl);
if ((missed = ioread32(ioaddr + CSR8) & 0x1ffff)) {
dev->stats.rx_dropped += missed & 0x10000 ? 0x10000 : missed;
}
diff --git a/drivers/net/ethernet/dec/tulip/tulip_core.c b/drivers/net/ethernet/dec/tulip/tulip_core.c
index 1e9443d..b4249f3 100644
--- a/drivers/net/ethernet/dec/tulip/tulip_core.c
+++ b/drivers/net/ethernet/dec/tulip/tulip_core.c
@@ -703,6 +703,7 @@ tulip_start_xmit(struct sk_buff *skb, struct net_device *dev)
wmb();
tp->cur_tx++;
+ netdev_sent_queue(dev, skb->len);
/* Trigger an immediate transmit demand. */
iowrite32(0, tp->base_addr + CSR1);
@@ -746,6 +747,7 @@ static void tulip_clean_tx_ring(struct tulip_private *tp)
tp->tx_buffers[entry].skb = NULL;
tp->tx_buffers[entry].mapping = 0;
}
+ netdev_reset_queue(tp->dev);
}
static void tulip_down (struct net_device *dev)