[RFC] tulip: Support for byte queue limits

From: George Spelvin @ 2013-07-12 13:20 UTC
  To: grundler, netdev; +Cc: linux

The New Hotness of fq_codel wants BQL support, but my WAN link is on
my Old And Busted tulip card, which lacks it.

It's just a few lines, but the important thing is knowing where to
put them, and I've sort of guessed.  In particular, it seems like the
netdev_sent_queue call could be placed almost anywhere in
tulip_start_xmit, and I'm not sure whether there are reasons to
prefer any particular position.
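
For reference, the pattern I'm trying to copy, as I understand it from
other BQL-enabled drivers, is roughly the one below.  The
netdev_sent_queue / netdev_completed_queue / netdev_reset_queue
helpers are the real ones from <linux/netdevice.h>; the foo_* driver
functions are purely a hypothetical sketch, not tulip code:

#include <linux/netdevice.h>

/* Hypothetical single-queue driver, for illustration only. */

static netdev_tx_t foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        /* ... fill in a descriptor and hand it to the NIC ... */

        /* Account the bytes once the descriptor belongs to the hardware. */
        netdev_sent_queue(dev, skb->len);

        return NETDEV_TX_OK;
}

static void foo_tx_complete(struct net_device *dev)
{
        unsigned int pkts_compl = 0, bytes_compl = 0;

        /*
         * ... for each descriptor the NIC has finished with:
         *         bytes_compl += skb->len;
         *         pkts_compl++;
         *         dev_kfree_skb_irq(skb);
         */

        /* One call per completion pass, after the loop. */
        netdev_completed_queue(dev, pkts_compl, bytes_compl);
}

static void foo_clean_tx_ring(struct net_device *dev)
{
        /* ... free anything still sitting on the TX ring ... */

        /* Forget the in-flight byte count along with the ring state. */
        netdev_reset_queue(dev);
}

That is: count bytes once per skb when it is handed to the hardware,
report packets and bytes once per completion pass, and reset the
accounting whenever the ring is torn down.  If that's right, the only
real question for tulip_start_xmit is where "handed to the hardware"
happens; in the patch below I've put the call right after tp->cur_tx++.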

(You may have my S-o-b on copyright grounds, but I'd like to test it
some more before declaring this patch ready to be merged.)


diff --git a/drivers/net/ethernet/dec/tulip/interrupt.c b/drivers/net/ethernet/dec/tulip/interrupt.c
index 92306b3..d74426e 100644
--- a/drivers/net/ethernet/dec/tulip/interrupt.c
+++ b/drivers/net/ethernet/dec/tulip/interrupt.c
@@ -532,6 +532,7 @@ irqreturn_t tulip_interrupt(int irq, void *dev_instance)
 #endif
 	unsigned int work_count = tulip_max_interrupt_work;
 	unsigned int handled = 0;
+	unsigned int bytes_compl = 0;
 
 	/* Let's see whether the interrupt really is for us */
 	csr5 = ioread32(ioaddr + CSR5);
@@ -634,6 +635,7 @@ irqreturn_t tulip_interrupt(int irq, void *dev_instance)
 						 PCI_DMA_TODEVICE);
 
 				/* Free the original skb. */
+				bytes_compl += tp->tx_buffers[entry].skb->len;
 				dev_kfree_skb_irq(tp->tx_buffers[entry].skb);
 				tp->tx_buffers[entry].skb = NULL;
 				tp->tx_buffers[entry].mapping = 0;
@@ -802,6 +804,7 @@ irqreturn_t tulip_interrupt(int irq, void *dev_instance)
 	}
 #endif /* CONFIG_TULIP_NAPI */
 
+	netdev_completed_queue(dev, tx, bytes_compl);
 	if ((missed = ioread32(ioaddr + CSR8) & 0x1ffff)) {
 		dev->stats.rx_dropped += missed & 0x10000 ? 0x10000 : missed;
 	}
diff --git a/drivers/net/ethernet/dec/tulip/tulip_core.c b/drivers/net/ethernet/dec/tulip/tulip_core.c
index 1e9443d..b4249f3 100644
--- a/drivers/net/ethernet/dec/tulip/tulip_core.c
+++ b/drivers/net/ethernet/dec/tulip/tulip_core.c
@@ -703,6 +703,7 @@ tulip_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	wmb();
 
 	tp->cur_tx++;
+	netdev_sent_queue(dev, skb->len);
 
 	/* Trigger an immediate transmit demand. */
 	iowrite32(0, tp->base_addr + CSR1);
@@ -746,6 +747,7 @@ static void tulip_clean_tx_ring(struct tulip_private *tp)
 		tp->tx_buffers[entry].skb = NULL;
 		tp->tx_buffers[entry].mapping = 0;
 	}
+	netdev_reset_queue(tp->dev);
 }
 
 static void tulip_down (struct net_device *dev)
