* [PATCH net-next 1/2] net: bcmgenet: use skb_put_padto()
2015-03-13 22:58 [PATCH net-next 0/2] net: bcmgenet: Byte Queue Limits support Florian Fainelli
@ 2015-03-13 22:58 ` Florian Fainelli
2015-03-14 4:48 ` Jaedon Shin
2015-03-13 22:58 ` [PATCH net-next 2/2] net: bcmgenet: add byte queue limits support Florian Fainelli
2015-03-13 23:03 ` [PATCH net-next 0/2] net: bcmgenet: Byte Queue Limits support Florian Fainelli
2 siblings, 1 reply; 6+ messages in thread
From: Florian Fainelli @ 2015-03-13 22:58 UTC (permalink / raw)
To: netdev; +Cc: edumazet, Florian Fainelli, davem, jaedon.shin, pgynther
We use skb_padto() to pad small packets, and then we need to enforce the
ETH_ZLEN minimum again in bcmgenet_xmit_single(). Use skb_put_padto(), which
pads up to the specified length and also increases skb->len accordingly. Note
that we still need to use skb_headlen(), since we might be transmitting the
first part out of a train of fragments.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
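(Side note, illustration only: a minimal sketch of the semantics this change
relies on, not the actual net/core implementation. skb_padto() only grows the
buffer, while skb_put_padto() also accounts the padding in skb->len;
pad_to_min_len_sketch() is a made-up name for this illustration.)

#include <linux/skbuff.h>

/* Pad the frame up to min_len and reflect the padding in skb->len,
 * so callers no longer need to clamp skb_headlen() to ETH_ZLEN
 * themselves.
 */
static int pad_to_min_len_sketch(struct sk_buff *skb, unsigned int min_len)
{
	unsigned int size = skb->len;

	if (size < min_len) {
		if (skb_pad(skb, min_len - size))
			return -ENOMEM;
		__skb_put(skb, min_len - size);	/* bumps skb->len */
	}
	return 0;
}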
drivers/net/ethernet/broadcom/genet/bcmgenet.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index e74ae628bbb9..cee86c24b35f 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -1108,8 +1108,7 @@ static int bcmgenet_xmit_single(struct net_device *dev,
tx_cb_ptr->skb = skb;
- skb_len = skb_headlen(skb) < ETH_ZLEN ? ETH_ZLEN : skb_headlen(skb);
-
+ skb_len = skb_headlen(skb);
mapping = dma_map_single(kdev, skb->data, skb_len, DMA_TO_DEVICE);
ret = dma_mapping_error(kdev, mapping);
if (ret) {
@@ -1272,7 +1271,7 @@ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
goto out;
}
- if (skb_padto(skb, ETH_ZLEN)) {
+ if (skb_put_padto(skb, ETH_ZLEN)) {
ret = NETDEV_TX_OK;
goto out;
}
--
2.1.0
* [PATCH net-next 2/2] net: bcmgenet: add byte queue limits support
2015-03-13 22:58 [PATCH net-next 0/2] net: bcmgenet: Byte Queue Limits support Florian Fainelli
2015-03-13 22:58 ` [PATCH net-next 1/2] net: bcmgenet: use skb_put_padto() Florian Fainelli
@ 2015-03-13 22:58 ` Florian Fainelli
2015-03-13 23:03 ` [PATCH net-next 0/2] net: bcmgenet: Byte Queue Limits support Florian Fainelli
2 siblings, 0 replies; 6+ messages in thread
From: Florian Fainelli @ 2015-03-13 22:58 UTC (permalink / raw)
To: netdev; +Cc: edumazet, Florian Fainelli, davem, jaedon.shin, pgynther
Add support for byte queue limits by hooking onto our transmit path and
reclaim routine. In order to account for whether the transmit status
block (64 bytes) is enabled or not, make sure that we record the packet
length that will be sent on the wire, not the length pushed to the TDMA
engine.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
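(Side note, illustration only: a hypothetical driver fragment showing the
generic BQL pattern this patch follows; my_xmit() and my_reclaim() are made-up
names. The on-wire byte count is reported to netdev_tx_sent_queue() at
transmit time, and the same count is handed back to
netdev_tx_completed_queue() on reclaim.)

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical xmit path: account the frame with BQL using its
 * on-wire length, captured before any status block is prepended.
 */
static netdev_tx_t my_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct netdev_queue *txq;
	unsigned int wire_len = skb->len;	/* before TSB insertion */

	txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

	/* ... prepend TSB if needed, map buffers, fill descriptors ... */

	netdev_tx_sent_queue(txq, wire_len);
	return NETDEV_TX_OK;
}

/* Hypothetical reclaim path: bytes_compl must match what was reported
 * to netdev_tx_sent_queue(), hence the per-skb bytes_sent record.
 */
static void my_reclaim(struct net_device *dev, unsigned int queue,
		       unsigned int pkts_compl, unsigned int bytes_compl)
{
	struct netdev_queue *txq = netdev_get_tx_queue(dev, queue);

	netdev_tx_completed_queue(txq, pkts_compl, bytes_compl);
}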
drivers/net/ethernet/broadcom/genet/bcmgenet.c | 18 ++++++++++++++++--
drivers/net/ethernet/broadcom/genet/bcmgenet.h | 6 ++++++
2 files changed, 22 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index cee86c24b35f..2e481669f199 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -982,7 +982,7 @@ static unsigned int __bcmgenet_tx_reclaim(struct net_device *dev,
struct bcmgenet_priv *priv = netdev_priv(dev);
struct enet_cb *tx_cb_ptr;
struct netdev_queue *txq;
- unsigned int pkts_compl = 0;
+ unsigned int pkts_compl = 0, bytes_compl = 0;
unsigned int c_index;
unsigned int txbds_ready;
unsigned int txbds_processed = 0;
@@ -1005,6 +1005,7 @@ static unsigned int __bcmgenet_tx_reclaim(struct net_device *dev,
tx_cb_ptr = &priv->tx_cbs[ring->clean_ptr];
if (tx_cb_ptr->skb) {
pkts_compl++;
+ bytes_compl += GENET_CB(tx_cb_ptr->skb)->bytes_sent;
dev->stats.tx_packets++;
dev->stats.tx_bytes += tx_cb_ptr->skb->len;
dma_unmap_single(&dev->dev,
@@ -1032,8 +1033,10 @@ static unsigned int __bcmgenet_tx_reclaim(struct net_device *dev,
ring->free_bds += txbds_processed;
ring->c_index = (ring->c_index + txbds_processed) & DMA_C_INDEX_MASK;
+ txq = netdev_get_tx_queue(dev, ring->queue);
+ netdev_tx_completed_queue(txq, pkts_compl, bytes_compl);
+
if (ring->free_bds > (MAX_SKB_FRAGS + 1)) {
- txq = netdev_get_tx_queue(dev, ring->queue);
if (netif_tx_queue_stopped(txq))
netif_tx_wake_queue(txq);
}
@@ -1240,6 +1243,7 @@ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
struct bcmgenet_tx_ring *ring = NULL;
struct netdev_queue *txq;
unsigned long flags = 0;
+ unsigned int skb_len;
int nr_frags, index;
u16 dma_desc_flags;
int ret;
@@ -1276,6 +1280,12 @@ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
goto out;
}
+ /* Retain how many bytes will be sent on the wire, without FCB inserted
+ * by transmit checksum offload
+ */
+ skb_len = skb->len;
+ GENET_CB(skb)->bytes_sent = skb_len;
+
/* set the SKB transmit checksum */
if (priv->desc_64b_en) {
skb = bcmgenet_put_tx_csum(dev, skb);
@@ -1315,6 +1325,8 @@ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
ring->prod_index += nr_frags + 1;
ring->prod_index &= DMA_P_INDEX_MASK;
+ netdev_tx_sent_queue(txq, skb_len);
+
if (ring->free_bds <= (MAX_SKB_FRAGS + 1))
netif_tx_stop_queue(txq);
@@ -1764,9 +1776,11 @@ static void bcmgenet_fini_tx_ring(struct bcmgenet_priv *priv,
unsigned int index)
{
struct bcmgenet_tx_ring *ring = &priv->tx_rings[index];
+ struct netdev_queue *txq = netdev_get_tx_queue(priv->dev, ring->queue);
napi_disable(&ring->napi);
netif_napi_del(&ring->napi);
+ netdev_tx_reset_queue(txq);
}
/* Initialize a RDMA ring */
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
index 2a8113898aed..23008c5ed235 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
@@ -519,6 +519,12 @@ struct bcmgenet_hw_params {
u32 flags;
};
+struct bcmgenet_skb_cb {
+ unsigned int bytes_sent; /* bytes on the wire (no TSB) */
+};
+
+#define GENET_CB(skb) ((struct bcmgenet_skb_cb *)((skb)->cb))
+
struct bcmgenet_tx_ring {
spinlock_t lock; /* ring lock */
struct napi_struct napi; /* NAPI per tx queue */
--
2.1.0
* Re: [PATCH net-next 0/2] net: bcmgenet: Byte Queue Limits support
2015-03-13 22:58 [PATCH net-next 0/2] net: bcmgenet: Byte Queue Limits support Florian Fainelli
2015-03-13 22:58 ` [PATCH net-next 1/2] net: bcmgenet: use skb_put_padto() Florian Fainelli
2015-03-13 22:58 ` [PATCH net-next 2/2] net: bcmgenet: add byte queue limits support Florian Fainelli
@ 2015-03-13 23:03 ` Florian Fainelli
2015-03-13 23:13 ` David Miller
2 siblings, 1 reply; 6+ messages in thread
From: Florian Fainelli @ 2015-03-13 23:03 UTC (permalink / raw)
To: netdev; +Cc: edumazet, davem, jaedon.shin, pgynther
On 13/03/15 15:58, Florian Fainelli wrote:
> Hi all,
>
> This patch series adds byte queue limits support to the GENET driver. Just like
> the gianfar driver, we sometimes need to insert a 64-byte transmit status block
> which is not going to be sent on the wire, just to the TDMA/TBUF, so we need to
> record the correct byte count before doing that SKB inflation.
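(Aside, illustration only: a minimal sketch of the "record before inflation"
idea quoted above; the wire length is stashed in skb->cb before the TSB is
prepended. my_skb_cb and MY_CB() are made-up stand-ins for the
bcmgenet_skb_cb/GENET_CB helpers added in patch 2.)

#include <linux/skbuff.h>

/* Per-skb scratch record kept in skb->cb (48 bytes owned by the
 * current layer).
 */
struct my_skb_cb {
	unsigned int bytes_sent;	/* bytes on the wire, no TSB */
};
#define MY_CB(skb)	((struct my_skb_cb *)((skb)->cb))

static void record_wire_len(struct sk_buff *skb)
{
	/* Must run before the 64-byte TSB inflates skb->len */
	MY_CB(skb)->bytes_sent = skb->len;
}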
David, please hold this patch series for now; I found a bunch of small
corner cases with it while bringing the interface up and down, and will
respin later.
Thanks!
--
Florian