* [PATCH net-next 00/12] net/mlx4: Mellanox driver update 08-12-2013
@ 2013-12-08 12:35 Amir Vadai
2013-12-08 12:35 ` [PATCH net-next 01/12] net/mlx4_en: Reuse mapped memory in RX flow Amir Vadai
` (11 more replies)
0 siblings, 12 replies; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 12:35 UTC (permalink / raw)
To: David S. Miller; +Cc: Or Gerlitz, Yevgeny Petrilin, Amir Vadai, netdev
Hi Dave,
This patchset contains:
1. An overhaul of the RX flow memory allocation, done by Jenny (Eugenia). It
gives a performance boost when SR-IOV is enabled.
2. Support for ndo_get_phys_port_id, added by Hadar.
3. A change to make the driver use a CQE/EQE size of 64 bytes by default,
done by Eyal. This doubles the packet rate of the NIC.
4. Configure the XPS queue mapping on driver load - added by Ido.
5. Fixes for some small bugs, done by Jenny, Matan and Rana.
The patchset was applied and tested against commit: "0d74c42 ether_addr_equal:
Optimize implementation, remove unused compare_ether_addr"
Thanks,
Amir
Eugenia Emantayev (3):
net/mlx4_en: Reuse mapped memory in RX flow
net/mlx4_en: Ignore irrelevant hypervisor events
net/mlx4_en: Add NAPI support for transmit side
Eyal Perry (1):
net/mlx4_core: Set CQE/EQE size to 64B by default
Hadar Hen Zion (5):
net/mlx4_core: Remove zeroed out of explicit QUERY_FUNC_CAP fields
net/mlx4_core: Rename QUERY_FUNC_CAP fields
net/mlx4_core: Introduce nic_info new flag in QUERY_FUNC_CAP
net/mlx4_core: Expose physical port id as PF/VF capability
net/mlx4_en: Implement ndo_get_phys_port_id
Ido Shamay (1):
net/mlx4_en: Configure the XPS queue mapping on driver load
Matan Barak (1):
net/mlx4_core: Check port number for validity before accessing data
Rana Shahout (1):
net/mlx4_en: Fix Supported/Advertised link mode reported by ethtool
drivers/net/ethernet/mellanox/mlx4/en_cq.c | 12 +-
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c | 24 +-
drivers/net/ethernet/mellanox/mlx4/en_main.c | 3 +
drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 41 +-
drivers/net/ethernet/mellanox/mlx4/en_rx.c | 723 +++++++++---------------
drivers/net/ethernet/mellanox/mlx4/en_tx.c | 48 +-
drivers/net/ethernet/mellanox/mlx4/fw.c | 74 ++-
drivers/net/ethernet/mellanox/mlx4/fw.h | 2 +
drivers/net/ethernet/mellanox/mlx4/main.c | 9 +-
drivers/net/ethernet/mellanox/mlx4/mcg.c | 28 +-
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 66 +--
include/linux/mlx4/device.h | 2 +
12 files changed, 504 insertions(+), 528 deletions(-)
--
1.8.3.4
* [PATCH net-next 01/12] net/mlx4_en: Reuse mapped memory in RX flow
2013-12-08 12:35 [PATCH net-next 00/12] net/mlx4: Mellanox driver update 08-12-2013 Amir Vadai
@ 2013-12-08 12:35 ` Amir Vadai
2013-12-08 19:20 ` Eric Dumazet
2013-12-08 12:35 ` [PATCH net-next 02/12] net/mlx4_core: Remove zeroed out of explicit QUERY_FUNC_CAP fields Amir Vadai
` (10 subsequent siblings)
11 siblings, 1 reply; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 12:35 UTC (permalink / raw)
To: David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, Amir Vadai, netdev,
Eugenia Emantayev
From: Eugenia Emantayev <eugenia@mellanox.com>
In the receive flow, use one fragment instead of multiple fragments.
Always allocate at least twice the memory needed for the current MTU,
and on each cycle use one chunk of the mapped memory.
Reallocate and map a new page only if the current page was not freed
back (i.e. the stack still holds a reference, so it cannot be reused).
This behavior saves unnecessary DMA (un)mapping operations, which are
very expensive when an IOMMU is enabled.
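In rough C, the per-completion reuse decision introduced here looks like
this (an illustrative sketch using the patch's names, not the exact
driver code):

    /* Sketch: the ring is the sole owner iff page_count() == 1 */
    if (page_count(rx_buf->page) == 1) {
        /* reuse: flip to the other half of the double-sized buffer */
        rx_buf->page_offset ^= ring->rx_buf_size;
        atomic_set(&rx_buf->page->_count, 2);
    } else {
        /* the stack still holds a reference: map a replacement page */
        mlx4_en_alloc_frag(priv, ring, rx_desc, rx_buf,
                           MLX4_EN_ALLOC_REPLACEMENT);
    }

Either way the descriptor is re-armed with the (old or new) DMA address,
so the common case performs no dma_map_page()/dma_unmap_page() at all.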
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
---
drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 12 +-
drivers/net/ethernet/mellanox/mlx4/en_rx.c | 723 +++++++++----------------
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 56 +-
3 files changed, 299 insertions(+), 492 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index 709e5ec..9270006 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -1490,7 +1490,11 @@ int mlx4_en_start_port(struct net_device *dev)
/* Calculate Rx buf size */
dev->mtu = min(dev->mtu, priv->max_mtu);
- mlx4_en_calc_rx_buf(dev);
+ priv->rx_skb_size = dev->mtu + ETH_HLEN + VLAN_HLEN;
+ priv->rx_buf_size = roundup_pow_of_two(priv->rx_skb_size);
+ priv->rx_alloc_size = max_t(int, 2 * priv->rx_buf_size, PAGE_SIZE);
+ priv->rx_alloc_order = get_order(priv->rx_alloc_size);
+ priv->log_rx_info = ROUNDUP_LOG2(sizeof(struct mlx4_en_rx_buf));
en_dbg(DRV, priv, "Rx buf size:%d\n", priv->rx_skb_size);
/* Configure rx cq's and rings */
@@ -1923,7 +1927,7 @@ int mlx4_en_alloc_resources(struct mlx4_en_priv *priv)
goto err;
if (mlx4_en_create_rx_ring(priv, &priv->rx_ring[i],
- prof->rx_ring_size, priv->stride,
+ prof->rx_ring_size,
node))
goto err;
}
@@ -2316,7 +2320,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
memcpy(priv->prev_mac, dev->dev_addr, sizeof(priv->prev_mac));
priv->stride = roundup_pow_of_two(sizeof(struct mlx4_en_rx_desc) +
- DS_SIZE * MLX4_EN_MAX_RX_FRAGS);
+ DS_SIZE);
err = mlx4_en_alloc_resources(priv);
if (err)
goto out;
@@ -2393,7 +2397,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
mlx4_en_update_loopback_state(priv->dev, priv->dev->features);
/* Configure port */
- mlx4_en_calc_rx_buf(dev);
+ priv->rx_skb_size = dev->mtu + ETH_HLEN + VLAN_HLEN;
err = mlx4_SET_PORT_general(mdev->dev, priv->port,
priv->rx_skb_size + ETH_FCS_LEN,
prof->tx_pause, prof->tx_ppp,
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index 07a1d0f..965c021 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -43,197 +43,72 @@
#include "mlx4_en.h"
-static int mlx4_alloc_pages(struct mlx4_en_priv *priv,
- struct mlx4_en_rx_alloc *page_alloc,
- const struct mlx4_en_frag_info *frag_info,
- gfp_t _gfp)
+static int mlx4_en_alloc_frag(struct mlx4_en_priv *priv,
+ struct mlx4_en_rx_ring *ring,
+ struct mlx4_en_rx_desc *rx_desc,
+ struct mlx4_en_rx_buf *rx_buf,
+ enum mlx4_en_alloc_type type)
{
- int order;
+ struct device *dev = priv->ddev;
struct page *page;
- dma_addr_t dma;
-
- for (order = MLX4_EN_ALLOC_PREFER_ORDER; ;) {
- gfp_t gfp = _gfp;
-
- if (order)
- gfp |= __GFP_COMP | __GFP_NOWARN;
- page = alloc_pages(gfp, order);
- if (likely(page))
- break;
- if (--order < 0 ||
- ((PAGE_SIZE << order) < frag_info->frag_size))
+ dma_addr_t dma = 0;
+ gfp_t gfp = GFP_ATOMIC | __GFP_COLD | __GFP_COMP | __GFP_NOWARN;
+
+ /* alloc new page */
+ page = alloc_pages_node(ring->numa_node, gfp, ring->rx_alloc_order);
+ if (unlikely(!page)) {
+ page = alloc_pages(gfp, ring->rx_alloc_order);
+ if (unlikely(!page))
return -ENOMEM;
}
- dma = dma_map_page(priv->ddev, page, 0, PAGE_SIZE << order,
- PCI_DMA_FROMDEVICE);
- if (dma_mapping_error(priv->ddev, dma)) {
- put_page(page);
- return -ENOMEM;
- }
- page_alloc->page_size = PAGE_SIZE << order;
- page_alloc->page = page;
- page_alloc->dma = dma;
- page_alloc->page_offset = frag_info->frag_align;
- /* Not doing get_page() for each frag is a big win
- * on asymetric workloads.
- */
- atomic_set(&page->_count,
- page_alloc->page_size / frag_info->frag_stride);
- return 0;
-}
-static int mlx4_en_alloc_frags(struct mlx4_en_priv *priv,
- struct mlx4_en_rx_desc *rx_desc,
- struct mlx4_en_rx_alloc *frags,
- struct mlx4_en_rx_alloc *ring_alloc,
- gfp_t gfp)
-{
- struct mlx4_en_rx_alloc page_alloc[MLX4_EN_MAX_RX_FRAGS];
- const struct mlx4_en_frag_info *frag_info;
- struct page *page;
- dma_addr_t dma;
- int i;
-
- for (i = 0; i < priv->num_frags; i++) {
- frag_info = &priv->frag_info[i];
- page_alloc[i] = ring_alloc[i];
- page_alloc[i].page_offset += frag_info->frag_stride;
-
- if (page_alloc[i].page_offset + frag_info->frag_stride <=
- ring_alloc[i].page_size)
- continue;
-
- if (mlx4_alloc_pages(priv, &page_alloc[i], frag_info, gfp))
- goto out;
- }
+ /* map new page */
+ dma = dma_map_page(dev, page, 0,
+ ring->rx_alloc_size, DMA_FROM_DEVICE);
- for (i = 0; i < priv->num_frags; i++) {
- frags[i] = ring_alloc[i];
- dma = ring_alloc[i].dma + ring_alloc[i].page_offset;
- ring_alloc[i] = page_alloc[i];
- rx_desc->data[i].addr = cpu_to_be64(dma);
- }
-
- return 0;
-
-out:
- while (i--) {
- frag_info = &priv->frag_info[i];
- if (page_alloc[i].page != ring_alloc[i].page) {
- dma_unmap_page(priv->ddev, page_alloc[i].dma,
- page_alloc[i].page_size, PCI_DMA_FROMDEVICE);
- page = page_alloc[i].page;
- atomic_set(&page->_count, 1);
- put_page(page);
- }
+ /* free memory if mapping failed */
+ if (dma_mapping_error(dev, dma)) {
+ __free_pages(page, ring->rx_alloc_order);
+ return -ENOMEM;
}
- return -ENOMEM;
-}
-
-static void mlx4_en_free_frag(struct mlx4_en_priv *priv,
- struct mlx4_en_rx_alloc *frags,
- int i)
-{
- const struct mlx4_en_frag_info *frag_info = &priv->frag_info[i];
- u32 next_frag_end = frags[i].page_offset + 2 * frag_info->frag_stride;
-
-
- if (next_frag_end > frags[i].page_size)
- dma_unmap_page(priv->ddev, frags[i].dma, frags[i].page_size,
- PCI_DMA_FROMDEVICE);
-
- if (frags[i].page)
- put_page(frags[i].page);
-}
-
-static int mlx4_en_init_allocator(struct mlx4_en_priv *priv,
- struct mlx4_en_rx_ring *ring)
-{
- int i;
- struct mlx4_en_rx_alloc *page_alloc;
-
- for (i = 0; i < priv->num_frags; i++) {
- const struct mlx4_en_frag_info *frag_info = &priv->frag_info[i];
- if (mlx4_alloc_pages(priv, &ring->page_alloc[i],
- frag_info, GFP_KERNEL))
- goto out;
- }
+ /* allocation of replacement page was successful,
+ * therefore unmap the old one and set the new page
+ * for HW use
+ */
+ if (type == MLX4_EN_ALLOC_REPLACEMENT)
+ dma_unmap_page(dev, rx_buf->dma,
+ ring->rx_alloc_size,
+ DMA_FROM_DEVICE);
+
+ rx_buf->page = page;
+ rx_buf->dma = dma;
+ rx_buf->page_offset = 0;
+ rx_desc->data[0].addr = cpu_to_be64(dma);
return 0;
-
-out:
- while (i--) {
- struct page *page;
-
- page_alloc = &ring->page_alloc[i];
- dma_unmap_page(priv->ddev, page_alloc->dma,
- page_alloc->page_size, PCI_DMA_FROMDEVICE);
- page = page_alloc->page;
- atomic_set(&page->_count, 1);
- put_page(page);
- page_alloc->page = NULL;
- }
- return -ENOMEM;
-}
-
-static void mlx4_en_destroy_allocator(struct mlx4_en_priv *priv,
- struct mlx4_en_rx_ring *ring)
-{
- struct mlx4_en_rx_alloc *page_alloc;
- int i;
-
- for (i = 0; i < priv->num_frags; i++) {
- const struct mlx4_en_frag_info *frag_info = &priv->frag_info[i];
-
- page_alloc = &ring->page_alloc[i];
- en_dbg(DRV, priv, "Freeing allocator:%d count:%d\n",
- i, page_count(page_alloc->page));
-
- dma_unmap_page(priv->ddev, page_alloc->dma,
- page_alloc->page_size, PCI_DMA_FROMDEVICE);
- while (page_alloc->page_offset + frag_info->frag_stride <
- page_alloc->page_size) {
- put_page(page_alloc->page);
- page_alloc->page_offset += frag_info->frag_stride;
- }
- page_alloc->page = NULL;
- }
}
static void mlx4_en_init_rx_desc(struct mlx4_en_priv *priv,
- struct mlx4_en_rx_ring *ring, int index)
+ struct mlx4_en_rx_ring *ring,
+ int index)
{
struct mlx4_en_rx_desc *rx_desc = ring->buf + ring->stride * index;
- int possible_frags;
- int i;
-
- /* Set size and memtype fields */
- for (i = 0; i < priv->num_frags; i++) {
- rx_desc->data[i].byte_count =
- cpu_to_be32(priv->frag_info[i].frag_size);
- rx_desc->data[i].lkey = cpu_to_be32(priv->mdev->mr.key);
- }
- /* If the number of used fragments does not fill up the ring stride,
- * remaining (unused) fragments must be padded with null address/size
- * and a special memory key */
- possible_frags = (ring->stride - sizeof(struct mlx4_en_rx_desc)) / DS_SIZE;
- for (i = priv->num_frags; i < possible_frags; i++) {
- rx_desc->data[i].byte_count = 0;
- rx_desc->data[i].lkey = cpu_to_be32(MLX4_EN_MEMTYPE_PAD);
- rx_desc->data[i].addr = 0;
- }
+ rx_desc->data[0].byte_count =
+ cpu_to_be32(ring->rx_buf_size);
+ rx_desc->data[0].lkey = cpu_to_be32(priv->mdev->mr.key);
}
static int mlx4_en_prepare_rx_desc(struct mlx4_en_priv *priv,
- struct mlx4_en_rx_ring *ring, int index,
- gfp_t gfp)
+ struct mlx4_en_rx_ring *ring,
+ int index)
{
- struct mlx4_en_rx_desc *rx_desc = ring->buf + (index * ring->stride);
- struct mlx4_en_rx_alloc *frags = ring->rx_info +
- (index << priv->log_rx_info);
+ struct mlx4_en_rx_desc *rx_desc = ring->buf + ring->stride * index;
+ struct mlx4_en_rx_buf *rx_buf = &ring->rx_info[index];
+
+ return mlx4_en_alloc_frag(priv, ring, rx_desc, rx_buf,
+ MLX4_EN_ALLOC_NEW);
- return mlx4_en_alloc_frags(priv, rx_desc, frags, ring->page_alloc, gfp);
}
static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring)
@@ -243,16 +118,17 @@ static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring)
static void mlx4_en_free_rx_desc(struct mlx4_en_priv *priv,
struct mlx4_en_rx_ring *ring,
- int index)
+ struct mlx4_en_rx_desc *rx_desc,
+ struct mlx4_en_rx_buf *rx_buf)
{
- struct mlx4_en_rx_alloc *frags;
- int nr;
-
- frags = ring->rx_info + (index << priv->log_rx_info);
- for (nr = 0; nr < priv->num_frags; nr++) {
- en_dbg(DRV, priv, "Freeing fragment:%d\n", nr);
- mlx4_en_free_frag(priv, frags, nr);
- }
+ dma_unmap_page(priv->ddev, rx_buf->dma,
+ ring->rx_alloc_size, DMA_FROM_DEVICE);
+ put_page(rx_buf->page);
+
+ rx_buf->dma = 0;
+ rx_buf->page = NULL;
+ rx_buf->page_offset = 0;
+ rx_desc->data[0].addr = 0;
}
static int mlx4_en_fill_rx_buffers(struct mlx4_en_priv *priv)
@@ -261,22 +137,21 @@ static int mlx4_en_fill_rx_buffers(struct mlx4_en_priv *priv)
int ring_ind;
int buf_ind;
int new_size;
+ struct mlx4_en_rx_desc *rx_desc;
+ struct mlx4_en_rx_buf *rx_buf;
for (buf_ind = 0; buf_ind < priv->prof->rx_ring_size; buf_ind++) {
for (ring_ind = 0; ring_ind < priv->rx_ring_num; ring_ind++) {
ring = priv->rx_ring[ring_ind];
if (mlx4_en_prepare_rx_desc(priv, ring,
- ring->actual_size,
- GFP_KERNEL)) {
+ ring->actual_size)) {
if (ring->actual_size < MLX4_EN_MIN_RX_SIZE) {
- en_err(priv, "Failed to allocate "
- "enough rx buffers\n");
+ en_err(priv, "Failed to allocate enough rx buffers\n");
return -ENOMEM;
} else {
new_size = rounddown_pow_of_two(ring->actual_size);
- en_warn(priv, "Only %d buffers allocated "
- "reducing ring size to %d",
+ en_warn(priv, "Only %d buffers allocated reducing ring size to %d\n",
ring->actual_size, new_size);
goto reduce_rings;
}
@@ -293,10 +168,11 @@ reduce_rings:
while (ring->actual_size > new_size) {
ring->actual_size--;
ring->prod--;
- mlx4_en_free_rx_desc(priv, ring, ring->actual_size);
+ rx_desc = ring->buf + ring->stride * ring->actual_size;
+ rx_buf = &ring->rx_info[ring->actual_size];
+ mlx4_en_free_rx_desc(priv, ring, rx_desc, rx_buf);
}
}
-
return 0;
}
@@ -304,28 +180,34 @@ static void mlx4_en_free_rx_buf(struct mlx4_en_priv *priv,
struct mlx4_en_rx_ring *ring)
{
int index;
+ struct mlx4_en_rx_desc *rx_desc;
+ struct mlx4_en_rx_buf *rx_buf;
en_dbg(DRV, priv, "Freeing Rx buf - cons:%d prod:%d\n",
ring->cons, ring->prod);
/* Unmap and free Rx buffers */
BUG_ON((u32) (ring->prod - ring->cons) > ring->actual_size);
+
while (ring->cons != ring->prod) {
index = ring->cons & ring->size_mask;
+ rx_desc = ring->buf + ring->stride * index;
+ rx_buf = &ring->rx_info[index];
en_dbg(DRV, priv, "Processing descriptor:%d\n", index);
- mlx4_en_free_rx_desc(priv, ring, index);
+ mlx4_en_free_rx_desc(priv, ring, rx_desc, rx_buf);
++ring->cons;
}
}
int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv,
struct mlx4_en_rx_ring **pring,
- u32 size, u16 stride, int node)
+ u32 size, int node)
{
struct mlx4_en_dev *mdev = priv->mdev;
struct mlx4_en_rx_ring *ring;
int err = -ENOMEM;
int tmp;
+ int this_cpu = numa_node_id();
ring = kzalloc_node(sizeof(*ring), GFP_KERNEL, node);
if (!ring) {
@@ -334,21 +216,22 @@ int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv,
en_err(priv, "Failed to allocate RX ring structure\n");
return -ENOMEM;
}
- }
+ ring->numa_node = this_cpu;
+ } else
+ ring->numa_node = node;
ring->prod = 0;
ring->cons = 0;
ring->size = size;
ring->size_mask = size - 1;
- ring->stride = stride;
+ ring->stride = priv->stride;
ring->log_stride = ffs(ring->stride) - 1;
ring->buf_size = ring->size * ring->stride + TXBB_SIZE;
- tmp = size * roundup_pow_of_two(MLX4_EN_MAX_RX_FRAGS *
- sizeof(struct mlx4_en_rx_alloc));
- ring->rx_info = vmalloc_node(tmp, node);
+ tmp = size * roundup_pow_of_two(sizeof(struct mlx4_en_rx_buf));
+ ring->rx_info = vzalloc_node(tmp, node);
if (!ring->rx_info) {
- ring->rx_info = vmalloc(tmp);
+ ring->rx_info = vzalloc(tmp);
if (!ring->rx_info) {
err = -ENOMEM;
goto err_ring;
@@ -397,7 +280,7 @@ int mlx4_en_activate_rx_rings(struct mlx4_en_priv *priv)
int ring_ind;
int err;
int stride = roundup_pow_of_two(sizeof(struct mlx4_en_rx_desc) +
- DS_SIZE * priv->num_frags);
+ DS_SIZE);
for (ring_ind = 0; ring_ind < priv->rx_ring_num; ring_ind++) {
ring = priv->rx_ring[ring_ind];
@@ -406,6 +289,9 @@ int mlx4_en_activate_rx_rings(struct mlx4_en_priv *priv)
ring->cons = 0;
ring->actual_size = 0;
ring->cqn = priv->rx_cq[ring_ind]->mcq.cqn;
+ ring->rx_alloc_order = priv->rx_alloc_order;
+ ring->rx_alloc_size = priv->rx_alloc_size;
+ ring->rx_buf_size = priv->rx_buf_size;
ring->stride = stride;
if (ring->stride <= TXBB_SIZE)
@@ -420,16 +306,6 @@ int mlx4_en_activate_rx_rings(struct mlx4_en_priv *priv)
/* Initialize all descriptors */
for (i = 0; i < ring->size; i++)
mlx4_en_init_rx_desc(priv, ring, i);
-
- /* Initialize page allocators */
- err = mlx4_en_init_allocator(priv, ring);
- if (err) {
- en_err(priv, "Failed initializing ring allocator\n");
- if (ring->stride <= TXBB_SIZE)
- ring->buf -= TXBB_SIZE;
- ring_ind--;
- goto err_allocator;
- }
}
err = mlx4_en_fill_rx_buffers(priv);
if (err)
@@ -449,13 +325,14 @@ err_buffers:
mlx4_en_free_rx_buf(priv, priv->rx_ring[ring_ind]);
ring_ind = priv->rx_ring_num - 1;
-err_allocator:
+
while (ring_ind >= 0) {
- if (priv->rx_ring[ring_ind]->stride <= TXBB_SIZE)
- priv->rx_ring[ring_ind]->buf -= TXBB_SIZE;
- mlx4_en_destroy_allocator(priv, priv->rx_ring[ring_ind]);
+ ring = priv->rx_ring[ring_ind];
+ if (ring->stride <= TXBB_SIZE)
+ ring->buf -= TXBB_SIZE;
ring_ind--;
}
+
return err;
}
@@ -483,95 +360,125 @@ void mlx4_en_deactivate_rx_ring(struct mlx4_en_priv *priv,
mlx4_en_free_rx_buf(priv, ring);
if (ring->stride <= TXBB_SIZE)
ring->buf -= TXBB_SIZE;
- mlx4_en_destroy_allocator(priv, ring);
}
-
static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv,
- struct mlx4_en_rx_desc *rx_desc,
- struct mlx4_en_rx_alloc *frags,
- struct sk_buff *skb,
- int length)
+ struct mlx4_en_rx_ring *ring,
+ struct mlx4_en_rx_desc *rx_desc,
+ struct mlx4_en_rx_buf *rx_buf,
+ struct sk_buff *skb,
+ int length)
{
- struct skb_frag_struct *skb_frags_rx = skb_shinfo(skb)->frags;
- struct mlx4_en_frag_info *frag_info;
- int nr;
- dma_addr_t dma;
+ struct page *page = rx_buf->page;
+ struct skb_frag_struct *skb_frags_rx;
+ struct device *dev = priv->ddev;
+
+ if (skb) {
+ skb_frags_rx = skb_shinfo(skb)->frags;
+ __skb_frag_set_page(&skb_frags_rx[0], page);
+ skb_frag_size_set(&skb_frags_rx[0], length);
+ skb_frags_rx[0].page_offset = rx_buf->page_offset;
+ }
+
+ /* we are reusing so sync this buffer for CPU use */
+ dma_sync_single_range_for_cpu(dev,
+ rx_buf->dma,
+ rx_buf->page_offset,
+ ring->rx_buf_size,
+ DMA_FROM_DEVICE);
+ if (ring->rx_alloc_size == 2 * ring->rx_buf_size) {
+ /* if we are exclusive owner of the page we can reuse it,
+ * otherwise alloc replacement page
+ */
+ if (unlikely(page_count(page) != 1))
+ goto replace;
- /* Collect used fragments while replacing them in the HW descriptors */
- for (nr = 0; nr < priv->num_frags; nr++) {
- frag_info = &priv->frag_info[nr];
- if (length <= frag_info->frag_prefix_size)
- break;
- if (!frags[nr].page)
- goto fail;
+ /* move page offset to next buffer */
+ rx_buf->page_offset ^= ring->rx_buf_size;
- dma = be64_to_cpu(rx_desc->data[nr].addr);
- dma_sync_single_for_cpu(priv->ddev, dma, frag_info->frag_size,
- DMA_FROM_DEVICE);
+ /* increment ref count on page,
+ * since we are only owner can
+ * just set it to 2
+ */
+ atomic_set(&page->_count, 2);
+ } else {
+ if (rx_buf->page_offset + ring->rx_buf_size >=
+ ring->rx_alloc_size)
+ rx_buf->page_offset = 0;
+ else
+ rx_buf->page_offset += ring->rx_buf_size;
+
+ /* increment ref count on page */
+ get_page(page);
+ }
+
+ rx_desc->data[0].addr = cpu_to_be64(rx_buf->dma + rx_buf->page_offset);
+ /* sync the buffer for use by the device */
+ dma_sync_single_range_for_device(dev, rx_buf->dma,
+ rx_buf->page_offset,
+ ring->rx_buf_size,
+ DMA_FROM_DEVICE);
+ return 0;
- /* Save page reference in skb */
- __skb_frag_set_page(&skb_frags_rx[nr], frags[nr].page);
- skb_frag_size_set(&skb_frags_rx[nr], frag_info->frag_size);
- skb_frags_rx[nr].page_offset = frags[nr].page_offset;
- skb->truesize += frag_info->frag_stride;
- frags[nr].page = NULL;
- }
- /* Adjust size of last fragment to match actual length */
- if (nr > 0)
- skb_frag_size_set(&skb_frags_rx[nr - 1],
- length - priv->frag_info[nr - 1].frag_prefix_size);
- return nr;
-
-fail:
- while (nr > 0) {
- nr--;
- __skb_frag_unref(&skb_frags_rx[nr]);
+replace:
+ if (mlx4_en_alloc_frag(priv, ring, rx_desc, rx_buf,
+ MLX4_EN_ALLOC_REPLACEMENT)) {
+ /* replacement allocation failed, drop and use same page */
+ dma_sync_single_range_for_device(dev, rx_buf->dma,
+ rx_buf->page_offset,
+ ring->rx_buf_size,
+ DMA_FROM_DEVICE);
+ return -ENOMEM;
}
return 0;
}
-
static struct sk_buff *mlx4_en_rx_skb(struct mlx4_en_priv *priv,
+ struct mlx4_en_rx_ring *ring,
struct mlx4_en_rx_desc *rx_desc,
- struct mlx4_en_rx_alloc *frags,
+ struct mlx4_en_rx_buf *rx_buf,
unsigned int length)
{
+ struct device *dev = priv->ddev;
struct sk_buff *skb;
void *va;
- int used_frags;
dma_addr_t dma;
- skb = netdev_alloc_skb(priv->dev, SMALL_PACKET_SIZE + NET_IP_ALIGN);
+ skb = netdev_alloc_skb_ip_align(priv->dev, SMALL_PACKET_SIZE);
if (!skb) {
en_dbg(RX_ERR, priv, "Failed allocating skb\n");
return NULL;
}
- skb_reserve(skb, NET_IP_ALIGN);
- skb->len = length;
+ prefetchw(skb->data);
+ skb->len = length;
/* Get pointer to first fragment so we could copy the headers into the
- * (linear part of the) skb */
- va = page_address(frags[0].page) + frags[0].page_offset;
+ * (linear part of the) skb
+ */
+ va = page_address(rx_buf->page) + rx_buf->page_offset;
+ prefetch(va);
if (length <= SMALL_PACKET_SIZE) {
/* We are copying all relevant data to the skb - temporarily
- * sync buffers for the copy */
+ * sync buffers for the copy
+ */
dma = be64_to_cpu(rx_desc->data[0].addr);
- dma_sync_single_for_cpu(priv->ddev, dma, length,
+ dma_sync_single_for_cpu(dev, dma, length,
DMA_FROM_DEVICE);
skb_copy_to_linear_data(skb, va, length);
+ dma_sync_single_for_device(dev, dma, length,
+ DMA_FROM_DEVICE);
+ skb->truesize = length + sizeof(struct sk_buff);
skb->tail += length;
} else {
- /* Move relevant fragments to skb */
- used_frags = mlx4_en_complete_rx_desc(priv, rx_desc, frags,
- skb, length);
- if (unlikely(!used_frags)) {
+ if (mlx4_en_complete_rx_desc(priv, ring, rx_desc,
+ rx_buf, skb, length)) {
kfree_skb(skb);
return NULL;
}
- skb_shinfo(skb)->nr_frags = used_frags;
+ skb_shinfo(skb)->nr_frags = 1;
+ /* Move relevant fragments to skb */
/* Copy headers into the skb linear buffer */
memcpy(skb->data, va, HEADER_COPY_SIZE);
skb->tail += HEADER_COPY_SIZE;
@@ -582,7 +489,9 @@ static struct sk_buff *mlx4_en_rx_skb(struct mlx4_en_priv *priv,
/* Adjust size of first fragment */
skb_frag_size_sub(&skb_shinfo(skb)->frags[0], HEADER_COPY_SIZE);
skb->data_len = length - HEADER_COPY_SIZE;
+ skb->truesize += ring->rx_buf_size;
}
+
return skb;
}
@@ -593,44 +502,56 @@ static void validate_loopback(struct mlx4_en_priv *priv, struct sk_buff *skb)
for (i = 0; i < MLX4_LOOPBACK_TEST_PAYLOAD; i++, offset++) {
if (*(skb->data + offset) != (unsigned char) (i & 0xff))
- goto out_loopback;
+ return;
}
/* Loopback found */
priv->loopback_ok = 1;
-
-out_loopback:
- dev_kfree_skb_any(skb);
}
-static void mlx4_en_refill_rx_buffers(struct mlx4_en_priv *priv,
- struct mlx4_en_rx_ring *ring)
+static inline int invalid_cqe(struct mlx4_en_priv *priv,
+ struct mlx4_cqe *cqe)
{
- int index = ring->prod & ring->size_mask;
-
- while ((u32) (ring->prod - ring->cons) < ring->actual_size) {
- if (mlx4_en_prepare_rx_desc(priv, ring, index, GFP_ATOMIC))
- break;
- ring->prod++;
- index = ring->prod & ring->size_mask;
+ /* Drop packet on bad receive or bad checksum */
+ if (unlikely((cqe->owner_sr_opcode & MLX4_CQE_OPCODE_MASK) ==
+ MLX4_CQE_OPCODE_ERROR)) {
+ en_err(priv, "CQE completed in error - vendor syndrom:%d syndrom:%d\n",
+ ((struct mlx4_err_cqe *)cqe)->vendor_err_syndrome,
+ ((struct mlx4_err_cqe *)cqe)->syndrome);
+ return 1;
+ }
+ if (unlikely(cqe->badfcs_enc & MLX4_CQE_BAD_FCS)) {
+ en_dbg(RX_ERR, priv, "Accepted frame with bad FCS\n");
+ return 1;
}
+
+ return 0;
}
-int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int budget)
+int mlx4_en_process_rx_cq(struct net_device *dev,
+ struct mlx4_en_cq *cq,
+ int budget)
{
struct mlx4_en_priv *priv = netdev_priv(dev);
struct mlx4_en_dev *mdev = priv->mdev;
struct mlx4_cqe *cqe;
+ struct mlx4_cq *mcq = &cq->mcq;
struct mlx4_en_rx_ring *ring = priv->rx_ring[cq->ring];
- struct mlx4_en_rx_alloc *frags;
struct mlx4_en_rx_desc *rx_desc;
+ struct mlx4_en_rx_buf *rx_buf;
+ struct net_device_stats *stats = &priv->stats;
struct sk_buff *skb;
int index;
- int nr;
unsigned int length;
int polled = 0;
- int ip_summed;
+ struct ethhdr *ethh;
+ dma_addr_t dma;
int factor = priv->cqe_factor;
+ u32 cons_index = mcq->cons_index;
+ u32 size_mask = ring->size_mask;
+ int size = cq->size;
+ struct mlx4_cqe *buf = cq->buf;
u64 timestamp;
+ int ip_summed;
if (!priv->port_up)
return 0;
@@ -638,70 +559,56 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
/* We assume a 1:1 mapping between CQEs and Rx descriptors, so Rx
* descriptor offset can be deduced from the CQE index instead of
* reading 'cqe->index' */
- index = cq->mcq.cons_index & ring->size_mask;
- cqe = &cq->buf[(index << factor) + factor];
+ index = cons_index & size_mask;
+ cqe = &buf[(index << factor) + factor];
/* Process all completed CQEs */
while (XNOR(cqe->owner_sr_opcode & MLX4_CQE_OWNER_MASK,
- cq->mcq.cons_index & cq->size)) {
+ cons_index & size)) {
- frags = ring->rx_info + (index << priv->log_rx_info);
rx_desc = ring->buf + (index << ring->log_stride);
-
- /*
- * make sure we read the CQE after we read the ownership bit
- */
+ rx_buf = &ring->rx_info[index];
+ /* make sure we read the CQE after we read the ownership bit */
rmb();
/* Drop packet on bad receive or bad checksum */
- if (unlikely((cqe->owner_sr_opcode & MLX4_CQE_OPCODE_MASK) ==
- MLX4_CQE_OPCODE_ERROR)) {
- en_err(priv, "CQE completed in error - vendor "
- "syndrom:%d syndrom:%d\n",
- ((struct mlx4_err_cqe *) cqe)->vendor_err_syndrome,
- ((struct mlx4_err_cqe *) cqe)->syndrome);
- goto next;
- }
- if (unlikely(cqe->badfcs_enc & MLX4_CQE_BAD_FCS)) {
- en_dbg(RX_ERR, priv, "Accepted frame with bad FCS\n");
+ if (unlikely(invalid_cqe(priv, cqe)))
goto next;
- }
+
+ /* Get pointer to first fragment since we haven't skb yet and
+ * cast it to ethhdr struct
+ */
+ dma = be64_to_cpu(rx_desc->data[0].addr);
+ dma_sync_single_for_cpu(priv->ddev, dma, sizeof(*ethh),
+ DMA_FROM_DEVICE);
+
+ ethh = (struct ethhdr *)(page_address(rx_buf->page) +
+ rx_buf->page_offset);
/* Check if we need to drop the packet if SRIOV is not enabled
* and not performing the selftest or flb disabled
*/
- if (priv->flags & MLX4_EN_FLAG_RX_FILTER_NEEDED) {
- struct ethhdr *ethh;
- dma_addr_t dma;
- /* Get pointer to first fragment since we haven't
- * skb yet and cast it to ethhdr struct
- */
- dma = be64_to_cpu(rx_desc->data[0].addr);
- dma_sync_single_for_cpu(priv->ddev, dma, sizeof(*ethh),
- DMA_FROM_DEVICE);
- ethh = (struct ethhdr *)(page_address(frags[0].page) +
- frags[0].page_offset);
-
- if (is_multicast_ether_addr(ethh->h_dest)) {
- struct mlx4_mac_entry *entry;
- struct hlist_head *bucket;
- unsigned int mac_hash;
-
- /* Drop the packet, since HW loopback-ed it */
- mac_hash = ethh->h_source[MLX4_EN_MAC_HASH_IDX];
- bucket = &priv->mac_hash[mac_hash];
- rcu_read_lock();
- hlist_for_each_entry_rcu(entry, bucket, hlist) {
- if (ether_addr_equal_64bits(entry->mac,
- ethh->h_source)) {
- rcu_read_unlock();
- goto next;
- }
+ if (priv->flags & MLX4_EN_FLAG_RX_FILTER_NEEDED &&
+ is_multicast_ether_addr(ethh->h_dest)) {
+ struct mlx4_mac_entry *entry;
+ struct hlist_head *bucket;
+ unsigned int mac_hash;
+
+ /* Drop the packet, since HW loopback-ed it */
+ mac_hash = ethh->h_source[MLX4_EN_MAC_HASH_IDX];
+ bucket = &priv->mac_hash[mac_hash];
+ rcu_read_lock();
+ hlist_for_each_entry_rcu(entry, bucket, hlist) {
+ if (ether_addr_equal_64bits(entry->mac,
+ ethh->h_source)) {
+ rcu_read_unlock();
+ goto next;
}
- rcu_read_unlock();
}
+ rcu_read_unlock();
}
-
+ /* avoid cache miss in tcp_gro_receive */
+ prefetch((char *)ethh + 64);
/*
* Packet is OK - process it.
*/
@@ -710,77 +617,27 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
ring->bytes += length;
ring->packets++;
- if (likely(dev->features & NETIF_F_RXCSUM)) {
- if ((cqe->status & cpu_to_be16(MLX4_CQE_STATUS_IPOK)) &&
- (cqe->checksum == cpu_to_be16(0xffff))) {
- ring->csum_ok++;
- /* This packet is eligible for GRO if it is:
- * - DIX Ethernet (type interpretation)
- * - TCP/IP (v4)
- * - without IP options
- * - not an IP fragment
- * - no LLS polling in progress
- */
- if (!mlx4_en_cq_ll_polling(cq) &&
- (dev->features & NETIF_F_GRO)) {
- struct sk_buff *gro_skb = napi_get_frags(&cq->napi);
- if (!gro_skb)
- goto next;
-
- nr = mlx4_en_complete_rx_desc(priv,
- rx_desc, frags, gro_skb,
- length);
- if (!nr)
- goto next;
-
- skb_shinfo(gro_skb)->nr_frags = nr;
- gro_skb->len = length;
- gro_skb->data_len = length;
- gro_skb->ip_summed = CHECKSUM_UNNECESSARY;
-
- if ((cqe->vlan_my_qpn &
- cpu_to_be32(MLX4_CQE_VLAN_PRESENT_MASK)) &&
- (dev->features & NETIF_F_HW_VLAN_CTAG_RX)) {
- u16 vid = be16_to_cpu(cqe->sl_vid);
-
- __vlan_hwaccel_put_tag(gro_skb, htons(ETH_P_8021Q), vid);
- }
-
- if (dev->features & NETIF_F_RXHASH)
- gro_skb->rxhash = be32_to_cpu(cqe->immed_rss_invalid);
-
- skb_record_rx_queue(gro_skb, cq->ring);
-
- if (ring->hwtstamp_rx_filter == HWTSTAMP_FILTER_ALL) {
- timestamp = mlx4_en_get_cqe_ts(cqe);
- mlx4_en_fill_hwtstamps(mdev,
- skb_hwtstamps(gro_skb),
- timestamp);
- }
-
- napi_gro_frags(&cq->napi);
- goto next;
- }
-
- /* GRO not possible, complete processing here */
- ip_summed = CHECKSUM_UNNECESSARY;
- } else {
- ip_summed = CHECKSUM_NONE;
- ring->csum_none++;
- }
+ if (likely((dev->features & NETIF_F_RXCSUM) &&
+ (cqe->status & cpu_to_be16(MLX4_CQE_STATUS_IPOK)) &&
+ (cqe->checksum == cpu_to_be16(0xffff)))) {
+ ring->csum_ok++;
+ ip_summed = CHECKSUM_UNNECESSARY;
} else {
- ip_summed = CHECKSUM_NONE;
ring->csum_none++;
+ ip_summed = CHECKSUM_NONE;
}
- skb = mlx4_en_rx_skb(priv, rx_desc, frags, length);
+ /* any kind of traffic goes here */
+ skb = mlx4_en_rx_skb(priv, ring, rx_desc, rx_buf, length);
if (!skb) {
- priv->stats.rx_dropped++;
+ stats->rx_dropped++;
goto next;
}
- if (unlikely(priv->validate_loopback)) {
+ /* check for loopback */
+ if (unlikely(priv->validate_loopback)) {
validate_loopback(priv, skb);
+ kfree_skb(skb);
goto next;
}
@@ -791,12 +648,15 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
if (dev->features & NETIF_F_RXHASH)
skb->rxhash = be32_to_cpu(cqe->immed_rss_invalid);
+ /* process VLAN traffic */
if ((be32_to_cpu(cqe->vlan_my_qpn) &
- MLX4_CQE_VLAN_PRESENT_MASK) &&
- (dev->features & NETIF_F_HW_VLAN_CTAG_RX))
- __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), be16_to_cpu(cqe->sl_vid));
+ MLX4_CQE_VLAN_PRESENT_MASK) &&
+ (dev->features & NETIF_F_HW_VLAN_CTAG_RX)) {
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+ be16_to_cpu(cqe->sl_vid));
- if (ring->hwtstamp_rx_filter == HWTSTAMP_FILTER_ALL) {
+ /* process time stamps */
+ } else if (ring->hwtstamp_rx_filter == HWTSTAMP_FILTER_ALL) {
timestamp = mlx4_en_get_cqe_ts(cqe);
mlx4_en_fill_hwtstamps(mdev, skb_hwtstamps(skb),
timestamp);
@@ -805,30 +665,32 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
skb_mark_napi_id(skb, &cq->napi);
/* Push it up the stack */
- netif_receive_skb(skb);
+ if (mlx4_en_cq_ll_polling(cq))
+ netif_receive_skb(skb);
+ else
+ napi_gro_receive(&cq->napi, skb);
next:
- for (nr = 0; nr < priv->num_frags; nr++)
- mlx4_en_free_frag(priv, frags, nr);
-
- ++cq->mcq.cons_index;
- index = (cq->mcq.cons_index) & ring->size_mask;
- cqe = &cq->buf[(index << factor) + factor];
- if (++polled == budget)
+ ++cons_index;
+ index = cons_index & size_mask;
+ cqe = &buf[(index << factor) + factor];
+ if (++polled == budget) {
+ /* we are here because we reached the NAPI budget */
goto out;
+ }
}
out:
AVG_PERF_COUNTER(priv->pstats.rx_coal_avg, polled);
- mlx4_cq_set_ci(&cq->mcq);
+ mcq->cons_index = cons_index;
+ mlx4_cq_set_ci(mcq);
wmb(); /* ensure HW sees CQ consumer before we post new buffers */
- ring->cons = cq->mcq.cons_index;
- mlx4_en_refill_rx_buffers(priv, ring);
+ ring->cons = mcq->cons_index;
+ ring->prod += polled;
mlx4_en_update_rx_prod_db(ring);
return polled;
}
-
void mlx4_en_rx_irq(struct mlx4_cq *mcq)
{
struct mlx4_en_cq *cq = container_of(mcq, struct mlx4_en_cq, mcq);
@@ -866,55 +728,6 @@ int mlx4_en_poll_rx_cq(struct napi_struct *napi, int budget)
return done;
}
-static const int frag_sizes[] = {
- FRAG_SZ0,
- FRAG_SZ1,
- FRAG_SZ2,
- FRAG_SZ3
-};
-
-void mlx4_en_calc_rx_buf(struct net_device *dev)
-{
- struct mlx4_en_priv *priv = netdev_priv(dev);
- int eff_mtu = dev->mtu + ETH_HLEN + VLAN_HLEN + ETH_LLC_SNAP_SIZE;
- int buf_size = 0;
- int i = 0;
-
- while (buf_size < eff_mtu) {
- priv->frag_info[i].frag_size =
- (eff_mtu > buf_size + frag_sizes[i]) ?
- frag_sizes[i] : eff_mtu - buf_size;
- priv->frag_info[i].frag_prefix_size = buf_size;
- if (!i) {
- priv->frag_info[i].frag_align = NET_IP_ALIGN;
- priv->frag_info[i].frag_stride =
- ALIGN(frag_sizes[i] + NET_IP_ALIGN, SMP_CACHE_BYTES);
- } else {
- priv->frag_info[i].frag_align = 0;
- priv->frag_info[i].frag_stride =
- ALIGN(frag_sizes[i], SMP_CACHE_BYTES);
- }
- buf_size += priv->frag_info[i].frag_size;
- i++;
- }
-
- priv->num_frags = i;
- priv->rx_skb_size = eff_mtu;
- priv->log_rx_info = ROUNDUP_LOG2(i * sizeof(struct mlx4_en_rx_alloc));
-
- en_dbg(DRV, priv, "Rx buffer scatter-list (effective-mtu:%d "
- "num_frags:%d):\n", eff_mtu, priv->num_frags);
- for (i = 0; i < priv->num_frags; i++) {
- en_err(priv,
- " frag:%d - size:%d prefix:%d align:%d stride:%d\n",
- i,
- priv->frag_info[i].frag_size,
- priv->frag_info[i].frag_prefix_size,
- priv->frag_info[i].frag_align,
- priv->frag_info[i].frag_stride);
- }
-}
-
/* RSS related functions */
static int mlx4_en_config_rss_qp(struct mlx4_en_priv *priv, int qpn,
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
index f3758de..fa33a83 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
@@ -94,27 +94,20 @@
#define MLX4_EN_WATCHDOG_TIMEOUT (15 * HZ)
-/* Use the maximum between 16384 and a single page */
-#define MLX4_EN_ALLOC_SIZE PAGE_ALIGN(16384)
+#define MLX4_EN_ALLOC_SIZE PAGE_ALIGN(PAGE_SIZE)
+#define MLX4_EN_ALLOC_ORDER get_order(MLX4_EN_ALLOC_SIZE)
-#define MLX4_EN_ALLOC_PREFER_ORDER PAGE_ALLOC_COSTLY_ORDER
-
-/* Receive fragment sizes; we use at most 3 fragments (for 9600 byte MTU
- * and 4K allocations) */
-enum {
- FRAG_SZ0 = 1536 - NET_IP_ALIGN,
- FRAG_SZ1 = 4096,
- FRAG_SZ2 = 4096,
- FRAG_SZ3 = MLX4_EN_ALLOC_SIZE
+enum mlx4_en_alloc_type {
+ MLX4_EN_ALLOC_NEW = 0,
+ MLX4_EN_ALLOC_REPLACEMENT = 1,
};
-#define MLX4_EN_MAX_RX_FRAGS 4
/* Maximum ring sizes */
#define MLX4_EN_MAX_TX_SIZE 8192
#define MLX4_EN_MAX_RX_SIZE 8192
-/* Minimum ring size for our page-allocation scheme to work */
-#define MLX4_EN_MIN_RX_SIZE (MLX4_EN_ALLOC_SIZE / SMP_CACHE_BYTES)
+/* Minimum ring sizes */
+#define MLX4_EN_MIN_RX_SIZE (4096 / TXBB_SIZE)
#define MLX4_EN_MIN_TX_SIZE (4096 / TXBB_SIZE)
#define MLX4_EN_SMALL_PKT_SIZE 64
@@ -234,13 +227,6 @@ struct mlx4_en_tx_desc {
#define MLX4_EN_CX3_LOW_ID 0x1000
#define MLX4_EN_CX3_HIGH_ID 0x1005
-struct mlx4_en_rx_alloc {
- struct page *page;
- dma_addr_t dma;
- u32 page_offset;
- u32 page_size;
-};
-
struct mlx4_en_tx_ring {
struct mlx4_hwq_resources wqres;
u32 size ; /* number of TXBBs */
@@ -275,9 +261,14 @@ struct mlx4_en_rx_desc {
struct mlx4_wqe_data_seg data[0];
};
+struct mlx4_en_rx_buf {
+ dma_addr_t dma;
+ struct page *page;
+ unsigned int page_offset;
+};
+
struct mlx4_en_rx_ring {
struct mlx4_hwq_resources wqres;
- struct mlx4_en_rx_alloc page_alloc[MLX4_EN_MAX_RX_FRAGS];
u32 size ; /* number of Rx descs*/
u32 actual_size;
u32 size_mask;
@@ -288,8 +279,12 @@ struct mlx4_en_rx_ring {
u32 cons;
u32 buf_size;
u8 fcs_del;
+ u16 rx_alloc_order;
+ u32 rx_alloc_size;
+ u32 rx_buf_size;
+ int qpn;
void *buf;
- void *rx_info;
+ struct mlx4_en_rx_buf *rx_info;
unsigned long bytes;
unsigned long packets;
#ifdef CONFIG_NET_RX_BUSY_POLL
@@ -300,6 +295,7 @@ struct mlx4_en_rx_ring {
unsigned long csum_ok;
unsigned long csum_none;
int hwtstamp_rx_filter;
+ int numa_node;
};
struct mlx4_en_cq {
@@ -436,13 +432,6 @@ struct mlx4_en_mc_list {
u64 reg_id;
};
-struct mlx4_en_frag_info {
- u16 frag_size;
- u16 frag_prefix_size;
- u16 frag_stride;
- u16 frag_align;
-};
-
#ifdef CONFIG_MLX4_EN_DCB
/* Minimal TC BW - setting to 0 will block traffic */
#define MLX4_EN_BW_MIN 1
@@ -526,8 +515,9 @@ struct mlx4_en_priv {
u32 tx_ring_num;
u32 rx_ring_num;
u32 rx_skb_size;
- struct mlx4_en_frag_info frag_info[MLX4_EN_MAX_RX_FRAGS];
- u16 num_frags;
+ u16 rx_alloc_order;
+ u32 rx_alloc_size;
+ u32 rx_buf_size;
u16 log_rx_info;
struct mlx4_en_tx_ring **tx_ring;
@@ -730,7 +720,7 @@ void mlx4_en_deactivate_tx_ring(struct mlx4_en_priv *priv,
int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv,
struct mlx4_en_rx_ring **pring,
- u32 size, u16 stride, int node);
+ u32 size, int node);
void mlx4_en_destroy_rx_ring(struct mlx4_en_priv *priv,
struct mlx4_en_rx_ring **pring,
u32 size, u16 stride);
--
1.8.3.4
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH net-next 02/12] net/mlx4_core: Remove zeroed out of explicit QUERY_FUNC_CAP fields
2013-12-08 12:35 [PATCH net-next 00/12] net/mlx4: Mellanox driver update 08-12-2013 Amir Vadai
2013-12-08 12:35 ` [PATCH net-next 01/12] net/mlx4_en: Reuse mapped memory in RX flow Amir Vadai
@ 2013-12-08 12:35 ` Amir Vadai
2013-12-08 12:35 ` [PATCH net-next 03/12] net/mlx4_core: Rename " Amir Vadai
` (9 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 12:35 UTC (permalink / raw)
To: David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, Amir Vadai, netdev, Hadar Hen Zion
From: Hadar Hen Zion <hadarh@mellanox.com>
All mailboxes are already zeroed by commit:
571b8b9 net/mlx4_core: Initialize all mailbox buffers to zero before use
Remove the explicit zeroing of the force-mac and force-vlan fields in
mlx4_QUERY_FUNC_CAP_wrapper.
Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
---
drivers/net/ethernet/mellanox/mlx4/fw.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c
index 1949282..91b50fe 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw.c
+++ b/drivers/net/ethernet/mellanox/mlx4/fw.c
@@ -221,12 +221,6 @@ int mlx4_QUERY_FUNC_CAP_wrapper(struct mlx4_dev *dev, int slave,
#define QUERY_FUNC_CAP_RDMA_PROPS_FORCE_PHY_WQE_GID 0x80
if (vhcr->op_modifier == 1) {
- field = 0;
- /* ensure force vlan and force mac bits are not set */
- MLX4_PUT(outbox->buf, field, QUERY_FUNC_CAP_ETH_PROPS_OFFSET);
- /* ensure that phy_wqe_gid bit is not set */
- MLX4_PUT(outbox->buf, field, QUERY_FUNC_CAP_RDMA_PROPS_OFFSET);
-
field = vhcr->in_modifier; /* phys-port = logical-port */
MLX4_PUT(outbox->buf, field, QUERY_FUNC_CAP_PHYS_PORT_OFFSET);
--
1.8.3.4
* [PATCH net-next 03/12] net/mlx4_core: Rename QUERY_FUNC_CAP fields
2013-12-08 12:35 [PATCH net-next 00/12] net/mlx4: Mellanox driver update 08-12-2013 Amir Vadai
2013-12-08 12:35 ` [PATCH net-next 01/12] net/mlx4_en: Reuse mapped memory in RX flow Amir Vadai
2013-12-08 12:35 ` [PATCH net-next 02/12] net/mlx4_core: Remove zeroed out of explicit QUERY_FUNC_CAP fields Amir Vadai
@ 2013-12-08 12:35 ` Amir Vadai
2013-12-08 12:35 ` [PATCH net-next 04/12] net/mlx4_core: Introduce nic_info new flag in QUERY_FUNC_CAP Amir Vadai
` (8 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 12:35 UTC (permalink / raw)
To: David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, Amir Vadai, netdev, Hadar Hen Zion
From: Hadar Hen Zion <hadarh@mellanox.com>
Use correct names for QUERY_FUNC_CAP fields: flags0 and flags1.
Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
---
drivers/net/ethernet/mellanox/mlx4/fw.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c
index 91b50fe..58ca7de 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw.c
+++ b/drivers/net/ethernet/mellanox/mlx4/fw.c
@@ -207,18 +207,18 @@ int mlx4_QUERY_FUNC_CAP_wrapper(struct mlx4_dev *dev, int slave,
/* when opcode modifier = 1 */
#define QUERY_FUNC_CAP_PHYS_PORT_OFFSET 0x3
-#define QUERY_FUNC_CAP_RDMA_PROPS_OFFSET 0x8
-#define QUERY_FUNC_CAP_ETH_PROPS_OFFSET 0xc
+#define QUERY_FUNC_CAP_FLAGS0_OFFSET 0x8
+#define QUERY_FUNC_CAP_FLAGS1_OFFSET 0xc
#define QUERY_FUNC_CAP_QP0_TUNNEL 0x10
#define QUERY_FUNC_CAP_QP0_PROXY 0x14
#define QUERY_FUNC_CAP_QP1_TUNNEL 0x18
#define QUERY_FUNC_CAP_QP1_PROXY 0x1c
-#define QUERY_FUNC_CAP_ETH_PROPS_FORCE_MAC 0x40
-#define QUERY_FUNC_CAP_ETH_PROPS_FORCE_VLAN 0x80
+#define QUERY_FUNC_CAP_FLAGS1_FORCE_MAC 0x40
+#define QUERY_FUNC_CAP_FLAGS1_FORCE_VLAN 0x80
-#define QUERY_FUNC_CAP_RDMA_PROPS_FORCE_PHY_WQE_GID 0x80
+#define QUERY_FUNC_CAP_FLAGS0_FORCE_PHY_WQE_GID 0x80
if (vhcr->op_modifier == 1) {
field = vhcr->in_modifier; /* phys-port = logical-port */
@@ -386,21 +386,21 @@ int mlx4_QUERY_FUNC_CAP(struct mlx4_dev *dev, u32 gen_or_port,
}
if (dev->caps.port_type[gen_or_port] == MLX4_PORT_TYPE_ETH) {
- MLX4_GET(field, outbox, QUERY_FUNC_CAP_ETH_PROPS_OFFSET);
- if (field & QUERY_FUNC_CAP_ETH_PROPS_FORCE_VLAN) {
+ MLX4_GET(field, outbox, QUERY_FUNC_CAP_FLAGS1_OFFSET);
+ if (field & QUERY_FUNC_CAP_FLAGS1_FORCE_VLAN) {
mlx4_err(dev, "VLAN is enforced on this port\n");
err = -EPROTONOSUPPORT;
goto out;
}
- if (field & QUERY_FUNC_CAP_ETH_PROPS_FORCE_MAC) {
+ if (field & QUERY_FUNC_CAP_FLAGS1_FORCE_MAC) {
mlx4_err(dev, "Force mac is enabled on this port\n");
err = -EPROTONOSUPPORT;
goto out;
}
} else if (dev->caps.port_type[gen_or_port] == MLX4_PORT_TYPE_IB) {
- MLX4_GET(field, outbox, QUERY_FUNC_CAP_RDMA_PROPS_OFFSET);
- if (field & QUERY_FUNC_CAP_RDMA_PROPS_FORCE_PHY_WQE_GID) {
+ MLX4_GET(field, outbox, QUERY_FUNC_CAP_FLAGS0_OFFSET);
+ if (field & QUERY_FUNC_CAP_FLAGS0_FORCE_PHY_WQE_GID) {
mlx4_err(dev, "phy_wqe_gid is "
"enforced on this ib port\n");
err = -EPROTONOSUPPORT;
--
1.8.3.4
* [PATCH net-next 04/12] net/mlx4_core: Introduce nic_info new flag in QUERY_FUNC_CAP
2013-12-08 12:35 [PATCH net-next 00/12] net/mlx4: Mellanox driver update 08-12-2013 Amir Vadai
` (2 preceding siblings ...)
2013-12-08 12:35 ` [PATCH net-next 03/12] net/mlx4_core: Rename " Amir Vadai
@ 2013-12-08 12:35 ` Amir Vadai
2013-12-08 12:35 ` [PATCH net-next 05/12] net/mlx4_core: Expose physical port id as PF/VF capability Amir Vadai
` (7 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 12:35 UTC (permalink / raw)
To: David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, Amir Vadai, netdev, Hadar Hen Zion
From: Hadar Hen Zion <hadarh@mellanox.com>
Set the nic_info field in QUERY_FUNC_CAP, which designates that
supplementary NIC information is provided by the hypervisor.
When it is set, the following fields are valid: nic_num_rings,
nic_indirection_tbl_sz, cur_mac and phys_port_id.
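A guest driver is expected to gate reads of the new fields on this bit;
schematically (a sketch based on this series, not verbatim driver code -
QUERY_FUNC_CAP_PHYS_PORT_ID is introduced in the next patch):

    MLX4_GET(func_cap->flags1, outbox, QUERY_FUNC_CAP_FLAGS1_OFFSET);
    if (func_cap->flags1 & QUERY_FUNC_CAP_FLAGS1_NIC_INFO) {
        /* hypervisor provides the supplementary NIC info fields */
        MLX4_GET(func_cap->phys_port_id, outbox,
                 QUERY_FUNC_CAP_PHYS_PORT_ID);
    }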
Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
---
drivers/net/ethernet/mellanox/mlx4/fw.c | 11 ++++++++---
drivers/net/ethernet/mellanox/mlx4/fw.h | 1 +
2 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c
index 58ca7de..bfe91ae 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw.c
+++ b/drivers/net/ethernet/mellanox/mlx4/fw.c
@@ -217,10 +217,15 @@ int mlx4_QUERY_FUNC_CAP_wrapper(struct mlx4_dev *dev, int slave,
#define QUERY_FUNC_CAP_FLAGS1_FORCE_MAC 0x40
#define QUERY_FUNC_CAP_FLAGS1_FORCE_VLAN 0x80
+#define QUERY_FUNC_CAP_FLAGS1_NIC_INFO 0x10
#define QUERY_FUNC_CAP_FLAGS0_FORCE_PHY_WQE_GID 0x80
if (vhcr->op_modifier == 1) {
+ /* Set nic_info bit to mark new fields support */
+ field = QUERY_FUNC_CAP_FLAGS1_NIC_INFO;
+ MLX4_PUT(outbox->buf, field, QUERY_FUNC_CAP_FLAGS1_OFFSET);
+
field = vhcr->in_modifier; /* phys-port = logical-port */
MLX4_PUT(outbox->buf, field, QUERY_FUNC_CAP_PHYS_PORT_OFFSET);
@@ -385,15 +390,15 @@ int mlx4_QUERY_FUNC_CAP(struct mlx4_dev *dev, u32 gen_or_port,
goto out;
}
+ MLX4_GET(func_cap->flags1, outbox, QUERY_FUNC_CAP_FLAGS1_OFFSET);
if (dev->caps.port_type[gen_or_port] == MLX4_PORT_TYPE_ETH) {
- MLX4_GET(field, outbox, QUERY_FUNC_CAP_FLAGS1_OFFSET);
- if (field & QUERY_FUNC_CAP_FLAGS1_FORCE_VLAN) {
+ if (func_cap->flags1 & QUERY_FUNC_CAP_FLAGS1_FORCE_VLAN) {
mlx4_err(dev, "VLAN is enforced on this port\n");
err = -EPROTONOSUPPORT;
goto out;
}
- if (field & QUERY_FUNC_CAP_FLAGS1_FORCE_MAC) {
+ if (func_cap->flags1 & QUERY_FUNC_CAP_FLAGS1_FORCE_MAC) {
mlx4_err(dev, "Force mac is enabled on this port\n");
err = -EPROTONOSUPPORT;
goto out;
diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.h b/drivers/net/ethernet/mellanox/mlx4/fw.h
index a0a368b..9d95298 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw.h
+++ b/drivers/net/ethernet/mellanox/mlx4/fw.h
@@ -140,6 +140,7 @@ struct mlx4_func_cap {
u32 qp1_proxy_qpn;
u8 physical_port;
u8 port_flags;
+ u8 flags1;
};
struct mlx4_adapter {
--
1.8.3.4
* [PATCH net-next 05/12] net/mlx4_core: Expose physical port id as PF/VF capability
2013-12-08 12:35 [PATCH net-next 00/12] net/mlx4: Mellanox driver update 08-12-2013 Amir Vadai
` (3 preceding siblings ...)
2013-12-08 12:35 ` [PATCH net-next 04/12] net/mlx4_core: Introduce nic_info new flag in QUERY_FUNC_CAP Amir Vadai
@ 2013-12-08 12:35 ` Amir Vadai
2013-12-08 12:35 ` [PATCH net-next 06/12] net/mlx4_en: Implement ndo_get_phys_port_id Amir Vadai
` (6 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 12:35 UTC (permalink / raw)
To: David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, Amir Vadai, netdev, Hadar Hen Zion
From: Hadar Hen Zion <hadarh@mellanox.com>
Add the infrastructure needed to support ndo_get_phys_port_id, which
allows users to identify the physical port that a net-device is
connected to by reading a unique port id.
This works for both VFs and PFs.
The driver uses a new device capability, phys_port_id. The PF driver
reads the port's phys_port_id from firmware and stores it. The VF driver
reads the port's phys_port_id from the PF using the QUERY_FUNC_CAP command.
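The 64-bit id is composed from the two 32-bit GUID words returned by
MOD_STAT_CFG; for example, with hypothetical values guid_hi = 0x00025900
and guid_lo = 0x12345678:

    dev->caps.phys_port_id[port] = (u64)guid_lo | ((u64)guid_hi << 32);
    /* -> 0x0002590012345678 for the hypothetical values above */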
Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
---
drivers/net/ethernet/mellanox/mlx4/fw.c | 45 +++++++++++++++++++++++++++++++
drivers/net/ethernet/mellanox/mlx4/fw.h | 1 +
drivers/net/ethernet/mellanox/mlx4/main.c | 5 ++++
include/linux/mlx4/device.h | 2 ++
4 files changed, 53 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c
index bfe91ae..27a8434 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw.c
+++ b/drivers/net/ethernet/mellanox/mlx4/fw.c
@@ -214,6 +214,7 @@ int mlx4_QUERY_FUNC_CAP_wrapper(struct mlx4_dev *dev, int slave,
#define QUERY_FUNC_CAP_QP0_PROXY 0x14
#define QUERY_FUNC_CAP_QP1_TUNNEL 0x18
#define QUERY_FUNC_CAP_QP1_PROXY 0x1c
+#define QUERY_FUNC_CAP_PHYS_PORT_ID 0x28
#define QUERY_FUNC_CAP_FLAGS1_FORCE_MAC 0x40
#define QUERY_FUNC_CAP_FLAGS1_FORCE_VLAN 0x80
@@ -242,6 +243,9 @@ int mlx4_QUERY_FUNC_CAP_wrapper(struct mlx4_dev *dev, int slave,
size += 2;
MLX4_PUT(outbox->buf, size, QUERY_FUNC_CAP_QP1_PROXY);
+ MLX4_PUT(outbox->buf, dev->caps.phys_port_id[vhcr->in_modifier],
+ QUERY_FUNC_CAP_PHYS_PORT_ID);
+
} else if (vhcr->op_modifier == 0) {
/* enable rdma and ethernet interfaces, and new quota locations */
field = (QUERY_FUNC_CAP_FLAG_ETH | QUERY_FUNC_CAP_FLAG_RDMA |
@@ -432,6 +436,10 @@ int mlx4_QUERY_FUNC_CAP(struct mlx4_dev *dev, u32 gen_or_port,
MLX4_GET(size, outbox, QUERY_FUNC_CAP_QP1_PROXY);
func_cap->qp1_proxy_qpn = size & 0xFFFFFF;
+ if (func_cap->flags1 & QUERY_FUNC_CAP_FLAGS1_NIC_INFO)
+ MLX4_GET(func_cap->phys_port_id, outbox,
+ QUERY_FUNC_CAP_PHYS_PORT_ID);
+
/* All other resources are allocated by the master, but we still report
* 'num' and 'reserved' capabilities as follows:
* - num remains the maximum resource index
@@ -1712,6 +1720,43 @@ int mlx4_NOP(struct mlx4_dev *dev)
return mlx4_cmd(dev, 0, 0x1f, 0, MLX4_CMD_NOP, 100, MLX4_CMD_NATIVE);
}
+int mlx4_get_phys_port_id(struct mlx4_dev *dev)
+{
+ u8 port;
+ u32 *outbox;
+ struct mlx4_cmd_mailbox *mailbox;
+ u32 in_mod;
+ u32 guid_hi, guid_lo;
+ int err, ret = 0;
+#define MOD_STAT_CFG_PORT_OFFSET 8
+#define MOD_STAT_CFG_GUID_H 0X14
+#define MOD_STAT_CFG_GUID_L 0X1c
+
+ mailbox = mlx4_alloc_cmd_mailbox(dev);
+ if (IS_ERR(mailbox))
+ return PTR_ERR(mailbox);
+ outbox = mailbox->buf;
+
+ for (port = 1; port <= dev->caps.num_ports; port++) {
+ in_mod = port << MOD_STAT_CFG_PORT_OFFSET;
+ err = mlx4_cmd_box(dev, 0, mailbox->dma, in_mod, 0x2,
+ MLX4_CMD_MOD_STAT_CFG, MLX4_CMD_TIME_CLASS_A,
+ MLX4_CMD_NATIVE);
+ if (err) {
+ mlx4_err(dev, "Fail to get port %d uplink guid\n",
+ port);
+ ret = err;
+ } else {
+ MLX4_GET(guid_hi, outbox, MOD_STAT_CFG_GUID_H);
+ MLX4_GET(guid_lo, outbox, MOD_STAT_CFG_GUID_L);
+ dev->caps.phys_port_id[port] = (u64)guid_lo |
+ (u64)guid_hi << 32;
+ }
+ }
+ mlx4_free_cmd_mailbox(dev, mailbox);
+ return ret;
+}
+
#define MLX4_WOL_SETUP_MODE (5 << 28)
int mlx4_wol_read(struct mlx4_dev *dev, u64 *config, int port)
{
diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.h b/drivers/net/ethernet/mellanox/mlx4/fw.h
index 9d95298..6811ee0 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw.h
+++ b/drivers/net/ethernet/mellanox/mlx4/fw.h
@@ -141,6 +141,7 @@ struct mlx4_func_cap {
u8 physical_port;
u8 port_flags;
u8 flags1;
+ u64 phys_port_id;
};
struct mlx4_adapter {
diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
index 01fc651..d5f5dcb 100644
--- a/drivers/net/ethernet/mellanox/mlx4/main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/main.c
@@ -606,6 +606,7 @@ static int mlx4_slave_cap(struct mlx4_dev *dev)
dev->caps.qp1_tunnel[i - 1] = func_cap.qp1_tunnel_qpn;
dev->caps.qp1_proxy[i - 1] = func_cap.qp1_proxy_qpn;
dev->caps.port_mask[i] = dev->caps.port_type[i];
+ dev->caps.phys_port_id[i] = func_cap.phys_port_id;
if (mlx4_get_slave_pkey_gid_tbl_len(dev, i,
&dev->caps.gid_table_len[i],
&dev->caps.pkey_table_len[i]))
@@ -1484,6 +1485,10 @@ static int mlx4_init_hca(struct mlx4_dev *dev)
choose_steering_mode(dev, &dev_cap);
+ err = mlx4_get_phys_port_id(dev);
+ if (err)
+ mlx4_err(dev, "Fail to get physical port id\n");
+
if (mlx4_is_master(dev))
mlx4_parav_master_pf_caps(dev);
diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h
index 7d3a523..294b7c5 100644
--- a/include/linux/mlx4/device.h
+++ b/include/linux/mlx4/device.h
@@ -454,6 +454,7 @@ struct mlx4_caps {
u32 userspace_caps; /* userspace must be aware of these */
u32 function_caps; /* VFs must be aware of these */
u16 hca_core_clock;
+ u64 phys_port_id[MLX4_MAX_PORTS + 1];
};
struct mlx4_buf_list {
@@ -1113,6 +1114,7 @@ int mlx4_assign_eq(struct mlx4_dev *dev, char *name, struct cpu_rmap *rmap,
int *vector);
void mlx4_release_eq(struct mlx4_dev *dev, int vec);
+int mlx4_get_phys_port_id(struct mlx4_dev *dev);
int mlx4_wol_read(struct mlx4_dev *dev, u64 *config, int port);
int mlx4_wol_write(struct mlx4_dev *dev, u64 config, int port);
--
1.8.3.4
* [PATCH net-next 06/12] net/mlx4_en: Implement ndo_get_phys_port_id
2013-12-08 12:35 [PATCH net-next 00/12] net/mlx4: Mellanox driver update 08-12-2013 Amir Vadai
` (4 preceding siblings ...)
2013-12-08 12:35 ` [PATCH net-next 05/12] net/mlx4_core: Expose physical port id as PF/VF capability Amir Vadai
@ 2013-12-08 12:35 ` Amir Vadai
2013-12-08 12:35 ` [PATCH net-next 07/12] net/mlx4_en: Configure the XPS queue mapping on driver load Amir Vadai
` (5 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 12:35 UTC (permalink / raw)
To: David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, Amir Vadai, netdev, Hadar Hen Zion
From: Hadar Hen Zion <hadarh@mellanox.com>
Use the port GUID read from the firmware to identify the physical port.
This port identifier is available via ndo_get_phys_port_id for both PF
and VF net-devices.
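The u64 GUID is exported most-significant byte first; a self-contained
sketch of the conversion (a hypothetical helper mirroring the patch, not
part of the driver):

    #define PORT_ID_BYTE_LEN 8

    /* write a u64 port id into 'out' as 8 big-endian bytes */
    static void phys_port_id_to_bytes(u64 id, u8 *out)
    {
        int i;

        for (i = PORT_ID_BYTE_LEN - 1; i >= 0; --i) {
            out[i] = id & 0xff;
            id >>= 8;
        }
    }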
Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
---
drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index 9270006..44a1cc2 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -2168,6 +2168,27 @@ static int mlx4_en_set_vf_link_state(struct net_device *dev, int vf, int link_st
return mlx4_set_vf_link_state(mdev->dev, en_priv->port, vf, link_state);
}
+
+#define PORT_ID_BYTE_LEN 8
+static int mlx4_en_get_phys_port_id(struct net_device *dev,
+ struct netdev_phys_port_id *ppid)
+{
+ struct mlx4_en_priv *priv = netdev_priv(dev);
+ struct mlx4_dev *mdev = priv->mdev->dev;
+ int i;
+ u64 phys_port_id = mdev->caps.phys_port_id[priv->port];
+
+ if (!phys_port_id)
+ return -EOPNOTSUPP;
+
+ ppid->id_len = sizeof(phys_port_id);
+ for (i = PORT_ID_BYTE_LEN - 1; i >= 0; --i) {
+ ppid->id[i] = phys_port_id & 0xff;
+ phys_port_id >>= 8;
+ }
+ return 0;
+}
+
static const struct net_device_ops mlx4_netdev_ops = {
.ndo_open = mlx4_en_open,
.ndo_stop = mlx4_en_close,
@@ -2193,6 +2214,7 @@ static const struct net_device_ops mlx4_netdev_ops = {
#ifdef CONFIG_NET_RX_BUSY_POLL
.ndo_busy_poll = mlx4_en_low_latency_recv,
#endif
+ .ndo_get_phys_port_id = mlx4_en_get_phys_port_id,
};
static const struct net_device_ops mlx4_netdev_ops_master = {
@@ -2221,6 +2243,7 @@ static const struct net_device_ops mlx4_netdev_ops_master = {
#ifdef CONFIG_RFS_ACCEL
.ndo_rx_flow_steer = mlx4_en_filter_rfs,
#endif
+ .ndo_get_phys_port_id = mlx4_en_get_phys_port_id,
};
int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
--
1.8.3.4
* [PATCH net-next 07/12] net/mlx4_en: Configure the XPS queue mapping on driver load
2013-12-08 12:35 [PATCH net-next 00/12] net/mlx4: Mellanox driver update 08-12-2013 Amir Vadai
` (5 preceding siblings ...)
2013-12-08 12:35 ` [PATCH net-next 06/12] net/mlx4_en: Implement ndo_get_phys_port_id Amir Vadai
@ 2013-12-08 12:35 ` Amir Vadai
2013-12-08 15:28 ` Sergei Shtylyov
2013-12-08 12:35 ` [PATCH net-next 08/12] net/mlx4_core: Set CQE/EQE size to 64B by default Amir Vadai
` (4 subsequent siblings)
11 siblings, 1 reply; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 12:35 UTC (permalink / raw)
To: David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, Amir Vadai, netdev, Ido Shamay
From: Ido Shamay <idos@mellanox.com>
Only TX rings of User Priority 0 are mapped.
TX rings of other UPs use the UP 0 mapping.
XPS is not in use when num_tc is set.
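The mapping is one TX ring to the same-numbered CPU; schematically (a
condensed sketch of the two hunks below, not new logic):

    /* ring creation: remember the CPU this queue should prefer */
    if (queue_index < priv->num_tx_rings_p_up && cpu_online(queue_index))
        cpumask_set_cpu(queue_index, &ring->affinity_mask);

    /* ring activation: install the XPS map for UP 0 rings only */
    if (!user_prio && cpu_online(ring->queue_index))
        netif_set_xps_queue(priv->dev, &ring->affinity_mask,
                            ring->queue_index);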
Signed-off-by: Ido Shamay <idos@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
---
drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 6 ++++--
drivers/net/ethernet/mellanox/mlx4/en_tx.c | 9 ++++++++-
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 7 +++++--
3 files changed, 17 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index 44a1cc2..c9bc292 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -1914,8 +1914,10 @@ int mlx4_en_alloc_resources(struct mlx4_en_priv *priv)
prof->tx_ring_size, i, TX, node))
goto err;
- if (mlx4_en_create_tx_ring(priv, &priv->tx_ring[i], priv->base_tx_qpn + i,
- prof->tx_ring_size, TXBB_SIZE, node))
+ if (mlx4_en_create_tx_ring(priv, &priv->tx_ring[i],
+ priv->base_tx_qpn + i,
+ prof->tx_ring_size, TXBB_SIZE,
+ node, i))
goto err;
}
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
index f54ebd5..5e22d7d 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
@@ -55,7 +55,7 @@ MODULE_PARM_DESC(inline_thold, "threshold for using inline data");
int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
struct mlx4_en_tx_ring **pring, int qpn, u32 size,
- u16 stride, int node)
+ u16 stride, int node, int queue_index)
{
struct mlx4_en_dev *mdev = priv->mdev;
struct mlx4_en_tx_ring *ring;
@@ -140,6 +140,10 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
ring->bf_enabled = true;
ring->hwtstamp_tx_type = priv->hwtstamp_config.tx_type;
+ ring->queue_index = queue_index;
+
+ if (queue_index < priv->num_tx_rings_p_up && cpu_online(queue_index))
+ cpumask_set_cpu(queue_index, &ring->affinity_mask);
*pring = ring;
return 0;
@@ -206,6 +210,9 @@ int mlx4_en_activate_tx_ring(struct mlx4_en_priv *priv,
err = mlx4_qp_to_ready(mdev->dev, &ring->wqres.mtt, &ring->context,
&ring->qp, &ring->qp_state);
+ if (!user_prio && cpu_online(ring->queue_index))
+ netif_set_xps_queue(priv->dev, &ring->affinity_mask,
+ ring->queue_index);
return err;
}
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
index fa33a83..a4cdd7d 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
@@ -241,6 +241,8 @@ struct mlx4_en_tx_ring {
u16 poll_cnt;
struct mlx4_en_tx_info *tx_info;
u8 *bounce_buf;
+ u8 queue_index;
+ cpumask_t affinity_mask;
u32 last_nr_txbb;
struct mlx4_qp qp;
struct mlx4_qp_context context;
@@ -708,8 +710,9 @@ u16 mlx4_en_select_queue(struct net_device *dev, struct sk_buff *skb);
netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev);
int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
- struct mlx4_en_tx_ring **pring,
- int qpn, u32 size, u16 stride, int node);
+ struct mlx4_en_tx_ring **pring,
+ int qpn, u32 size, u16 stride,
+ int node, int queue_index);
void mlx4_en_destroy_tx_ring(struct mlx4_en_priv *priv,
struct mlx4_en_tx_ring **pring);
int mlx4_en_activate_tx_ring(struct mlx4_en_priv *priv,
--
1.8.3.4
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH net-next 08/12] net/mlx4_core: Set CQE/EQE size to 64B by default
2013-12-08 12:35 [PATCH net-next 00/12] net/mlx4: Mellanox driver update 08-12-2013 Amir Vadai
` (6 preceding siblings ...)
2013-12-08 12:35 ` [PATCH net-next 07/12] net/mlx4_en: Configure the XPS queue mapping on driver load Amir Vadai
@ 2013-12-08 12:35 ` Amir Vadai
2013-12-08 12:35 ` [PATCH net-next 09/12] net/mlx4_en: Ignore irrelevant hypervisor events Amir Vadai
` (3 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 12:35 UTC (permalink / raw)
To: David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, Amir Vadai, netdev, Eyal Perry
From: Eyal Perry <eyalpe@mellanox.com>
To achieve out-of-the-box performance, the default is now to use
64-byte CQEs/EQEs. In tests conducted in our labs, this doubled the
message rate. To support older VFs/libmlx4, enable_64b_cqe_eqe must be
set to 0 (disabled).
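Users who must keep the old behavior can override the new default at
module load time, e.g.:

	modprobe mlx4_core enable_64b_cqe_eqe=0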
Signed-off-by: Eyal Perry <eyalpe@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
---
drivers/net/ethernet/mellanox/mlx4/main.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
index d5f5dcb..9a024ae 100644
--- a/drivers/net/ethernet/mellanox/mlx4/main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/main.c
@@ -96,10 +96,10 @@ MODULE_PARM_DESC(log_num_mgm_entry_size, "log mgm size, that defines the num"
" To activate device managed"
" flow steering when available, set to -1");
-static bool enable_64b_cqe_eqe;
+static bool enable_64b_cqe_eqe = true;
module_param(enable_64b_cqe_eqe, bool, 0444);
MODULE_PARM_DESC(enable_64b_cqe_eqe,
- "Enable 64 byte CQEs/EQEs when the FW supports this");
+ "Enable 64 byte CQEs/EQEs when the FW supports this (default: True)");
#define HCA_GLOBAL_CAP_MASK 0
--
1.8.3.4
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH net-next 09/12] net/mlx4_en: Ignore irrelevant hypervisor events
2013-12-08 12:35 [PATCH net-next 00/12] net/mlx4: Mellanox driver update 08-12-2013 Amir Vadai
` (7 preceding siblings ...)
2013-12-08 12:35 ` [PATCH net-next 08/12] net/mlx4_core: Set CQE/EQE size to 64B by default Amir Vadai
@ 2013-12-08 12:35 ` Amir Vadai
2013-12-08 12:35 ` [PATCH net-next 10/12] net/mlx4_en: Add NAPI support for transmit side Amir Vadai
` (2 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 12:35 UTC (permalink / raw)
To: David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, Amir Vadai, netdev,
Eugenia Emantayev
From: Eugenia Emantayev <eugenia@mellanox.co.il>
The MLX4_DEV_EVENT_SLAVE_INIT and MLX4_DEV_EVENT_SLAVE_SHUTDOWN
events are used by the hypervisor to inform the PPF IB driver that
IB para-virtualization must be initialized/destroyed for a slave.
If such an event is caught by an Ethernet VF, an annoying but harmless
error message is printed to dmesg. Remove the dmesg prints for these
events.
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.co.il>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
---
drivers/net/ethernet/mellanox/mlx4/en_main.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_main.c b/drivers/net/ethernet/mellanox/mlx4/en_main.c
index 0d087b0..725a4e1 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_main.c
@@ -174,6 +174,9 @@ static void mlx4_en_event(struct mlx4_dev *dev, void *endev_ptr,
mlx4_err(mdev, "Internal error detected, restarting device\n");
break;
+ case MLX4_DEV_EVENT_SLAVE_INIT:
+ case MLX4_DEV_EVENT_SLAVE_SHUTDOWN:
+ break;
default:
if (port < 1 || port > dev->caps.num_ports ||
!mdev->pndev[port])
--
1.8.3.4
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH net-next 10/12] net/mlx4_en: Add NAPI support for transmit side
2013-12-08 12:35 [PATCH net-next 00/12] net/mlx4: Mellanox driver update 08-12-2013 Amir Vadai
` (8 preceding siblings ...)
2013-12-08 12:35 ` [PATCH net-next 09/12] net/mlx4_en: Ignore irrelevant hypervisor events Amir Vadai
@ 2013-12-08 12:35 ` Amir Vadai
2013-12-08 19:22 ` Eric Dumazet
2013-12-08 12:35 ` [PATCH net-next 11/12] net/mlx4_en: Fix Supported/Advertised link mode reported by ethtool Amir Vadai
2013-12-08 12:35 ` [PATCH net-next 12/12] net/mlx4_core: Check port number for validity before accessing data Amir Vadai
11 siblings, 1 reply; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 12:35 UTC (permalink / raw)
To: David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, Amir Vadai, netdev,
Eugenia Emantayev
From: Eugenia Emantayev <eugenia@mellanox.com>
Add NAPI support for the TX side: implement the poll callback and
schedule it from the TX completion interrupt.
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com>
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
---
drivers/net/ethernet/mellanox/mlx4/en_cq.c | 12 ++++++---
drivers/net/ethernet/mellanox/mlx4/en_tx.c | 39 +++++++++++++++++++++++-----
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 3 +++
3 files changed, 43 insertions(+), 11 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_cq.c b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
index 3a098cc..2c60f0c 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
@@ -161,12 +161,16 @@ int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq,
cq->mcq.comp = cq->is_tx ? mlx4_en_tx_irq : mlx4_en_rx_irq;
cq->mcq.event = mlx4_en_cq_event;
- if (!cq->is_tx) {
+ if (cq->is_tx) {
+ netif_napi_add(cq->dev, &cq->napi, mlx4_en_poll_tx_cq,
+ MLX4_EN_TX_BUDGET);
+ } else {
netif_napi_add(cq->dev, &cq->napi, mlx4_en_poll_rx_cq, 64);
napi_hash_add(&cq->napi);
- napi_enable(&cq->napi);
}
+ napi_enable(&cq->napi);
+
return 0;
}
@@ -188,12 +192,12 @@ void mlx4_en_destroy_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq **pcq)
void mlx4_en_deactivate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq)
{
+ napi_disable(&cq->napi);
if (!cq->is_tx) {
- napi_disable(&cq->napi);
napi_hash_del(&cq->napi);
synchronize_rcu();
- netif_napi_del(&cq->napi);
}
+ netif_napi_del(&cq->napi);
mlx4_cq_free(priv->mdev->dev, &cq->mcq);
}
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
index 5e22d7d..e3adceb 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
@@ -324,7 +324,7 @@ static u32 mlx4_en_free_tx_desc(struct mlx4_en_priv *priv,
}
}
}
- dev_kfree_skb_any(skb);
+ dev_kfree_skb(skb);
return tx_info->nr_txbb;
}
@@ -361,7 +361,9 @@ int mlx4_en_free_tx_buf(struct net_device *dev, struct mlx4_en_tx_ring *ring)
return cnt;
}
-static void mlx4_en_process_tx_cq(struct net_device *dev, struct mlx4_en_cq *cq)
+static int mlx4_en_process_tx_cq(struct net_device *dev,
+ struct mlx4_en_cq *cq,
+ int budget)
{
struct mlx4_en_priv *priv = netdev_priv(dev);
struct mlx4_cq *mcq = &cq->mcq;
@@ -379,9 +381,10 @@ static void mlx4_en_process_tx_cq(struct net_device *dev, struct mlx4_en_cq *cq)
u32 bytes = 0;
int factor = priv->cqe_factor;
u64 timestamp = 0;
+ int done = 0;
if (!priv->port_up)
- return;
+ return 0;
index = cons_index & size_mask;
cqe = &buf[(index << factor) + factor];
@@ -390,7 +393,7 @@ static void mlx4_en_process_tx_cq(struct net_device *dev, struct mlx4_en_cq *cq)
/* Process all completed CQEs */
while (XNOR(cqe->owner_sr_opcode & MLX4_CQE_OWNER_MASK,
- cons_index & size)) {
+ cons_index & size) && (done < budget)) {
/*
* make sure we read the CQE after we read the
* ownership bit
@@ -428,7 +431,7 @@ static void mlx4_en_process_tx_cq(struct net_device *dev, struct mlx4_en_cq *cq)
txbbs_stamp = txbbs_skipped;
packets++;
bytes += ring->tx_info[ring_index].nr_bytes;
- } while (ring_index != new_index);
+ } while ((++done < budget) && (ring_index != new_index));
++cons_index;
index = cons_index & size_mask;
@@ -454,6 +457,7 @@ static void mlx4_en_process_tx_cq(struct net_device *dev, struct mlx4_en_cq *cq)
netif_tx_wake_queue(ring->tx_queue);
priv->port_stats.wake_queue++;
}
+ return done;
}
void mlx4_en_tx_irq(struct mlx4_cq *mcq)
@@ -461,10 +465,31 @@ void mlx4_en_tx_irq(struct mlx4_cq *mcq)
struct mlx4_en_cq *cq = container_of(mcq, struct mlx4_en_cq, mcq);
struct mlx4_en_priv *priv = netdev_priv(cq->dev);
- mlx4_en_process_tx_cq(cq->dev, cq);
- mlx4_en_arm_cq(priv, cq);
+ if (priv->port_up)
+ napi_schedule(&cq->napi);
+ else
+ mlx4_en_arm_cq(priv, cq);
}
+/* TX CQ polling - called by NAPI */
+int mlx4_en_poll_tx_cq(struct napi_struct *napi, int budget)
+{
+ struct mlx4_en_cq *cq = container_of(napi, struct mlx4_en_cq, napi);
+ struct net_device *dev = cq->dev;
+ struct mlx4_en_priv *priv = netdev_priv(dev);
+ int done;
+
+ done = mlx4_en_process_tx_cq(dev, cq, budget);
+
+	/* If we used less than the full budget, all completed CQEs were handled */
+ if (done < budget) {
+ /* Done for now */
+ napi_complete(napi);
+ mlx4_en_arm_cq(priv, cq);
+ return done;
+ }
+ return budget;
+}
static struct mlx4_en_tx_desc *mlx4_en_bounce_to_desc(struct mlx4_en_priv *priv,
struct mlx4_en_tx_ring *ring,
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
index a4cdd7d..e2a1abc 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
@@ -224,6 +224,8 @@ struct mlx4_en_tx_desc {
#define MLX4_EN_USE_SRQ 0x01000000
+#define MLX4_EN_TX_BUDGET 64
+
#define MLX4_EN_CX3_LOW_ID 0x1000
#define MLX4_EN_CX3_HIGH_ID 0x1005
@@ -734,6 +736,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev,
struct mlx4_en_cq *cq,
int budget);
int mlx4_en_poll_rx_cq(struct napi_struct *napi, int budget);
+int mlx4_en_poll_tx_cq(struct napi_struct *napi, int budget);
void mlx4_en_fill_qp_context(struct mlx4_en_priv *priv, int size, int stride,
int is_tx, int rss, int qpn, int cqn, int user_prio,
struct mlx4_qp_context *context);
--
1.8.3.4
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH net-next 11/12] net/mlx4_en: Fix Supported/Advertised link mode reported by ethtool
2013-12-08 12:35 [PATCH net-next 00/12] net/mlx4: Mellanox driver update 08-12-2013 Amir Vadai
` (9 preceding siblings ...)
2013-12-08 12:35 ` [PATCH net-next 10/12] net/mlx4_en: Add NAPI support for transmit side Amir Vadai
@ 2013-12-08 12:35 ` Amir Vadai
2013-12-08 12:35 ` [PATCH net-next 12/12] net/mlx4_core: Check port number for validity before accessing data Amir Vadai
11 siblings, 0 replies; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 12:35 UTC (permalink / raw)
To: David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, Amir Vadai, netdev, Rana Shahout,
Eugenia Emantayev, Eyal Perry
From: Rana Shahout <ranas@mellanox.com>
Correctly report 1/10/40Gb/s link speeds as supported/advertised link
modes in ethtool's get_settings command.
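After this change, querying a port, e.g.:

	ethtool <dev>

should list the 1/10/40GbE modes under "Supported link modes" and
"Advertised link modes" instead of only 10000baseT/Full.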
Signed-off-by: Rana Shahout <ranas@mellanox.com>
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.co.il>
Signed-off-by: Eyal Perry <eyalpe@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
---
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c | 24 ++++++++++++++++++++++--
1 file changed, 22 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
index 0596f9f..a601869 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
@@ -367,8 +367,28 @@ static int mlx4_en_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
int trans_type;
cmd->autoneg = AUTONEG_DISABLE;
- cmd->supported = SUPPORTED_10000baseT_Full;
- cmd->advertising = ADVERTISED_10000baseT_Full;
+
+ cmd->supported = SUPPORTED_1000baseT_Full |
+ SUPPORTED_10000baseT_Full |
+ SUPPORTED_1000baseKX_Full |
+ SUPPORTED_10000baseKX4_Full |
+ SUPPORTED_10000baseKR_Full |
+ SUPPORTED_10000baseR_FEC |
+ SUPPORTED_40000baseKR4_Full |
+ SUPPORTED_40000baseCR4_Full |
+ SUPPORTED_40000baseSR4_Full |
+ SUPPORTED_40000baseLR4_Full;
+
+ cmd->advertising = ADVERTISED_1000baseT_Full |
+ ADVERTISED_10000baseT_Full |
+ ADVERTISED_1000baseKX_Full |
+ ADVERTISED_10000baseKX4_Full |
+ ADVERTISED_10000baseKR_Full |
+ ADVERTISED_10000baseR_FEC |
+ ADVERTISED_40000baseKR4_Full |
+ ADVERTISED_40000baseCR4_Full |
+ ADVERTISED_40000baseSR4_Full |
+ ADVERTISED_40000baseLR4_Full;
if (mlx4_en_QUERY_PORT(priv->mdev, priv->port))
return -ENOMEM;
--
1.8.3.4
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH net-next 12/12] net/mlx4_core: Check port number for validity before accessing data
2013-12-08 12:35 [PATCH net-next 00/12] net/mlx4: Mellanox driver update 08-12-2013 Amir Vadai
` (10 preceding siblings ...)
2013-12-08 12:35 ` [PATCH net-next 11/12] net/mlx4_en: Fix Supported/Advertised link mode reported by ethtool Amir Vadai
@ 2013-12-08 12:35 ` Amir Vadai
11 siblings, 0 replies; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 12:35 UTC (permalink / raw)
To: David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, Amir Vadai, netdev, Matan Barak
From: Matan Barak <matanb@mellanox.com>
Validate the port number in mlx4_promisc_qp() before using it.
The port number is extracted from the gid, so a crafted or corrupted
gid could otherwise lead to a crash.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
---
drivers/net/ethernet/mellanox/mlx4/mcg.c | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx4/mcg.c b/drivers/net/ethernet/mellanox/mlx4/mcg.c
index acf9d5f..289a09b 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mcg.c
+++ b/drivers/net/ethernet/mellanox/mlx4/mcg.c
@@ -125,9 +125,14 @@ static struct mlx4_promisc_qp *get_promisc_qp(struct mlx4_dev *dev, u8 port,
enum mlx4_steer_type steer,
u32 qpn)
{
- struct mlx4_steer *s_steer = &mlx4_priv(dev)->steer[port - 1];
+ struct mlx4_steer *s_steer;
struct mlx4_promisc_qp *pqp;
+ if (port < 1 || port > dev->caps.num_ports)
+ return NULL;
+
+ s_steer = &mlx4_priv(dev)->steer[port - 1];
+
list_for_each_entry(pqp, &s_steer->promisc_qps[steer], list) {
if (pqp->qpn == qpn)
return pqp;
@@ -154,6 +159,9 @@ static int new_steering_entry(struct mlx4_dev *dev, u8 port,
u32 prot;
int err;
+ if (port < 1 || port > dev->caps.num_ports)
+ return -EINVAL;
+
s_steer = &mlx4_priv(dev)->steer[port - 1];
new_entry = kzalloc(sizeof *new_entry, GFP_KERNEL);
if (!new_entry)
@@ -238,6 +246,9 @@ static int existing_steering_entry(struct mlx4_dev *dev, u8 port,
struct mlx4_promisc_qp *pqp;
struct mlx4_promisc_qp *dqp;
+ if (port < 1 || port > dev->caps.num_ports)
+ return -EINVAL;
+
s_steer = &mlx4_priv(dev)->steer[port - 1];
pqp = get_promisc_qp(dev, port, steer, qpn);
@@ -283,6 +294,9 @@ static bool check_duplicate_entry(struct mlx4_dev *dev, u8 port,
struct mlx4_steer_index *tmp_entry, *entry = NULL;
struct mlx4_promisc_qp *dqp, *tmp_dqp;
+ if (port < 1 || port > dev->caps.num_ports)
+ return false;
+
s_steer = &mlx4_priv(dev)->steer[port - 1];
/* if qp is not promisc, it cannot be duplicated */
@@ -324,6 +338,9 @@ static bool can_remove_steering_entry(struct mlx4_dev *dev, u8 port,
bool ret = false;
int i;
+ if (port < 1 || port > dev->caps.num_ports)
+ return false;
+
s_steer = &mlx4_priv(dev)->steer[port - 1];
mailbox = mlx4_alloc_cmd_mailbox(dev);
@@ -378,6 +395,9 @@ static int add_promisc_qp(struct mlx4_dev *dev, u8 port,
int err;
struct mlx4_priv *priv = mlx4_priv(dev);
+ if (port < 1 || port > dev->caps.num_ports)
+ return -EINVAL;
+
s_steer = &mlx4_priv(dev)->steer[port - 1];
mutex_lock(&priv->mcg_table.mutex);
@@ -484,6 +504,9 @@ static int remove_promisc_qp(struct mlx4_dev *dev, u8 port,
int loc, i;
int err;
+ if (port < 1 || port > dev->caps.num_ports)
+ return -EINVAL;
+
s_steer = &mlx4_priv(dev)->steer[port - 1];
mutex_lock(&priv->mcg_table.mutex);
@@ -910,6 +933,9 @@ int mlx4_qp_attach_common(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16],
u8 port = gid[5];
u8 new_entry = 0;
+ if (port < 1 || port > dev->caps.num_ports)
+ return -EINVAL;
+
mailbox = mlx4_alloc_cmd_mailbox(dev);
if (IS_ERR(mailbox))
return PTR_ERR(mailbox);
--
1.8.3.4
^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [PATCH net-next 07/12] net/mlx4_en: Configure the XPS queue mapping on driver load
2013-12-08 12:35 ` [PATCH net-next 07/12] net/mlx4_en: Configure the XPS queue mapping on driver load Amir Vadai
@ 2013-12-08 15:28 ` Sergei Shtylyov
2013-12-08 15:42 ` Amir Vadai
0 siblings, 1 reply; 21+ messages in thread
From: Sergei Shtylyov @ 2013-12-08 15:28 UTC (permalink / raw)
To: Amir Vadai, David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, netdev, Ido Shamay
Hello.
On 08-12-2013 16:35, Amir Vadai wrote:
> From: Ido Shamay <idos@mellanox.com>
> Only TX rings of User Priority 0 are mapped.
> TX rings of other UPs use the UP 0 mapping.
> XPS is not in use when num_tc is set.
> Signed-off-by: Ido Shamay <idos@mellanox.com>
> Signed-off-by: Amir Vadai <amirv@mellanox.com>
[...]
> diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
> index fa33a83..a4cdd7d 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
> +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
[...]
> @@ -708,8 +710,9 @@ u16 mlx4_en_select_queue(struct net_device *dev, struct sk_buff *skb);
> netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev);
>
> int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
> - struct mlx4_en_tx_ring **pring,
> - int qpn, u32 size, u16 stride, int node);
> + struct mlx4_en_tx_ring **pring,
> + int qpn, u32 size, u16 stride,
> + int node, int queue_index);
Why have you changed the indentation here -- it was alright before.
> void mlx4_en_destroy_tx_ring(struct mlx4_en_priv *priv,
> struct mlx4_en_tx_ring **pring);
> int mlx4_en_activate_tx_ring(struct mlx4_en_priv *priv,
WBR, Sergei
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH net-next 07/12] net/mlx4_en: Configure the XPS queue mapping on driver load
2013-12-08 15:28 ` Sergei Shtylyov
@ 2013-12-08 15:42 ` Amir Vadai
0 siblings, 0 replies; 21+ messages in thread
From: Amir Vadai @ 2013-12-08 15:42 UTC (permalink / raw)
To: Sergei Shtylyov, David S. Miller
Cc: Or Gerlitz, Yevgeny Petrilin, netdev, Ido Shamay
On 08/12/2013 17:28, Sergei Shtylyov wrote:
> Hello.
>
> On 08-12-2013 16:35, Amir Vadai wrote:
>
>> From: Ido Shamay <idos@mellanox.com>
>
>> Only TX rings of User Priority 0 are mapped.
>> TX rings of other UPs use the UP 0 mapping.
>> XPS is not in use when num_tc is set.
>
>> Signed-off-by: Ido Shamay <idos@mellanox.com>
>> Signed-off-by: Amir Vadai <amirv@mellanox.com>
> [...]
>
>> diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
>> b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
>> index fa33a83..a4cdd7d 100644
>> --- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
>> +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
> [...]
>> @@ -708,8 +710,9 @@ u16 mlx4_en_select_queue(struct net_device *dev,
>> struct sk_buff *skb);
>> netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev);
>>
>> int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
>> - struct mlx4_en_tx_ring **pring,
>> - int qpn, u32 size, u16 stride, int node);
>> + struct mlx4_en_tx_ring **pring,
>> + int qpn, u32 size, u16 stride,
>> + int node, int queue_index);
>
> Why have you changed the indentation here -- it was alright before.
Right - changed by mistake. Will be fixed in V1.
>
>> void mlx4_en_destroy_tx_ring(struct mlx4_en_priv *priv,
>> struct mlx4_en_tx_ring **pring);
>> int mlx4_en_activate_tx_ring(struct mlx4_en_priv *priv,
>
> WBR, Sergei
>
Amir
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH net-next 01/12] net/mlx4_en: Reuse mapped memory in RX flow
2013-12-08 12:35 ` [PATCH net-next 01/12] net/mlx4_en: Reuse mapped memory in RX flow Amir Vadai
@ 2013-12-08 19:20 ` Eric Dumazet
2013-12-09 9:44 ` Amir Vadai
0 siblings, 1 reply; 21+ messages in thread
From: Eric Dumazet @ 2013-12-08 19:20 UTC (permalink / raw)
To: Amir Vadai
Cc: David S. Miller, Or Gerlitz, Yevgeny Petrilin, netdev,
Eugenia Emantayev
On Sun, 2013-12-08 at 14:35 +0200, Amir Vadai wrote:
> From: Eugenia Emantayev <eugenia@mellanox.com>
>
> In receive flow use one fragment instead of multiple fragments.
> Always allocate at least twice memory than needed for current MTU
> and on each cycle use one hunk of the mapped memory.
> Realloc and map new page only if this page was not freed.
> This behavior allows to save unnecessary dma (un)mapping
> operations that are very expensive when IOMMU is enabled.
>
>
> Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
> Signed-off-by: Amir Vadai <amirv@mellanox.com>
> ---
> drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 12 +-
> drivers/net/ethernet/mellanox/mlx4/en_rx.c | 723 +++++++++----------------
> drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 56 +-
> 3 files changed, 299 insertions(+), 492 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> index 709e5ec..9270006 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> @@ -1490,7 +1490,11 @@ int mlx4_en_start_port(struct net_device *dev)
>
> /* Calculate Rx buf size */
> dev->mtu = min(dev->mtu, priv->max_mtu);
> - mlx4_en_calc_rx_buf(dev);
> + priv->rx_skb_size = dev->mtu + ETH_HLEN + VLAN_HLEN;
> + priv->rx_buf_size = roundup_pow_of_two(priv->rx_skb_size);
> + priv->rx_alloc_size = max_t(int, 2 * priv->rx_buf_size, PAGE_SIZE);
> + priv->rx_alloc_order = get_order(priv->rx_alloc_size);
> + priv->log_rx_info = ROUNDUP_LOG2(sizeof(struct mlx4_en_rx_buf));
> en_dbg(DRV, priv, "Rx buf size:%d\n", priv->rx_skb_size);
>
> /* Configure rx cq's and rings */
> @@ -1923,7 +1927,7 @@ int mlx4_en_alloc_resources(struct mlx4_en_priv *priv)
> goto err;
>
> if (mlx4_en_create_rx_ring(priv, &priv->rx_ring[i],
> - prof->rx_ring_size, priv->stride,
> + prof->rx_ring_size,
> node))
> goto err;
> }
> @@ -2316,7 +2320,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
> memcpy(priv->prev_mac, dev->dev_addr, sizeof(priv->prev_mac));
>
> priv->stride = roundup_pow_of_two(sizeof(struct mlx4_en_rx_desc) +
> - DS_SIZE * MLX4_EN_MAX_RX_FRAGS);
> + DS_SIZE);
> err = mlx4_en_alloc_resources(priv);
> if (err)
> goto out;
> @@ -2393,7 +2397,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
> mlx4_en_update_loopback_state(priv->dev, priv->dev->features);
>
> /* Configure port */
> - mlx4_en_calc_rx_buf(dev);
> + priv->rx_skb_size = dev->mtu + ETH_HLEN + VLAN_HLEN;
> err = mlx4_SET_PORT_general(mdev->dev, priv->port,
> priv->rx_skb_size + ETH_FCS_LEN,
> prof->tx_pause, prof->tx_ppp,
> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> index 07a1d0f..965c021 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> @@ -43,197 +43,72 @@
>
> #include "mlx4_en.h"
>
> -static int mlx4_alloc_pages(struct mlx4_en_priv *priv,
> - struct mlx4_en_rx_alloc *page_alloc,
> - const struct mlx4_en_frag_info *frag_info,
> - gfp_t _gfp)
> +static int mlx4_en_alloc_frag(struct mlx4_en_priv *priv,
> + struct mlx4_en_rx_ring *ring,
> + struct mlx4_en_rx_desc *rx_desc,
> + struct mlx4_en_rx_buf *rx_buf,
> + enum mlx4_en_alloc_type type)
> {
> - int order;
> + struct device *dev = priv->ddev;
> struct page *page;
> - dma_addr_t dma;
> -
> - for (order = MLX4_EN_ALLOC_PREFER_ORDER; ;) {
> - gfp_t gfp = _gfp;
> -
> - if (order)
> - gfp |= __GFP_COMP | __GFP_NOWARN;
> - page = alloc_pages(gfp, order);
> - if (likely(page))
> - break;
> - if (--order < 0 ||
> - ((PAGE_SIZE << order) < frag_info->frag_size))
> + dma_addr_t dma = 0;
> + gfp_t gfp = GFP_ATOMIC | __GFP_COLD | __GFP_COMP | __GFP_NOWARN;
> +
> + /* alloc new page */
> + page = alloc_pages_node(ring->numa_node, gfp, ring->rx_alloc_order);
Hey... what is numa_node ?
> + if (unlikely(!page)) {
> + page = alloc_pages(gfp, ring->rx_alloc_order);
> + if (unlikely(!page))
> return -ENOMEM;
> }
I find such an undocumented change very worrying.
(It is not even mentioned in the changelog.)
We made a change in the past [1], that allocations always should be done
on the node of the cpu handling the RX irqs.
Have you tested the performance changes of this patch ?
If yes, please describe the protocol.
[1] commit 564824b0c52c34692d804bb6ea214451615b0b50
("net: allocate skbs on local node")
Also, it looks like your patch is now using 2048 bytes per frame,
instead of 1536, yet truesize is not changed.
In fact, it seems you added yet another skb->truesize lie
As in :
skb->truesize = length + sizeof(struct sk_buff);
And :
skb->truesize += ring->rx_buf_size;
This is absolutely wrong, I am very upset by this patch, it's a step
back.
You driver guys cannot ignore how the whole networking stack works.
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH net-next 10/12] net/mlx4_en: Add NAPI support for transmit side
2013-12-08 12:35 ` [PATCH net-next 10/12] net/mlx4_en: Add NAPI support for transmit side Amir Vadai
@ 2013-12-08 19:22 ` Eric Dumazet
2013-12-09 10:07 ` Amir Vadai
0 siblings, 1 reply; 21+ messages in thread
From: Eric Dumazet @ 2013-12-08 19:22 UTC (permalink / raw)
To: Amir Vadai
Cc: David S. Miller, Or Gerlitz, Yevgeny Petrilin, netdev,
Eugenia Emantayev
On Sun, 2013-12-08 at 14:35 +0200, Amir Vadai wrote:
> From: Eugenia Emantayev <eugenia@mellanox.com>
>
> Add NAPI for TX side,
> implement its support and provide NAPI callback.
>
>
> Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com>
> Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
> Signed-off-by: Amir Vadai <amirv@mellanox.com>
> ---
> drivers/net/ethernet/mellanox/mlx4/en_cq.c | 12 ++++++---
> drivers/net/ethernet/mellanox/mlx4/en_tx.c | 39 +++++++++++++++++++++++-----
> drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 3 +++
> 3 files changed, 43 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_cq.c b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
> index 3a098cc..2c60f0c 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
> @@ -161,12 +161,16 @@ int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq,
> cq->mcq.comp = cq->is_tx ? mlx4_en_tx_irq : mlx4_en_rx_irq;
> cq->mcq.event = mlx4_en_cq_event;
>
> - if (!cq->is_tx) {
> + if (cq->is_tx) {
> + netif_napi_add(cq->dev, &cq->napi, mlx4_en_poll_tx_cq,
> + MLX4_EN_TX_BUDGET);
> + } else {
TX completion is not supposed to have a 'budget'.
You should consume all completed descriptors.
BQL should already drive the dynamic of the TX queue.
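For reference, a minimal sketch of the BQL hooks being referred to -
the queue index and packets/bytes counters are illustrative, not mlx4
code:

	/* in ndo_start_xmit(), after posting the descriptor */
	netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue_index),
			     skb->len);

	/* in the TX completion path, once per cleaning cycle */
	netdev_tx_completed_queue(netdev_get_tx_queue(dev, queue_index),
				  packets, bytes);

With these hooks in place the stack bounds the bytes queued in the TX
ring, so a completion budget adds little.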
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH net-next 01/12] net/mlx4_en: Reuse mapped memory in RX flow
2013-12-08 19:20 ` Eric Dumazet
@ 2013-12-09 9:44 ` Amir Vadai
2013-12-09 15:17 ` Eric Dumazet
0 siblings, 1 reply; 21+ messages in thread
From: Amir Vadai @ 2013-12-09 9:44 UTC (permalink / raw)
To: Eric Dumazet
Cc: David S. Miller, Or Gerlitz, Yevgeny Petrilin, netdev,
Eugenia Emantayev
On 08/12/2013 21:20, Eric Dumazet wrote:
> On Sun, 2013-12-08 at 14:35 +0200, Amir Vadai wrote:
>> From: Eugenia Emantayev <eugenia@mellanox.com>
>>
>> In receive flow use one fragment instead of multiple fragments.
>> Always allocate at least twice memory than needed for current MTU
>> and on each cycle use one hunk of the mapped memory.
>> Realloc and map new page only if this page was not freed.
>> This behavior allows to save unnecessary dma (un)mapping
>> operations that are very expensive when IOMMU is enabled.
>>
>>
>> Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
>> Signed-off-by: Amir Vadai <amirv@mellanox.com>
>> ---
>> drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 12 +-
>> drivers/net/ethernet/mellanox/mlx4/en_rx.c | 723 +++++++++----------------
>> drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 56 +-
>> 3 files changed, 299 insertions(+), 492 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
>> index 709e5ec..9270006 100644
>> --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
>> +++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
>> @@ -1490,7 +1490,11 @@ int mlx4_en_start_port(struct net_device *dev)
>>
>> /* Calculate Rx buf size */
>> dev->mtu = min(dev->mtu, priv->max_mtu);
>> - mlx4_en_calc_rx_buf(dev);
>> + priv->rx_skb_size = dev->mtu + ETH_HLEN + VLAN_HLEN;
>> + priv->rx_buf_size = roundup_pow_of_two(priv->rx_skb_size);
>> + priv->rx_alloc_size = max_t(int, 2 * priv->rx_buf_size, PAGE_SIZE);
>> + priv->rx_alloc_order = get_order(priv->rx_alloc_size);
>> + priv->log_rx_info = ROUNDUP_LOG2(sizeof(struct mlx4_en_rx_buf));
>> en_dbg(DRV, priv, "Rx buf size:%d\n", priv->rx_skb_size);
>>
>> /* Configure rx cq's and rings */
>> @@ -1923,7 +1927,7 @@ int mlx4_en_alloc_resources(struct mlx4_en_priv *priv)
>> goto err;
>>
>> if (mlx4_en_create_rx_ring(priv, &priv->rx_ring[i],
>> - prof->rx_ring_size, priv->stride,
>> + prof->rx_ring_size,
>> node))
>> goto err;
>> }
>> @@ -2316,7 +2320,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
>> memcpy(priv->prev_mac, dev->dev_addr, sizeof(priv->prev_mac));
>>
>> priv->stride = roundup_pow_of_two(sizeof(struct mlx4_en_rx_desc) +
>> - DS_SIZE * MLX4_EN_MAX_RX_FRAGS);
>> + DS_SIZE);
>> err = mlx4_en_alloc_resources(priv);
>> if (err)
>> goto out;
>> @@ -2393,7 +2397,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
>> mlx4_en_update_loopback_state(priv->dev, priv->dev->features);
>>
>> /* Configure port */
>> - mlx4_en_calc_rx_buf(dev);
>> + priv->rx_skb_size = dev->mtu + ETH_HLEN + VLAN_HLEN;
>> err = mlx4_SET_PORT_general(mdev->dev, priv->port,
>> priv->rx_skb_size + ETH_FCS_LEN,
>> prof->tx_pause, prof->tx_ppp,
>> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
>> index 07a1d0f..965c021 100644
>> --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
>> +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
>> @@ -43,197 +43,72 @@
>>
>> #include "mlx4_en.h"
>>
>> -static int mlx4_alloc_pages(struct mlx4_en_priv *priv,
>> - struct mlx4_en_rx_alloc *page_alloc,
>> - const struct mlx4_en_frag_info *frag_info,
>> - gfp_t _gfp)
>> +static int mlx4_en_alloc_frag(struct mlx4_en_priv *priv,
>> + struct mlx4_en_rx_ring *ring,
>> + struct mlx4_en_rx_desc *rx_desc,
>> + struct mlx4_en_rx_buf *rx_buf,
>> + enum mlx4_en_alloc_type type)
>> {
>> - int order;
>> + struct device *dev = priv->ddev;
>> struct page *page;
>> - dma_addr_t dma;
>> -
>> - for (order = MLX4_EN_ALLOC_PREFER_ORDER; ;) {
>> - gfp_t gfp = _gfp;
>> -
>> - if (order)
>> - gfp |= __GFP_COMP | __GFP_NOWARN;
>> - page = alloc_pages(gfp, order);
>> - if (likely(page))
>> - break;
>> - if (--order < 0 ||
>> - ((PAGE_SIZE << order) < frag_info->frag_size))
>> + dma_addr_t dma = 0;
>> + gfp_t gfp = GFP_ATOMIC | __GFP_COLD | __GFP_COMP | __GFP_NOWARN;
>> +
>> + /* alloc new page */
>> + page = alloc_pages_node(ring->numa_node, gfp, ring->rx_alloc_order);
>
> Hey... what is numa_node ?
We get optimal performance when RX rings are mapped 1:1 to CPUs - IRQ
affinity is set to that CPU, and memory is allocated on the NUMA node
close to it (ring->numa_node).
In order to do that, we will soon post a patch that uses
irq_set_affinity_hint() to hint the IRQ balancer. Until that patch is
applied, users should set IRQ affinity through sysfs and disable the
IRQ balancer.
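A minimal sketch of that hinting - the ring->irq field is assumed here
for illustration, not taken from the driver's current structures:

	err = irq_set_affinity_hint(ring->irq, &ring->affinity_mask);
	if (err)
		en_err(priv, "Failed setting affinity hint\n");

The hint is advisory - irqbalance may still place the IRQ elsewhere.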
>
>> + if (unlikely(!page)) {
>> + page = alloc_pages(gfp, ring->rx_alloc_order);
>> + if (unlikely(!page))
>> return -ENOMEM;
>> }
>
> I find very worrying such a change, with non documented features.
> (Not even mentioned in the changelog)
Will improve the changelog in V1
>
> We made a change in the past [1], that allocations always should be done
> on the node of the cpu handling the RX irqs.
This is the intention here too.
But if the allocation cannot be done on the needed node (close to the
CPU handling the RX IRQs), we fall back to allocating from any node.
>
> Have you tested the performance changes of this patch ?
Sure
> If yes, please describe the protocol.
Will send it later today
>
> [1] commit 564824b0c52c34692d804bb6ea214451615b0b50
> ("net: allocate skbs on local node")
>
> Also, it looks like your patch is now using 2048 bytes per frame,
> instead of 1536, yet truesize is not changed.
>
> In fact, it seems you added yet another skb->truesize lie
>
> As in :
>
> skb->truesize = length + sizeof(struct sk_buff);
I'm sorry, the fix in [1] was dropped by mistake.
Will remove this line in V1
[1] 90278c9 "mlx4_en: fix skb truesize underestimation"
>
> And :
>
> skb->truesize += ring->rx_buf_size;
This is a bug - it should be ring->rx_skb_size, which is 1536 and not
2K. We also mistakenly used rx_buf_size when posting buffers to the HW
- this will be fixed in V1 as well.
>
> This is absolutely wrong, I am very upset by this patch, it's a step
> back.
This is part of an effort to maintain a single driver (the upstream
kernel one) instead of two: the MLNX_OFED driver and the upstream
driver. I hope that in the end it will be a huge step forward...
>
> You driver guys cannot ignore how the whole networking stack works.
If we ignored something it was by mistake. We take it very seriously,
and appreciate the hard work done here.
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH net-next 10/12] net/mlx4_en: Add NAPI support for transmit side
2013-12-08 19:22 ` Eric Dumazet
@ 2013-12-09 10:07 ` Amir Vadai
0 siblings, 0 replies; 21+ messages in thread
From: Amir Vadai @ 2013-12-09 10:07 UTC (permalink / raw)
To: Eric Dumazet
Cc: David S. Miller, Or Gerlitz, Yevgeny Petrilin, netdev,
Eugenia Emantayev
On 08/12/2013 21:22, Eric Dumazet wrote:
> On Sun, 2013-12-08 at 14:35 +0200, Amir Vadai wrote:
>> From: Eugenia Emantayev <eugenia@mellanox.com>
>>
>> Add NAPI for TX side,
>> implement its support and provide NAPI callback.
>>
>>
>> Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com>
>> Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
>> Signed-off-by: Amir Vadai <amirv@mellanox.com>
>> ---
>> drivers/net/ethernet/mellanox/mlx4/en_cq.c | 12 ++++++---
>> drivers/net/ethernet/mellanox/mlx4/en_tx.c | 39 +++++++++++++++++++++++-----
>> drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 3 +++
>> 3 files changed, 43 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_cq.c b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
>> index 3a098cc..2c60f0c 100644
>> --- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
>> +++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
>> @@ -161,12 +161,16 @@ int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq,
>> cq->mcq.comp = cq->is_tx ? mlx4_en_tx_irq : mlx4_en_rx_irq;
>> cq->mcq.event = mlx4_en_cq_event;
>>
>> - if (!cq->is_tx) {
>> + if (cq->is_tx) {
>> + netif_napi_add(cq->dev, &cq->napi, mlx4_en_poll_tx_cq,
>> + MLX4_EN_TX_BUDGET);
>> + } else {
>
> TX completion is not supposed to have a 'budget'.
>
> You should consume all completed descriptors.
>
> BQL should already drive the dynamic of the TX queue.
Actually MLX4_EN_TX_BUDGET is 64 already.
Will use NAPI_POLL_WEIGHT instead in V1.
Amir
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH net-next 01/12] net/mlx4_en: Reuse mapped memory in RX flow
2013-12-09 9:44 ` Amir Vadai
@ 2013-12-09 15:17 ` Eric Dumazet
2013-12-10 9:47 ` Amir Vadai
0 siblings, 1 reply; 21+ messages in thread
From: Eric Dumazet @ 2013-12-09 15:17 UTC (permalink / raw)
To: Amir Vadai
Cc: David S. Miller, Or Gerlitz, Yevgeny Petrilin, netdev,
Eugenia Emantayev
On Mon, 2013-12-09 at 11:44 +0200, Amir Vadai wrote:
> We get optimal performance when RX rings are mapped 1:1 to CPUs - IRQ
> affinity is set to that CPU, and memory is allocated on the NUMA node
> close to it (ring->numa_node).
> In order to do that, we will soon post a patch that uses
> irq_set_affinity_hint() to hint the IRQ balancer. Until that patch is
> applied, users should set IRQ affinity through sysfs and disable the
> IRQ balancer.
Point is: you don't have to do anything to affine memory allocations.
The big problem is correct IRQ affinity, which as you said is addressed
in a different way/patch.
If the IRQ is properly setup, automatically or by irq affinities,
then memory will be allocated on the right node; it's properly done
by all memory allocators.
The xxx_alloc_node() variants in the fast path are therefore not needed,
and not using them avoids catastrophic results if affinities are not
properly set.
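To illustrate the point: in the RX fast path,

	page = alloc_pages(gfp, order);

already allocates from the node local to the CPU taking the interrupt,
behaving roughly like alloc_pages_node(numa_node_id(), gfp, order) when
IRQ affinity is set correctly, while degrading gracefully when it is
not.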
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH net-next 01/12] net/mlx4_en: Reuse mapped memory in RX flow
2013-12-09 15:17 ` Eric Dumazet
@ 2013-12-10 9:47 ` Amir Vadai
0 siblings, 0 replies; 21+ messages in thread
From: Amir Vadai @ 2013-12-10 9:47 UTC (permalink / raw)
To: Eric Dumazet, Amir Vadai
Cc: David S. Miller, Or Gerlitz, Yevgeny Petrilin, netdev,
Eugenia Emantayev
On 09/12/2013 17:17, Eric Dumazet wrote:
> On Mon, 2013-12-09 at 11:44 +0200, Amir Vadai wrote:
>
>> We get optimal performance when RX rings are mapped 1:1 to CPUs - IRQ
>> affinity is set to that CPU, and memory is allocated on the NUMA node
>> close to it (ring->numa_node).
>> In order to do that, we will soon post a patch that uses
>> irq_set_affinity_hint() to hint the IRQ balancer. Until that patch is
>> applied, users should set IRQ affinity through sysfs and disable the
>> IRQ balancer.
>
> Point is: you don't have to do anything to affine memory allocations.
>
> The big problem is correct IRQ affinity, which as you said is addressed
> in a different way/patch.
>
> If the IRQ is properly setup, automatically or by irq affinities,
> then memory will be allocated on the right node; it's properly done
> by all memory allocators.
>
> The xxx_alloc_node() variants in the fast path are therefore not needed,
> and not using them avoids catastrophic results if affinities are not
> properly set.
You are right. Will be fixed in V1 too.
We'll use the xxx_alloc_node() variants in the initialization phase,
but let the kernel choose the NUMA node on the fast path.
>
^ permalink raw reply [flat|nested] 21+ messages in thread