From: Jesper Dangaard Brouer <brouer@redhat.com>
To: netdev@vger.kernel.org, "David S. Miller" <davem@davemloft.net>,
Jesper Dangaard Brouer <brouer@redhat.com>
Cc: "Toke Høiland-Jørgensen" <toke@toke.dk>,
ard.biesheuvel@linaro.org, "Jason Wang" <jasowang@redhat.com>,
ilias.apalodimas@linaro.org, "Björn Töpel" <bjorn.topel@intel.com>,
w@1wt.eu, "Saeed Mahameed" <saeedm@mellanox.com>,
mykyta.iziumtsev@gmail.com,
"Daniel Borkmann" <borkmann@iogearbox.net>,
"Alexei Starovoitov" <alexei.starovoitov@gmail.com>,
"Tariq Toukan" <tariqt@mellanox.com>
Subject: [net-next PATCH RFC 5/8] net: mvneta: remove copybreak, prefetch and use build_skb
Date: Fri, 07 Dec 2018 00:25:52 +0100
Message-ID: <154413875238.21735.7746697931250893385.stgit@firesoul>
In-Reply-To: <154413868810.21735.572808840657728172.stgit@firesoul>
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
The driver's memcpy for packets < 256 bytes, and its recycle tricks, are no
longer needed, as the previous patch introduced buffer recycling using
the page_pool API (although recycling doesn't get fully activated in
this patch). After this, switch to using build_skb().
This patch implicitly fixes a driver bug where the memory was copied
prior to being synced for the CPU, in the < 256 bytes case (as this case
is removed).
We also remove the data prefetch completely. The original driver had
the prefetch misplaced, before any dma_sync operations took place.
Based on Jesper's analysis, even if the prefetch is placed after the
DMA sync ops, it ends up hurting performance.
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
drivers/net/ethernet/marvell/mvneta.c | 81 +++++++++------------------------
1 file changed, 22 insertions(+), 59 deletions(-)
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 2354421fe96f..78f1fcdc1f00 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -643,7 +643,6 @@ static int txq_number = 8;
static int rxq_def;
static int rx_copybreak __read_mostly = 256;
-static int rx_header_size __read_mostly = 128;
/* HW BM need that each port be identify by a unique ID */
static int global_port_id;
@@ -1823,7 +1822,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
phys_addr = page_pool_get_dma_addr(page);
- phys_addr += pp->rx_offset_correction;
+ phys_addr += pp->rx_offset_correction + NET_SKB_PAD;
mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
return 0;
}
@@ -1944,14 +1943,12 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
struct page *page;
dma_addr_t phys_addr;
u32 rx_status, index;
- int rx_bytes, skb_size, copy_size;
- int frag_num, frag_size, frag_offset;
+ int frag_num, frag_size;
+ int rx_bytes;
index = rx_desc - rxq->descs;
page = (struct page *)rxq->buf_virt_addr[index];
data = page_address(page);
- /* Prefetch header */
- prefetch(data);
phys_addr = rx_desc->buf_phys_addr;
rx_status = rx_desc->status;
@@ -1969,49 +1966,25 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
rx_bytes = rx_desc->data_size -
(ETH_FCS_LEN + MVNETA_MH_SIZE);
- /* Allocate small skb for each new packet */
- skb_size = max(rx_copybreak, rx_header_size);
- rxq->skb = netdev_alloc_skb_ip_align(dev, skb_size);
- if (unlikely(!rxq->skb)) {
- netdev_err(dev,
- "Can't allocate skb on queue %d\n",
- rxq->id);
- dev->stats.rx_dropped++;
- rxq->skb_alloc_err++;
- continue;
- }
- copy_size = min(skb_size, rx_bytes);
-
- /* Copy data from buffer to SKB, skip Marvell header */
- memcpy(rxq->skb->data, data + MVNETA_MH_SIZE,
- copy_size);
- skb_put(rxq->skb, copy_size);
- rxq->left_size = rx_bytes - copy_size;
- mvneta_rx_csum(pp, rx_status, rxq->skb);
- if (rxq->left_size == 0) {
- int size = copy_size + MVNETA_MH_SIZE;
-
- dma_sync_single_range_for_cpu(dev->dev.parent,
- phys_addr, 0,
- size,
- DMA_FROM_DEVICE);
+ dma_sync_single_range_for_cpu(dev->dev.parent,
+ phys_addr, 0,
+ rx_bytes,
+ DMA_FROM_DEVICE);
- /* leave the descriptor and buffer untouched */
- } else {
- /* refill descriptor with new buffer later */
- rx_desc->buf_phys_addr = 0;
+ rxq->skb = build_skb(data, PAGE_SIZE);
+ if (!rxq->skb)
+ break;
- frag_num = 0;
- frag_offset = copy_size + MVNETA_MH_SIZE;
- frag_size = min(rxq->left_size,
- (int)(PAGE_SIZE - frag_offset));
- skb_add_rx_frag(rxq->skb, frag_num, page,
- frag_offset, frag_size,
- PAGE_SIZE);
- page_pool_unmap_page(rxq->page_pool, page);
- rxq->left_size -= frag_size;
- }
+ rx_desc->buf_phys_addr = 0;
+ frag_num = 0;
+ skb_reserve(rxq->skb, MVNETA_MH_SIZE + NET_SKB_PAD);
+ skb_put(rxq->skb, rx_bytes < PAGE_SIZE ? rx_bytes :
+ PAGE_SIZE);
+ mvneta_rx_csum(pp, rx_status, rxq->skb);
+ page_pool_unmap_page(rxq->page_pool, page);
+ rxq->left_size = rx_bytes < PAGE_SIZE ? 0 : rx_bytes -
+ PAGE_SIZE;
} else {
/* Middle or Last descriptor */
if (unlikely(!rxq->skb)) {
@@ -2019,24 +1992,14 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
rx_status);
continue;
}
- if (!rxq->left_size) {
- /* last descriptor has only FCS */
- /* and can be discarded */
- dma_sync_single_range_for_cpu(dev->dev.parent,
- phys_addr, 0,
- ETH_FCS_LEN,
- DMA_FROM_DEVICE);
- /* leave the descriptor and buffer untouched */
- } else {
+ if (rxq->left_size) {
/* refill descriptor with new buffer later */
rx_desc->buf_phys_addr = 0;
frag_num = skb_shinfo(rxq->skb)->nr_frags;
- frag_offset = 0;
- frag_size = min(rxq->left_size,
- (int)(PAGE_SIZE - frag_offset));
+ frag_size = min(rxq->left_size, (int)PAGE_SIZE);
skb_add_rx_frag(rxq->skb, frag_num, page,
- frag_offset, frag_size,
+ 0, frag_size,
PAGE_SIZE);
page_pool_unmap_page(rxq->page_pool, page);