public inbox for netdev@vger.kernel.org
From: Jesper Dangaard Brouer <brouer@redhat.com>
To: netdev@vger.kernel.org, Björn Töpel <bjorn.topel@intel.com>,
	magnus.karlsson@intel.com
Cc: eugenia@mellanox.com, Jason Wang <jasowang@redhat.com>,
	John Fastabend <john.fastabend@gmail.com>,
	Eran Ben Elisha <eranbe@mellanox.com>,
	Saeed Mahameed <saeedm@mellanox.com>,
	galp@mellanox.com, Jesper Dangaard Brouer <brouer@redhat.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Alexei Starovoitov <alexei.starovoitov@gmail.com>,
	Tariq Toukan <tariqt@mellanox.com>
Subject: [bpf-next V5 PATCH 12/15] xdp: allow page_pool as an allocator type in xdp_return_frame
Date: Fri, 23 Mar 2018 13:18:59 +0100
Message-ID: <152180753987.20167.7085747286116186761.stgit@firesoul>
In-Reply-To: <152180742196.20167.5168801400337773178.stgit@firesoul>

New allocator type MEM_TYPE_PAGE_POOL for page_pool usage.

The registered allocator page_pool pointer is not directly available
from xdp_rxq_info, although it could be exposed there if needed.  For
now, the driver must keep track of the page_pool pointer itself and
use it for RX-ring page allocation.

As suggested by Saeed, to keep the API symmetric it is the driver's
responsibility to allocate/create and free/destroy the page_pool.
Thus, after the driver has called xdp_rxq_info_unreg(), it is also the
driver's responsibility to free the page_pool, but it must do so via
an RCU free call.  This is done easily with the page_pool helper
page_pool_destroy_rcu(), which avoids touching any driver code from
the RCU callback (the callback could otherwise run after the driver
module has been unloaded).
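
For driver authors, the intended setup/teardown pairing can be
sketched roughly as below.  This is a hypothetical driver-side sketch,
not code from this series: the my_rx_ring struct and my_* function
names are made up, and exact error handling varies per driver.  Only
page_pool_create(), xdp_rxq_info_reg(), xdp_rxq_info_reg_mem_model(),
xdp_rxq_info_unreg() and page_pool_destroy_rcu() are the APIs under
discussion here.

```c
#include <net/page_pool.h>
#include <net/xdp.h>

/* Hypothetical per-ring state: driver keeps its own page_pool pointer,
 * as it is not (currently) reachable via xdp_rxq_info.
 */
struct my_rx_ring {
	struct xdp_rxq_info xdp_rxq;
	struct page_pool *page_pool;
};

static int my_rx_ring_setup(struct my_rx_ring *ring, struct net_device *dev,
			    u32 queue_index,
			    const struct page_pool_params *pp_params)
{
	int err;

	/* Driver owns creation of the page_pool ... */
	ring->page_pool = page_pool_create(pp_params);
	if (IS_ERR(ring->page_pool))
		return PTR_ERR(ring->page_pool);

	err = xdp_rxq_info_reg(&ring->xdp_rxq, dev, queue_index);
	if (err)
		goto err_free_pool;

	/* ... and registers it as this RX-queue's memory model */
	err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, MEM_TYPE_PAGE_POOL,
					 ring->page_pool);
	if (err)
		goto err_unreg_rxq;

	return 0;

err_unreg_rxq:
	xdp_rxq_info_unreg(&ring->xdp_rxq);
err_free_pool:
	page_pool_destroy_rcu(ring->page_pool);
	return err;
}

static void my_rx_ring_teardown(struct my_rx_ring *ring)
{
	xdp_rxq_info_unreg(&ring->xdp_rxq);
	/* Driver also owns freeing the pool, but must use the RCU
	 * variant, as in-flight xdp_return_frame() calls may still
	 * dereference it under rcu_read_lock().
	 */
	page_pool_destroy_rcu(ring->page_pool);
}
```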

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 include/net/xdp.h |    3 +++
 net/core/xdp.c    |   23 ++++++++++++++++++++---
 2 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 859aa9b737fe..98b55eaf8fd7 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -36,6 +36,7 @@
 enum mem_type {
 	MEM_TYPE_PAGE_SHARED = 0, /* Split-page refcnt based model */
 	MEM_TYPE_PAGE_ORDER0,     /* Orig XDP full page model */
+	MEM_TYPE_PAGE_POOL,
 	MEM_TYPE_MAX,
 };
 
@@ -44,6 +45,8 @@ struct xdp_mem_info {
 	u32 id;
 };
 
+struct page_pool;
+
 struct xdp_rxq_info {
 	struct net_device *dev;
 	u32 queue_index;
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 06a5b39491ad..fe8e87abc266 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -8,6 +8,7 @@
 #include <linux/slab.h>
 #include <linux/idr.h>
 #include <linux/rhashtable.h>
+#include <net/page_pool.h>
 
 #include <net/xdp.h>
 
@@ -27,7 +28,10 @@ static struct rhashtable *mem_id_ht;
 
 struct xdp_mem_allocator {
 	struct xdp_mem_info mem;
-	void *allocator;
+	union {
+		void *allocator;
+		struct page_pool *page_pool;
+	};
 	struct rhash_head node;
 	struct rcu_head rcu;
 };
@@ -74,7 +78,9 @@ void __xdp_mem_allocator_rcu_free(struct rcu_head *rcu)
 	/* Allow this ID to be reused */
 	ida_simple_remove(&mem_id_pool, xa->mem.id);
 
-	/* TODO: Depending on allocator type/pointer free resources */
+	/* Notice, driver is expected to free the *allocator,
+	 * e.g. page_pool, and MUST also use RCU free.
+	 */
 
 	/* Poison memory */
 	xa->mem.id = 0xFFFF;
@@ -290,11 +296,22 @@ EXPORT_SYMBOL_GPL(xdp_rxq_info_reg_mem_model);
 
 void xdp_return_frame(void *data, struct xdp_mem_info *mem)
 {
-	struct xdp_mem_allocator *xa;
+	struct xdp_mem_allocator *xa = NULL;
 
 	rcu_read_lock();
 	if (mem->id)
 		xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
+
+	if (mem->type == MEM_TYPE_PAGE_POOL) {
+		struct page *page = virt_to_head_page(data);
+
+		if (xa)
+			page_pool_put_page(xa->page_pool, page);
+		else
+			put_page(page);
+		rcu_read_unlock();
+		return;
+	}
 	rcu_read_unlock();
 
 	if (mem->type == MEM_TYPE_PAGE_SHARED) {
@@ -306,6 +322,7 @@ void xdp_return_frame(void *data, struct xdp_mem_info *mem)
 		struct page *page = virt_to_page(data); /* Assumes order0 page*/
 
 		put_page(page);
+		return;
 	}
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame);


Thread overview: 35+ messages
2018-03-23 12:17 [bpf-next V5 PATCH 00/15] XDP redirect memory return API Jesper Dangaard Brouer
2018-03-23 12:18 ` [bpf-next V5 PATCH 01/15] mlx5: basic XDP_REDIRECT forward support Jesper Dangaard Brouer
2018-03-23 12:18 ` [bpf-next V5 PATCH 02/15] xdp: introduce xdp_return_frame API and use in cpumap Jesper Dangaard Brouer
2018-03-23 16:35   ` Alexander Duyck
2018-03-26 19:06     ` Jesper Dangaard Brouer
2018-03-23 12:18 ` [bpf-next V5 PATCH 03/15] ixgbe: use xdp_return_frame API Jesper Dangaard Brouer
2018-03-23 16:46   ` Alexander Duyck
2018-03-23 12:18 ` [bpf-next V5 PATCH 04/15] xdp: move struct xdp_buff from filter.h to xdp.h Jesper Dangaard Brouer
2018-03-23 12:18 ` [bpf-next V5 PATCH 05/15] xdp: introduce a new xdp_frame type Jesper Dangaard Brouer
2018-03-23 17:11   ` Alexander Duyck
2018-03-23 12:18 ` [bpf-next V5 PATCH 06/15] tun: convert to use generic xdp_frame and xdp_return_frame API Jesper Dangaard Brouer
2018-03-23 12:18 ` [bpf-next V5 PATCH 07/15] virtio_net: " Jesper Dangaard Brouer
2018-03-23 12:18 ` [bpf-next V5 PATCH 08/15] bpf: cpumap convert to use generic xdp_frame Jesper Dangaard Brouer
2018-03-23 12:18 ` [bpf-next V5 PATCH 09/15] mlx5: register a memory model when XDP is enabled Jesper Dangaard Brouer
2018-03-23 16:18   ` Sergei Shtylyov
2018-03-23 12:18 ` [bpf-next V5 PATCH 10/15] xdp: rhashtable with allocator ID to pointer mapping Jesper Dangaard Brouer
2018-03-23 16:56   ` Alexander Duyck
2018-03-23 18:15     ` Jesper Dangaard Brouer
2018-03-23 18:22       ` Alexander Duyck
2018-03-26 21:04     ` Jesper Dangaard Brouer
2018-03-23 12:18 ` [bpf-next V5 PATCH 11/15] page_pool: refurbish version of page_pool code Jesper Dangaard Brouer
2018-03-23 13:28   ` Eric Dumazet
2018-03-26 14:09     ` Jesper Dangaard Brouer
2018-03-23 13:29   ` Eric Dumazet
2018-03-23 14:15     ` Jesper Dangaard Brouer
2018-03-23 14:55       ` Eric Dumazet
2018-03-26 15:19         ` Jesper Dangaard Brouer
2018-03-23 13:37   ` Eric Dumazet
2018-03-23 12:18 ` Jesper Dangaard Brouer [this message]
2018-03-23 17:02   ` [bpf-next V5 PATCH 12/15] xdp: allow page_pool as an allocator type in xdp_return_frame Alexander Duyck
2018-03-23 12:19 ` [bpf-next V5 PATCH 13/15] mlx5: use page_pool for xdp_return_frame call Jesper Dangaard Brouer
2018-03-23 12:19 ` [bpf-next V5 PATCH 14/15] xdp: transition into using xdp_frame for return API Jesper Dangaard Brouer
2018-03-23 17:29   ` Alexander Duyck
2018-03-26 11:42     ` Jesper Dangaard Brouer
2018-03-23 12:19 ` [bpf-next V5 PATCH 15/15] xdp: transition into using xdp_frame for ndo_xdp_xmit Jesper Dangaard Brouer
