* [PATCH v3 01/20] net/xen-netback: xenvif_gop_frag_copy: move GSO check out of the loop
From: Julien Grall @ 2015-08-07 16:46 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, ian.campbell, stefano.stabellini, netdev, linux-kernel,
	Julien Grall, linux-arm-kernel

The skb doesn't change within the function. Therefore it's only
necessary to check if we need GSO once at the beginning.

Signed-off-by: Julien Grall <julien.grall@citrix.com>

---
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: netdev@vger.kernel.org

    Changes in v2:
        - Patch added
---
 drivers/net/xen-netback/netback.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 8293374..66f1780 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -277,6 +277,13 @@ static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb
 	unsigned long bytes;
 	int gso_type = XEN_NETIF_GSO_TYPE_NONE;
 
+	if (skb_is_gso(skb)) {
+		if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4)
+			gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
+		else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
+			gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
+	}
+
 	/* Data must not cross a page boundary. */
 	BUG_ON(size + offset > PAGE_SIZE<<compound_order(page));
 
@@ -336,13 +343,6 @@ static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb
 		}
 
 		/* Leave a gap for the GSO descriptor. */
-		if (skb_is_gso(skb)) {
-			if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4)
-				gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
-			else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
-				gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
-		}
-
 		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
 			queue->rx.req_cons++;
 
-- 
2.1.4


* [PATCH v3 17/20] net/xen-netfront: Make it running on 64KB page granularity
From: Julien Grall @ 2015-08-07 16:46 UTC (permalink / raw)
  To: xen-devel
  Cc: ian.campbell, netdev, stefano.stabellini, linux-kernel,
	Julien Grall, David Vrabel, Boris Ostrovsky, linux-arm-kernel

The PV network protocol uses 4KB page granularity. The goal of this
patch is to allow a Linux guest using 64KB page granularity to use
network devices on a non-modified Xen.

It's only necessary to adapt the ring size and break the skb data into
small 4KB chunks. The rest of the code relies on the grant table code.

Note that we allocate a Linux page for each rx skb but only the first
4KB is used. We may improve memory usage by extending the size of
the rx skb.

Signed-off-by: Julien Grall <julien.grall@citrix.com>

---
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: netdev@vger.kernel.org

Improvements such as support for 64KB grants are not taken into
consideration in this patch because we have the requirement to run a
Linux kernel using 64KB pages on a non-modified Xen.

Tested with workloads such as ping, ssh, wget, git... I would be happy
if someone could give details on how to test all the paths.
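
For reference, the slot arithmetic this patch switches to (via
gnttab_count_grant()) can be checked with a standalone sketch. The
helper below illustrates the intent only; it is not the kernel's
actual definition, and count_grants is a made-up name:

    #include <stdio.h>

    #define XEN_PAGE_SHIFT 12
    #define XEN_PAGE_SIZE  (1u << XEN_PAGE_SHIFT)
    #define xen_offset_in_page(off) ((off) & (XEN_PAGE_SIZE - 1))

    /* Number of 4KB grants needed to cover len bytes starting at offset
     * (the offset may fall anywhere within a 64KB Linux page). */
    static unsigned int count_grants(unsigned int offset, unsigned int len)
    {
            return (xen_offset_in_page(offset) + len + XEN_PAGE_SIZE - 1)
                    >> XEN_PAGE_SHIFT;
    }

    int main(void)
    {
            printf("%u\n", count_grants(0, 65536));   /* 16: aligned 64KB */
            printf("%u\n", count_grants(100, 65536)); /* 17: misaligned span
                                                       * needs one more */
            return 0;
    }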

    Changes in v3:
        - Fix errors reported by checkpatch.pl
        - s/mfn/gfn/ based on the new naming
        - xennet_tx_setup_grant was calling itself, resulting in a
        guest stall when using iperf.
        - The grant callback no longer allows changing the len
        (wasn't used here)
        - gnttab_foreach_grant has been renamed to gnttab_foreach_grant_in_range
        - gnttab_page_grant_foreign_ref has been renamed to
        gnttab_foreach_grant_foreign_ref_one

    Changes in v2:
        - Use gnttab_foreach_grant to split a Linux page into grants
        - Fix the slot counting
---
 drivers/net/xen-netfront.c | 122 ++++++++++++++++++++++++++++++++-------------
 1 file changed, 86 insertions(+), 36 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 47f791e..204ffb8 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -74,8 +74,8 @@ struct netfront_cb {
 
 #define GRANT_INVALID_REF	0
 
-#define NET_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
-#define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
+#define NET_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, XEN_PAGE_SIZE)
+#define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, XEN_PAGE_SIZE)
 
 /* Minimum number of Rx slots (includes slot for GSO metadata). */
 #define NET_RX_SLOTS_MIN (XEN_NETIF_NR_SLOTS_MIN + 1)
@@ -291,7 +291,7 @@ static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 		struct sk_buff *skb;
 		unsigned short id;
 		grant_ref_t ref;
-		unsigned long gfn;
+		struct page *page;
 		struct xen_netif_rx_request *req;
 
 		skb = xennet_alloc_one_rx_buffer(queue);
@@ -307,14 +307,13 @@ static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 		BUG_ON((signed short)ref < 0);
 		queue->grant_rx_ref[id] = ref;
 
-		gfn = xen_page_to_gfn(skb_frag_page(&skb_shinfo(skb)->frags[0]));
+		page = skb_frag_page(&skb_shinfo(skb)->frags[0]);
 
 		req = RING_GET_REQUEST(&queue->rx, req_prod);
-		gnttab_grant_foreign_access_ref(ref,
-						queue->info->xbdev->otherend_id,
-						gfn,
-						0);
-
+		gnttab_page_grant_foreign_access_ref_one(ref,
+							 queue->info->xbdev->otherend_id,
+							 page,
+							 0);
 		req->id = id;
 		req->gref = ref;
 	}
@@ -415,25 +414,33 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
 	xennet_maybe_wake_tx(queue);
 }
 
-static struct xen_netif_tx_request *xennet_make_one_txreq(
-	struct netfront_queue *queue, struct sk_buff *skb,
-	struct page *page, unsigned int offset, unsigned int len)
+struct xennet_gnttab_make_txreq {
+	struct netfront_queue *queue;
+	struct sk_buff *skb;
+	struct page *page;
+	struct xen_netif_tx_request *tx; /* Last request */
+	unsigned int size;
+};
+
+static void xennet_tx_setup_grant(unsigned long gfn, unsigned int offset,
+				  unsigned int len, void *data)
 {
+	struct xennet_gnttab_make_txreq *info = data;
 	unsigned int id;
 	struct xen_netif_tx_request *tx;
 	grant_ref_t ref;
-
-	len = min_t(unsigned int, PAGE_SIZE - offset, len);
+	/* convenient aliases */
+	struct page *page = info->page;
+	struct netfront_queue *queue = info->queue;
+	struct sk_buff *skb = info->skb;
 
 	id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
 	tx = RING_GET_REQUEST(&queue->tx, queue->tx.req_prod_pvt++);
 	ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 	BUG_ON((signed short)ref < 0);
 
-	gnttab_grant_foreign_access_ref(ref,
-					queue->info->xbdev->otherend_id,
-					xen_page_to_gfn(page),
-					GNTMAP_readonly);
+	gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
+					gfn, GNTMAP_readonly);
 
 	queue->tx_skbs[id].skb = skb;
 	queue->grant_tx_page[id] = page;
@@ -445,7 +452,34 @@ static struct xen_netif_tx_request *xennet_make_one_txreq(
 	tx->size = len;
 	tx->flags = 0;
 
-	return tx;
+	info->tx = tx;
+	info->size += tx->size;
+}
+
+static struct xen_netif_tx_request *xennet_make_first_txreq(
+	struct netfront_queue *queue, struct sk_buff *skb,
+	struct page *page, unsigned int offset, unsigned int len)
+{
+	struct xennet_gnttab_make_txreq info = {
+		.queue = queue,
+		.skb = skb,
+		.page = page,
+		.size = 0,
+	};
+
+	gnttab_one_grant(page, offset, len, xennet_tx_setup_grant, &info);
+
+	return info.tx;
+}
+
+static void xennet_make_one_txreq(unsigned long gfn, unsigned int offset,
+				  unsigned int len, void *data)
+{
+	struct xennet_gnttab_make_txreq *info = data;
+
+	info->tx->flags |= XEN_NETTXF_more_data;
+	skb_get(info->skb);
+	xennet_tx_setup_grant(gfn, offset, len, data);
 }
 
 static struct xen_netif_tx_request *xennet_make_txreqs(
@@ -453,20 +487,30 @@ static struct xen_netif_tx_request *xennet_make_txreqs(
 	struct sk_buff *skb, struct page *page,
 	unsigned int offset, unsigned int len)
 {
+	struct xennet_gnttab_make_txreq info = {
+		.queue = queue,
+		.skb = skb,
+		.tx = tx,
+	};
+
 	/* Skip unused frames from start of page */
 	page += offset >> PAGE_SHIFT;
 	offset &= ~PAGE_MASK;
 
 	while (len) {
-		tx->flags |= XEN_NETTXF_more_data;
-		tx = xennet_make_one_txreq(queue, skb_get(skb),
-					   page, offset, len);
+		info.page = page;
+		info.size = 0;
+
+		gnttab_foreach_grant_in_range(page, offset, len,
+					      xennet_make_one_txreq,
+					      &info);
+
 		page++;
 		offset = 0;
-		len -= tx->size;
+		len -= info.size;
 	}
 
-	return tx;
+	return info.tx;
 }
 
 /*
@@ -476,9 +520,10 @@ static struct xen_netif_tx_request *xennet_make_txreqs(
 static int xennet_count_skb_slots(struct sk_buff *skb)
 {
 	int i, frags = skb_shinfo(skb)->nr_frags;
-	int pages;
+	int slots;
 
-	pages = PFN_UP(offset_in_page(skb->data) + skb_headlen(skb));
+	slots = gnttab_count_grant(offset_in_page(skb->data),
+				   skb_headlen(skb));
 
 	for (i = 0; i < frags; i++) {
 		skb_frag_t *frag = skb_shinfo(skb)->frags + i;
@@ -488,10 +533,10 @@ static int xennet_count_skb_slots(struct sk_buff *skb)
 		/* Skip unused frames from start of page */
 		offset &= ~PAGE_MASK;
 
-		pages += PFN_UP(offset + size);
+		slots += gnttab_count_grant(offset, size);
 	}
 
-	return pages;
+	return slots;
 }
 
 static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
@@ -512,6 +557,8 @@ static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
 	return queue_idx;
 }
 
+#define MAX_XEN_SKB_FRAGS (65536 / XEN_PAGE_SIZE + 1)
+
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
@@ -546,7 +593,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	}
 
 	slots = xennet_count_skb_slots(skb);
-	if (unlikely(slots > MAX_SKB_FRAGS + 1)) {
+	if (unlikely(slots > MAX_XEN_SKB_FRAGS + 1)) {
 		net_dbg_ratelimited("xennet: skb rides the rocket: %d slots, %d bytes\n",
 				    slots, skb->len);
 		if (skb_linearize(skb))
@@ -567,10 +614,13 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	}
 
 	/* First request for the linear area. */
-	first_tx = tx = xennet_make_one_txreq(queue, skb,
-					      page, offset, len);
-	page++;
-	offset = 0;
+	first_tx = tx = xennet_make_first_txreq(queue, skb,
+						page, offset, len);
+	offset += tx->size;
+	if (offset == PAGE_SIZE) {
+		page++;
+		offset = 0;
+	}
 	len -= tx->size;
 
 	if (skb->ip_summed == CHECKSUM_PARTIAL)
@@ -732,7 +782,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
 
 	for (;;) {
 		if (unlikely(rx->status < 0 ||
-			     rx->offset + rx->status > PAGE_SIZE)) {
+			     rx->offset + rx->status > XEN_PAGE_SIZE)) {
 			if (net_ratelimit())
 				dev_warn(dev, "rx->offset: %u, size: %d\n",
 					 rx->offset, rx->status);
@@ -1495,7 +1545,7 @@ static int setup_netfront(struct xenbus_device *dev,
 		goto fail;
 	}
 	SHARED_RING_INIT(txs);
-	FRONT_RING_INIT(&queue->tx, txs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->tx, txs, XEN_PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, txs, 1, &gref);
 	if (err < 0)
@@ -1509,7 +1559,7 @@ static int setup_netfront(struct xenbus_device *dev,
 		goto alloc_rx_ring_fail;
 	}
 	SHARED_RING_INIT(rxs);
-	FRONT_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->rx, rxs, XEN_PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, rxs, 1, &gref);
 	if (err < 0)
-- 
2.1.4


* [PATCH v3 18/20] net/xen-netback: Make it running on 64KB page granularity
From: Julien Grall @ 2015-08-07 16:46 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-arm-kernel, ian.campbell, stefano.stabellini, linux-kernel,
	Julien Grall, Wei Liu, netdev

The PV network protocol uses 4KB page granularity. The goal of this
patch is to allow a Linux kernel using 64KB page granularity to work
as a network backend on a non-modified Xen.

It's only necessary to adapt the ring size and break the skb data into
small 4KB chunks. The rest of the code relies on the grant table code.

Signed-off-by: Julien Grall <julien.grall@citrix.com>

---
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: netdev@vger.kernel.org

Improvements such as support for 64KB grants are not taken into
consideration in this patch because we have the requirement to run a
Linux kernel using 64KB pages on a non-modified Xen.

    Changes in v3:
        - Fix errors reported by checkpatch.pl
        - s/mfn/gfn/ based on the new naming
        - gnttab_foreach_grant has been renamed to gnttab_foreach_grant_in_range
        - The grant callback no longer allows using less data. A
        helper has been added in netback to handle this.

    Changes in v2:
        - Correctly set MAX_GRANT_COPY_OPS and XEN_NETBK_RX_SLOTS_MAX
        - Don't use XEN_PAGE_SIZE in handle_frag_list as we coalesce
        fragments into a new skb
        - Use gnttab_foreach_grant to split a Linux page into grants
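
For reference, a rough standalone sketch of the iteration pattern that
gnttab_foreach_grant_in_range() provides (illustrative only: the real
helper takes a struct page and derives the gfn itself, so the name and
signature below are assumptions):

    #define XEN_PAGE_SHIFT 12
    #define XEN_PAGE_SIZE  (1u << XEN_PAGE_SHIFT)

    typedef void (*xen_grant_fn_t)(unsigned long gfn, unsigned int offset,
                                   unsigned int len, void *data);

    /* Split [offset, offset + len) within one Linux page into 4KB
     * grant-sized chunks and invoke fn once per chunk. */
    static void foreach_grant_in_range(unsigned long first_gfn,
                                       unsigned int offset, unsigned int len,
                                       xen_grant_fn_t fn, void *data)
    {
            unsigned long gfn = first_gfn + (offset >> XEN_PAGE_SHIFT);
            unsigned int goffset = offset & (XEN_PAGE_SIZE - 1);

            while (len) {
                    unsigned int glen = XEN_PAGE_SIZE - goffset;

                    if (glen > len)
                            glen = len;
                    fn(gfn, goffset, glen, data);
                    goffset = 0;    /* later grants start at offset 0 */
                    gfn++;
                    len -= glen;
            }
    }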
---
 drivers/net/xen-netback/common.h  |  15 ++--
 drivers/net/xen-netback/netback.c | 153 ++++++++++++++++++++++++--------------
 2 files changed, 107 insertions(+), 61 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 8a495b3..bb68211 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -44,6 +44,7 @@
 #include <xen/interface/grant_table.h>
 #include <xen/grant_table.h>
 #include <xen/xenbus.h>
+#include <xen/page.h>
 #include <linux/debugfs.h>
 
 typedef unsigned int pending_ring_idx_t;
@@ -64,8 +65,8 @@ struct pending_tx_info {
 	struct ubuf_info callback_struct;
 };
 
-#define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
-#define XEN_NETIF_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
+#define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, XEN_PAGE_SIZE)
+#define XEN_NETIF_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, XEN_PAGE_SIZE)
 
 struct xenvif_rx_meta {
 	int id;
@@ -80,16 +81,18 @@ struct xenvif_rx_meta {
 /* Discriminate from any valid pending_idx value. */
 #define INVALID_PENDING_IDX 0xFFFF
 
-#define MAX_BUFFER_OFFSET PAGE_SIZE
+#define MAX_BUFFER_OFFSET XEN_PAGE_SIZE
 
 #define MAX_PENDING_REQS XEN_NETIF_TX_RING_SIZE
 
+#define MAX_XEN_SKB_FRAGS (65536 / XEN_PAGE_SIZE + 1)
+
 /* It's possible for an skb to have a maximal number of frags
  * but still be less than MAX_BUFFER_OFFSET in size. Thus the
- * worst-case number of copy operations is MAX_SKB_FRAGS per
+ * worst-case number of copy operations is MAX_XEN_SKB_FRAGS per
  * ring slot.
  */
-#define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
+#define MAX_GRANT_COPY_OPS (MAX_XEN_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
 #define NETBACK_INVALID_HANDLE -1
 
@@ -203,7 +206,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
 /* Maximum number of Rx slots a to-guest packet may use, including the
  * slot needed for GSO meta-data.
  */
-#define XEN_NETBK_RX_SLOTS_MAX (MAX_SKB_FRAGS + 1)
+#define XEN_NETBK_RX_SLOTS_MAX ((MAX_XEN_SKB_FRAGS + 1))
 
 enum state_bit_shift {
 	/* This bit marks that the vif is connected */
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 66f1780..c32a9f2 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -263,6 +263,80 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 	return meta;
 }
 
+struct gop_frag_copy {
+	struct xenvif_queue *queue;
+	struct netrx_pending_operations *npo;
+	struct xenvif_rx_meta *meta;
+	int head;
+	int gso_type;
+
+	struct page *page;
+};
+
+static void xenvif_setup_copy_gop(unsigned long gfn,
+				  unsigned int offset,
+				  unsigned int *len,
+				  struct gop_frag_copy *info)
+{
+	struct gnttab_copy *copy_gop;
+	struct xen_page_foreign *foreign;
+	/* Convenient aliases */
+	struct xenvif_queue *queue = info->queue;
+	struct netrx_pending_operations *npo = info->npo;
+	struct page *page = info->page;
+
+	BUG_ON(npo->copy_off > MAX_BUFFER_OFFSET);
+
+	if (npo->copy_off == MAX_BUFFER_OFFSET)
+		info->meta = get_next_rx_buffer(queue, npo);
+
+	if (npo->copy_off + *len > MAX_BUFFER_OFFSET)
+		*len = MAX_BUFFER_OFFSET - npo->copy_off;
+
+	copy_gop = npo->copy + npo->copy_prod++;
+	copy_gop->flags = GNTCOPY_dest_gref;
+	copy_gop->len = *len;
+
+	foreign = xen_page_foreign(page);
+	if (foreign) {
+		copy_gop->source.domid = foreign->domid;
+		copy_gop->source.u.ref = foreign->gref;
+		copy_gop->flags |= GNTCOPY_source_gref;
+	} else {
+		copy_gop->source.domid = DOMID_SELF;
+		copy_gop->source.u.gmfn = gfn;
+	}
+	copy_gop->source.offset = offset;
+
+	copy_gop->dest.domid = queue->vif->domid;
+	copy_gop->dest.offset = npo->copy_off;
+	copy_gop->dest.u.ref = npo->copy_gref;
+
+	npo->copy_off += *len;
+	info->meta->size += *len;
+
+	/* Leave a gap for the GSO descriptor. */
+	if (info->head && ((1 << info->gso_type) & queue->vif->gso_mask))
+		queue->rx.req_cons++;
+
+	info->head = 0; /* There must be something in this buffer now */
+}
+
+static void xenvif_gop_frag_copy_grant(unsigned long gfn,
+				       unsigned offset,
+				       unsigned int len,
+				       void *data)
+{
+	unsigned int bytes;
+
+	while (len) {
+		bytes = len;
+		xenvif_setup_copy_gop(gfn, offset, &bytes, data);
+		offset += bytes;
+		len -= bytes;
+	}
+}
+
 /*
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
@@ -272,83 +346,52 @@ static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb
 				 struct page *page, unsigned long size,
 				 unsigned long offset, int *head)
 {
-	struct gnttab_copy *copy_gop;
-	struct xenvif_rx_meta *meta;
+	struct gop_frag_copy info = {
+		.queue = queue,
+		.npo = npo,
+		.head = *head,
+		.gso_type = XEN_NETIF_GSO_TYPE_NONE,
+	};
 	unsigned long bytes;
-	int gso_type = XEN_NETIF_GSO_TYPE_NONE;
 
 	if (skb_is_gso(skb)) {
 		if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4)
-			gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
+			info.gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
 		else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
-			gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
+			info.gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
 	}
 
 	/* Data must not cross a page boundary. */
 	BUG_ON(size + offset > PAGE_SIZE<<compound_order(page));
 
-	meta = npo->meta + npo->meta_prod - 1;
+	info.meta = npo->meta + npo->meta_prod - 1;
 
 	/* Skip unused frames from start of page */
 	page += offset >> PAGE_SHIFT;
 	offset &= ~PAGE_MASK;
 
 	while (size > 0) {
-		struct xen_page_foreign *foreign;
-
 		BUG_ON(offset >= PAGE_SIZE);
-		BUG_ON(npo->copy_off > MAX_BUFFER_OFFSET);
-
-		if (npo->copy_off == MAX_BUFFER_OFFSET)
-			meta = get_next_rx_buffer(queue, npo);
 
 		bytes = PAGE_SIZE - offset;
 		if (bytes > size)
 			bytes = size;
 
-		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
-			bytes = MAX_BUFFER_OFFSET - npo->copy_off;
-
-		copy_gop = npo->copy + npo->copy_prod++;
-		copy_gop->flags = GNTCOPY_dest_gref;
-		copy_gop->len = bytes;
-
-		foreign = xen_page_foreign(page);
-		if (foreign) {
-			copy_gop->source.domid = foreign->domid;
-			copy_gop->source.u.ref = foreign->gref;
-			copy_gop->flags |= GNTCOPY_source_gref;
-		} else {
-			copy_gop->source.domid = DOMID_SELF;
-			copy_gop->source.u.gmfn =
-				virt_to_gfn(page_address(page));
-		}
-		copy_gop->source.offset = offset;
-
-		copy_gop->dest.domid = queue->vif->domid;
-		copy_gop->dest.offset = npo->copy_off;
-		copy_gop->dest.u.ref = npo->copy_gref;
-
-		npo->copy_off += bytes;
-		meta->size += bytes;
-
-		offset += bytes;
+		info.page = page;
+		gnttab_foreach_grant_in_range(page, offset, bytes,
+					      xenvif_gop_frag_copy_grant,
+					      &info);
 		size -= bytes;
+		offset = 0;
 
-		/* Next frame */
-		if (offset == PAGE_SIZE && size) {
+		/* Next page */
+		if (size) {
 			BUG_ON(!PageCompound(page));
 			page++;
-			offset = 0;
 		}
-
-		/* Leave a gap for the GSO descriptor. */
-		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
-			queue->rx.req_cons++;
-
-		*head = 0; /* There must be something in this buffer now. */
-
 	}
+
+	*head = info.head;
 }
 
 /*
@@ -747,7 +790,7 @@ static int xenvif_count_requests(struct xenvif_queue *queue,
 		first->size -= txp->size;
 		slots++;
 
-		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
+		if (unlikely((txp->offset + txp->size) > XEN_PAGE_SIZE)) {
 			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %u, size: %u\n",
 				 txp->offset, txp->size);
 			xenvif_fatal_tx_err(queue->vif);
@@ -1241,11 +1284,11 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 		}
 
 		/* No crossing a page as the payload mustn't fragment. */
-		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
+		if (unlikely((txreq.offset + txreq.size) > XEN_PAGE_SIZE)) {
 			netdev_err(queue->vif->dev,
 				   "txreq.offset: %u, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
-				   (unsigned long)(txreq.offset&~PAGE_MASK) + txreq.size);
+				   (unsigned long)(txreq.offset&~XEN_PAGE_MASK) + txreq.size);
 			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
@@ -1287,7 +1330,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 			virt_to_gfn(skb->data);
 		queue->tx_copy_ops[*copy_ops].dest.domid = DOMID_SELF;
 		queue->tx_copy_ops[*copy_ops].dest.offset =
-			offset_in_page(skb->data);
+			offset_in_page(skb->data) & ~XEN_PAGE_MASK;
 
 		queue->tx_copy_ops[*copy_ops].len = data_len;
 		queue->tx_copy_ops[*copy_ops].flags = GNTCOPY_source_gref;
@@ -1780,7 +1823,7 @@ int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 		goto err;
 
 	txs = (struct xen_netif_tx_sring *)addr;
-	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->tx, txs, XEN_PAGE_SIZE);
 
 	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     &rx_ring_ref, 1, &addr);
@@ -1788,7 +1831,7 @@ int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 		goto err;
 
 	rxs = (struct xen_netif_rx_sring *)addr;
-	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->rx, rxs, XEN_PAGE_SIZE);
 
 	return 0;
 
-- 
2.1.4


* Re: [PATCH v3 01/20] net/xen-netback: xenvif_gop_frag_copy: move GSO check out of the loop
From: Wei Liu @ 2015-08-08 13:59 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, linux-arm-kernel, ian.campbell, stefano.stabellini,
	linux-kernel, Wei Liu, netdev

On Fri, Aug 07, 2015 at 05:46:40PM +0100, Julien Grall wrote:
> The skb doesn't change within the function. Therefore it's only
> necessary to check if we need GSO once at the beginning.
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> 

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v3 18/20] net/xen-netback: Make it running on 64KB page granularity
From: Wei Liu @ 2015-08-08 14:55 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, linux-arm-kernel, ian.campbell, stefano.stabellini,
	linux-kernel, Wei Liu, netdev

On Fri, Aug 07, 2015 at 05:46:57PM +0100, Julien Grall wrote:
> The PV network protocol uses 4KB page granularity. The goal of this
> patch is to allow a Linux kernel using 64KB page granularity to work
> as a network backend on a non-modified Xen.
> 
> It's only necessary to adapt the ring size and break the skb data into
> small 4KB chunks. The rest of the code relies on the grant table code.
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> 
> ---
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: netdev@vger.kernel.org
> 
[...]
> +#define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, XEN_PAGE_SIZE)
> +#define XEN_NETIF_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, XEN_PAGE_SIZE)
>  
>  struct xenvif_rx_meta {
>  	int id;
> @@ -80,16 +81,18 @@ struct xenvif_rx_meta {
>  /* Discriminate from any valid pending_idx value. */
>  #define INVALID_PENDING_IDX 0xFFFF
>  
> -#define MAX_BUFFER_OFFSET PAGE_SIZE
> +#define MAX_BUFFER_OFFSET XEN_PAGE_SIZE
>  
>  #define MAX_PENDING_REQS XEN_NETIF_TX_RING_SIZE
>  
> +#define MAX_XEN_SKB_FRAGS (65536 / XEN_PAGE_SIZE + 1)
> +

It might be clearer if you add a comment saying the maximum number of
frags is derived from the page size of the grant page, which happens to
be XEN_PAGE_SIZE at the moment. 

In the future we need to figure out the page size of the grant page in
a dynamic way. We shall cross that bridge when we get there.
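
Something along these lines would do (a sketch only, reusing the
constants from this patch):

    /* An skb can hold at most 64KB of payload. The maximum number of
     * frags is derived from the page size of the grant page, which
     * happens to be XEN_PAGE_SIZE (4KB) at the moment:
     * 65536 / 4096 = 16 grant-sized frags, plus one more in case the
     * data is not aligned to the start of a grant. */
    #define MAX_XEN_SKB_FRAGS (65536 / XEN_PAGE_SIZE + 1)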

>  /* It's possible for an skb to have a maximal number of frags
>   * but still be less than MAX_BUFFER_OFFSET in size. Thus the
> - * worst-case number of copy operations is MAX_SKB_FRAGS per
> + * worst-case number of copy operations is MAX_XEN_SKB_FRAGS per
>   * ring slot.
>   */
> -#define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
> +#define MAX_GRANT_COPY_OPS (MAX_XEN_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
>  
>  #define NETBACK_INVALID_HANDLE -1
>  
> @@ -203,7 +206,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
>  /* Maximum number of Rx slots a to-guest packet may use, including the
>   * slot needed for GSO meta-data.
>   */
> -#define XEN_NETBK_RX_SLOTS_MAX (MAX_SKB_FRAGS + 1)
> +#define XEN_NETBK_RX_SLOTS_MAX ((MAX_XEN_SKB_FRAGS + 1))
>  
>  enum state_bit_shift {
>  	/* This bit marks that the vif is connected */
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 66f1780..c32a9f2 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -263,6 +263,80 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
>  	return meta;
>  }
>  
[...]
>   * Set up the grant operations for this fragment. If it's a flipping
>   * interface, we also set up the unmap request from here.
> @@ -272,83 +346,52 @@ static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb
>  				 struct page *page, unsigned long size,
>  				 unsigned long offset, int *head)
>  {
> -	struct gnttab_copy *copy_gop;
> -	struct xenvif_rx_meta *meta;
> +	struct gop_frag_copy info = {
> +		.queue = queue,
> +		.npo = npo,
> +		.head = *head,
> +		.gso_type = XEN_NETIF_GSO_TYPE_NONE,
> +	};
>  	unsigned long bytes;
> -	int gso_type = XEN_NETIF_GSO_TYPE_NONE;
>  
>  	if (skb_is_gso(skb)) {
>  		if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4)
> -			gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
> +			info.gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
>  		else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
> -			gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
> +			info.gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
>  	}
>  
>  	/* Data must not cross a page boundary. */
>  	BUG_ON(size + offset > PAGE_SIZE<<compound_order(page));
>  
> -	meta = npo->meta + npo->meta_prod - 1;
> +	info.meta = npo->meta + npo->meta_prod - 1;
>  
>  	/* Skip unused frames from start of page */
>  	page += offset >> PAGE_SHIFT;
>  	offset &= ~PAGE_MASK;
>  
>  	while (size > 0) {
> -		struct xen_page_foreign *foreign;
> -
>  		BUG_ON(offset >= PAGE_SIZE);
> -		BUG_ON(npo->copy_off > MAX_BUFFER_OFFSET);
> -
> -		if (npo->copy_off == MAX_BUFFER_OFFSET)
> -			meta = get_next_rx_buffer(queue, npo);
>  
>  		bytes = PAGE_SIZE - offset;
>  		if (bytes > size)
>  			bytes = size;
>  
> -		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
> -			bytes = MAX_BUFFER_OFFSET - npo->copy_off;
> -
> -		copy_gop = npo->copy + npo->copy_prod++;
> -		copy_gop->flags = GNTCOPY_dest_gref;
> -		copy_gop->len = bytes;
> -
> -		foreign = xen_page_foreign(page);
> -		if (foreign) {
> -			copy_gop->source.domid = foreign->domid;
> -			copy_gop->source.u.ref = foreign->gref;
> -			copy_gop->flags |= GNTCOPY_source_gref;
> -		} else {
> -			copy_gop->source.domid = DOMID_SELF;
> -			copy_gop->source.u.gmfn =
> -				virt_to_gfn(page_address(page));
> -		}
> -		copy_gop->source.offset = offset;
> -
> -		copy_gop->dest.domid = queue->vif->domid;
> -		copy_gop->dest.offset = npo->copy_off;
> -		copy_gop->dest.u.ref = npo->copy_gref;
> -
> -		npo->copy_off += bytes;
> -		meta->size += bytes;
> -
> -		offset += bytes;
> +		info.page = page;
> +		gnttab_foreach_grant_in_range(page, offset, bytes,
> +					      xenvif_gop_frag_copy_grant,
> +					      &info);

Looks like I need to at least wait until the API is settled before
giving my ack.

>  		size -= bytes;
> +		offset = 0;

This looks wrong. Should be offset += bytes.

>  
> -		/* Next frame */
> -		if (offset == PAGE_SIZE && size) {
> +		/* Next page */
> +		if (size) {
>  			BUG_ON(!PageCompound(page));
>  			page++;
> -			offset = 0;

And this should not be deleted, I think.

What is the reason for changing the offset calculation? I think there
are still compound pages when using 64K pages.

>  		}
> -
> -		/* Leave a gap for the GSO descriptor. */
> -		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
> -			queue->rx.req_cons++;
> -
> -		*head = 0; /* There must be something in this buffer now. */
> -
>  	}
> +
> +	*head = info.head;
>  }
>  

The rest looks OK.

Wei.


* Re: [PATCH v3 18/20] net/xen-netback: Make it running on 64KB page granularity
From: Julien Grall @ 2015-08-10  9:57 UTC (permalink / raw)
  To: Wei Liu
  Cc: xen-devel, linux-arm-kernel, ian.campbell, stefano.stabellini,
	linux-kernel, netdev

Hi Wei,

On 08/08/2015 15:55, Wei Liu wrote:
>>   struct xenvif_rx_meta {
>>   	int id;
>> @@ -80,16 +81,18 @@ struct xenvif_rx_meta {
>>   /* Discriminate from any valid pending_idx value. */
>>   #define INVALID_PENDING_IDX 0xFFFF
>>
>> -#define MAX_BUFFER_OFFSET PAGE_SIZE
>> +#define MAX_BUFFER_OFFSET XEN_PAGE_SIZE
>>
>>   #define MAX_PENDING_REQS XEN_NETIF_TX_RING_SIZE
>>
>> +#define MAX_XEN_SKB_FRAGS (65536 / XEN_PAGE_SIZE + 1)
>> +
>
> It might be clearer if you add a comment saying the maximum number of
> frags is derived from the page size of the grant page, which happens to
> be XEN_PAGE_SIZE at the moment.

Will do.


> In the future we need to figure out the page size of grant page in a
> dynamic way. We shall cross the bridge when we get there.

Right, there are a few other places where we would need to do that too
(see MAX_BUFFER_OFFSET, for instance).


[..]

>> +		info.page = page;
>> +		gnttab_foreach_grant_in_range(page, offset, bytes,
>> +					      xenvif_gop_frag_copy_grant,
>> +					      &info);
>
> Looks like I need to at least wait until the API is settled before
> giving my ack.
>
>>   		size -= bytes;
>> +		offset = 0;
>
> This looks wrong. Should be offset += bytes.

With the new implementation of the loop, each iteration will be on a
different page. So only the first page has an offset different from
zero.

>
>>
>> -		/* Next frame */
>> -		if (offset == PAGE_SIZE && size) {
>> +		/* Next page */
>> +		if (size) {
>>   			BUG_ON(!PageCompound(page));
>>   			page++;
>> -			offset = 0;
>
> And this should not be deleted, I think.
>
> What is the reason for changing offset calculation? I think there is
> still compound page when using 64K page.

Compound pages still work ... gnttab_foreach_grant_in_range is called
once per page, so the offset can be reset to 0 every time. There is no
need to add code which would make the result less clear.

We only need to know whether the size is not 0 to move to the next page.

The patch may not be clear enough to see that it works, so I've copied
the resulting loop below:

         while (size > 0) {
                 BUG_ON(offset >= PAGE_SIZE);

                 bytes = PAGE_SIZE - offset;
                 if (bytes > size)
                         bytes = size;

                 info.page = page;
                 gnttab_foreach_grant_in_range(page, offset, bytes,
                                              xenvif_gop_frag_copy_grant,
                                               &info);
                 size -= bytes;
                 offset = 0;

                 /* Next page */
                 if (size) {
                         BUG_ON(!PageCompound(page));
                         page++;
                 }
         }

Regards,

-- 
Julien Grall


* Re: [PATCH v3 18/20] net/xen-netback: Make it running on 64KB page granularity
From: Wei Liu @ 2015-08-10 11:39 UTC (permalink / raw)
  To: Julien Grall
  Cc: Wei Liu, xen-devel, linux-arm-kernel, ian.campbell,
	stefano.stabellini, linux-kernel, netdev

On Mon, Aug 10, 2015 at 10:57:48AM +0100, Julien Grall wrote:
[...]
> 
> >>+		info.page = page;
> >>+		gnttab_foreach_grant_in_range(page, offset, bytes,
> >>+					      xenvif_gop_frag_copy_grant,
> >>+					      &info);
> >
> >Looks like I need to at least wait until the API is settled before
> >giving my ack.
> >
> >>  		size -= bytes;
> >>+		offset = 0;
> >
> >This looks wrong. Should be offset += bytes.
> 
> With the new implementation of the loop, each iteration will be on a
> different page.
> So only the first page has an offset different from zero.
> 
> >
> >>
> >>-		/* Next frame */
> >>-		if (offset == PAGE_SIZE && size) {
> >>+		/* Next page */
> >>+		if (size) {
> >>  			BUG_ON(!PageCompound(page));
> >>  			page++;
> >>-			offset = 0;
> >
> >And this should not be deleted, I think.
> >
> >What is the reason for changing the offset calculation? I think there
> >are still compound pages when using 64K pages.
> 
> Compound pages still work ... gnttab_foreach_grant_in_range is called
> once per page, so the offset can be reset to 0 every time. There is no
> need to add code which would make the result less clear.
> 
> We only need to know whether the size is not 0 to move to the next page.
> 
> The patch may not be clear enough to see that it works, so I've copied
> the resulting loop below:
> 
>         while (size > 0) {
>                 BUG_ON(offset >= PAGE_SIZE);
> 
>                 bytes = PAGE_SIZE - offset;
>                 if (bytes > size)
>                         bytes = size;
> 
>                 info.page = page;
>                 gnttab_foreach_grant_in_range(page, offset, bytes,
>                                              xenvif_gop_frag_copy_grant,
>                                               &info);
>                 size -= bytes;
>                 offset = 0;
> 
>                 /* Next page */
>                 if (size) {
>                         BUG_ON(!PageCompound(page));
>                         page++;
>                 }
>         }
> 

Right. That doesn't mean the original code was wrong or anything. But I
don't want to bikeshed about this.

Please add a comment saying that offset is always 0 starting from the
second iteration, because gnttab_foreach_grant_in_range makes sure we
handle one page in one go.

Wei.


> Regards,
> 
> -- 
> Julien Grall


* Re: [PATCH v3 18/20] net/xen-netback: Make it running on 64KB page granularity
From: Julien Grall @ 2015-08-10 12:00 UTC (permalink / raw)
  To: Wei Liu
  Cc: ian.campbell, stefano.stabellini, netdev, linux-kernel, xen-devel,
	linux-arm-kernel

On 10/08/15 12:39, Wei Liu wrote:
> On Mon, Aug 10, 2015 at 10:57:48AM +0100, Julien Grall wrote:
>>         while (size > 0) {
>>                 BUG_ON(offset >= PAGE_SIZE);
>>
>>                 bytes = PAGE_SIZE - offset;
>>                 if (bytes > size)
>>                         bytes = size;
>>
>>                 info.page = page;
>>                 gnttab_foreach_grant_in_range(page, offset, bytes,
>>                                              xenvif_gop_frag_copy_grant,
>>                                               &info);
>>                 size -= bytes;
>>                 offset = 0;
>>
>>                 /* Next page */
>>                 if (size) {
>>                         BUG_ON(!PageCompound(page));
>>                         page++;
>>                 }
>>         }
>>
> 
> Right. That doesn't mean the original code was wrong or anything. But I
> don't want to bikeshed about this.

I never said the original code was wrong... The original code allowed
the possibility of copying less data than the length contained in the
page.

In the new version, this has been pushed into the callback
xenvif_gop_frag_copy_grant.

> Please add a comment saying that offset is always 0 starting from the
> second iteration, because gnttab_foreach_grant_in_range makes sure we
> handle one page in one go.

I think this is superfluous. To be honest, the comment should have been
on the original version and not in the new one. The construction of the
loop made it far from obvious that we copied less data.

In this new version, the reason is not that
gnttab_foreach_grant_in_range always handles one page, but how the loop
has been constructed.

If you look at how bytes is defined, it will always contain

min(PAGE_SIZE - offset, size)

So for the first page, this will be PAGE_SIZE - offset. At the end of
the loop we reset the offset to 0, as we have copied all the data of
the first page. For the second page onwards this will always be
PAGE_SIZE, except for the last one, where it is size.
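
A standalone toy version of the loop makes that arithmetic easy to
check (PAGE_SIZE and the frag values below are illustrative only):

        #include <stdio.h>

        #define PAGE_SIZE 65536u /* 64KB Linux page */

        int main(void)
        {
                unsigned int offset = 1000, size = 140000, page = 0;

                while (size > 0) {
                        unsigned int bytes = PAGE_SIZE - offset;

                        if (bytes > size)
                                bytes = size;
                        printf("page %u: offset=%u bytes=%u\n",
                               page, offset, bytes);
                        size -= bytes;
                        offset = 0; /* pages after the first start at 0 */
                        if (size)
                                page++;
                }
                return 0;
        }

It prints 64536 bytes for page 0 (from offset 1000), then 65536 for
page 1 and the remaining 9928 for page 2, matching the description
above.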


Regards,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v3 17/20] net/xen-netfront: Make it running on 64KB page granularity
From: David Vrabel @ 2015-08-20 10:03 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: ian.campbell, netdev, stefano.stabellini, linux-kernel,
	David Vrabel, Boris Ostrovsky, linux-arm-kernel

On 07/08/15 17:46, Julien Grall wrote:
> The PV network protocol uses 4KB page granularity. The goal of this
> patch is to allow a Linux guest using 64KB page granularity to use
> network devices on a non-modified Xen.
> 
> It's only necessary to adapt the ring size and break the skb data into
> small 4KB chunks. The rest of the code relies on the grant table code.
> 
> Note that we allocate a Linux page for each rx skb but only the first
> 4KB is used. We may improve memory usage by extending the size of
> the rx skb.

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

