* [PATCH net-next V7 0/4] Bundle fixes for Xen netfront / netback
From: Wei Liu @ 2013-04-22 12:20 UTC
To: davem; +Cc: netdev, ian.campbell, xen-devel
Hi Dave,

This series is now rebased onto net-next.

We would also like to ask you to queue it for a stable-ish tree. I can do the
backport if necessary.

Thanks
Wei.
* [PATCH net-next V7 1/4] xen-netfront: frags -> slots in log message
From: Wei Liu @ 2013-04-22 12:20 UTC
To: davem; +Cc: netdev, Wei Liu, ian.campbell, xen-devel
Also fix a typo in a comment.
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
drivers/net/xen-netfront.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index d9097a7..1bb2e20 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -735,7 +735,7 @@ static int xennet_get_responses(struct netfront_info *np,
/*
* This definitely indicates a bug, either in this driver or in
* the backend driver. In future this should flag the bad
- * situation to the system controller to reboot the backed.
+ * situation to the system controller to reboot the backend.
*/
if (ref == GRANT_INVALID_REF) {
if (net_ratelimit())
@@ -771,7 +771,7 @@ next:
if (unlikely(slots > max)) {
if (net_ratelimit())
- dev_warn(dev, "Too many frags\n");
+ dev_warn(dev, "Too many slots\n");
err = -E2BIG;
}
--
1.7.10.4
* [PATCH net-next V7 2/4] xen-netfront: reduce gso_max_size to account for max TCP header
From: Wei Liu @ 2013-04-22 12:20 UTC
To: davem; +Cc: netdev, Wei Liu, ian.campbell, xen-devel
The maximum packet size, including headers, that the netfront / netback
wire format can handle is 65535 bytes. Reduce gso_max_size accordingly.

Drop the skb and print a warning when skb->len > 65535. This 1) saves the
effort of sending a malformed packet to netback, and 2) helps spot
misconfiguration of netfront in the future.
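A sketch of the sizing rule (illustrative only; the helper name is
hypothetical, while XEN_NETIF_MAX_TX_SIZE and MAX_TCP_HEADER are the
constants used in the patch below):

#include <net/tcp.h>	/* for MAX_TCP_HEADER */

/* The wire format carries a whole GSO skb (protocol headers plus
 * payload) as one frame of at most XEN_NETIF_MAX_TX_SIZE (65535)
 * bytes. Budgeting MAX_TCP_HEADER for the headers keeps skb->len
 * within that limit.
 */
static unsigned int xennet_gso_budget(void)
{
	return XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER;
}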
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
drivers/net/xen-netfront.c | 17 +++++++++++++++--
include/xen/interface/io/netif.h | 1 +
2 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 1bb2e20..1db10141 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -36,7 +36,7 @@
#include <linux/skbuff.h>
#include <linux/ethtool.h>
#include <linux/if_ether.h>
-#include <linux/tcp.h>
+#include <net/tcp.h>
#include <linux/udp.h>
#include <linux/moduleparam.h>
#include <linux/mm.h>
@@ -547,6 +547,16 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
unsigned int len = skb_headlen(skb);
unsigned long flags;
+ /* If skb->len is too big for wire format, drop skb and alert
+ * user about misconfiguration.
+ */
+ if (unlikely(skb->len > XEN_NETIF_MAX_TX_SIZE)) {
+ net_alert_ratelimited(
+ "xennet: skb->len = %u, too big for wire format\n",
+ skb->len);
+ goto drop;
+ }
+
slots = DIV_ROUND_UP(offset + len, PAGE_SIZE) +
xennet_count_skb_frag_slots(skb);
if (unlikely(slots > MAX_SKB_FRAGS + 1)) {
@@ -1058,7 +1068,8 @@ err:
static int xennet_change_mtu(struct net_device *dev, int mtu)
{
- int max = xennet_can_sg(dev) ? 65535 - ETH_HLEN : ETH_DATA_LEN;
+ int max = xennet_can_sg(dev) ?
+ XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER : ETH_DATA_LEN;
if (mtu > max)
return -EINVAL;
@@ -1362,6 +1373,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
SET_ETHTOOL_OPS(netdev, &xennet_ethtool_ops);
SET_NETDEV_DEV(netdev, &dev->dev);
+ netif_set_gso_max_size(netdev, XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER);
+
np->netdev = netdev;
netif_carrier_off(netdev);
diff --git a/include/xen/interface/io/netif.h b/include/xen/interface/io/netif.h
index 9dfc120..58fadca 100644
--- a/include/xen/interface/io/netif.h
+++ b/include/xen/interface/io/netif.h
@@ -47,6 +47,7 @@
#define _XEN_NETTXF_extra_info (3)
#define XEN_NETTXF_extra_info (1U<<_XEN_NETTXF_extra_info)
+#define XEN_NETIF_MAX_TX_SIZE 0xFFFF
struct xen_netif_tx_request {
grant_ref_t gref; /* Reference to buffer page */
uint16_t offset; /* Offset within buffer page */
--
1.7.10.4
* [PATCH net-next V7 3/4] xen-netback: coalesce slots in TX path and fix regressions
From: Wei Liu @ 2013-04-22 12:20 UTC
To: davem; +Cc: netdev, Wei Liu, ian.campbell, xen-devel
This patch coalesces tx requests when constructing grant copy structures.
It enables netback to deal with the situation where the frontend's
MAX_SKB_FRAGS is larger than the backend's MAX_SKB_FRAGS.

With the help of coalescing, this patch addresses two regressions and
avoids reopening the security hole fixed by XSA-39.

Regression 1: the reduction of the number of supported ring entries
(slots) per packet (from 18 to 17). This regression had been around for
some time but remained unnoticed until the XSA-39 security fix. It is
fixed by coalescing slots.

Regression 2: the XSA-39 security fix turned "too many frags" errors from
simply dropping the packet into a fatal error that disables the VIF. This
is fixed by coalescing slots (handling 18 slots when the backend's
MAX_SKB_FRAGS is 17), which rules out false positives (using 18 slots is
legitimate), and by dropping packets that use 19 to max_skb_slots slots.

To avoid reopening the security hole fixed by XSA-39, a frontend sending
a packet using more than max_skb_slots slots is considered malicious.

The behavior of netback for a packet is thus:

1-18 slots: valid
19 to max_skb_slots slots: drop and respond with an error
more than max_skb_slots slots: fatal error

max_skb_slots is configurable by the administrator; the default value
is 20.
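In sketch form (illustrative only; the enum and helper are hypothetical
and this is not netback's actual control flow, which interleaves these
checks with ring processing):

#define XEN_NETIF_NR_SLOTS_MIN 18	/* historical MAX_SKB_FRAGS */

enum slot_verdict { SLOT_OK, SLOT_DROP, SLOT_FATAL };

/* Classify a packet by the total number of ring slots it occupies. */
static enum slot_verdict classify_slots(unsigned int slots,
					unsigned int max_skb_slots)
{
	if (slots <= XEN_NETIF_NR_SLOTS_MIN)
		return SLOT_OK;		/* 1-18 slots: valid */
	if (slots <= max_skb_slots)
		return SLOT_DROP;	/* drop and respond with an error */
	return SLOT_FATAL;		/* malicious: disable the VIF */
}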
Also change the variable name from "frags" to "slots" in
netbk_count_requests.

Please note that the RX path still depends on MAX_SKB_FRAGS. This will be
fixed in a separate patch.
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
drivers/net/xen-netback/netback.c | 275 ++++++++++++++++++++++++++++++-------
include/xen/interface/io/netif.h | 18 +++
2 files changed, 242 insertions(+), 51 deletions(-)
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 9f71844..d9292c5 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -47,11 +47,25 @@
#include <asm/xen/hypercall.h>
#include <asm/xen/page.h>
+/*
+ * This is the maximum slots a skb can have. If a guest sends a skb
+ * which exceeds this limit it is considered malicious.
+ */
+#define MAX_SKB_SLOTS_DEFAULT 20
+static unsigned int max_skb_slots = MAX_SKB_SLOTS_DEFAULT;
+module_param(max_skb_slots, uint, 0444);
+
+typedef unsigned int pending_ring_idx_t;
+#define INVALID_PENDING_RING_IDX (~0U)
+
struct pending_tx_info {
- struct xen_netif_tx_request req;
+ struct xen_netif_tx_request req; /* coalesced tx request */
struct xenvif *vif;
+ pending_ring_idx_t head; /* head != INVALID_PENDING_RING_IDX
+ * if it is head of one or more tx
+ * reqs
+ */
};
-typedef unsigned int pending_ring_idx_t;
struct netbk_rx_meta {
int id;
@@ -102,7 +116,11 @@ struct xen_netbk {
atomic_t netfront_count;
struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
- struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
+ /* Coalescing tx requests before copying makes number of grant
+ * copy ops greater or equal to number of slots required. In
+ * worst case a tx request consumes 2 gnttab_copy.
+ */
+ struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
u16 pending_ring[MAX_PENDING_REQS];
@@ -118,6 +136,16 @@ struct xen_netbk {
static struct xen_netbk *xen_netbk;
static int xen_netbk_group_nr;
+/*
+ * If head != INVALID_PENDING_RING_IDX, it means this tx request is head of
+ * one or more merged tx requests, otherwise it is the continuation of
+ * previous tx request.
+ */
+static inline int pending_tx_is_head(struct xen_netbk *netbk, RING_IDX idx)
+{
+ return netbk->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
+}
+
void xen_netbk_add_xenvif(struct xenvif *vif)
{
int i;
@@ -250,6 +278,7 @@ static int max_required_rx_slots(struct xenvif *vif)
{
int max = DIV_ROUND_UP(vif->dev->mtu, PAGE_SIZE);
+ /* XXX FIXME: RX path dependent on MAX_SKB_FRAGS */
if (vif->can_sg || vif->gso || vif->gso_prefix)
max += MAX_SKB_FRAGS + 1; /* extra_info + frags */
@@ -657,6 +686,7 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
__skb_queue_tail(&rxq, skb);
/* Filled the batch queue? */
+ /* XXX FIXME: RX path dependent on MAX_SKB_FRAGS */
if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
break;
}
@@ -898,47 +928,78 @@ static void netbk_fatal_tx_err(struct xenvif *vif)
static int netbk_count_requests(struct xenvif *vif,
struct xen_netif_tx_request *first,
+ RING_IDX first_idx,
struct xen_netif_tx_request *txp,
int work_to_do)
{
RING_IDX cons = vif->tx.req_cons;
- int frags = 0;
+ int slots = 0;
+ int drop_err = 0;
if (!(first->flags & XEN_NETTXF_more_data))
return 0;
do {
- if (frags >= work_to_do) {
- netdev_err(vif->dev, "Need more frags\n");
+ if (slots >= work_to_do) {
+ netdev_err(vif->dev,
+ "Asked for %d slots but exceeds this limit\n",
+ work_to_do);
netbk_fatal_tx_err(vif);
return -ENODATA;
}
- if (unlikely(frags >= MAX_SKB_FRAGS)) {
- netdev_err(vif->dev, "Too many frags\n");
+ /* This guest is really using too many slots and
+ * considered malicious.
+ */
+ if (unlikely(slots >= max_skb_slots)) {
+ netdev_err(vif->dev,
+ "Malicious frontend using %d slots, threshold %u\n",
+ slots, max_skb_slots);
netbk_fatal_tx_err(vif);
return -E2BIG;
}
- memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + frags),
+ /* Xen network protocol had implicit dependency on
+ * MAX_SKB_FRAGS. XEN_NETIF_NR_SLOTS_MIN is set to the
+ * historical MAX_SKB_FRAGS value 18 to honor the same
+ * behavior as before. Any packet using more than 18
+ * slots but less than max_skb_slots slots is dropped
+ */
+ if (!drop_err && slots >= XEN_NETIF_NR_SLOTS_MIN) {
+ if (net_ratelimit())
+ netdev_dbg(vif->dev,
+ "Too many slots (%d) exceeding limit (%d), dropping packet\n",
+ slots, XEN_NETIF_NR_SLOTS_MIN);
+ drop_err = -E2BIG;
+ }
+
+ memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
sizeof(*txp));
if (txp->size > first->size) {
- netdev_err(vif->dev, "Frag is bigger than frame.\n");
+ netdev_err(vif->dev,
+ "Invalid tx request, slot size %u > remaining size %u\n",
+ txp->size, first->size);
netbk_fatal_tx_err(vif);
return -EIO;
}
first->size -= txp->size;
- frags++;
+ slots++;
if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
- netdev_err(vif->dev, "txp->offset: %x, size: %u\n",
+ netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
txp->offset, txp->size);
netbk_fatal_tx_err(vif);
return -EINVAL;
}
} while ((txp++)->flags & XEN_NETTXF_more_data);
- return frags;
+
+ if (drop_err) {
+ netbk_tx_err(vif, first, first_idx + slots);
+ return drop_err;
+ }
+
+ return slots;
}
static struct page *xen_netbk_alloc_page(struct xen_netbk *netbk,
@@ -962,48 +1023,114 @@ static struct gnttab_copy *xen_netbk_get_requests(struct xen_netbk *netbk,
struct skb_shared_info *shinfo = skb_shinfo(skb);
skb_frag_t *frags = shinfo->frags;
u16 pending_idx = *((u16 *)skb->data);
- int i, start;
+ u16 head_idx = 0;
+ int slot, start;
+ struct page *page;
+ pending_ring_idx_t index, start_idx = 0;
+ uint16_t dst_offset;
+ unsigned int nr_slots;
+ struct pending_tx_info *first = NULL;
+
+ /* At this point shinfo->nr_frags is in fact the number of
+ * slots, which can be as large as XEN_NETIF_NR_SLOTS_MIN.
+ */
+ nr_slots = shinfo->nr_frags;
/* Skip first skb fragment if it is on same page as header fragment. */
start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
- for (i = start; i < shinfo->nr_frags; i++, txp++) {
- struct page *page;
- pending_ring_idx_t index;
+ /* Coalesce tx requests, at this point the packet passed in
+ * should be <= 64K. Any packets larger than 64K have been
+ * handled in netbk_count_requests().
+ */
+ for (shinfo->nr_frags = slot = start; slot < nr_slots;
+ shinfo->nr_frags++) {
struct pending_tx_info *pending_tx_info =
netbk->pending_tx_info;
- index = pending_index(netbk->pending_cons++);
- pending_idx = netbk->pending_ring[index];
- page = xen_netbk_alloc_page(netbk, pending_idx);
+ page = alloc_page(GFP_KERNEL|__GFP_COLD);
if (!page)
goto err;
- gop->source.u.ref = txp->gref;
- gop->source.domid = vif->domid;
- gop->source.offset = txp->offset;
-
- gop->dest.u.gmfn = virt_to_mfn(page_address(page));
- gop->dest.domid = DOMID_SELF;
- gop->dest.offset = txp->offset;
-
- gop->len = txp->size;
- gop->flags = GNTCOPY_source_gref;
+ dst_offset = 0;
+ first = NULL;
+ while (dst_offset < PAGE_SIZE && slot < nr_slots) {
+ gop->flags = GNTCOPY_source_gref;
+
+ gop->source.u.ref = txp->gref;
+ gop->source.domid = vif->domid;
+ gop->source.offset = txp->offset;
+
+ gop->dest.domid = DOMID_SELF;
+
+ gop->dest.offset = dst_offset;
+ gop->dest.u.gmfn = virt_to_mfn(page_address(page));
+
+ if (dst_offset + txp->size > PAGE_SIZE) {
+ /* This page can only merge a portion
+ * of tx request. Do not increment any
+ * pointer / counter here. The txp
+ * will be dealt with in future
+ * rounds, eventually hitting the
+ * `else` branch.
+ */
+ gop->len = PAGE_SIZE - dst_offset;
+ txp->offset += gop->len;
+ txp->size -= gop->len;
+ dst_offset += gop->len; /* quit loop */
+ } else {
+ /* This tx request can be merged in the page */
+ gop->len = txp->size;
+ dst_offset += gop->len;
+
+ index = pending_index(netbk->pending_cons++);
+
+ pending_idx = netbk->pending_ring[index];
+
+ memcpy(&pending_tx_info[pending_idx].req, txp,
+ sizeof(*txp));
+ xenvif_get(vif);
+
+ pending_tx_info[pending_idx].vif = vif;
+
+ /* Poison these fields, corresponding
+ * fields for head tx req will be set
+ * to correct values after the loop.
+ */
+ netbk->mmap_pages[pending_idx] = (void *)(~0UL);
+ pending_tx_info[pending_idx].head =
+ INVALID_PENDING_RING_IDX;
+
+ if (!first) {
+ first = &pending_tx_info[pending_idx];
+ start_idx = index;
+ head_idx = pending_idx;
+ }
+
+ txp++;
+ slot++;
+ }
- gop++;
+ gop++;
+ }
- memcpy(&pending_tx_info[pending_idx].req, txp, sizeof(*txp));
- xenvif_get(vif);
- pending_tx_info[pending_idx].vif = vif;
- frag_set_pending_idx(&frags[i], pending_idx);
+ first->req.offset = 0;
+ first->req.size = dst_offset;
+ first->head = start_idx;
+ set_page_ext(page, netbk, head_idx);
+ netbk->mmap_pages[head_idx] = page;
+ frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
}
+ BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
+
return gop;
err:
/* Unwind, freeing all pages and sending error responses. */
- while (i-- > start) {
- xen_netbk_idx_release(netbk, frag_get_pending_idx(&frags[i]),
- XEN_NETIF_RSP_ERROR);
+ while (shinfo->nr_frags-- > start) {
+ xen_netbk_idx_release(netbk,
+ frag_get_pending_idx(&frags[shinfo->nr_frags]),
+ XEN_NETIF_RSP_ERROR);
}
/* The head too, if necessary. */
if (start)
@@ -1019,8 +1146,10 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
struct gnttab_copy *gop = *gopp;
u16 pending_idx = *((u16 *)skb->data);
struct skb_shared_info *shinfo = skb_shinfo(skb);
+ struct pending_tx_info *tx_info;
int nr_frags = shinfo->nr_frags;
int i, err, start;
+ u16 peek; /* peek into next tx request */
/* Check status of header. */
err = gop->status;
@@ -1032,11 +1161,20 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
for (i = start; i < nr_frags; i++) {
int j, newerr;
+ pending_ring_idx_t head;
pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
+ tx_info = &netbk->pending_tx_info[pending_idx];
+ head = tx_info->head;
/* Check error status: if okay then remember grant handle. */
- newerr = (++gop)->status;
+ do {
+ newerr = (++gop)->status;
+ if (newerr)
+ break;
+ peek = netbk->pending_ring[pending_index(++head)];
+ } while (!pending_tx_is_head(netbk, peek));
+
if (likely(!newerr)) {
/* Had a previous error? Invalidate this fragment. */
if (unlikely(err))
@@ -1256,11 +1394,12 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
struct sk_buff *skb;
int ret;
- while (((nr_pending_reqs(netbk) + MAX_SKB_FRAGS) < MAX_PENDING_REQS) &&
+ while ((nr_pending_reqs(netbk) + XEN_NETIF_NR_SLOTS_MIN
+ < MAX_PENDING_REQS) &&
!list_empty(&netbk->net_schedule_list)) {
struct xenvif *vif;
struct xen_netif_tx_request txreq;
- struct xen_netif_tx_request txfrags[MAX_SKB_FRAGS];
+ struct xen_netif_tx_request txfrags[max_skb_slots];
struct page *page;
struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX-1];
u16 pending_idx;
@@ -1321,7 +1460,8 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
continue;
}
- ret = netbk_count_requests(vif, &txreq, txfrags, work_to_do);
+ ret = netbk_count_requests(vif, &txreq, idx,
+ txfrags, work_to_do);
if (unlikely(ret < 0))
continue;
@@ -1348,7 +1488,7 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
pending_idx = netbk->pending_ring[index];
data_len = (txreq.size > PKT_PROT_LEN &&
- ret < MAX_SKB_FRAGS) ?
+ ret < XEN_NETIF_NR_SLOTS_MIN) ?
PKT_PROT_LEN : txreq.size;
skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
@@ -1398,6 +1538,7 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
memcpy(&netbk->pending_tx_info[pending_idx].req,
&txreq, sizeof(txreq));
netbk->pending_tx_info[pending_idx].vif = vif;
+ netbk->pending_tx_info[pending_idx].head = index;
*((u16 *)skb->data) = pending_idx;
__skb_put(skb, data_len);
@@ -1528,7 +1669,10 @@ static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx,
{
struct xenvif *vif;
struct pending_tx_info *pending_tx_info;
- pending_ring_idx_t index;
+ pending_ring_idx_t head;
+ u16 peek; /* peek into next tx request */
+
+ BUG_ON(netbk->mmap_pages[pending_idx] == (void *)(~0UL));
/* Already complete? */
if (netbk->mmap_pages[pending_idx] == NULL)
@@ -1537,19 +1681,40 @@ static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx,
pending_tx_info = &netbk->pending_tx_info[pending_idx];
vif = pending_tx_info->vif;
+ head = pending_tx_info->head;
- make_tx_response(vif, &pending_tx_info->req, status);
+ BUG_ON(!pending_tx_is_head(netbk, head));
+ BUG_ON(netbk->pending_ring[pending_index(head)] != pending_idx);
- index = pending_index(netbk->pending_prod++);
- netbk->pending_ring[index] = pending_idx;
+ do {
+ pending_ring_idx_t index;
+ pending_ring_idx_t idx = pending_index(head);
+ u16 info_idx = netbk->pending_ring[idx];
- xenvif_put(vif);
+ pending_tx_info = &netbk->pending_tx_info[info_idx];
+ make_tx_response(vif, &pending_tx_info->req, status);
+
+ /* Setting any number other than
+ * INVALID_PENDING_RING_IDX indicates this slot is
+ * starting a new packet / ending a previous packet.
+ */
+ pending_tx_info->head = 0;
+
+ index = pending_index(netbk->pending_prod++);
+ netbk->pending_ring[index] = netbk->pending_ring[info_idx];
- netbk->mmap_pages[pending_idx]->mapping = NULL;
+ xenvif_put(vif);
+
+ peek = netbk->pending_ring[pending_index(++head)];
+
+ } while (!pending_tx_is_head(netbk, peek));
+
+ netbk->mmap_pages[pending_idx]->mapping = 0;
put_page(netbk->mmap_pages[pending_idx]);
netbk->mmap_pages[pending_idx] = NULL;
}
+
static void make_tx_response(struct xenvif *vif,
struct xen_netif_tx_request *txp,
s8 st)
@@ -1602,8 +1767,9 @@ static inline int rx_work_todo(struct xen_netbk *netbk)
static inline int tx_work_todo(struct xen_netbk *netbk)
{
- if (((nr_pending_reqs(netbk) + MAX_SKB_FRAGS) < MAX_PENDING_REQS) &&
- !list_empty(&netbk->net_schedule_list))
+ if ((nr_pending_reqs(netbk) + XEN_NETIF_NR_SLOTS_MIN
+ < MAX_PENDING_REQS) &&
+ !list_empty(&netbk->net_schedule_list))
return 1;
return 0;
@@ -1686,6 +1852,13 @@ static int __init netback_init(void)
if (!xen_domain())
return -ENODEV;
+ if (max_skb_slots < XEN_NETIF_NR_SLOTS_MIN) {
+ printk(KERN_INFO
+ "xen-netback: max_skb_slots too small (%d), bump it to XEN_NETIF_NR_SLOTS_MIN (%d)\n",
+ max_skb_slots, XEN_NETIF_NR_SLOTS_MIN);
+ max_skb_slots = XEN_NETIF_NR_SLOTS_MIN;
+ }
+
xen_netbk_group_nr = num_online_cpus();
xen_netbk = vzalloc(sizeof(struct xen_netbk) * xen_netbk_group_nr);
if (!xen_netbk)
diff --git a/include/xen/interface/io/netif.h b/include/xen/interface/io/netif.h
index 58fadca..3ef3fe0 100644
--- a/include/xen/interface/io/netif.h
+++ b/include/xen/interface/io/netif.h
@@ -13,6 +13,24 @@
#include <xen/interface/grant_table.h>
/*
+ * Older implementation of Xen network frontend / backend has an
+ * implicit dependency on the MAX_SKB_FRAGS as the maximum number of
+ * ring slots a skb can use. Netfront / netback may not work as
+ * expected when frontend and backend have different MAX_SKB_FRAGS.
+ *
+ * A better approach is to add mechanism for netfront / netback to
+ * negotiate this value. However we cannot fix all possible
+ * frontends, so we need to define a value which states the minimum
+ * slots backend must support.
+ *
+ * The minimum value derives from older Linux kernel's MAX_SKB_FRAGS
+ * (18), which is proved to work with most frontends. Any new backend
+ * which doesn't negotiate with frontend should expect frontend to
+ * send a valid packet using slots up to this value.
+ */
+#define XEN_NETIF_NR_SLOTS_MIN 18
+
+/*
* Notifications after enqueuing any type of message should be conditional on
* the appropriate req_event or rsp_event field in the shared ring.
* If the client sends notification for rx requests then it should specify
--
1.7.10.4
* [PATCH net-next V7 4/4] xen-netback: don't disconnect frontend when seeing oversize packet
From: Wei Liu @ 2013-04-22 12:20 UTC
To: davem; +Cc: netdev, David Vrabel, Wei Liu, ian.campbell, xen-devel
Some frontend drivers are sending packets > 64 KiB in length. Such a
length overflows the 16-bit length field in the first slot, making the
following slots appear to have an invalid length.

Turn this back into a non-fatal error by dropping the packet. To avoid
the following slots hitting the fatal error case, consume all slots in
the packet.

This does not reopen the security hole in XSA-39: if the packet has an
invalid number of slots it will still hit the fatal error case.
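A sketch of the overflow (illustrative; the values are made up, and the
size field type matches xen_netif_tx_request):

#include <stdint.h>

static void overflow_demo(void)
{
	uint32_t frame_len = 65540;			/* a > 64 KiB frame */
	uint16_t first_size = (uint16_t)frame_len;	/* wraps to 4 */

	/* With first->size wrapped to 4, any following slot whose size
	 * is > 4 now looks "bigger than the frame". Previously this was
	 * a fatal error; this patch drops the packet instead, after
	 * consuming all of its slots.
	 */
	(void)first_size;
}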
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
drivers/net/xen-netback/netback.c | 22 ++++++++++++++++------
1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index d9292c5..a2865f1 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -975,12 +975,22 @@ static int netbk_count_requests(struct xenvif *vif,
memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
sizeof(*txp));
- if (txp->size > first->size) {
- netdev_err(vif->dev,
- "Invalid tx request, slot size %u > remaining size %u\n",
- txp->size, first->size);
- netbk_fatal_tx_err(vif);
- return -EIO;
+
+ /* If the guest submitted a frame >= 64 KiB then
+ * first->size overflowed and following slots will
+ * appear to be larger than the frame.
+ *
+ * This cannot be fatal error as there are buggy
+ * frontends that do this.
+ *
+ * Consume all slots and drop the packet.
+ */
+ if (!drop_err && txp->size > first->size) {
+ if (net_ratelimit())
+ netdev_dbg(vif->dev,
+ "Invalid tx request, slot size %u > remaining size %u\n",
+ txp->size, first->size);
+ drop_err = -EIO;
}
first->size -= txp->size;
--
1.7.10.4
* Re: [PATCH net-next V7 0/4] Bundle fixes for Xen netfront / netback
From: David Miller @ 2013-04-22 19:41 UTC
To: wei.liu2; +Cc: netdev, ian.campbell, xen-devel
From: Wei Liu <wei.liu2@citrix.com>
Date: Mon, 22 Apr 2013 13:20:39 +0100
> This series is now rebased onto net-next.
>
> We would also like to ask you to queue it for stable-ish tree. I can do the
> backport if necessary.
All applied, but this was a disaster.
If you want bug fixes propagated into -stable you submit them to 'net'
from the beginning.
There is no other method by which to do this.
By merging all of these changes to net-next, you will now have to get
them accepted again into 'net', and then (and only then) can you make
a request for -stable inclusion.
You are making so much extra work for me, please don't do this ever
again.
* Re: [PATCH net-next V7 0/4] Bundle fixes for Xen netfront / netback
From: Wei Liu @ 2013-04-22 19:53 UTC
To: David Miller
Cc: netdev@vger.kernel.org, Wei Liu, Ian Campbell,
xen-devel@lists.xen.org
On Mon, Apr 22, 2013 at 08:41:39PM +0100, David Miller wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> Date: Mon, 22 Apr 2013 13:20:39 +0100
>
> > This series is now rebased onto net-next.
> >
> > We would also like to ask you to queue it for stable-ish tree. I can do the
> > backport if necessary.
>
> All applied, but this was a disaster.
>
Thanks, I misunderstood the workflow.
> If you want bug fixes propagated into -stable you submit them to 'net'
> from the beginning.
>
> There is no other method by which to do this.
>
> By merging all of these changes to net-next, you will now have to get
> them accepted again into 'net', and then (and only then) can you make
> a request for -stable inclusion.
>
Understood. Will submit them against 'net' later.
> You are making so much extra work for me, please don't do this ever
> again.
>
Sorry for this.
Wei.
* Re: [PATCH net-next V7 2/4] xen-netfront: reduce gso_max_size to account for max TCP header
From: Jan Beulich @ 2013-04-30 12:09 UTC
To: ian.campbell, Wei Liu; +Cc: xen-devel
>>> On 22.04.13 at 14:20, Wei Liu <wei.liu2@citrix.com> wrote:
> The maximum packet including header that can be handled by netfront / netback
> wire format is 65535. Reduce gso_max_size accordingly.
>
> Drop skb and print warning when skb->len > 65535. This can 1) save the effort
> to send malformed packet to netback, 2) help spotting misconfiguration of
> netfront in the future.
>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
With this, shouldn't we see to get this ...
> --- a/include/xen/interface/io/netif.h
> +++ b/include/xen/interface/io/netif.h
> @@ -47,6 +47,7 @@
> #define _XEN_NETTXF_extra_info (3)
> #define XEN_NETTXF_extra_info (1U<<_XEN_NETTXF_extra_info)
>
> +#define XEN_NETIF_MAX_TX_SIZE 0xFFFF
> struct xen_netif_tx_request {
> grant_ref_t gref; /* Reference to buffer page */
> uint16_t offset; /* Offset within buffer page */
... into the canonical public headers too?
Jan
* Re: [PATCH net-next V7 3/4] xen-netback: coalesce slots in TX path and fix regressions
From: Jan Beulich @ 2013-04-30 12:59 UTC
To: Wei Liu; +Cc: netdev, xen-devel, ian.campbell, davem
>>> On 22.04.13 at 14:20, Wei Liu <wei.liu2@citrix.com> wrote:
> @@ -898,47 +928,78 @@ static void netbk_fatal_tx_err(struct xenvif *vif)
>
> static int netbk_count_requests(struct xenvif *vif,
> struct xen_netif_tx_request *first,
> + RING_IDX first_idx,
> struct xen_netif_tx_request *txp,
> int work_to_do)
> {
> RING_IDX cons = vif->tx.req_cons;
It looks bogus and confusing to me to pass first_idx into this
function when this really is the same as cons.
Jan
> - int frags = 0;
> + int slots = 0;
> + int drop_err = 0;
>
> if (!(first->flags & XEN_NETTXF_more_data))
> return 0;
>
> do {
> - if (frags >= work_to_do) {
> - netdev_err(vif->dev, "Need more frags\n");
> + if (slots >= work_to_do) {
> + netdev_err(vif->dev,
> + "Asked for %d slots but exceeds this limit\n",
> + work_to_do);
> netbk_fatal_tx_err(vif);
> return -ENODATA;
> }
>
> - if (unlikely(frags >= MAX_SKB_FRAGS)) {
> - netdev_err(vif->dev, "Too many frags\n");
> + /* This guest is really using too many slots and
> + * considered malicious.
> + */
> + if (unlikely(slots >= max_skb_slots)) {
> + netdev_err(vif->dev,
> + "Malicious frontend using %d slots, threshold %u\n",
> + slots, max_skb_slots);
> netbk_fatal_tx_err(vif);
> return -E2BIG;
> }
>
> - memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + frags),
> + /* Xen network protocol had implicit dependency on
> + * MAX_SKB_FRAGS. XEN_NETIF_NR_SLOTS_MIN is set to the
> + * historical MAX_SKB_FRAGS value 18 to honor the same
> + * behavior as before. Any packet using more than 18
> + * slots but less than max_skb_slots slots is dropped
> + */
> + if (!drop_err && slots >= XEN_NETIF_NR_SLOTS_MIN) {
> + if (net_ratelimit())
> + netdev_dbg(vif->dev,
> + "Too many slots (%d) exceeding limit (%d), dropping packet\n",
> + slots, XEN_NETIF_NR_SLOTS_MIN);
> + drop_err = -E2BIG;
> + }
> +
> + memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
> sizeof(*txp));
> if (txp->size > first->size) {
> - netdev_err(vif->dev, "Frag is bigger than frame.\n");
> + netdev_err(vif->dev,
> + "Invalid tx request, slot size %u > remaining size %u\n",
> + txp->size, first->size);
> netbk_fatal_tx_err(vif);
> return -EIO;
> }
>
> first->size -= txp->size;
> - frags++;
> + slots++;
>
> if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
> - netdev_err(vif->dev, "txp->offset: %x, size: %u\n",
> + netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
> txp->offset, txp->size);
> netbk_fatal_tx_err(vif);
> return -EINVAL;
> }
> } while ((txp++)->flags & XEN_NETTXF_more_data);
> - return frags;
> +
> + if (drop_err) {
> + netbk_tx_err(vif, first, first_idx + slots);
> + return drop_err;
> + }
> +
> + return slots;
> }
>
> static struct page *xen_netbk_alloc_page(struct xen_netbk *netbk,
* Re: [PATCH net-next V7 3/4] xen-netback: coalesce slots in TX path and fix regressions
From: Jan Beulich @ 2013-04-30 13:55 UTC
To: Wei Liu; +Cc: netdev, xen-devel, ian.campbell, davem
>>> On 22.04.13 at 14:20, Wei Liu <wei.liu2@citrix.com> wrote:
> @@ -1256,11 +1394,12 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
> struct sk_buff *skb;
> int ret;
>
> - while (((nr_pending_reqs(netbk) + MAX_SKB_FRAGS) < MAX_PENDING_REQS) &&
> + while ((nr_pending_reqs(netbk) + XEN_NETIF_NR_SLOTS_MIN
> + < MAX_PENDING_REQS) &&
> !list_empty(&netbk->net_schedule_list)) {
> struct xenvif *vif;
> struct xen_netif_tx_request txreq;
> - struct xen_netif_tx_request txfrags[MAX_SKB_FRAGS];
> + struct xen_netif_tx_request txfrags[max_skb_slots];
With max_skb_slots only having a lower limit enforced, this
basically gives the admin a way to crash the kernel without
necessarily being aware (and, considering that this would be
memory corruption, without necessarily being able to readily
connect the crash to the too-high module parameter).

I'm in any case of the opinion that dynamically sized stack
objects aren't really desirable to have in the kernel.

In any event, with a few tweaks netbk_count_requests() could
certainly be made to not touch txp-s past XEN_NETIF_NR_SLOTS_MIN
(maybe XEN_NETIF_NR_SLOTS_MIN + 1); see the sketch below.
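One possible shape (a sketch only, not a submitted patch; the exact
bound is an assumption):

-	struct xen_netif_tx_request txfrags[max_skb_slots];
+	/* Bound the on-stack array by the protocol minimum rather than
+	 * the admin-controlled module parameter, so an over-large
+	 * max_skb_slots cannot overflow the stack.
+	 */
+	struct xen_netif_tx_request txfrags[XEN_NETIF_NR_SLOTS_MIN + 1];

with netbk_count_requests() adjusted to stop writing txp entries at that
bound while still consuming (and then dropping) the rest of an over-long
packet.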
Jan
> struct page *page;
> struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX-1];
> u16 pending_idx;
* Re: [PATCH net-next V7 3/4] xen-netback: coalesce slots in TX path and fix regressions
From: Jan Beulich @ 2013-04-30 14:04 UTC
To: Wei Liu; +Cc: ian.campbell, xen-devel
>>> On 22.04.13 at 14:20, Wei Liu <wei.liu2@citrix.com> wrote:
> --- a/include/xen/interface/io/netif.h
> +++ b/include/xen/interface/io/netif.h
> @@ -13,6 +13,24 @@
> #include <xen/interface/grant_table.h>
>
> /*
> + * Older implementation of Xen network frontend / backend has an
> + * implicit dependency on the MAX_SKB_FRAGS as the maximum number of
> + * ring slots a skb can use. Netfront / netback may not work as
> + * expected when frontend and backend have different MAX_SKB_FRAGS.
> + *
> + * A better approach is to add mechanism for netfront / netback to
> + * negotiate this value. However we cannot fix all possible
> + * frontends, so we need to define a value which states the minimum
> + * slots backend must support.
> + *
> + * The minimum value derives from older Linux kernel's MAX_SKB_FRAGS
> + * (18), which is proved to work with most frontends. Any new backend
> + * which doesn't negotiate with frontend should expect frontend to
> + * send a valid packet using slots up to this value.
> + */
> +#define XEN_NETIF_NR_SLOTS_MIN 18
> +
> +/*
> * Notifications after enqueuing any type of message should be conditional on
> * the appropriate req_event or rsp_event field in the shared ring.
> * If the client sends notification for rx requests then it should specify
Just like with the other public header change in this series - care
to submit a patch against xen-unstable, to have the master copy
of the header updated?
Jan
* Re: [PATCH net-next V7 3/4] xen-netback: coalesce slots in TX path and fix regressions
From: Wei Liu @ 2013-04-30 15:44 UTC
To: Jan Beulich
Cc: netdev@vger.kernel.org, xen-devel@lists.xen.org, Wei Liu,
Ian Campbell, davem@davemloft.net
On Tue, Apr 30, 2013 at 01:59:59PM +0100, Jan Beulich wrote:
> >>> On 22.04.13 at 14:20, Wei Liu <wei.liu2@citrix.com> wrote:
> > @@ -898,47 +928,78 @@ static void netbk_fatal_tx_err(struct xenvif *vif)
> >
> > static int netbk_count_requests(struct xenvif *vif,
> > struct xen_netif_tx_request *first,
> > + RING_IDX first_idx,
> > struct xen_netif_tx_request *txp,
> > int work_to_do)
> > {
> > RING_IDX cons = vif->tx.req_cons;
>
> It looks bogus and confusing to me to pass first_idx into this
> function when this really is the same as cons.
>
So the idea came from David Vrabel and went through two iterations.
Looking at the code, he might have been mimicking the other error handling
paths up in the caller, i.e. xen_netbk_tx_build_gops.

I think it's safe to remove first_idx. I just tested replacing first_idx
with cons, and everything worked as before.
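The change would be roughly (a sketch against this series, untested
beyond the smoke test above):

 static int netbk_count_requests(struct xenvif *vif,
 				struct xen_netif_tx_request *first,
-				RING_IDX first_idx,
 				struct xen_netif_tx_request *txp,
 				int work_to_do)
 ...
-	netbk_tx_err(vif, first, first_idx + slots);
+	netbk_tx_err(vif, first, cons + slots);	/* cons == vif->tx.req_cons */

with the caller in xen_netbk_tx_build_gops() updated to drop the idx
argument.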
Wei.
> Jan
* Re: [PATCH net-next V7 3/4] xen-netback: coalesce slots in TX path and fix regressions
From: Wei Liu @ 2013-04-30 15:45 UTC
To: Jan Beulich; +Cc: Wei Liu, Ian Campbell, xen-devel@lists.xen.org
On Tue, Apr 30, 2013 at 03:04:48PM +0100, Jan Beulich wrote:
> >>> On 22.04.13 at 14:20, Wei Liu <wei.liu2@citrix.com> wrote:
> > --- a/include/xen/interface/io/netif.h
> > +++ b/include/xen/interface/io/netif.h
> > @@ -13,6 +13,24 @@
> > #include <xen/interface/grant_table.h>
> >
> > /*
> > + * Older implementation of Xen network frontend / backend has an
> > + * implicit dependency on the MAX_SKB_FRAGS as the maximum number of
> > + * ring slots a skb can use. Netfront / netback may not work as
> > + * expected when frontend and backend have different MAX_SKB_FRAGS.
> > + *
> > + * A better approach is to add mechanism for netfront / netback to
> > + * negotiate this value. However we cannot fix all possible
> > + * frontends, so we need to define a value which states the minimum
> > + * slots backend must support.
> > + *
> > + * The minimum value derives from older Linux kernel's MAX_SKB_FRAGS
> > + * (18), which is proved to work with most frontends. Any new backend
> > + * which doesn't negotiate with frontend should expect frontend to
> > + * send a valid packet using slots up to this value.
> > + */
> > +#define XEN_NETIF_NR_SLOTS_MIN 18
> > +
> > +/*
> > * Notifications after enqueuing any type of message should be conditional on
> > * the appropriate req_event or rsp_event field in the shared ring.
> > * If the client sends notification for rx requests then it should specify
>
> Just like with the other public header change in this series - care
> to submit a patch against xen-unstable, to have the master copy
> of the header updated?
>
Re all the header changes: I will post a separate patch for Xen.
Wei.
> Jan
* Re: [PATCH net-next V7 0/4] Bundle fixes for Xen netfront / netback
From: Matt Wilson @ 2014-02-02 7:23 UTC
To: Wei Liu; +Cc: Stefan Bader, xen-devel@lists.xen.org
On Mon, Apr 22, 2013 at 08:53:35PM +0100, Wei Liu wrote:
> On Mon, Apr 22, 2013 at 08:41:39PM +0100, David Miller wrote:
> > From: Wei Liu <wei.liu2@citrix.com>
> > Date: Mon, 22 Apr 2013 13:20:39 +0100
> >
> > > This series is now rebased onto net-next.
> > >
> > > We would also like to ask you to queue it for stable-ish tree. I can do the
> > > backport if necessary.
> >
> > All applied, but this was a disaster.
> >
>
> Thanks, I misunderstood the workflow.
>
> > If you want bug fixes propagated into -stable you submit them to 'net'
> > from the beginning.
> >
> > There is no other method by which to do this.
> >
> > By merging all of these changes to net-next, you will now have to get
> > them accepted again into 'net', and then (and only then) can you make
> > a request for -stable inclusion.
> >
>
> Understood. Will submit them against 'net' later.
Did this ever happen? Is 9ecd1a75 (xen-netfront: reduce gso_max_size
to account for max TCP header) at all related to the "skb rides the
rocket" TX packet drops reported against 3.8.x kernels?
https://bugs.launchpad.net/ubuntu/+source/linux-lts-raring/+bug/1195474
It seems like there are still some outstanding bugs in various -stable
releases.
--msw
* Re: [PATCH net-next V7 0/4] Bundle fixes for Xen netfront / netback
From: Wei Liu @ 2014-02-03 10:30 UTC
To: Matt Wilson; +Cc: Wei Liu, Stefan Bader, xen-devel@lists.xen.org
On Sat, Feb 01, 2014 at 11:23:25PM -0800, Matt Wilson wrote:
> On Mon, Apr 22, 2013 at 08:53:35PM +0100, Wei Liu wrote:
> > On Mon, Apr 22, 2013 at 08:41:39PM +0100, David Miller wrote:
> > > From: Wei Liu <wei.liu2@citrix.com>
> > > Date: Mon, 22 Apr 2013 13:20:39 +0100
> > >
> > > > This series is now rebased onto net-next.
> > > >
> > > > We would also like to ask you to queue it for stable-ish tree. I can do the
> > > > backport if necessary.
> > >
> > > All applied, but this was a disaster.
> > >
> >
> > Thanks, I misunderstood the workflow.
> >
> > > If you want bug fixes propagated into -stable you submit them to 'net'
> > > from the beginning.
> > >
> > > There is no other method by which to do this.
> > >
> > > By merging all of these changes to net-next, you will now have to get
> > > them accepted again into 'net', and then (and only then) can you make
> > > a request for -stable inclusion.
> > >
> >
> > Understood. Will submit them against 'net' later.
>
> Did this ever happen? Is 9ecd1a75 (xen-netfront: reduce gso_max_size
> to account for max TCP header) at all related to the "skb rides the
> rocket" related TX packet drops reported against 3.8.x kernels?
>
> https://bugs.launchpad.net/ubuntu/+source/linux-lts-raring/+bug/1195474
>
> It seems like there are still some outstanding bugs in various -stable
> releases.
>
As far as I can remember, Ian and I requested the relevant patches be
backported in May, after the series had settled in mainline for some time.

<1369734465.3469.52.camel@zakaz.uk.xensource.com>

The series was backported to the 3.9.y stable tree; 3.8.y didn't pick it
up.
Wei.
> --msw
* Re: [PATCH net-next V7 0/4] Bundle fixes for Xen netfront / netback
From: Ian Campbell @ 2014-02-03 10:39 UTC
To: Wei Liu; +Cc: Matt Wilson, Stefan Bader, xen-devel@lists.xen.org
On Mon, 2014-02-03 at 10:30 +0000, Wei Liu wrote:
> On Sat, Feb 01, 2014 at 11:23:25PM -0800, Matt Wilson wrote:
> > On Mon, Apr 22, 2013 at 08:53:35PM +0100, Wei Liu wrote:
> > > On Mon, Apr 22, 2013 at 08:41:39PM +0100, David Miller wrote:
> > > > From: Wei Liu <wei.liu2@citrix.com>
> > > > Date: Mon, 22 Apr 2013 13:20:39 +0100
> > > >
> > > > > This series is now rebased onto net-next.
> > > > >
> > > > > We would also like to ask you to queue it for stable-ish tree. I can do the
> > > > > backport if necessary.
> > > >
> > > > All applied, but this was a disaster.
> > > >
> > >
> > > Thanks, I misunderstood the workflow.
> > >
> > > > If you want bug fixes propagated into -stable you submit them to 'net'
> > > > from the beginning.
> > > >
> > > > There is no other method by which to do this.
> > > >
> > > > By merging all of these changes to net-next, you will now have to get
> > > > them accepted again into 'net', and then (and only then) can you make
> > > > a request for -stable inclusion.
> > > >
> > >
> > > Understood. Will submit them against 'net' later.
> >
> > Did this ever happen? Is 9ecd1a75 (xen-netfront: reduce gso_max_size
> > to account for max TCP header) at all related to the "skb rides the
> > rocket" related TX packet drops reported against 3.8.x kernels?
> >
> > https://bugs.launchpad.net/ubuntu/+source/linux-lts-raring/+bug/1195474
> >
> > It seems like there are still some outstanding bugs in various -stable
> > releases.
> >
>
> As far as I can remember Ian and I requested relavant patches be
> backported in May, after these series settled in mainline for some time.
>
> <1369734465.3469.52.camel@zakaz.uk.xensource.com>
>
> These series was backported to 3.9.y-stable tree. 3.8.y didn't pick them
> up.
The stable guys don't maintain every tree indefinitely, usually only for
a couple of releases after the next mainline release or something (I
suppose you can find the official policy online somewhere). Presumably
these fixes came too late for the 3.8.y branch.
Longterm stable trees are an exception and get longer backports; I don't
think 3.8 is one of those, though.
If anyone wants further backports then they will need to speak to the
Linux stable maintainers, although they should probably expect a "this
stable tree is now closed" type response for 3.8.
Or perhaps the above link implies that Canonical are supporting their
own LTS of Linux 3.8.y -- in which case the request should be made to
whoever that maintainer is.
Ian.
* Re: [PATCH net-next V7 0/4] Bundle fixes for Xen netfront / netback
From: Stefan Bader @ 2014-02-03 10:49 UTC
To: Ian Campbell, Wei Liu; +Cc: Matt Wilson, xen-devel@lists.xen.org
On 03.02.2014 11:39, Ian Campbell wrote:
> On Mon, 2014-02-03 at 10:30 +0000, Wei Liu wrote:
>> On Sat, Feb 01, 2014 at 11:23:25PM -0800, Matt Wilson wrote:
>>> On Mon, Apr 22, 2013 at 08:53:35PM +0100, Wei Liu wrote:
>>>> On Mon, Apr 22, 2013 at 08:41:39PM +0100, David Miller wrote:
>>>>> From: Wei Liu <wei.liu2@citrix.com>
>>>>> Date: Mon, 22 Apr 2013 13:20:39 +0100
>>>>>
>>>>>> This series is now rebased onto net-next.
>>>>>>
>>>>>> We would also like to ask you to queue it for stable-ish tree. I can do the
>>>>>> backport if necessary.
>>>>>
>>>>> All applied, but this was a disaster.
>>>>>
>>>>
>>>> Thanks, I misunderstood the workflow.
>>>>
>>>>> If you want bug fixes propagated into -stable you submit them to 'net'
>>>>> from the beginning.
>>>>>
>>>>> There is no other method by which to do this.
>>>>>
>>>>> By merging all of these changes to net-next, you will now have to get
>>>>> them accepted again into 'net', and then (and only then) can you make
>>>>> a request for -stable inclusion.
>>>>>
>>>>
>>>> Understood. Will submit them against 'net' later.
>>>
>>> Did this ever happen? Is 9ecd1a75 (xen-netfront: reduce gso_max_size
>>> to account for max TCP header) at all related to the "skb rides the
>>> rocket" related TX packet drops reported against 3.8.x kernels?
>>>
>>> https://bugs.launchpad.net/ubuntu/+source/linux-lts-raring/+bug/1195474
>>>
>>> It seems like there are still some outstanding bugs in various -stable
>>> releases.
>>>
>>
>> As far as I can remember Ian and I requested relavant patches be
>> backported in May, after these series settled in mainline for some time.
>>
>> <1369734465.3469.52.camel@zakaz.uk.xensource.com>
>>
>> These series was backported to 3.9.y-stable tree. 3.8.y didn't pick them
>> up.
>
> The stable guys don't maintain every tree indefinitely, usually only for
> a couple of releases after the next mainline release or something (I
> suppose you can find the official policy online somewhere). Presumably
> these fixes came too late for the 3.8.y branch.
>
> Longterm stable trees are an exception and get longer backports, I don't
> think 3.8 is one of those though.
>
> If anyone wants further backports then they will need to speak to the
> Linux stable maintainers, although they should probably expect a "this
> stable tree is now closed" type response for 3.8.
>
> Or perhaps the above link implies that Canonical are supporting their
> own LTS of Linux 3.8.y -- in which case the request should be made to
> whoever that maintainer is.
>
> Ian.
>
Yeah, it would be a Canonical-maintained longterm tree. I am just checking
to verify which ones are missing the series, and I will send out a request
to pull them in after that.
-Stefan
* Re: [PATCH net-next V7 0/4] Bundle fixes for Xen netfront / netback
From: Stefan Bader @ 2014-02-03 11:08 UTC
To: Ian Campbell, Wei Liu; +Cc: Matt Wilson, xen-devel@lists.xen.org
On 03.02.2014 11:49, Stefan Bader wrote:
> On 03.02.2014 11:39, Ian Campbell wrote:
>> On Mon, 2014-02-03 at 10:30 +0000, Wei Liu wrote:
>>> On Sat, Feb 01, 2014 at 11:23:25PM -0800, Matt Wilson wrote:
>>>> On Mon, Apr 22, 2013 at 08:53:35PM +0100, Wei Liu wrote:
>>>>> On Mon, Apr 22, 2013 at 08:41:39PM +0100, David Miller wrote:
>>>>>> From: Wei Liu <wei.liu2@citrix.com>
>>>>>> Date: Mon, 22 Apr 2013 13:20:39 +0100
>>>>>>
>>>>>>> This series is now rebased onto net-next.
>>>>>>>
>>>>>>> We would also like to ask you to queue it for stable-ish tree. I can do the
>>>>>>> backport if necessary.
>>>>>>
>>>>>> All applied, but this was a disaster.
>>>>>>
>>>>>
>>>>> Thanks, I misunderstood the workflow.
>>>>>
>>>>>> If you want bug fixes propagated into -stable you submit them to 'net'
>>>>>> from the beginning.
>>>>>>
>>>>>> There is no other method by which to do this.
>>>>>>
>>>>>> By merging all of these changes to net-next, you will now have to get
>>>>>> them accepted again into 'net', and then (and only then) can you make
>>>>>> a request for -stable inclusion.
>>>>>>
>>>>>
>>>>> Understood. Will submit them against 'net' later.
>>>>
>>>> Did this ever happen? Is 9ecd1a75 (xen-netfront: reduce gso_max_size
>>>> to account for max TCP header) at all related to the "skb rides the
>>>> rocket" related TX packet drops reported against 3.8.x kernels?
>>>>
>>>> https://bugs.launchpad.net/ubuntu/+source/linux-lts-raring/+bug/1195474
>>>>
>>>> It seems like there are still some outstanding bugs in various -stable
>>>> releases.
>>>>
>>>
>>> As far as I can remember Ian and I requested relavant patches be
>>> backported in May, after these series settled in mainline for some time.
>>>
>>> <1369734465.3469.52.camel@zakaz.uk.xensource.com>
>>>
>>> These series was backported to 3.9.y-stable tree. 3.8.y didn't pick them
>>> up.
>>
>> The stable guys don't maintain every tree indefinitely, usually only for
>> a couple of releases after the next mainline release or something (I
>> suppose you can find the official policy online somewhere). Presumably
>> these fixes came too late for the 3.8.y branch.
>>
>> Longterm stable trees are an exception and get longer backports, I don't
>> think 3.8 is one of those though.
>>
>> If anyone wants further backports then they will need to speak to the
>> Linux stable maintainers, although they should probably expect a "this
>> stable tree is now closed" type response for 3.8.
>>
>> Or perhaps the above link implies that Canonical are supporting their
>> own LTS of Linux 3.8.y -- in which case the request should be made to
>> whoever that maintainer is.
>>
>> Ian.
>>
> Yeah, it would be a Canonical maintained longterm tree. I am just checking to
> verify which ones are missing the series. I will send out a request to pull them
> in after that.
>
> -Stefan
>
It turns out that most of the series was applied to the 3.8.y.z longterm
tree we look after, and through that it made its way into the Raring
kernel, which is based on it. Only the first patch of the series fails to
apply, but that one only changes an error message, which actually looks
to be correct already in that tree.
http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.8.y
-Stefan
* Re: [PATCH net-next V7 0/4] Bundle fixes for Xen netfront / netback
From: Ian Campbell @ 2014-02-03 11:12 UTC
To: Stefan Bader; +Cc: Matt Wilson, Wei Liu, xen-devel@lists.xen.org
On Mon, 2014-02-03 at 12:08 +0100, Stefan Bader wrote:
> It turns out that most of the series was applied to the 3.8.y.z longterm we look
> after and through that made its way into the Raring kernel which is based on
> that. Only the first patch of the series fails to apply. But that is only
> changing a error message which actually looks to be correct in that series.
>
> http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.8.y
So all is good -- thanks!
Ian.