* [PATCH net-next v1 1/4] virtio-net: fix overflow inside virtnet_rq_alloc
2024-10-29 8:46 [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default Xuan Zhuo
@ 2024-10-29 8:46 ` Xuan Zhuo
2024-11-05 9:22 ` Jason Wang
2024-11-06 7:37 ` Michael S. Tsirkin
2024-10-29 8:46 ` [PATCH net-next v1 2/4] virtio_net: big mode skip the unmap check Xuan Zhuo
` (4 subsequent siblings)
5 siblings, 2 replies; 15+ messages in thread
From: Xuan Zhuo @ 2024-10-29 8:46 UTC (permalink / raw)
To: netdev
Cc: Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Eugenio Pérez,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, virtualization, Si-Wei Liu, Darren Kenny
When the frag only gets a single page, an overflow may occur and lead
to regressions on the VM. In particular, if the sysctl
net.core.high_order_alloc_disable value is 1, the frag always gets a
single page on refill.

This can cause reliable crashes or scp failures (e.g. when copying a
100M file to the VM).
The issue is that the virtnet_rq_dma takes up 16 bytes at the beginning
of a new frag. When the frag size is larger than PAGE_SIZE,
everything is fine. However, if the frag is only one page and the
total size of the buffer and virtnet_rq_dma is larger than one page, an
overflow may occur.
Commit f9dac92ba908 ("virtio_ring: enable premapped mode whatever
use_dma_api") introduced this problem, and several commits were
reverted to work around it in the last Linux release. Now we re-enable
premapped mode and fix the bug directly.

Here, when the frag is too small to also hold the virtnet_rq_dma
metadata, we reduce the buffer length to fix the problem.
Reported-by: "Si-Wei Liu" <si-wei.liu@oracle.com>
Tested-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
drivers/net/virtio_net.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 792e9eadbfc3..d50c1940eb23 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -926,9 +926,6 @@ static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
void *buf, *head;
dma_addr_t addr;
- if (unlikely(!skb_page_frag_refill(size, alloc_frag, gfp)))
- return NULL;
-
head = page_address(alloc_frag->page);
if (rq->do_dma) {
@@ -2423,6 +2420,9 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
len = SKB_DATA_ALIGN(len) +
SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+ if (unlikely(!skb_page_frag_refill(len, &rq->alloc_frag, gfp)))
+ return -ENOMEM;
+
buf = virtnet_rq_alloc(rq, len, gfp);
if (unlikely(!buf))
return -ENOMEM;
@@ -2525,6 +2525,12 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
*/
len = get_mergeable_buf_len(rq, &rq->mrg_avg_pkt_len, room);
+ if (unlikely(!skb_page_frag_refill(len + room, alloc_frag, gfp)))
+ return -ENOMEM;
+
+ if (!alloc_frag->offset && len + room + sizeof(struct virtnet_rq_dma) > alloc_frag->size)
+ len -= sizeof(struct virtnet_rq_dma);
+
buf = virtnet_rq_alloc(rq, len + room, gfp);
if (unlikely(!buf))
return -ENOMEM;
--
2.32.0.3.g01195cf9f
^ permalink raw reply related [flat|nested] 15+ messages in thread

* Re: [PATCH net-next v1 1/4] virtio-net: fix overflow inside virtnet_rq_alloc
2024-10-29 8:46 ` [PATCH net-next v1 1/4] virtio-net: fix overflow inside virtnet_rq_alloc Xuan Zhuo
@ 2024-11-05 9:22 ` Jason Wang
2024-11-06 7:37 ` Michael S. Tsirkin
1 sibling, 0 replies; 15+ messages in thread
From: Jason Wang @ 2024-11-05 9:22 UTC (permalink / raw)
To: Xuan Zhuo
Cc: netdev, Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
virtualization, Si-Wei Liu, Darren Kenny
On Tue, Oct 29, 2024 at 4:47 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> When the frag only gets a single page, an overflow may occur and lead
> to regressions on the VM. In particular, if the sysctl
> net.core.high_order_alloc_disable value is 1, the frag always gets a
> single page on refill.
>
> This can cause reliable crashes or scp failures (e.g. when copying a
> 100M file to the VM).
>
> The issue is that the virtnet_rq_dma takes up 16 bytes at the beginning
> of a new frag. When the frag size is larger than PAGE_SIZE,
> everything is fine. However, if the frag is only one page and the
> total size of the buffer and virtnet_rq_dma is larger than one page, an
> overflow may occur.
>
> Commit f9dac92ba908 ("virtio_ring: enable premapped mode whatever
> use_dma_api") introduced this problem, and several commits were
> reverted to work around it in the last Linux release. Now we
> re-enable premapped mode and fix the bug directly.
>
> Here, when the frag is too small to also hold the virtnet_rq_dma
> metadata, we reduce the buffer length to fix the problem.
>
> Reported-by: "Si-Wei Liu" <si-wei.liu@oracle.com>
> Tested-by: Darren Kenny <darren.kenny@oracle.com>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Thanks
* Re: [PATCH net-next v1 1/4] virtio-net: fix overflow inside virtnet_rq_alloc
2024-10-29 8:46 ` [PATCH net-next v1 1/4] virtio-net: fix overflow inside virtnet_rq_alloc Xuan Zhuo
2024-11-05 9:22 ` Jason Wang
@ 2024-11-06 7:37 ` Michael S. Tsirkin
1 sibling, 0 replies; 15+ messages in thread
From: Michael S. Tsirkin @ 2024-11-06 7:37 UTC (permalink / raw)
To: Xuan Zhuo
Cc: netdev, Jason Wang, Eugenio Pérez, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
virtualization, Si-Wei Liu, Darren Kenny
On Tue, Oct 29, 2024 at 04:46:12PM +0800, Xuan Zhuo wrote:
> When the frag only gets a single page, an overflow may occur and lead
> to regressions on the VM. In particular, if the sysctl
> net.core.high_order_alloc_disable value is 1, the frag always gets a
> single page on refill.
>
> This can cause reliable crashes or scp failures (e.g. when copying a
> 100M file to the VM).
>
> The issue is that the virtnet_rq_dma takes up 16 bytes at the beginning
> of a new frag. When the frag size is larger than PAGE_SIZE,
> everything is fine. However, if the frag is only one page and the
> total size of the buffer and virtnet_rq_dma is larger than one page, an
> overflow may occur.
>
> Commit f9dac92ba908 ("virtio_ring: enable premapped mode whatever
> use_dma_api") introduced this problem, and several commits were
> reverted to work around it in the last Linux release. Now we
> re-enable premapped mode and fix the bug directly.
>
> Here, when the frag is too small to also hold the virtnet_rq_dma
> metadata, we reduce the buffer length to fix the problem.
>
> Reported-by: "Si-Wei Liu" <si-wei.liu@oracle.com>
> Tested-by: Darren Kenny <darren.kenny@oracle.com>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
This one belongs in net I feel. However, patches 2 and on
depend on it not to cause regressions. So I suggest merging it
on both branches, git will figure out the conflict I think.
> ---
> drivers/net/virtio_net.c | 12 +++++++++---
> 1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 792e9eadbfc3..d50c1940eb23 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -926,9 +926,6 @@ static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
> void *buf, *head;
> dma_addr_t addr;
>
> - if (unlikely(!skb_page_frag_refill(size, alloc_frag, gfp)))
> - return NULL;
> -
> head = page_address(alloc_frag->page);
>
> if (rq->do_dma) {
> @@ -2423,6 +2420,9 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
> len = SKB_DATA_ALIGN(len) +
> SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
>
> + if (unlikely(!skb_page_frag_refill(len, &rq->alloc_frag, gfp)))
> + return -ENOMEM;
> +
> buf = virtnet_rq_alloc(rq, len, gfp);
> if (unlikely(!buf))
> return -ENOMEM;
> @@ -2525,6 +2525,12 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
> */
> len = get_mergeable_buf_len(rq, &rq->mrg_avg_pkt_len, room);
>
> + if (unlikely(!skb_page_frag_refill(len + room, alloc_frag, gfp)))
> + return -ENOMEM;
> +
> + if (!alloc_frag->offset && len + room + sizeof(struct virtnet_rq_dma) > alloc_frag->size)
> + len -= sizeof(struct virtnet_rq_dma);
> +
> buf = virtnet_rq_alloc(rq, len + room, gfp);
> if (unlikely(!buf))
> return -ENOMEM;
> --
> 2.32.0.3.g01195cf9f
* [PATCH net-next v1 2/4] virtio_net: big mode skip the unmap check
2024-10-29 8:46 [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default Xuan Zhuo
2024-10-29 8:46 ` [PATCH net-next v1 1/4] virtio-net: fix overflow inside virtnet_rq_alloc Xuan Zhuo
@ 2024-10-29 8:46 ` Xuan Zhuo
2024-10-29 8:46 ` [PATCH net-next v1 3/4] virtio_net: enable premapped mode for merge and small by default Xuan Zhuo
` (3 subsequent siblings)
5 siblings, 0 replies; 15+ messages in thread
From: Xuan Zhuo @ 2024-10-29 8:46 UTC (permalink / raw)
To: netdev
Cc: Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Eugenio Pérez,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, virtualization, Darren Kenny
The virtio-net big mode does not enable premapped mode, so there is no
need to check whether to unmap. A subsequent commit will remove the
failover code that handled failure to enable premapped mode for the
merge and small modes, so the do_dma check must be removed from the
big mode path.
Tested-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
drivers/net/virtio_net.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d50c1940eb23..7557808e8c1f 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -987,7 +987,7 @@ static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
return;
}
- if (rq->do_dma)
+ if (!vi->big_packets || vi->mergeable_rx_bufs)
virtnet_rq_unmap(rq, buf, 0);
virtnet_rq_free_buf(vi, rq, buf);
@@ -2712,7 +2712,7 @@ static int virtnet_receive_packets(struct virtnet_info *vi,
}
} else {
while (packets < budget &&
- (buf = virtnet_rq_get_buf(rq, &len, NULL)) != NULL) {
+ (buf = virtqueue_get_buf(rq->vq, &len)) != NULL) {
receive_buf(vi, rq, buf, len, NULL, xdp_xmit, stats);
packets++;
}
--
2.32.0.3.g01195cf9f
* [PATCH net-next v1 3/4] virtio_net: enable premapped mode for merge and small by default
2024-10-29 8:46 [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default Xuan Zhuo
2024-10-29 8:46 ` [PATCH net-next v1 1/4] virtio-net: fix overflow inside virtnet_rq_alloc Xuan Zhuo
2024-10-29 8:46 ` [PATCH net-next v1 2/4] virtio_net: big mode skip the unmap check Xuan Zhuo
@ 2024-10-29 8:46 ` Xuan Zhuo
2024-10-29 8:46 ` [PATCH net-next v1 4/4] virtio_net: rx remove premapped failover code Xuan Zhuo
` (2 subsequent siblings)
5 siblings, 0 replies; 15+ messages in thread
From: Xuan Zhuo @ 2024-10-29 8:46 UTC (permalink / raw)
To: netdev
Cc: Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Eugenio Pérez,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, virtualization, Darren Kenny
Currently, the virtio core performs a DMA operation for each buffer,
even though the same page may be mapped multiple times. In premapped
mode, we can perform just one DMA operation for the pages of the alloc
frag, which is beneficial when an IOMMU is in use.
kernel command line: intel_iommu=on iommu.passthrough=0
        | strict=0    | strict=1
 Before | 775496pps   | 428614pps
 After  | 1109316pps  | 742853pps
In 6.11, we disabled this feature because of a regression [1]. Now, we
fix the problem and re-enable it.
[1]: http://lore.kernel.org/all/8b20cc28-45a9-4643-8e87-ba164a540c0a@oracle.com
Tested-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
drivers/net/virtio_net.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7557808e8c1f..ea433e9650eb 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -6107,6 +6107,17 @@ static int virtnet_alloc_queues(struct virtnet_info *vi)
return -ENOMEM;
}
+static void virtnet_rq_set_premapped(struct virtnet_info *vi)
+{
+ int i;
+
+ for (i = 0; i < vi->max_queue_pairs; i++) {
+ /* error should never happen */
+ BUG_ON(virtqueue_set_dma_premapped(vi->rq[i].vq));
+ vi->rq[i].do_dma = true;
+ }
+}
+
static int init_vqs(struct virtnet_info *vi)
{
int ret;
@@ -6120,6 +6131,10 @@ static int init_vqs(struct virtnet_info *vi)
if (ret)
goto err_free;
+ /* disable for big mode */
+ if (!vi->big_packets || vi->mergeable_rx_bufs)
+ virtnet_rq_set_premapped(vi);
+
cpus_read_lock();
virtnet_set_affinity(vi);
cpus_read_unlock();
--
2.32.0.3.g01195cf9f
* [PATCH net-next v1 4/4] virtio_net: rx remove premapped failover code
2024-10-29 8:46 [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default Xuan Zhuo
` (2 preceding siblings ...)
2024-10-29 8:46 ` [PATCH net-next v1 3/4] virtio_net: enable premapped mode for merge and small by default Xuan Zhuo
@ 2024-10-29 8:46 ` Xuan Zhuo
2024-11-05 9:23 ` Jason Wang
2024-11-05 2:46 ` [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default Jakub Kicinski
2024-11-05 11:00 ` patchwork-bot+netdevbpf
5 siblings, 1 reply; 15+ messages in thread
From: Xuan Zhuo @ 2024-10-29 8:46 UTC (permalink / raw)
To: netdev
Cc: Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Eugenio Pérez,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, virtualization, Darren Kenny
Now, premapped mode can be enabled unconditionally, so we can remove
the failover code for the merge and small modes.

The virtnet_rq_xxx() helpers are only used when the mode uses
premapping; a check is added to prevent misuse of these APIs.
Tested-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
drivers/net/virtio_net.c | 90 ++++++++++++++++++++--------------------
1 file changed, 44 insertions(+), 46 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index ea433e9650eb..169c1aa87da5 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -356,9 +356,6 @@ struct receive_queue {
struct xdp_rxq_info xsk_rxq_info;
struct xdp_buff **xsk_buffs;
-
- /* Do dma by self */
- bool do_dma;
};
/* This structure can contain rss message with maximum settings for indirection table and keysize
@@ -856,11 +853,14 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
static void virtnet_rq_unmap(struct receive_queue *rq, void *buf, u32 len)
{
+ struct virtnet_info *vi = rq->vq->vdev->priv;
struct page *page = virt_to_head_page(buf);
struct virtnet_rq_dma *dma;
void *head;
int offset;
+ BUG_ON(vi->big_packets && !vi->mergeable_rx_bufs);
+
head = page_address(page);
dma = head;
@@ -885,10 +885,13 @@ static void virtnet_rq_unmap(struct receive_queue *rq, void *buf, u32 len)
static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx)
{
+ struct virtnet_info *vi = rq->vq->vdev->priv;
void *buf;
+ BUG_ON(vi->big_packets && !vi->mergeable_rx_bufs);
+
buf = virtqueue_get_buf_ctx(rq->vq, len, ctx);
- if (buf && rq->do_dma)
+ if (buf)
virtnet_rq_unmap(rq, buf, *len);
return buf;
@@ -896,15 +899,13 @@ static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx)
static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len)
{
+ struct virtnet_info *vi = rq->vq->vdev->priv;
struct virtnet_rq_dma *dma;
dma_addr_t addr;
u32 offset;
void *head;
- if (!rq->do_dma) {
- sg_init_one(rq->sg, buf, len);
- return;
- }
+ BUG_ON(vi->big_packets && !vi->mergeable_rx_bufs);
head = page_address(rq->alloc_frag.page);
@@ -922,50 +923,51 @@ static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len)
static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
{
struct page_frag *alloc_frag = &rq->alloc_frag;
+ struct virtnet_info *vi = rq->vq->vdev->priv;
struct virtnet_rq_dma *dma;
void *buf, *head;
dma_addr_t addr;
+ BUG_ON(vi->big_packets && !vi->mergeable_rx_bufs);
+
head = page_address(alloc_frag->page);
- if (rq->do_dma) {
- dma = head;
-
- /* new pages */
- if (!alloc_frag->offset) {
- if (rq->last_dma) {
- /* Now, the new page is allocated, the last dma
- * will not be used. So the dma can be unmapped
- * if the ref is 0.
- */
- virtnet_rq_unmap(rq, rq->last_dma, 0);
- rq->last_dma = NULL;
- }
+ dma = head;
- dma->len = alloc_frag->size - sizeof(*dma);
+ /* new pages */
+ if (!alloc_frag->offset) {
+ if (rq->last_dma) {
+ /* Now, the new page is allocated, the last dma
+ * will not be used. So the dma can be unmapped
+ * if the ref is 0.
+ */
+ virtnet_rq_unmap(rq, rq->last_dma, 0);
+ rq->last_dma = NULL;
+ }
- addr = virtqueue_dma_map_single_attrs(rq->vq, dma + 1,
- dma->len, DMA_FROM_DEVICE, 0);
- if (virtqueue_dma_mapping_error(rq->vq, addr))
- return NULL;
+ dma->len = alloc_frag->size - sizeof(*dma);
- dma->addr = addr;
- dma->need_sync = virtqueue_dma_need_sync(rq->vq, addr);
+ addr = virtqueue_dma_map_single_attrs(rq->vq, dma + 1,
+ dma->len, DMA_FROM_DEVICE, 0);
+ if (virtqueue_dma_mapping_error(rq->vq, addr))
+ return NULL;
- /* Add a reference to dma to prevent the entire dma from
- * being released during error handling. This reference
- * will be freed after the pages are no longer used.
- */
- get_page(alloc_frag->page);
- dma->ref = 1;
- alloc_frag->offset = sizeof(*dma);
+ dma->addr = addr;
+ dma->need_sync = virtqueue_dma_need_sync(rq->vq, addr);
- rq->last_dma = dma;
- }
+ /* Add a reference to dma to prevent the entire dma from
+ * being released during error handling. This reference
+ * will be freed after the pages are no longer used.
+ */
+ get_page(alloc_frag->page);
+ dma->ref = 1;
+ alloc_frag->offset = sizeof(*dma);
- ++dma->ref;
+ rq->last_dma = dma;
}
+ ++dma->ref;
+
buf = head + alloc_frag->offset;
get_page(alloc_frag->page);
@@ -2433,8 +2435,7 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
if (err < 0) {
- if (rq->do_dma)
- virtnet_rq_unmap(rq, buf, 0);
+ virtnet_rq_unmap(rq, buf, 0);
put_page(virt_to_head_page(buf));
}
@@ -2554,8 +2555,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
ctx = mergeable_len_to_ctx(len + room, headroom);
err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
if (err < 0) {
- if (rq->do_dma)
- virtnet_rq_unmap(rq, buf, 0);
+ virtnet_rq_unmap(rq, buf, 0);
put_page(virt_to_head_page(buf));
}
@@ -5922,7 +5922,7 @@ static void free_receive_page_frags(struct virtnet_info *vi)
int i;
for (i = 0; i < vi->max_queue_pairs; i++)
if (vi->rq[i].alloc_frag.page) {
- if (vi->rq[i].do_dma && vi->rq[i].last_dma)
+ if (vi->rq[i].last_dma)
virtnet_rq_unmap(&vi->rq[i], vi->rq[i].last_dma, 0);
put_page(vi->rq[i].alloc_frag.page);
}
@@ -6111,11 +6111,9 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi)
{
int i;
- for (i = 0; i < vi->max_queue_pairs; i++) {
+ for (i = 0; i < vi->max_queue_pairs; i++)
/* error should never happen */
BUG_ON(virtqueue_set_dma_premapped(vi->rq[i].vq));
- vi->rq[i].do_dma = true;
- }
}
static int init_vqs(struct virtnet_info *vi)
--
2.32.0.3.g01195cf9f
* Re: [PATCH net-next v1 4/4] virtio_net: rx remove premapped failover code
2024-10-29 8:46 ` [PATCH net-next v1 4/4] virtio_net: rx remove premapped failover code Xuan Zhuo
@ 2024-11-05 9:23 ` Jason Wang
0 siblings, 0 replies; 15+ messages in thread
From: Jason Wang @ 2024-11-05 9:23 UTC (permalink / raw)
To: Xuan Zhuo
Cc: netdev, Michael S. Tsirkin, Eugenio Pérez, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
virtualization, Darren Kenny
On Tue, Oct 29, 2024 at 4:47 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> Now, premapped mode can be enabled unconditionally, so we can remove
> the failover code for the merge and small modes.
>
> The virtnet_rq_xxx() helpers are only used when the mode uses
> premapping; a check is added to prevent misuse of these APIs.
>
> Tested-by: Darren Kenny <darren.kenny@oracle.com>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Thanks
* Re: [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default
2024-10-29 8:46 [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default Xuan Zhuo
` (3 preceding siblings ...)
2024-10-29 8:46 ` [PATCH net-next v1 4/4] virtio_net: rx remove premapped failover code Xuan Zhuo
@ 2024-11-05 2:46 ` Jakub Kicinski
2024-11-05 5:07 ` Jason Wang
2024-11-06 7:38 ` Michael S. Tsirkin
2024-11-05 11:00 ` patchwork-bot+netdevbpf
5 siblings, 2 replies; 15+ messages in thread
From: Jakub Kicinski @ 2024-11-05 2:46 UTC (permalink / raw)
To: Michael S. Tsirkin, Jason Wang
Cc: Xuan Zhuo, netdev, Eugenio Pérez, Andrew Lunn,
David S. Miller, Eric Dumazet, Paolo Abeni, virtualization
On Tue, 29 Oct 2024 16:46:11 +0800 Xuan Zhuo wrote:
> In the last Linux release, we disabled this feature to fix the
> regression [1].
>
> This patch set tries to fix the problem and re-enable it.
>
> More info: http://lore.kernel.org/all/20240820071913.68004-1-xuanzhuo@linux.alibaba.com
Sorry to ping, Michael, Jason we're waiting to hear from you on
this one.
* Re: [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default
2024-11-05 2:46 ` [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default Jakub Kicinski
@ 2024-11-05 5:07 ` Jason Wang
2024-11-06 7:38 ` Michael S. Tsirkin
1 sibling, 0 replies; 15+ messages in thread
From: Jason Wang @ 2024-11-05 5:07 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Michael S. Tsirkin, Xuan Zhuo, netdev, Eugenio Pérez,
Andrew Lunn, David S. Miller, Eric Dumazet, Paolo Abeni,
virtualization
Hi Jakub:
On Tue, Nov 5, 2024 at 10:46 AM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Tue, 29 Oct 2024 16:46:11 +0800 Xuan Zhuo wrote:
> > In the last Linux release, we disabled this feature to fix the
> > regression [1].
> >
> > This patch set tries to fix the problem and re-enable it.
> >
> > More info: http://lore.kernel.org/all/20240820071913.68004-1-xuanzhuo@linux.alibaba.com
>
> Sorry to ping, Michael, Jason we're waiting to hear from you on
> this one.
>
Will review this today.
Thanks
* Re: [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default
2024-11-05 2:46 ` [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default Jakub Kicinski
2024-11-05 5:07 ` Jason Wang
@ 2024-11-06 7:38 ` Michael S. Tsirkin
2024-11-06 8:46 ` Xuan Zhuo
1 sibling, 1 reply; 15+ messages in thread
From: Michael S. Tsirkin @ 2024-11-06 7:38 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Jason Wang, Xuan Zhuo, netdev, Eugenio Pérez, Andrew Lunn,
David S. Miller, Eric Dumazet, Paolo Abeni, virtualization
On Mon, Nov 04, 2024 at 06:46:41PM -0800, Jakub Kicinski wrote:
> On Tue, 29 Oct 2024 16:46:11 +0800 Xuan Zhuo wrote:
> > In the last Linux release, we disabled this feature to fix the
> > regression [1].
> >
> > This patch set tries to fix the problem and re-enable it.
> >
> > More info: http://lore.kernel.org/all/20240820071913.68004-1-xuanzhuo@linux.alibaba.com
>
> Sorry to ping, Michael, Jason we're waiting to hear from you on
> this one.
Can patch 1 be applied on net as well? Or I can take it through
my tree. It's a bugfix, just for an uncommon configuration.
* Re: [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default
2024-11-06 7:38 ` Michael S. Tsirkin
@ 2024-11-06 8:46 ` Xuan Zhuo
2024-11-06 9:16 ` Michael S. Tsirkin
0 siblings, 1 reply; 15+ messages in thread
From: Xuan Zhuo @ 2024-11-06 8:46 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Jason Wang, netdev, Eugenio Pérez, Andrew Lunn,
David S. Miller, Eric Dumazet, Paolo Abeni, virtualization,
Jakub Kicinski
On Wed, 6 Nov 2024 02:38:40 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Mon, Nov 04, 2024 at 06:46:41PM -0800, Jakub Kicinski wrote:
> > On Tue, 29 Oct 2024 16:46:11 +0800 Xuan Zhuo wrote:
> > > In the last Linux release, we disabled this feature to fix the
> > > regression [1].
> > >
> > > This patch set tries to fix the problem and re-enable it.
> > >
> > > More info: http://lore.kernel.org/all/20240820071913.68004-1-xuanzhuo@linux.alibaba.com
> >
> > Sorry to ping, Michael, Jason we're waiting to hear from you on
> > this one.
>
> Can patch 1 be applied on net as well? Or I can take it through
> my tree. It's a bugfix, just for an uncommon configuration.
Why? That can not be triggered in net branch.
Thanks
* Re: [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default
2024-11-06 8:46 ` Xuan Zhuo
@ 2024-11-06 9:16 ` Michael S. Tsirkin
2024-11-06 18:36 ` Jakub Kicinski
0 siblings, 1 reply; 15+ messages in thread
From: Michael S. Tsirkin @ 2024-11-06 9:16 UTC (permalink / raw)
To: Xuan Zhuo
Cc: Jason Wang, netdev, Eugenio Pérez, Andrew Lunn,
David S. Miller, Eric Dumazet, Paolo Abeni, virtualization,
Jakub Kicinski
On Wed, Nov 06, 2024 at 04:46:23PM +0800, Xuan Zhuo wrote:
> On Wed, 6 Nov 2024 02:38:40 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Mon, Nov 04, 2024 at 06:46:41PM -0800, Jakub Kicinski wrote:
> > > On Tue, 29 Oct 2024 16:46:11 +0800 Xuan Zhuo wrote:
> > > > In the last Linux release, we disabled this feature to fix the
> > > > regression [1].
> > > >
> > > > This patch set tries to fix the problem and re-enable it.
> > > >
> > > > More info: http://lore.kernel.org/all/20240820071913.68004-1-xuanzhuo@linux.alibaba.com
> > >
> > > Sorry to ping, Michael, Jason we're waiting to hear from you on
> > > this one.
> >
> > Can patch 1 be applied on net as well? Or I can take it through
> > my tree. It's a bugfix, just for an uncommon configuration.
>
>
> Why? That can not be triggered in net branch.
>
> Thanks
I thought it can but can't remember why now. OK, nm then, thanks!
* Re: [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default
2024-11-06 9:16 ` Michael S. Tsirkin
@ 2024-11-06 18:36 ` Jakub Kicinski
0 siblings, 0 replies; 15+ messages in thread
From: Jakub Kicinski @ 2024-11-06 18:36 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Xuan Zhuo, Jason Wang, netdev, Eugenio Pérez, Andrew Lunn,
David S. Miller, Eric Dumazet, Paolo Abeni, virtualization
On Wed, 6 Nov 2024 04:16:07 -0500 Michael S. Tsirkin wrote:
> I thought it can but can't remember why now. OK, nm then, thanks!
FWIW (I think there was confusion in earlier discussions) we do merge
net into net-next once a week. So we can always apply stuff to net,
and then depending patches to net-next within a week. Just for future
reference, this patch IIUC we just leave be.
* Re: [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default
2024-10-29 8:46 [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default Xuan Zhuo
` (4 preceding siblings ...)
2024-11-05 2:46 ` [PATCH net-next v1 0/4] virtio_net: enable premapped mode by default Jakub Kicinski
@ 2024-11-05 11:00 ` patchwork-bot+netdevbpf
5 siblings, 0 replies; 15+ messages in thread
From: patchwork-bot+netdevbpf @ 2024-11-05 11:00 UTC (permalink / raw)
To: Xuan Zhuo
Cc: netdev, mst, jasowang, eperezma, andrew+netdev, davem, edumazet,
kuba, pabeni, virtualization
Hello:
This series was applied to netdev/net-next.git (main)
by Paolo Abeni <pabeni@redhat.com>:
On Tue, 29 Oct 2024 16:46:11 +0800 you wrote:
> v1:
> 1. fix some small problems
> 2. remove commit "virtio_net: introduce vi->mode"
>
> In the last Linux release, we disabled this feature to fix the
> regression [1].
>
> [...]
Here is the summary with links:
- [net-next,v1,1/4] virtio-net: fix overflow inside virtnet_rq_alloc
https://git.kernel.org/netdev/net-next/c/6aacd1484468
- [net-next,v1,2/4] virtio_net: big mode skip the unmap check
https://git.kernel.org/netdev/net-next/c/a33f3df85075
- [net-next,v1,3/4] virtio_net: enable premapped mode for merge and small by default
https://git.kernel.org/netdev/net-next/c/47008bb51c3e
- [net-next,v1,4/4] virtio_net: rx remove premapped failover code
https://git.kernel.org/netdev/net-next/c/fb22437c1ba3
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html