* [PATCH net] tuntap: add missing xdp flush
@ 2018-02-07 9:14 Jason Wang
From: Jason Wang @ 2018-02-07 9:14 UTC (permalink / raw)
To: netdev; +Cc: mst, Jason Wang
When using devmap to redirect packets between interfaces,
xdp_do_flush() is usually a must to flush any batched
packets. Unfortunately this is missing from the current tuntap
implementation.

Unlike most hardware drivers, which run XDP inside the NAPI loop
and call xdp_do_flush() at the end of each round of poll, TAP
does it in process context, e.g. tun_get_user(). So fix this by
counting the pending redirected packets and flushing when the
count exceeds NAPI_POLL_WEIGHT or MSG_MORE is cleared by the
sendmsg() caller.
With this fix, xdp_redirect_map works again between two TAPs.
Fixes: 761876c857cb ("tap: XDP support")
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
drivers/net/tun.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 0dc66e4..17e496b 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -181,6 +181,7 @@ struct tun_file {
struct tun_struct *detached;
struct ptr_ring tx_ring;
struct xdp_rxq_info xdp_rxq;
+ int xdp_pending_pkts;
};
struct tun_flow_entry {
@@ -1665,6 +1666,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
case XDP_REDIRECT:
get_page(alloc_frag->page);
alloc_frag->offset += buflen;
+ ++tfile->xdp_pending_pkts;
err = xdp_do_redirect(tun->dev, &xdp, xdp_prog);
if (err)
goto err_redirect;
@@ -1986,6 +1988,11 @@ static ssize_t tun_chr_write_iter(struct kiocb *iocb, struct iov_iter *from)
result = tun_get_user(tun, tfile, NULL, from,
file->f_flags & O_NONBLOCK, false);
+ if (tfile->xdp_pending_pkts) {
+ tfile->xdp_pending_pkts = 0;
+ xdp_do_flush_map();
+ }
+
tun_put(tun);
return result;
}
@@ -2322,6 +2329,13 @@ static int tun_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
ret = tun_get_user(tun, tfile, m->msg_control, &m->msg_iter,
m->msg_flags & MSG_DONTWAIT,
m->msg_flags & MSG_MORE);
+
+ if (tfile->xdp_pending_pkts >= NAPI_POLL_WEIGHT ||
+ !(m->msg_flags & MSG_MORE)) {
+ tfile->xdp_pending_pkts = 0;
+ xdp_do_flush_map();
+ }
+
tun_put(tun);
return ret;
}
@@ -3153,6 +3167,7 @@ static int tun_chr_open(struct inode *inode, struct file * file)
sock_set_flag(&tfile->sk, SOCK_ZEROCOPY);
memset(&tfile->tx_ring, 0, sizeof(tfile->tx_ring));
+ tfile->xdp_pending_pkts = 0;
return 0;
}
--
2.7.4
* Re: [PATCH net] tuntap: add missing xdp flush
From: David Miller @ 2018-02-08 19:11 UTC (permalink / raw)
To: jasowang; +Cc: netdev, mst
From: Jason Wang <jasowang@redhat.com>
Date: Wed, 7 Feb 2018 17:14:46 +0800
> When using devmap to redirect packets between interfaces,
> xdp_do_flush() is usually a must to flush any batched
> packets. Unfortunately this is missing from the current tuntap
> implementation.
>
> Unlike most hardware drivers, which run XDP inside the NAPI loop
> and call xdp_do_flush() at the end of each round of poll, TAP
> does it in process context, e.g. tun_get_user(). So fix this by
> counting the pending redirected packets and flushing when the
> count exceeds NAPI_POLL_WEIGHT or MSG_MORE is cleared by the
> sendmsg() caller.
>
> With this fix, xdp_redirect_map works again between two TAPs.
>
> Fixes: 761876c857cb ("tap: XDP support")
> Signed-off-by: Jason Wang <jasowang@redhat.com>
Applied and queued up for -stable, thanks Jason.