From: Jason Wang <jasowang@redhat.com>
To: Simon Schippers <simon.schippers@tu-dortmund.de>
Cc: willemdebruijn.kernel@gmail.com, andrew+netdev@lunn.ch,
davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
pabeni@redhat.com, mst@redhat.com, eperezma@redhat.com,
leiyang@redhat.com, stephen@networkplumber.org, jon@nutanix.com,
tim.gebauer@tu-dortmund.de, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
virtualization@lists.linux.dev
Subject: Re: [PATCH net-next v7 3/9] tun/tap: add ptr_ring consume helper with netdev queue wakeup
Date: Wed, 28 Jan 2026 15:03:11 +0800
Message-ID: <CACGkMEv71kn91FPAUrxJg=YB3+B0MRTgOidMPHjK7Qq0WEhGtw@mail.gmail.com>
In-Reply-To: <3a1d6232-efe4-4e79-a196-44794fdc0f33@tu-dortmund.de>
On Wed, Jan 28, 2026 at 12:48 AM Simon Schippers
<simon.schippers@tu-dortmund.de> wrote:
>
> On 1/23/26 10:54, Simon Schippers wrote:
> > On 1/23/26 04:05, Jason Wang wrote:
> >> On Thu, Jan 22, 2026 at 1:35 PM Jason Wang <jasowang@redhat.com> wrote:
> >>>
> >>> On Wed, Jan 21, 2026 at 5:33 PM Simon Schippers
> >>> <simon.schippers@tu-dortmund.de> wrote:
> >>>>
> >>>> On 1/9/26 07:02, Jason Wang wrote:
> >>>>> On Thu, Jan 8, 2026 at 3:41 PM Simon Schippers
> >>>>> <simon.schippers@tu-dortmund.de> wrote:
> >>>>>>
> >>>>>> On 1/8/26 04:38, Jason Wang wrote:
> >>>>>>> On Thu, Jan 8, 2026 at 5:06 AM Simon Schippers
> >>>>>>> <simon.schippers@tu-dortmund.de> wrote:
> >>>>>>>>
> >>>>>>>> Introduce {tun,tap}_ring_consume() helpers that wrap __ptr_ring_consume()
> >>>>>>>> and wake the corresponding netdev subqueue when consuming an entry frees
> >>>>>>>> space in the underlying ptr_ring.
> >>>>>>>>
> >>>>>>>> Stopping of the netdev queue when the ptr_ring is full will be introduced
> >>>>>>>> in an upcoming commit.
> >>>>>>>>
> >>>>>>>> Co-developed-by: Tim Gebauer <tim.gebauer@tu-dortmund.de>
> >>>>>>>> Signed-off-by: Tim Gebauer <tim.gebauer@tu-dortmund.de>
> >>>>>>>> Signed-off-by: Simon Schippers <simon.schippers@tu-dortmund.de>
> >>>>>>>> ---
> >>>>>>>> drivers/net/tap.c | 23 ++++++++++++++++++++++-
> >>>>>>>> drivers/net/tun.c | 25 +++++++++++++++++++++++--
> >>>>>>>> 2 files changed, 45 insertions(+), 3 deletions(-)
> >>>>>>>>
> >>>>>>>> diff --git a/drivers/net/tap.c b/drivers/net/tap.c
> >>>>>>>> index 1197f245e873..2442cf7ac385 100644
> >>>>>>>> --- a/drivers/net/tap.c
> >>>>>>>> +++ b/drivers/net/tap.c
> >>>>>>>> @@ -753,6 +753,27 @@ static ssize_t tap_put_user(struct tap_queue *q,
> >>>>>>>> return ret ? ret : total;
> >>>>>>>> }
> >>>>>>>>
> >>>>>>>> +static void *tap_ring_consume(struct tap_queue *q)
> >>>>>>>> +{
> >>>>>>>> + struct ptr_ring *ring = &q->ring;
> >>>>>>>> + struct net_device *dev;
> >>>>>>>> + void *ptr;
> >>>>>>>> +
> >>>>>>>> + spin_lock(&ring->consumer_lock);
> >>>>>>>> +
> >>>>>>>> + ptr = __ptr_ring_consume(ring);
> >>>>>>>> + if (unlikely(ptr && __ptr_ring_consume_created_space(ring, 1))) {
> >>>>>>>> + rcu_read_lock();
> >>>>>>>> + dev = rcu_dereference(q->tap)->dev;
> >>>>>>>> + netif_wake_subqueue(dev, q->queue_index);
> >>>>>>>> + rcu_read_unlock();
> >>>>>>>> + }
> >>>>>>>> +
> >>>>>>>> + spin_unlock(&ring->consumer_lock);
> >>>>>>>> +
> >>>>>>>> + return ptr;
> >>>>>>>> +}
> >>>>>>>> +
> >>>>>>>> static ssize_t tap_do_read(struct tap_queue *q,
> >>>>>>>> struct iov_iter *to,
> >>>>>>>> int noblock, struct sk_buff *skb)
> >>>>>>>> @@ -774,7 +795,7 @@ static ssize_t tap_do_read(struct tap_queue *q,
> >>>>>>>> TASK_INTERRUPTIBLE);
> >>>>>>>>
> >>>>>>>> /* Read frames from the queue */
> >>>>>>>> - skb = ptr_ring_consume(&q->ring);
> >>>>>>>> + skb = tap_ring_consume(q);
> >>>>>>>> if (skb)
> >>>>>>>> break;
> >>>>>>>> if (noblock) {
> >>>>>>>> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> >>>>>>>> index 8192740357a0..7148f9a844a4 100644
> >>>>>>>> --- a/drivers/net/tun.c
> >>>>>>>> +++ b/drivers/net/tun.c
> >>>>>>>> @@ -2113,13 +2113,34 @@ static ssize_t tun_put_user(struct tun_struct *tun,
> >>>>>>>> return total;
> >>>>>>>> }
> >>>>>>>>
> >>>>>>>> +static void *tun_ring_consume(struct tun_file *tfile)
> >>>>>>>> +{
> >>>>>>>> + struct ptr_ring *ring = &tfile->tx_ring;
> >>>>>>>> + struct net_device *dev;
> >>>>>>>> + void *ptr;
> >>>>>>>> +
> >>>>>>>> + spin_lock(&ring->consumer_lock);
> >>>>>>>> +
> >>>>>>>> + ptr = __ptr_ring_consume(ring);
> >>>>>>>> + if (unlikely(ptr && __ptr_ring_consume_created_space(ring, 1))) {
> >>>>>>>
> >>>>>>> I guess it's the "bug" I mentioned in the previous patch that leads to
> >>>>>>> the check of __ptr_ring_consume_created_space() here. If it's true,
> >>>>>>> that's another call to tweak the current API.
> >>>>>>>
> >>>>>>>> + rcu_read_lock();
> >>>>>>>> + dev = rcu_dereference(tfile->tun)->dev;
> >>>>>>>> + netif_wake_subqueue(dev, tfile->queue_index);
> >>>>>>>
> >>>>>>> This would cause the producer TX_SOFTIRQ to run on the same CPU, which
> >>>>>>> I'm not sure is what we want.
> >>>>>>
> >>>>>> What else would you suggest calling to wake the queue?
> >>>>>
> >>>>> I don't have a good method in my mind, just want to point out its implications.
> >>>>
> >>>> I have to admit I'm a bit stuck at this point, particularly with this
> >>>> aspect.
> >>>>
> >>>> What is the correct way to pass the producer CPU ID to the consumer?
> >>>> Would it make sense to store smp_processor_id() in the tfile inside
> >>>> tun_net_xmit(), or should it instead be stored in the skb (similar to the
> >>>> XDP bit)? In the latter case, my concern is that this information may
> >>>> already be significantly outdated by the time it is used.
> >>>>
> >>>> Based on that, my idea would be for the consumer to wake the producer by
> >>>> invoking a new function (e.g., tun_wake_queue()) on the producer CPU via
> >>>> smp_call_function_single().
> >>>> Is this a reasonable approach?
> >>>
> >>> I'm not sure but it would introduce costs like IPI.
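
To make that concrete, the smp_call_function_single() idea would look
roughly like the sketch below. Note that tfile->producer_cpu and
tun_wake_queue() are made-up names for illustration only, not anything
from the posted series:

/* Producer side, e.g. in tun_net_xmit(): remember the producing CPU. */
WRITE_ONCE(tfile->producer_cpu, smp_processor_id());

/* Runs on the producer CPU via IPI. */
static void tun_wake_queue(void *data)
{
	struct tun_file *tfile = data;
	struct tun_struct *tun;

	rcu_read_lock();
	tun = rcu_dereference(tfile->tun);
	if (tun)
		netif_wake_subqueue(tun->dev, tfile->queue_index);
	rcu_read_unlock();
}

/* Consumer side, instead of waking the subqueue locally: */
smp_call_function_single(READ_ONCE(tfile->producer_cpu),
			 tun_wake_queue, tfile, false);

That is, every queue wakeup becomes an explicit cross-CPU function call,
which is exactly the IPI cost mentioned above.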
> >>>
> >>>>
> >>>> More generally, would triggering TX_SOFTIRQ on the consumer CPU be
> >>>> considered a deal-breaker for the patch set?
> >>>
> >>> It depends on whether or not it affects performance, especially
> >>> when vhost is pinned.
> >>
> >> I meant we can benchmark to see the impact. For example, pin vhost to
> >> a specific CPU and then try to see the impact of the TX_SOFTIRQ.
> >>
> >> Thanks
> >>
> >
> > I ran benchmarks with vhost pinned to CPU 0 using taskset -p -c 0 ...
> > for both the stock and patched versions. The benchmarks were run with
> > the full patch series applied, since testing only patches 1-3 would not
> > be meaningful: the queue is never stopped in that case, so no
> > TX_SOFTIRQ is triggered.
> >
> > Compared to the non-pinned CPU benchmarks in the cover letter,
> > performance is lower for pktgen with a single thread but higher with
> > four threads. The results show no regression for the patched version,
> > with even slight performance improvements observed:
> >
> > +-------------------------+-----------+----------------+
> > | pktgen benchmarks to | Stock | Patched with |
> > | Debian VM, i5 6300HQ, | | fq_codel qdisc |
> > | 100M packets | | |
> > | vhost pinned to core 0 | | |
> > +-----------+-------------+-----------+----------------+
> > | TAP | Transmitted | 452 Kpps | 454 Kpps |
> > | + +-------------+-----------+----------------+
> > | vhost-net | Lost | 1154 Kpps | 0 |
> > +-----------+-------------+-----------+----------------+
> >
> > +-------------------------+-----------+----------------+
> > | pktgen benchmarks to | Stock | Patched with |
> > | Debian VM, i5 6300HQ, | | fq_codel qdisc |
> > | 100M packets | | |
> > | vhost pinned to core 0 | | |
> > | *4 threads* | | |
> > +-----------+-------------+-----------+----------------+
> > | TAP | Transmitted | 71 Kpps | 79 Kpps |
> > | + +-------------+-----------+----------------+
> > | vhost-net | Lost | 1527 Kpps | 0 |
> > +-----------+-------------+-----------+----------------+
The PPS seems low. I'd suggest using testpmd (rxonly) mode in the guest,
or an XDP program that does XDP_DROP in the guest.
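
For reference, a minimal sketch of such a drop program (plain libbpf
style, built with e.g. clang -O2 -target bpf -c xdp_drop.c; the interface
name below is just an example):

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_drop(struct xdp_md *ctx)
{
	return XDP_DROP; /* drop every packet before the guest stack */
}

char _license[] SEC("license") = "GPL";

Attached in the guest with e.g. "ip link set dev eth0 xdp obj xdp_drop.o
sec xdp", this sinks packets at the earliest point, so the measured PPS
reflects the tun/vhost-net path rather than the guest's ability to
consume.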
> >
> > +------------------------+-------------+----------------+
> > | iperf3 TCP benchmarks | Stock | Patched with |
> > | to Debian VM 120s | | fq_codel qdisc |
> > | vhost pinned to core 0 | | |
> > +------------------------+-------------+----------------+
> > | TAP | 22.0 Gbit/s | 22.0 Gbit/s |
> > | + | | |
> > | vhost-net | | |
> > +------------------------+-------------+----------------+
> >
> > +---------------------------+-------------+----------------+
> > | iperf3 TCP benchmarks | Stock | Patched with |
> > | to Debian VM 120s | | fq_codel qdisc |
> > | vhost pinned to core 0 | | |
> > | *4 iperf3 client threads* | | |
> > +---------------------------+-------------+----------------+
> > | TAP | 21.4 Gbit/s | 21.5 Gbit/s |
> > | + | | |
> > | vhost-net | | |
> > +---------------------------+-------------+----------------+
>
> What are your thoughts on this?
>
> Thanks!
>
>
Thanks