From: Simon Schippers <simon.schippers@tu-dortmund.de>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: willemdebruijn.kernel@gmail.com, jasowang@redhat.com,
andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
kuba@kernel.org, pabeni@redhat.com, eperezma@redhat.com,
leiyang@redhat.com, stephen@networkplumber.org, jon@nutanix.com,
tim.gebauer@tu-dortmund.de, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
virtualization@lists.linux.dev
Subject: Re: [PATCH net-next v11 1/4] tun/tap: add ptr_ring consume helper with netdev queue wakeup
Date: Sun, 10 May 2026 16:01:39 +0200 [thread overview]
Message-ID: <e5d69f44-4483-49f3-b7fc-aca2cdb24792@tu-dortmund.de> (raw)
In-Reply-To: <20260510094020-mutt-send-email-mst@kernel.org>
On 5/10/26 15:40, Michael S. Tsirkin wrote:
> On Sun, May 10, 2026 at 10:55:34AM +0200, Simon Schippers wrote:
>> On 5/10/26 09:03, Simon Schippers wrote:
>>> On 5/10/26 00:44, Michael S. Tsirkin wrote:
>>>> On Sat, May 09, 2026 at 06:31:47PM +0200, Simon Schippers wrote:
>>>>> On 5/8/26 17:10, Simon Schippers wrote:
>>>>>> +static void tun_queue_purge(struct tun_struct *tun, struct tun_file *tfile)
>>>>>> {
>>>>>> void *ptr;
>>>>>>
>>>>>> - while ((ptr = ptr_ring_consume(&tfile->tx_ring)) != NULL)
>>>>>> + while ((ptr = tun_ring_consume(tun, tfile)) != NULL)
>>>>>> tun_ptr_free(ptr);
>>>>>>
>>>>>> skb_queue_purge(&tfile->sk.sk_write_queue);
>>>>>
>>>>> Sashiko is right once again. tun_ring_consume() in tun_queue_purge()
>>>>> operates on a tfile that is being torn down. Its queue_index is no
>>>>> longer valid. After the swap in __tun_detach(), it points to the
>>>>> netdev subqueue of a different tfile.
>>>>> --> We should not wake there.
>>>>
>>>> Does it not exactly point at ntfile which is what we want to wake?
>>>>
>>>
>>> I see your point. But calling tun_ring_consume() as done here is
>>> wrong, because it does not wake the queue if the tx_ring of the
>>> tfile (which is currently being torn down) is empty. We could change
>>> tun_ring_consume() to call __tun_wake_queue() with consumed=0 if
>>> !ptr, but I think that would slow down the consumer path.
>>>
>>
>> My statement above is wrong:
>> The tx_ring can never be empty while the queue is stopped at the
>> same time. So we do not need to touch tun_ring_consume() and this
>> works just fine.
>>
>>>>
>>>>> I will swap tun_ring_consume() with ptr_ring_consume() again and
>>>>> submit a v12 :)
>>>>
>>>> If so then maybe
>>>> netif_tx_wake_queue(netdev_get_tx_queue(tun->dev, index));
>>>>
>>>
>>> But we should only do this if there is space in the ntfile.
>>> My approach:
>>>
>>> @@ -586,12 +588,18 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
>>> BUG_ON(index >= tun->numqueues);
>>>
>>> rcu_assign_pointer(tun->tfiles[index],
>>> tun->tfiles[tun->numqueues - 1]);
>>> ntfile = rtnl_dereference(tun->tfiles[index]);
>>> + spin_lock(&ntfile->tx_ring.consumer_lock);
>>> ntfile->queue_index = index;
>>> ntfile->xdp_rxq.queue_index = index;
>>> + ntfile->cons_cnt = 0;
>>> + if (__ptr_ring_empty(&ntfile->tx_ring)) {
>>> + netif_wake_subqueue(tun->dev, index);
>>> + }
>>> + spin_unlock(&ntfile->tx_ring.consumer_lock);
>>> rcu_assign_pointer(tun->tfiles[tun->numqueues - 1],
>>> NULL);
>>>
>>> ntfile->cons_cnt is invalid, because the new queue might not be stopped.
>>> That is why I reset it to 0.
>>
>> However, I still prefer this approach because the code is easier to
>> understand.
>
>
> So do you want me to finish review of this one and ack, or want to
> post v12?
>
I will post a v12 with the proposed changes for patch 1.
No other changes.
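
For reference, a rough sketch of how tun_queue_purge() would look after
reverting to the plain ptr_ring consumer (untested; whether the now-unused
tun argument is dropped again is left open here):

static void tun_queue_purge(struct tun_struct *tun, struct tun_file *tfile)
{
	void *ptr;

	/* Plain consume, no queue wakeup: the queue_index of a tfile
	 * being torn down may already refer to another tfile's subqueue
	 * after the swap in __tun_detach().
	 */
	while ((ptr = ptr_ring_consume(&tfile->tx_ring)) != NULL)
		tun_ptr_free(ptr);

	skb_queue_purge(&tfile->sk.sk_write_queue);
}

The __tun_detach() wakeup then stays as in the hunk quoted above.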
Thanks!
Thread overview: 14+ messages
2026-05-08 15:10 [PATCH net-next v11 0/4] tun/tap & vhost-net: apply qdisc backpressure on full ptr_ring to reduce TX drops Simon Schippers
2026-05-08 15:10 ` [PATCH net-next v11 1/4] tun/tap: add ptr_ring consume helper with netdev queue wakeup Simon Schippers
2026-05-09 16:31 ` Simon Schippers
2026-05-09 22:44 ` Michael S. Tsirkin
2026-05-10 7:03 ` Simon Schippers
2026-05-10 8:55 ` Simon Schippers
2026-05-10 13:40 ` Michael S. Tsirkin
2026-05-10 14:01 ` Simon Schippers [this message]
2026-05-10 15:44 ` Michael S. Tsirkin
2026-05-10 16:22 ` Simon Schippers
2026-05-10 18:27 ` Michael S. Tsirkin
2026-05-08 15:10 ` [PATCH net-next v11 2/4] vhost-net: wake queue of tun/tap after ptr_ring consume Simon Schippers
2026-05-08 15:10 ` [PATCH net-next v11 3/4] ptr_ring: move free-space check into separate helper Simon Schippers
2026-05-08 15:10 ` [PATCH net-next v11 4/4] tun/tap & vhost-net: avoid ptr_ring tail-drop when a qdisc is present Simon Schippers