From: "Michael S. Tsirkin" <mst@redhat.com>
To: Eugenio Perez Martin <eperezma@redhat.com>
Cc: Maxime Coquelin <mcoqueli@redhat.com>,
Yongji Xie <xieyongji@bytedance.com>,
virtualization@lists.linux.dev, linux-kernel@vger.kernel.org,
Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
Dragos Tatulea DE <dtatulea@nvidia.com>,
jasowang@redhat.com
Subject: Re: [RFC 1/2] virtio_net: timeout control virtqueue commands
Date: Wed, 15 Oct 2025 02:33:18 -0400 [thread overview]
Message-ID: <20251015023020-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <CAJaqyWe-mn4e+1egNCH+R1x4R7DB6U1SZ-mRAXYPTtA27hKCVA@mail.gmail.com>
On Wed, Oct 15, 2025 at 08:08:31AM +0200, Eugenio Perez Martin wrote:
> On Tue, Oct 14, 2025 at 11:25 AM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Tue, Oct 14, 2025 at 11:14:40AM +0200, Maxime Coquelin wrote:
> > > On Tue, Oct 14, 2025 at 10:29 AM Michael S. Tsirkin <mst@redhat.com> wrote:
> > > >
> > > > On Tue, Oct 07, 2025 at 03:06:21PM +0200, Eugenio Pérez wrote:
> > > > > A userland device implemented through VDUSE could hold rtnl forever if
> > > > > the virtio-net driver is running on top of virtio_vdpa. Let's mark the
> > > > > device as broken if it does not return the buffer within a
> > > > > longer-than-reasonable timeout.
> > > >
> > > > So now I can't debug qemu with gdb because guest dies :(
> > > > Let's not break valid use-cases please.
> > > >
> > > >
> > > > Instead, solve it in vduse, probably by handling cvq within
> > > > kernel.
> > >
> > > Would a shadow control virtqueue implementation in the VDUSE driver work?
> > > It would systematically ack the messages sent by the virtio-net driver,
> > > assuming the userspace application will eventually ack them too.
> > >
> > > When the userspace application handles the message, if the handling fails,
> > > it somehow marks the device as broken?
> > >
> > > Thanks,
> > > Maxime
> >
> > Yes, but it's a bit more convoluted than just acking them:
> > once you use a buffer, the driver can make another one available,
> > and so on with no limit.
> > One fix is to actually maintain device state in the
> > kernel, update it, and then notify userspace.
> >
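A rough sketch of that state-in-the-kernel idea, as a toy Python model (the names `KernelCvq`, `handle_request`, etc. are hypothetical illustrations, not VDUSE or virtio-net API):

```python
# Toy model (not VDUSE code) of handling the control virtqueue in the
# kernel: the kernel owns the canonical device state, completes each
# driver request immediately, and only notifies userspace afterwards,
# so userspace can never stall the driver.

VIRTIO_NET_OK = 0

class KernelCvq:
    def __init__(self):
        # Canonical device state kept in the kernel.
        self.state = {"mac_filter": set(), "promisc": False}
        # Notifications to be drained asynchronously by userspace.
        self.pending_notifications = []

    def handle_request(self, cmd, value):
        # 1. Update the kernel-side canonical state.
        if cmd == "mac_add":
            self.state["mac_filter"].add(value)
        elif cmd == "set_promisc":
            self.state["promisc"] = value
        # 2. Queue an asynchronous notification for the userspace device.
        self.pending_notifications.append((cmd, value))
        # 3. Complete the driver's buffer right away -- no timeout needed.
        return VIRTIO_NET_OK

cvq = KernelCvq()
assert cvq.handle_request("mac_add", "aa:bb:cc:dd:ee:ff") == VIRTIO_NET_OK
assert cvq.pending_notifications == [("mac_add", "aa:bb:cc:dd:ee:ff")]
```

The point of the model: the driver's completion never waits on userspace, so a stalled or gdb-stopped userspace device cannot wedge rtnl.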
>
> I thought of implementing this approach at first, but it has two drawbacks.
>
> The first one: it's racy. Let's say the driver updates the MAC filter,
> VDUSE timeout occurs, the guest receives the fail, and then the device
> replies with an OK. There is no way for the device or VDUSE to update
> the driver.
There's no timeout. The kernel can guarantee executing all requests.
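A toy illustration of why removing the timeout removes this race (the helper is hypothetical, not driver code): with a timeout, the driver's view and the device's view of one request can disagree; without one, the driver waits and the two views always agree.

```python
def run_request(device_delay, timeout):
    """Return (driver_view, device_view) of one cvq request outcome.

    The device always completes the request eventually ("OK"); the
    driver only disagrees if it gives up before that happens.
    """
    device_view = "OK"
    if timeout is not None and device_delay > timeout:
        driver_view = "FAIL"       # driver timed out and reported an error
    else:
        driver_view = device_view  # driver waited for the real outcome
    return driver_view, device_view

# Timeout path: views diverge, so e.g. the MAC filter state is inconsistent.
assert run_request(device_delay=10, timeout=5) == ("FAIL", "OK")
# No-timeout path: views always agree, however slow the device is.
assert run_request(device_delay=10, timeout=None) == ("OK", "OK")
```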
>
> The second one: what to do when the VDUSE cvq runs out of descriptors?
> While the driver has had its descriptor returned with VIRTIO_NET_ERR, the
> VDUSE CVQ still holds that descriptor as available. If this process repeats
> until all of the VDUSE CVQ descriptors are outstanding, how can we proceed?
There's no reason to ever return VIRTIO_NET_ERR, and the cvq will not run
out of descriptors: the kernel uses the cvq buffers itself.
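A toy model of why the ring cannot be exhausted when the kernel itself consumes and completes every cvq buffer (illustrative Python, not vduse code): each descriptor is marked used as soon as it is taken, so the free count is restored before the next request needs one.

```python
class Ring:
    """Minimal stand-in for a virtqueue's free-descriptor accounting."""
    def __init__(self, size):
        self.size = size
        self.free = size

    def get_buf(self):
        # Driver makes a buffer available; must not underflow.
        assert self.free > 0, "ring exhausted"
        self.free -= 1

    def put_buf(self):
        # Kernel consumes the buffer and marks it used immediately.
        self.free += 1

ring = Ring(size=2)
# Far more requests than ring slots: the ring never exhausts, because
# the kernel recycles each descriptor before the next one is needed.
for _ in range(1000):
    ring.get_buf()
    ring.put_buf()
assert ring.free == ring.size
```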
> I think both of them can be solved with the DEVICE_NEEDS_RESET status
> bit, but it is not implemented in the drivers at this moment.
No need for a reset, either.
Thread overview: 45+ messages
2025-10-07 13:06 [RFC 0/2] Lift restriction about VDUSE net devices with CVQ Eugenio Pérez
2025-10-07 13:06 ` [RFC 1/2] virtio_net: timeout control virtqueue commands Eugenio Pérez
2025-10-11 7:44 ` Jason Wang
2025-10-14 7:30 ` Eugenio Perez Martin
2025-10-14 8:29 ` Michael S. Tsirkin
2025-10-14 9:14 ` Maxime Coquelin
2025-10-14 9:25 ` Michael S. Tsirkin
2025-10-14 10:21 ` Maxime Coquelin
2025-10-15 4:44 ` Jason Wang
2025-10-15 6:07 ` Michael S. Tsirkin
2025-10-15 6:08 ` Eugenio Perez Martin
2025-10-15 6:33 ` Michael S. Tsirkin [this message]
2025-10-15 6:52 ` Eugenio Perez Martin
2025-10-15 7:04 ` Michael S. Tsirkin
2025-10-15 7:45 ` Eugenio Perez Martin
2025-10-15 8:03 ` Maxime Coquelin
2025-10-15 8:09 ` Michael S. Tsirkin
2025-10-15 9:16 ` Maxime Coquelin
2025-10-15 10:36 ` Eugenio Perez Martin
2025-10-16 5:39 ` Jason Wang
2025-10-16 5:45 ` Michael S. Tsirkin
2025-10-16 6:03 ` Jason Wang
2025-10-16 6:22 ` Michael S. Tsirkin
2025-10-16 6:25 ` Eugenio Perez Martin
2025-10-17 6:36 ` Eugenio Perez Martin
2025-10-17 6:39 ` Michael S. Tsirkin
2025-10-17 7:21 ` Eugenio Perez Martin
2025-10-22 9:46 ` Eugenio Perez Martin
2025-10-22 10:06 ` Michael S. Tsirkin
2025-10-22 10:09 ` Michael S. Tsirkin
2025-10-22 10:50 ` Eugenio Perez Martin
2025-10-22 11:43 ` Michael S. Tsirkin
2025-10-22 12:55 ` Eugenio Perez Martin
2025-10-28 14:09 ` Michael S. Tsirkin
2025-10-28 14:37 ` Eugenio Perez Martin
2025-10-28 14:42 ` Michael S. Tsirkin
2025-10-28 14:57 ` Eugenio Perez Martin
2025-10-29 0:36 ` Jason Wang
2025-11-05 9:02 ` Eugenio Perez Martin
2025-11-09 21:46 ` Michael S. Tsirkin
2025-10-07 13:06 ` [RFC 2/2] vduse: lift restriction about net devices with CVQ Eugenio Pérez
2025-10-09 13:14 ` Maxime Coquelin
2025-10-15 6:11 ` Eugenio Perez Martin
2025-10-14 8:31 ` Michael S. Tsirkin
2025-10-15 6:25 ` Eugenio Perez Martin