From: Jason Wang <jasowang@redhat.com>
To: Eugenio Perez Martin <eperezma@redhat.com>
Cc: mst@redhat.com, davem@davemloft.net, edumazet@google.com,
kuba@kernel.org, pabeni@redhat.com,
virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
maxime.coquelin@redhat.com, alvaro.karsz@solid-run.com
Subject: Re: [RFC PATCH 4/4] virtio-net: sleep instead of busy waiting for cvq command
Date: Fri, 23 Dec 2022 11:03:59 +0800 [thread overview]
Message-ID: <CACGkMEvs6QenyQNR0GyJ81PgT-w2fy7Rag-JkJ7xNGdNZLGSfQ@mail.gmail.com> (raw)
In-Reply-To: <CAJaqyWetutMj=GrR+ieS265_aRr7OhoP+7O5rWgPnP+ZAyxbPg@mail.gmail.com>
On Thu, Dec 22, 2022 at 5:19 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Thu, Dec 22, 2022 at 7:05 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > We used to busy wait on the cvq command; this tends to be
> > problematic since:
> >
> > 1) the CPU could wait forever on a buggy/malicious device
> > 2) there's no way to terminate the process that triggers the cvq
> >    command
> >
> > So this patch switches to sleeping with a timeout (1s) instead of
> > busy polling for the cvq command forever. This gives the scheduler
> > a chance to run and lets the process respond to a signal.
> >
> > Signed-off-by: Jason Wang <jasowang@redhat.com>
> > ---
> > drivers/net/virtio_net.c | 15 ++++++++-------
> > 1 file changed, 8 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 8225496ccb1e..69173049371f 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -405,6 +405,7 @@ static void disable_rx_mode_work(struct virtnet_info *vi)
> > vi->rx_mode_work_enabled = false;
> > spin_unlock_bh(&vi->rx_mode_lock);
> >
> > + virtqueue_wake_up(vi->cvq);
> > flush_work(&vi->rx_mode_work);
> > }
> >
> > @@ -1497,6 +1498,11 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
> > return !oom;
> > }
> >
> > +static void virtnet_cvq_done(struct virtqueue *cvq)
> > +{
> > + virtqueue_wake_up(cvq);
> > +}
> > +
> > static void skb_recv_done(struct virtqueue *rvq)
> > {
> > struct virtnet_info *vi = rvq->vdev->priv;
> > @@ -2024,12 +2030,7 @@ static bool virtnet_send_command(struct virtnet_info *vi, u8 class, u8 cmd,
> > if (unlikely(!virtqueue_kick(vi->cvq)))
> > return vi->ctrl->status == VIRTIO_NET_OK;
> >
> > - /* Spin for a response, the kick causes an ioport write, trapping
> > - * into the hypervisor, so the request should be handled immediately.
> > - */
> > - while (!virtqueue_get_buf(vi->cvq, &tmp) &&
> > - !virtqueue_is_broken(vi->cvq))
> > - cpu_relax();
> > + virtqueue_wait_for_used(vi->cvq, &tmp);
> >
> > return vi->ctrl->status == VIRTIO_NET_OK;
> > }
> > @@ -3524,7 +3525,7 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
> >
> > /* Parameters for control virtqueue, if any */
> > if (vi->has_cvq) {
> > - callbacks[total_vqs - 1] = NULL;
> > + callbacks[total_vqs - 1] = virtnet_cvq_done;
>
> If we're using CVQ callback, what is the actual use of the timeout?

Because we can't sleep forever while locks such as RTNL may be held.

>
> I'd say there is no right choice neither in the right timeout value
> nor in the action to take.

In the next version, I intend to use BAD_RING() to prevent future requests.

> Why not simply trigger the cmd and do all
> the changes at command return?

I don't get this, sorry.

>
> I suspect the reason is that it complicates the code. For example,
> having the possibility of many in flight commands, races between their
> completion, etc.

Actually, cvq commands are serialized through the RTNL lock, so we
don't need to worry about this.

In the next version I can add ASSERT_RTNL().

Thanks

> The virtio standard does not even cover unordered
> used commands if I'm not wrong.
>
> Is there any other fundamental reason?
>
> Thanks!
>
> > names[total_vqs - 1] = "control";
> > }
> >
> > --
> > 2.25.1
> >
>
Thread overview: 15+ messages
2022-12-22 6:04 [RFC PATCH 0/4] virtio-net: don't busy poll for cvq command Jason Wang
2022-12-22 6:04 ` [RFC PATCH 1/4] virtio-net: convert rx mode setting to use workqueue Jason Wang
2022-12-22 6:04 ` [RFC PATCH 2/4] virtio_ring: switch to use BAD_RING() Jason Wang
2022-12-22 6:04 ` [RFC PATCH 3/4] virtio_ring: introduce a per virtqueue waitqueue Jason Wang
2022-12-22 6:04 ` [RFC PATCH 4/4] virtio-net: sleep instead of busy waiting for cvq command Jason Wang
2022-12-22 6:44 ` Alvaro Karsz
2022-12-22 8:43 ` Jason Wang
2022-12-22 15:54 ` Alvaro Karsz
2022-12-23 3:00 ` Jason Wang
2022-12-23 7:38 ` Alvaro Karsz
2022-12-26 3:45 ` Jason Wang
2022-12-22 9:19 ` Eugenio Perez Martin
2022-12-23 3:03 ` Jason Wang [this message]
2022-12-23 8:04 ` Eugenio Perez Martin
2022-12-26 3:44 ` Jason Wang