From: "Michael S. Tsirkin" <mst@redhat.com>
To: Andrey Ryabinin <arbn@yandex-team.com>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
netdev <netdev@vger.kernel.org>,
John Fastabend <john.fastabend@gmail.com>,
Alexei Starovoitov <ast@kernel.org>,
virtualization <virtualization@lists.linux-foundation.org>,
Stefan Hajnoczi <stefanha@redhat.com>, kvm <kvm@vger.kernel.org>,
Jakub Kicinski <kuba@kernel.org>,
bpf@vger.kernel.org, "David S. Miller" <davem@davemloft.net>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 6/6] vhost_net: use RCU callbacks instead of synchronize_rcu()
Date: Mon, 22 Nov 2021 04:37:47 -0500
Message-ID: <20211122043620-mutt-send-email-mst@kernel.org>
In-Reply-To: <b163233f-090f-baaf-4460-37978cab4d55@yandex-team.com>
On Fri, Nov 19, 2021 at 02:32:05PM +0300, Andrey Ryabinin wrote:
>
>
> On 11/16/21 8:00 AM, Jason Wang wrote:
> > On Mon, Nov 15, 2021 at 11:32 PM Andrey Ryabinin <arbn@yandex-team.com> wrote:
> >>
> >> Currently vhost_net_release() uses synchronize_rcu() to synchronize
> >> freeing with vhost_zerocopy_callback(). However, synchronize_rcu()
> >> is a quite costly operation. It takes more than 10 seconds
> >> to shut down a qemu launched with a couple of net devices like this:
> >> -netdev tap,id=tap0,..,vhost=on,queues=80
> >> because we end up calling synchronize_rcu() netdev_count*queues times.
> >>
> >> Free the vhost_net structures in an RCU callback instead of using
> >> synchronize_rcu() to fix the problem.
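
For reference, a minimal sketch of the call_rcu() pattern the changelog
describes, i.e. deferring the free to an RCU callback instead of blocking
in synchronize_rcu(). The embedded rcu_head, the example struct and the
vhost_net_free_rcu() name below are illustrative only, not taken from the
actual patch:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/rcupdate.h>

struct vhost_net_example {
	/* ... the usual vhost_net fields ... */
	struct rcu_head rcu;	/* embedded so call_rcu() can defer the free */
};

/* RCU callback: runs after a grace period has elapsed. */
static void vhost_net_free_rcu(struct rcu_head *head)
{
	struct vhost_net_example *n =
		container_of(head, struct vhost_net_example, rcu);

	kvfree(n);
}

static void vhost_net_release_free(struct vhost_net_example *n)
{
	/*
	 * Instead of blocking the release path for a full grace period:
	 *	synchronize_rcu();
	 *	kvfree(n);
	 * queue the free to run once the grace period ends:
	 */
	call_rcu(&n->rcu, vhost_net_free_rcu);
}
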
> >
> > I admit the release code is somewhat hard to understand. But I wonder
> > if the following case can still happen with this:
> >
> > CPU 0 (vhost_dev_cleanup)      CPU 1 (vhost_net_zerocopy_callback()->vhost_work_queue())
> >                                    if (!dev->worker)
> > dev->worker = NULL
> >
> >                                    wake_up_process(dev->worker)
> >
> > If this is true, it seems the fix is to move the RCU synchronization
> > into vhost_net_ubuf_put_and_wait()?
> >
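
To make the window above concrete, here is a rough sketch of one way the
queueing side and the cleanup side can be ordered with RCU so that a late
wakeup never dereferences a stale worker pointer. The __rcu annotation and
the example_* names are illustrative only, not the current vhost code:

#include <linux/kthread.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>

struct vhost_dev_example {
	struct task_struct __rcu *worker;
	/* ... */
};

/* Queueing side, may run concurrently with cleanup
 * (e.g. from the zerocopy callback path). */
static void example_work_queue(struct vhost_dev_example *dev)
{
	struct task_struct *worker;

	rcu_read_lock();
	worker = rcu_dereference(dev->worker);
	if (worker)
		wake_up_process(worker);	/* still valid inside the read section */
	rcu_read_unlock();
}

/* Cleanup side. */
static void example_dev_cleanup(struct vhost_dev_example *dev)
{
	struct task_struct *worker =
		rcu_dereference_protected(dev->worker, 1);

	RCU_INIT_POINTER(dev->worker, NULL);	/* new readers see NULL */
	synchronize_rcu();			/* wait out readers that already hold the pointer */
	if (worker)
		kthread_stop(worker);		/* safe: nobody can wake it any more */
}
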
>
> It all depends on whether vhost_zerocopy_callback() can be called outside of vhost
> thread context or not. If it can run after the vhost thread has stopped, then the race you
> describe seems possible and the fix in commit b0c057ca7e83 ("vhost: fix a theoretical race in device cleanup")
> wasn't complete. I would fix it by calling synchronize_rcu() after vhost_net_flush()
> and before vhost_dev_cleanup().
>
> As for the performance problem, it can be solved by replacing synchronize_rcu() with synchronize_rcu_expedited().
expedited causes a storm of IPIs though, so it's problematic to
do it upon a userspace syscall.
> But now I'm not sure that this race actually exists or that synchronize_rcu() is needed at all.
> I did a bit of testing and I only see the callback being called from the vhost thread:
>
> vhost-3724 3733 [002] 2701.768731: probe:vhost_zerocopy_callback: (ffffffff81af8c10)
> ffffffff81af8c11 vhost_zerocopy_callback+0x1 ([kernel.kallsyms])
> ffffffff81bb34f6 skb_copy_ubufs+0x256 ([kernel.kallsyms])
> ffffffff81bce621 __netif_receive_skb_core.constprop.0+0xac1 ([kernel.kallsyms])
> ffffffff81bd062d __netif_receive_skb_one_core+0x3d ([kernel.kallsyms])
> ffffffff81bd0748 netif_receive_skb+0x38 ([kernel.kallsyms])
> ffffffff819a2a1e tun_get_user+0xdce ([kernel.kallsyms])
> ffffffff819a2cf4 tun_sendmsg+0xa4 ([kernel.kallsyms])
> ffffffff81af9229 handle_tx_zerocopy+0x149 ([kernel.kallsyms])
> ffffffff81afaf05 handle_tx+0xc5 ([kernel.kallsyms])
> ffffffff81afce86 vhost_worker+0x76 ([kernel.kallsyms])
> ffffffff811581e9 kthread+0x169 ([kernel.kallsyms])
> ffffffff810018cf ret_from_fork+0x1f ([kernel.kallsyms])
> 0 [unknown] ([unknown])
>
> This means that the callback can't run after kthread_stop() in vhost_dev_cleanup(), and no synchronize_rcu() is needed.
>
> I'm not confident that my quite limited testing covers all possible vhost_zerocopy_callback() call stacks.