From: Tiwei Bie <tiwei.bie@intel.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: jasowang@redhat.com, alex.williamson@redhat.com,
pbonzini@redhat.com, stefanha@redhat.com, qemu-devel@nongnu.org,
virtio-dev@lists.oasis-open.org, cunming.liang@intel.com,
dan.daly@intel.com, jianfeng.tan@intel.com,
zhihong.wang@intel.com, xiao.w.wang@intel.com
Subject: Re: [Qemu-devel] [PATCH v3 2/6] vhost-user: introduce shared vhost-user state
Date: Thu, 24 May 2018 10:24:40 +0800
Message-ID: <20180524022440.GA20792@debian>
In-Reply-To: <20180523232101.GB15604@debian>
On Thu, May 24, 2018 at 07:21:01AM +0800, Tiwei Bie wrote:
> On Wed, May 23, 2018 at 06:43:29PM +0300, Michael S. Tsirkin wrote:
> > On Wed, May 23, 2018 at 06:36:05PM +0300, Michael S. Tsirkin wrote:
> > > On Wed, May 23, 2018 at 04:44:51PM +0300, Michael S. Tsirkin wrote:
> > > > On Thu, Apr 12, 2018 at 11:12:28PM +0800, Tiwei Bie wrote:
> > > > > When multi-queue is enabled, e.g. for a virtio-net device,
> > > > > each queue pair will have a vhost_dev, and the only thing
> > > > > currently shared between vhost devs is the chardev. This
> > > > > patch introduces a vhost-user state structure which will
> > > > > be shared by all vhost devs of the same virtio device.
> > > > >
> > > > > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > > >
> > > > Unfortunately this patch seems to cause crashes.
> > > > To reproduce, simply run
> > > > make check-qtest-x86_64
> > > >
> > > > Sorry that it took me a while to find - it triggers in 90% of runs but
> > > > not 100%, which complicates bisection somewhat.
>
> It's my fault for not noticing this bug earlier.
> I'm very sorry. Thank you so much for finding
> the root cause!
>
> > > >
> > > > > ---
> > > > >  backends/cryptodev-vhost-user.c     | 20 ++++++++++++++++++-
> > > > >  hw/block/vhost-user-blk.c           | 22 +++++++++++++++++++-
> > > > >  hw/scsi/vhost-user-scsi.c           | 20 ++++++++++++++++++-
> > > > >  hw/virtio/Makefile.objs             |  2 +-
> > > > >  hw/virtio/vhost-stub.c              | 10 ++++++++++
> > > > >  hw/virtio/vhost-user.c              | 31 +++++++++++++++++++---------
> > > > >  include/hw/virtio/vhost-user-blk.h  |  2 ++
> > > > >  include/hw/virtio/vhost-user-scsi.h |  2 ++
> > > > >  include/hw/virtio/vhost-user.h      | 20 +++++++++++++++++++
> > > > >  net/vhost-user.c                    | 40 ++++++++++++++++++++++++++++++-------
> > > > >  10 files changed, 149 insertions(+), 20 deletions(-)
> > > > >  create mode 100644 include/hw/virtio/vhost-user.h
> [...]
> > > > >      qemu_chr_fe_set_handlers(&s->chr, NULL, NULL,
> > > > >                               net_vhost_user_event, NULL, nc0->name, NULL,
> > > > > @@ -319,6 +336,15 @@ static int net_vhost_user_init(NetClientState *peer, const char *device,
> > > > >      assert(s->vhost_net);
> > > > > 
> > > > >      return 0;
> > > > > +
> > > > > +err:
> > > > > +    if (user) {
> > > > > +        vhost_user_cleanup(user);
> > > > > +        g_free(user);
> > > > > +        s->vhost_user = NULL;
> > > > > +    }
> > > > > +
> > > > > +    return -1;
> > > > >  }
> > > > >
> > > > > static Chardev *net_vhost_claim_chardev(
> > > > > --
> > > > > 2.11.0
> > >
> > > So far I figured out that commenting out the free of
> > > the structure removes the crash, so we seem to
> > > be dealing with a use-after-free here.
> > > I suspect that in an MQ situation, one queue gets
> > > closed and attempts to free the structure
> > > while others are still using it.
> > >
> > > diff --git a/net/vhost-user.c b/net/vhost-user.c
> > > index 525a061..6a1573b 100644
> > > --- a/net/vhost-user.c
> > > +++ b/net/vhost-user.c
> > > @@ -157,8 +157,8 @@ static void net_vhost_user_cleanup(NetClientState *nc)
> > >          s->vhost_net = NULL;
> > >      }
> > >      if (s->vhost_user) {
> > > -        vhost_user_cleanup(s->vhost_user);
> > > -        g_free(s->vhost_user);
> > > +        //vhost_user_cleanup(s->vhost_user);
> > > +        //g_free(s->vhost_user);
> > >          s->vhost_user = NULL;
> > >      }
> > >      if (nc->queue_index == 0) {
> > > @@ -339,8 +339,8 @@ static int net_vhost_user_init(NetClientState *peer, const char *device,
> > > 
> > >  err:
> > >      if (user) {
> > > -        vhost_user_cleanup(user);
> > > -        g_free(user);
> > > +        //vhost_user_cleanup(user);
> > > +        //g_free(user);
> > >          s->vhost_user = NULL;
> > >      }
> > > 
> >
> >
> > So the following at least gets rid of the crashes.
> > I am not sure it does not leak memory though,
> > and not sure there aren't any configurations where
> > the 1st queue gets cleaned up first.
> >
> > Thoughts?
>
> Thank you so much for catching and fixing it!
> I'll keep your SoB there. I really appreciate it!
>
> You are right. This structure is freed multiple
> times when multi-queue is enabled.
After digging deeper, I get your point now. It could be
a use-after-free rather than a double free. Since it's
safe to deinit the chardev, which is shared by all queue
pairs, when cleaning up the 1st queue pair, it should be
safe to free the vhost-user structure there too.
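
To make the lifetime pattern easier to see, here is a minimal
standalone sketch (plain C, not QEMU code; the structure and
function names are made up for illustration): all per-queue-pair
states point at one shared object, and only the cleanup of queue
pair 0 may free it, just like the chardev:

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the shared VhostUserState: one per virtio device. */
struct shared_state {
    int dummy;
};

/* Stand-in for the per-queue-pair NetVhostUserState. */
struct queue_state {
    int queue_index;
    struct shared_state *shared; /* same object for all queue pairs */
};

/*
 * Buggy pattern: every queue pair frees the shared object in its own
 * cleanup, so whichever cleanup runs second frees it again (double
 * free) or touches freed memory (use-after-free).
 */
static void cleanup_buggy(struct queue_state *q)
{
    free(q->shared);
    q->shared = NULL;
}

/*
 * Fixed pattern: only queue pair 0 frees the shared object, mirroring
 * how the shared chardev is only deinitialized when queue pair 0 is
 * cleaned up.
 */
static void cleanup_fixed(struct queue_state *q)
{
    if (q->queue_index == 0) {
        free(q->shared);
    }
    q->shared = NULL; /* other queue pairs just drop their reference */
}

int main(void)
{
    struct shared_state *shared = calloc(1, sizeof(*shared));
    struct queue_state q[2] = {
        { .queue_index = 0, .shared = shared },
        { .queue_index = 1, .shared = shared },
    };

    /* cleanup_buggy(&q[0]); cleanup_buggy(&q[1]); would double free */
    cleanup_fixed(&q[0]);
    cleanup_fixed(&q[1]);

    printf("shared state freed exactly once\n");
    return 0;
}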
>
> I think it's safe to let the first queue pair
> free the vhost-user structure, because it won't
> be touched by other queue pairs during cleanup.
>
> Best regards,
> Tiwei Bie
>
>
> >
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> >
> > ---
> >
> > diff --git a/net/vhost-user.c b/net/vhost-user.c
> > index 525a061..7549d25 100644
> > --- a/net/vhost-user.c
> > +++ b/net/vhost-user.c
> > @@ -156,19 +156,20 @@ static void net_vhost_user_cleanup(NetClientState *nc)
> >          g_free(s->vhost_net);
> >          s->vhost_net = NULL;
> >      }
> > -    if (s->vhost_user) {
> > -        vhost_user_cleanup(s->vhost_user);
> > -        g_free(s->vhost_user);
> > -        s->vhost_user = NULL;
> > -    }
> >      if (nc->queue_index == 0) {
> >          if (s->watch) {
> >              g_source_remove(s->watch);
> >              s->watch = 0;
> >          }
> >          qemu_chr_fe_deinit(&s->chr, true);
> > +        if (s->vhost_user) {
> > +            vhost_user_cleanup(s->vhost_user);
> > +            g_free(s->vhost_user);
> > +        }
> >      }
> > 
> > +    s->vhost_user = NULL;
Maybe we should move the above line inside the
"if (nc->queue_index == 0)" block, like this (a full sketch of
the resulting function follows the quoted diff below):

     if (nc->queue_index == 0) {
         if (s->watch) {
             g_source_remove(s->watch);
             s->watch = 0;
         }
         qemu_chr_fe_deinit(&s->chr, true);
+        if (s->vhost_user) {
+            vhost_user_cleanup(s->vhost_user);
+            g_free(s->vhost_user);
+            s->vhost_user = NULL;
+        }
     }

Otherwise s->vhost_user may not be freed.
> > +
> >      qemu_purge_queued_packets(nc);
> >  }
> >
> > @@ -341,7 +342,6 @@ err:
> >      if (user) {
> >          vhost_user_cleanup(user);
> >          g_free(user);
> > -        s->vhost_user = NULL;
I don't get why we can't zero it in this case.
> >      }
> > 
> >      return -1;
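
Just so we are looking at the same end result: with your patch
plus the change I suggested above, net_vhost_user_cleanup() would
look roughly like this. This is reconstructed from the diff
context; the opening lines are from my memory of net/vhost-user.c,
so treat it as an untested sketch:

static void net_vhost_user_cleanup(NetClientState *nc)
{
    NetVhostUserState *s = DO_UPCAST(NetVhostUserState, nc, nc);

    if (s->vhost_net) {
        g_free(s->vhost_net);
        s->vhost_net = NULL;
    }
    if (nc->queue_index == 0) {
        if (s->watch) {
            g_source_remove(s->watch);
            s->watch = 0;
        }
        qemu_chr_fe_deinit(&s->chr, true);
        /* Free the shared state only once, from queue pair 0. */
        if (s->vhost_user) {
            vhost_user_cleanup(s->vhost_user);
            g_free(s->vhost_user);
            s->vhost_user = NULL;
        }
    }

    qemu_purge_queued_packets(nc);
}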
Best regards,
Tiwei Bie