From: "Michael S. Tsirkin" <mst@redhat.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: pbonzini@redhat.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH for-2.4 v3 1/3] vhost: add vhost_has_free_slot() interface
Date: Thu, 30 Jul 2015 18:51:59 +0300
Message-ID: <20150730184727-mutt-send-email-mst@redhat.com>
In-Reply-To: <20150730174533.29a9b81f@nial.brq.redhat.com>

On Thu, Jul 30, 2015 at 05:45:33PM +0200, Igor Mammedov wrote:
> On Thu, 30 Jul 2015 18:28:05 +0300
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > On Thu, Jul 30, 2015 at 12:11:57PM +0200, Igor Mammedov wrote:
> > > It will allow other parts of QEMU to check whether it's safe
> > > to map a memory region during hotplug/runtime.
> > > That way the hotplug path will have a chance to cancel the
> > > hotplug operation instead of crashing in vhost_commit().
> > > 
> > > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > > ---
> > > v3:
> > >   * add refcounting of the limit by the number of vhost devices,
> > >     and make vhost_has_free_slot() return true if no vhost devices
> > >     are currently present
> > >   * move limit initialization to vhost_dev_init()
> > > v2:
> > >   * replace probing with checking for
> > >     /sys/module/vhost/parameters/max_mem_regions and,
> > >     if it's missing or has a wrong value, return the
> > >     hardcoded legacy limit (64 slots).
> > > ---
> > >  hw/virtio/vhost-backend.c         | 21 ++++++++++++++++++++-
> > >  hw/virtio/vhost-user.c            |  8 +++++++-
> > >  hw/virtio/vhost.c                 | 23 +++++++++++++++++++++++
> > >  include/hw/virtio/vhost-backend.h |  2 ++
> > >  include/hw/virtio/vhost.h         |  1 +
> > >  stubs/Makefile.objs               |  1 +
> > >  stubs/vhost.c                     |  6 ++++++
> > >  7 files changed, 60 insertions(+), 2 deletions(-)
> > >  create mode 100644 stubs/vhost.c
> > > 
> > > diff --git a/hw/virtio/vhost-backend.c b/hw/virtio/vhost-backend.c
> > > index 4d68a27..11ec669 100644
> > > --- a/hw/virtio/vhost-backend.c
> > > +++ b/hw/virtio/vhost-backend.c
> > > @@ -11,6 +11,7 @@
> > >  #include "hw/virtio/vhost.h"
> > >  #include "hw/virtio/vhost-backend.h"
> > >  #include "qemu/error-report.h"
> > > +#include "linux/vhost.h"
> > >  
> > >  #include <sys/ioctl.h>
> > >  
> > > @@ -42,11 +43,29 @@ static int vhost_kernel_cleanup(struct vhost_dev *dev)
> > >      return close(fd);
> > >  }
> > >  
> > > +static int vhost_kernel_memslots_limit(struct vhost_dev *dev)
> > > +{
> > > +    int limit = 64;
> > > +    char *s;
> > > +
> > > +    if (g_file_get_contents("/sys/module/vhost/parameters/max_mem_regions",
> > > +                            &s, NULL, NULL)) {
> > > +        uint64_t val = g_ascii_strtoull(s, NULL, 10);
> > > +        if (!((val == G_MAXUINT64 || !val) && errno)) {
> > > +            return val;
> > > +        }
> > > +        error_report("ignoring invalid max_mem_regions value in vhost module:"
> > > +                     " %s", s);
> > > +    }
> > > +    return limit;
> > > +}
> > > +
> > >  static const VhostOps kernel_ops = {
> > >          .backend_type = VHOST_BACKEND_TYPE_KERNEL,
> > >          .vhost_call = vhost_kernel_call,
> > >          .vhost_backend_init = vhost_kernel_init,
> > > -        .vhost_backend_cleanup = vhost_kernel_cleanup
> > > +        .vhost_backend_cleanup = vhost_kernel_cleanup,
> > > +        .vhost_backend_memslots_limit = vhost_kernel_memslots_limit
> > >  };
> > >  
> > >  int vhost_set_backend_type(struct vhost_dev *dev, VhostBackendType backend_type)
> > > diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > > index e7ab829..acdfd04 100644
> > > --- a/hw/virtio/vhost-user.c
> > > +++ b/hw/virtio/vhost-user.c
> > > @@ -343,9 +343,15 @@ static int vhost_user_cleanup(struct vhost_dev *dev)
> > >      return 0;
> > >  }
> > >  
> > > +static int vhost_user_memslots_limit(struct vhost_dev *dev)
> > > +{
> > > +    return VHOST_MEMORY_MAX_NREGIONS;
> > > +}
> > > +
> > >  const VhostOps user_ops = {
> > >          .backend_type = VHOST_BACKEND_TYPE_USER,
> > >          .vhost_call = vhost_user_call,
> > >          .vhost_backend_init = vhost_user_init,
> > > -        .vhost_backend_cleanup = vhost_user_cleanup
> > > +        .vhost_backend_cleanup = vhost_user_cleanup,
> > > +        .vhost_backend_memslots_limit = vhost_user_memslots_limit
> > >          };
> > > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > > index 2712c6f..bcbad48 100644
> > > --- a/hw/virtio/vhost.c
> > > +++ b/hw/virtio/vhost.c
> > > @@ -26,6 +26,18 @@
> > >  
> > >  static struct vhost_log *vhost_log;
> > >  
> > > +static struct {
> > > +    int used_memslots;
> > > +    int memslots_limit;
> > > +    int refcount;
> > > +} slots_limit;
> > > +
> > > +bool vhost_has_free_slot(void)
> > > +{
> > > +    return slots_limit.refcount ?
> > > +        slots_limit.memslots_limit > slots_limit.used_memslots : true;
> > > +}
> > > +
> > >  static void vhost_dev_sync_region(struct vhost_dev *dev,
> > >                                    MemoryRegionSection *section,
> > >                                    uint64_t mfirst, uint64_t mlast,
> > > @@ -457,6 +469,7 @@ static void vhost_set_memory(MemoryListener *listener,
> > >      dev->mem_changed_start_addr = MIN(dev->mem_changed_start_addr, start_addr);
> > >      dev->mem_changed_end_addr = MAX(dev->mem_changed_end_addr, start_addr + size - 1);
> > >      dev->memory_changed = true;
> > > +    slots_limit.used_memslots = dev->mem->nregions;
> > >  }
> > >  
> > >  static bool vhost_section(MemoryRegionSection *section)
> > > @@ -916,6 +929,14 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
> > >          return -errno;
> > >      }
> > >  
> > > +    r = hdev->vhost_ops->vhost_backend_memslots_limit(hdev);
> > > +    if (slots_limit.refcount > 0) {
> > > +        slots_limit.memslots_limit = MIN(slots_limit.memslots_limit, r);
> > > +    } else {
> > > +        slots_limit.memslots_limit = r;
> > > +    }
> > > +    slots_limit.refcount++;
> > > +
> > >      r = hdev->vhost_ops->vhost_call(hdev, VHOST_SET_OWNER, NULL);
> > >      if (r < 0) {
> > >          goto fail;
> > > @@ -972,6 +993,7 @@ fail_vq:
> > >  fail:
> > >      r = -errno;
> > >      hdev->vhost_ops->vhost_backend_cleanup(hdev);
> > > +    slots_limit.refcount--;
> > >      return r;
> > >  }
> > >  
> > > @@ -989,6 +1011,7 @@ void vhost_dev_cleanup(struct vhost_dev *hdev)
> > >      g_free(hdev->mem);
> > >      g_free(hdev->mem_sections);
> > >      hdev->vhost_ops->vhost_backend_cleanup(hdev);
> > > +    slots_limit.refcount--;
> > >  }
> > >  
> > >  /* Stop processing guest IO notifications in qemu.
> > 
> > This is not enough in case one is mixing vhost user and vhost kernel devices.
> > One has to scan all devices.
> Why isn't it enough?
> Both vhost-user and vhost-kernel backends call
>  - vhost_dev_init() on the init path, so slots_limit.refcount++
>    should count all added backends
>  - vhost_dev_cleanup() on the deinit path, so slots_limit.refcount--
>    should take care of counting unplugged devices

Because if one tweaks the module parameter, the limit for vhost-user can be
lower than for vhost-kernel.  Once we remove the vhost-user device, the limit
should go back up even if refcount > 0.
The same applies when the vhost-kernel limit is lower than the vhost-user one.
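
Roughly what I mean (untested sketch; it assumes each vhost_dev caches the
limit reported by its backend in a hypothetical memslots_limit field, gets a
QLIST_ENTRY link called "entry", and is added to a global vhost_devices list
in vhost_dev_init(); none of that exists in this patch):

    static QLIST_HEAD(, vhost_dev) vhost_devices =
        QLIST_HEAD_INITIALIZER(vhost_devices);

    bool vhost_has_free_slot(void)
    {
        struct vhost_dev *hdev;

        /* Recompute on every call instead of caching a global minimum,
         * so unplugging the device with the lowest limit raises the
         * effective limit again.
         */
        QLIST_FOREACH(hdev, &vhost_devices, entry) {
            if (hdev->mem->nregions >= hdev->memslots_limit) {
                return false;
            }
        }
        return true;
    }

Then there is no global refcount or cached minimum that can get stale.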

> > 
> > 
> > > diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
> > > index e472f29..28b6714 100644
> > > --- a/include/hw/virtio/vhost-backend.h
> > > +++ b/include/hw/virtio/vhost-backend.h
> > > @@ -24,12 +24,14 @@ typedef int (*vhost_call)(struct vhost_dev *dev, unsigned long int request,
> > >               void *arg);
> > >  typedef int (*vhost_backend_init)(struct vhost_dev *dev, void *opaque);
> > >  typedef int (*vhost_backend_cleanup)(struct vhost_dev *dev);
> > > +typedef int (*vhost_backend_memslots_limit)(struct vhost_dev *dev);
> > >  
> > >  typedef struct VhostOps {
> > >      VhostBackendType backend_type;
> > >      vhost_call vhost_call;
> > >      vhost_backend_init vhost_backend_init;
> > >      vhost_backend_cleanup vhost_backend_cleanup;
> > > +    vhost_backend_memslots_limit vhost_backend_memslots_limit;
> > >  } VhostOps;
> > >  
> > >  extern const VhostOps user_ops;
> > > diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> > > index dd51050..17ff7b6 100644
> > > --- a/include/hw/virtio/vhost.h
> > > +++ b/include/hw/virtio/vhost.h
> > > @@ -81,4 +81,5 @@ uint64_t vhost_get_features(struct vhost_dev *hdev, const int *feature_bits,
> > >                              uint64_t features);
> > >  void vhost_ack_features(struct vhost_dev *hdev, const int *feature_bits,
> > >                          uint64_t features);
> > > +bool vhost_has_free_slot(void);
> > >  #endif
> > > diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> > > index 9937a12..d2f1b21 100644
> > > --- a/stubs/Makefile.objs
> > > +++ b/stubs/Makefile.objs
> > > @@ -38,3 +38,4 @@ stub-obj-$(CONFIG_WIN32) += fd-register.o
> > >  stub-obj-y += cpus.o
> > >  stub-obj-y += kvm.o
> > >  stub-obj-y += qmp_pc_dimm_device_list.o
> > > +stub-obj-y += vhost.o
> > > diff --git a/stubs/vhost.c b/stubs/vhost.c
> > > new file mode 100644
> > > index 0000000..d346b85
> > > --- /dev/null
> > > +++ b/stubs/vhost.c
> > > @@ -0,0 +1,6 @@
> > > +#include "hw/virtio/vhost.h"
> > > +
> > > +bool vhost_has_free_slot(void)
> > > +{
> > > +    return true;
> > > +}
> > > -- 
> > > 1.8.3.1
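
For reference, the consumer side of this (what patch 2/3 wires into the
pc-dimm plug path) would be roughly the following untested sketch; the exact
placement and error text here are assumptions, not quoted from the series:

    /* before mapping the hotplugged DIMM's memory region */
    if (!vhost_has_free_slot()) {
        error_setg(errp, "a used vhost backend has no free memory slots left");
        return;
    }

so the hotplug request fails cleanly instead of aborting later in
vhost_commit().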


Thread overview: 8+ messages
2015-07-30 10:11 [Qemu-devel] [PATCH for-2.4 v3 0/3] vhost: check if vhost has capacity for hotplugged memory Igor Mammedov
2015-07-30 10:11 ` [Qemu-devel] [PATCH for-2.4 v3 1/3] vhost: add vhost_has_free_slot() interface Igor Mammedov
2015-07-30 15:28   ` Michael S. Tsirkin
2015-07-30 15:45     ` Igor Mammedov
2015-07-30 15:51       ` Michael S. Tsirkin [this message]
2015-07-30 10:11 ` [Qemu-devel] [PATCH for-2.4 v3 2/3] pc-dimm: add vhost slots limit check before commiting to hotplug Igor Mammedov
2015-07-30 10:11 ` [Qemu-devel] [PATCH for-2.4 v3 3/3] vhost: fail backend intialization if memslots number is more than its supported limit Igor Mammedov
2015-07-30 15:29   ` Michael S. Tsirkin
