Date: Fri, 30 Oct 2020 04:39:24 -0400
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jiajun Chen <chenjiajun8@huawei.com>
Cc: raphael.s.norwitz@gmail.com, zhang.zhanghailiang@huawei.com,
    jasowang@redhat.com, qemu-devel@nongnu.org, xiexiangyou@huawei.com,
    imammedo@redhat.com, marcandre.lureau@redhat.com
Subject: Re: [PATCH] vhost-user: add separate memslot counter for vhost-user
Message-ID: <20201030043826-mutt-send-email-mst@kernel.org>
In-Reply-To: <20200928131731.69684-1-chenjiajun8@huawei.com>
References: <20200928131731.69684-1-chenjiajun8@huawei.com>

On Mon, Sep 28, 2020 at 09:17:31PM +0800, Jiajun Chen wrote:
> used_memslots is currently equal to dev->mem->nregions; that is
> correct for vhost-kernel, but not for vhost-user, which only uses
> the memory regions that have a file descriptor. In fact, not all
> of the memory regions have a file descriptor.
> This matters in some scenarios: e.g. if used_memslots is 8 but only
> 5 memory slots are actually used by vhost-user, hot-plugging new
> memory RAM fails because vhost_has_free_slot() returns false, even
> though the hot-plug would in fact be safe.

At this point I dropped this. If you are going to resubmit, please
include data on the QEMU invocation that manifests the problem.
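Something along the lines of the sketch below is what I mean: a guest
started with a vhost-user netdev and a small hotpluggable-memory
budget, plus the monitor commands used to hot-plug DIMMs until the
memslot check trips. The socket path, object IDs, and sizes here are
made up purely for illustration, not taken from a bug report:

    qemu-system-x86_64 -machine q35,accel=kvm \
        -m 1G,slots=8,maxmem=9G \
        -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/shm,share=on \
        -numa node,memdev=mem0 \
        -chardev socket,id=chr0,path=/tmp/vhost-user.sock \
        -netdev vhost-user,id=net0,chardev=chr0 \
        -device virtio-net-pci,netdev=net0

    # then hot-plug memory from the monitor until the limit is hit:
    (qemu) object_add memory-backend-file,id=mem1,size=1G,mem-path=/dev/shm,share=on
    (qemu) device_add pc-dimm,id=dimm1,memdev=mem1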
> --
> ChangeList:
> v3:
> -make used_memslots a member of struct vhost_dev instead of a global static value
>
> v2:
> -eliminating useless used_memslots_exceeded variable and used_memslots_is_exceeded() API
>
> v1:
> -vhost-user: add separate memslot counter for vhost-user
>
> Signed-off-by: Jiajun Chen <chenjiajun8@huawei.com>
> Signed-off-by: Jianjay Zhou
> ---
>  hw/virtio/vhost-backend.c         | 12 ++++++++++
>  hw/virtio/vhost-user.c            | 25 +++++++++++++++++++++
>  hw/virtio/vhost.c                 | 37 +++++++++++++++++++++++--------
>  include/hw/virtio/vhost-backend.h |  5 +++++
>  include/hw/virtio/vhost.h         |  1 +
>  net/vhost-user.c                  |  7 ++++++
>  6 files changed, 78 insertions(+), 9 deletions(-)
>
> diff --git a/hw/virtio/vhost-backend.c b/hw/virtio/vhost-backend.c
> index 782b1d67d9..7016f23ec5 100644
> --- a/hw/virtio/vhost-backend.c
> +++ b/hw/virtio/vhost-backend.c
> @@ -238,6 +238,16 @@ static void vhost_kernel_set_iotlb_callback(struct vhost_dev *dev,
>          qemu_set_fd_handler((uintptr_t)dev->opaque, NULL, NULL, NULL);
>  }
>
> +static void vhost_kernel_set_used_memslots(struct vhost_dev *dev)
> +{
> +    dev->used_memslots = dev->mem->nregions;
> +}
> +
> +static unsigned int vhost_kernel_get_used_memslots(struct vhost_dev *dev)
> +{
> +    return dev->used_memslots;
> +}
> +
>  static const VhostOps kernel_ops = {
>          .backend_type = VHOST_BACKEND_TYPE_KERNEL,
>          .vhost_backend_init = vhost_kernel_init,
> @@ -269,6 +279,8 @@ static const VhostOps kernel_ops = {
>  #endif /* CONFIG_VHOST_VSOCK */
>          .vhost_set_iotlb_callback = vhost_kernel_set_iotlb_callback,
>          .vhost_send_device_iotlb_msg = vhost_kernel_send_device_iotlb_msg,
> +        .vhost_set_used_memslots = vhost_kernel_set_used_memslots,
> +        .vhost_get_used_memslots = vhost_kernel_get_used_memslots,
>  };
>  #endif
>
> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> index 31231218dc..5dea64d8a8 100644
> --- a/hw/virtio/vhost-user.c
> +++ b/hw/virtio/vhost-user.c
> @@ -2354,6 +2354,29 @@ void vhost_user_cleanup(VhostUserState *user)
>      user->chr = NULL;
>  }
>
> +static void vhost_user_set_used_memslots(struct vhost_dev *dev)
> +{
> +    int i;
> +    dev->used_memslots = 0;
> +
> +    for (i = 0; i < dev->mem->nregions; ++i) {
> +        struct vhost_memory_region *reg = dev->mem->regions + i;
> +        ram_addr_t offset;
> +        MemoryRegion *mr;
> +        int fd;
> +
> +        mr = vhost_user_get_mr_data(reg->userspace_addr, &offset, &fd);
> +        if (mr && fd > 0) {
> +            dev->used_memslots++;
> +        }
> +    }
> +}
> +
> +static unsigned int vhost_user_get_used_memslots(struct vhost_dev *dev)
> +{
> +    return dev->used_memslots;
> +}
> +
>  const VhostOps user_ops = {
>          .backend_type = VHOST_BACKEND_TYPE_USER,
>          .vhost_backend_init = vhost_user_backend_init,
> @@ -2387,4 +2410,6 @@ const VhostOps user_ops = {
>          .vhost_backend_mem_section_filter = vhost_user_mem_section_filter,
>          .vhost_get_inflight_fd = vhost_user_get_inflight_fd,
>          .vhost_set_inflight_fd = vhost_user_set_inflight_fd,
> +        .vhost_set_used_memslots = vhost_user_set_used_memslots,
> +        .vhost_get_used_memslots = vhost_user_get_used_memslots,
>  };
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 1a1384e7a6..98b967669b 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -45,20 +45,20 @@
>  static struct vhost_log *vhost_log;
>  static struct vhost_log *vhost_log_shm;
>
> -static unsigned int used_memslots;
>  static QLIST_HEAD(, vhost_dev) vhost_devices =
>      QLIST_HEAD_INITIALIZER(vhost_devices);
>
>  bool vhost_has_free_slot(void)
>  {
> -    unsigned int slots_limit = ~0U;
>      struct vhost_dev *hdev;
>
>      QLIST_FOREACH(hdev, &vhost_devices, entry) {
> -        unsigned int r = hdev->vhost_ops->vhost_backend_memslots_limit(hdev);
> -        slots_limit = MIN(slots_limit, r);
> +        if (hdev->vhost_ops->vhost_get_used_memslots(hdev) >=
> +            hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> +            return false;
> +        }
>      }
> -    return slots_limit > used_memslots;
> +    return true;
>  }
>
>  static void vhost_dev_sync_region(struct vhost_dev *dev,
> @@ -502,7 +502,6 @@ static void vhost_commit(MemoryListener *listener)
>                         dev->n_mem_sections * sizeof dev->mem->regions[0];
>      dev->mem = g_realloc(dev->mem, regions_size);
>      dev->mem->nregions = dev->n_mem_sections;
> -    used_memslots = dev->mem->nregions;
>      for (i = 0; i < dev->n_mem_sections; i++) {
>          struct vhost_memory_region *cur_vmr = dev->mem->regions + i;
>          struct MemoryRegionSection *mrs = dev->mem_sections + i;
> @@ -678,6 +677,7 @@ static void vhost_region_add_section(struct vhost_dev *dev,
>          dev->tmp_sections[dev->n_tmp_sections - 1].fv = NULL;
>          memory_region_ref(section->mr);
>      }
> +    dev->vhost_ops->vhost_set_used_memslots(dev);
>  }
>
>  /* Used for both add and nop callbacks */
> @@ -693,6 +693,17 @@ static void vhost_region_addnop(MemoryListener *listener,
>      vhost_region_add_section(dev, section);
>  }
>
> +static void vhost_region_del(MemoryListener *listener,
> +                             MemoryRegionSection *section)
> +{
> +    struct vhost_dev *dev = container_of(listener, struct vhost_dev,
> +                                         memory_listener);
> +    if (!vhost_section(dev, section)) {
> +        return;
> +    }
> +    dev->vhost_ops->vhost_set_used_memslots(dev);
> +}
> +
>  static void vhost_iommu_unmap_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
>  {
>      struct vhost_iommu *iommu = container_of(n, struct vhost_iommu, n);
> @@ -1300,6 +1311,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
>      hdev->memory_listener = (MemoryListener) {
>          .begin = vhost_begin,
>          .commit = vhost_commit,
> +        .region_del = vhost_region_del,
>          .region_add = vhost_region_addnop,
>          .region_nop = vhost_region_addnop,
>          .log_start = vhost_log_start,
> @@ -1346,9 +1358,16 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
>      memory_listener_register(&hdev->memory_listener, &address_space_memory);
>      QLIST_INSERT_HEAD(&vhost_devices, hdev, entry);
>
> -    if (used_memslots > hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> -        error_report("vhost backend memory slots limit is less"
> -                     " than current number of present memory slots");
> +    /*
> +     * If we started the VM without any vhost device, then for the
> +     * first vhost device hot-plug vhost_get_used_memslots() is
> +     * always 0 up to this point, so the limit needs to be
> +     * double-checked here.
> +     */
> +    if (hdev->vhost_ops->vhost_get_used_memslots(hdev) >
> +        hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) {
> +        error_report("vhost backend memory slots limit is less than"
> +                     " current number of present memory slots");
>          r = -1;
>          if (busyloop_timeout) {
>              goto fail_busyloop;
> diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
> index 8825bd278f..6569c95a43 100644
> --- a/include/hw/virtio/vhost-backend.h
> +++ b/include/hw/virtio/vhost-backend.h
> @@ -124,6 +124,9 @@ typedef int (*vhost_get_device_id_op)(struct vhost_dev *dev, uint32_t *dev_id);
>
>  typedef bool (*vhost_force_iommu_op)(struct vhost_dev *dev);
>
> +typedef void (*vhost_set_used_memslots_op)(struct vhost_dev *dev);
> +typedef unsigned int (*vhost_get_used_memslots_op)(struct vhost_dev *dev);
> +
>  typedef struct VhostOps {
>      VhostBackendType backend_type;
>      vhost_backend_init vhost_backend_init;
> @@ -168,6 +171,8 @@ typedef struct VhostOps {
>      vhost_vq_get_addr_op vhost_vq_get_addr;
>      vhost_get_device_id_op vhost_get_device_id;
>      vhost_force_iommu_op vhost_force_iommu;
> +    vhost_set_used_memslots_op vhost_set_used_memslots;
> +    vhost_get_used_memslots_op vhost_get_used_memslots;
>  } VhostOps;
>
>  extern const VhostOps user_ops;
> diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> index 767a95ec0b..5ded21f86d 100644
> --- a/include/hw/virtio/vhost.h
> +++ b/include/hw/virtio/vhost.h
> @@ -90,6 +90,7 @@ struct vhost_dev {
>      QLIST_HEAD(, vhost_iommu) iommu_list;
>      IOMMUNotifier n;
>      const VhostDevConfigOps *config_ops;
> +    unsigned int used_memslots;
>  };
>
>  struct vhost_net {
> diff --git a/net/vhost-user.c b/net/vhost-user.c
> index 17532daaf3..7e93955537 100644
> --- a/net/vhost-user.c
> +++ b/net/vhost-user.c
> @@ -20,6 +20,7 @@
>  #include "qemu/error-report.h"
>  #include "qemu/option.h"
>  #include "trace.h"
> +#include "include/hw/virtio/vhost.h"
>
>  typedef struct NetVhostUserState {
>      NetClientState nc;
> @@ -347,6 +348,12 @@ static int net_vhost_user_init(NetClientState *peer, const char *device,
>          qemu_chr_fe_set_handlers(&s->chr, NULL, NULL,
>                                   net_vhost_user_event, NULL, nc0->name, NULL,
>                                   true);
> +
> +        if (!vhost_has_free_slot()) {
> +            error_report("used memslots exceeded the backend limit, quit "
> +                         "loop");
> +            goto err;
> +        }
>      } while (!s->started);
>
>      assert(s->vhost_net);
> --
> 2.27.0.dirty
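To restate the behavioural change in isolation: before the patch,
vhost_has_free_slot() compared one global used_memslots value against
the smallest limit of any backend; after it, each device is checked
against its own backend's limit, using a count that for vhost-user
includes only fd-backed regions. A minimal standalone sketch of that
per-device check (hypothetical struct, field names, and values for
illustration, not the actual QEMU types):

    #include <stdbool.h>
    #include <stdio.h>

    struct dev {
        unsigned int used;   /* per-backend count: fd-backed regions only
                              * for vhost-user, all regions for vhost-kernel */
        unsigned int limit;  /* that backend's memslot limit */
    };

    /* mirrors the patched vhost_has_free_slot(): every device must be
     * strictly below its own limit for a hot-plug to be allowed */
    static bool has_free_slot(const struct dev *devs, int n)
    {
        for (int i = 0; i < n; i++) {
            if (devs[i].used >= devs[i].limit) {
                return false;
            }
        }
        return true;
    }

    int main(void)
    {
        /* the commit-message scenario: 8 regions total, but only 5 of
         * them fd-backed and thus counted for this vhost-user device,
         * whose backend limit is 8 */
        struct dev devs[] = { { .used = 5, .limit = 8 } };
        printf("free slot: %s\n", has_free_slot(devs, 1) ? "yes" : "no");
        return 0;
    }

With the old global counter this configuration reported no free slot
(8 regions >= limit 8); with the per-device fd-backed count it reports
one, which is the hot-plug failure the commit message describes.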