From: "Zhu, Lingshan" <lingshan.zhu@intel.com>
Date: Wed, 12 Jul 2023 14:54:06 +0800
Subject: Re: [PATCH V2] vhost_vdpa: no need to fetch vring base when poweroff
To: Jason Wang <jasowang@redhat.com>, Eugenio Perez Martin <eperezma@redhat.com>
Cc: mst@redhat.com, qemu-devel@nongnu.org
References: <20230710165333.17506-1-lingshan.zhu@intel.com> <23e1b6fe-2f87-47d3-b66c-71fa30e6421b@intel.com>

On 7/11/2023 3:34 PM, Jason Wang wrote:


On Tue, Jul 11, 2023 at 3:25 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
On Tue, Jul 11, 2023 at 9:05 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Jul 11, 2023 at 12:09 PM Zhu, Lingshan <lingshan.zhu@intel.com> wrote:
> >
> >
> >
> > On 7/11/2023 10:50 AM, Jason Wang wrote:
> > > On Mon, Jul 10, 2023 at 4:53 PM Zhu Lingshan <lingshan.zhu@intel.com> wrote:
> > >> In the poweroff routine, no need to fetch last available index.
> > >>
> > > This is because there's no concept of shutdown in the vhost layer; it
> > > only knows start and stop.
> > >
> > >> This commit also provides a better debug message in the vhost
> > >> caller vhost_virtqueue_stop,
> > > A separate patch is better.
> > OK
> > >
> > >> because if vhost does not fetch
> > >> the last avail idx successfully, maybe the device does not
> > >> suspend, vhost will sync last avail idx to vring used idx as a
> > >> work around, not a failure.
> > > This only happens if we return a negative value?
> > Yes
> > >
> > >> Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
> > >> ---
> > >>   hw/virtio/vhost-vdpa.c | 10 ++++++++++
> > >>   hw/virtio/vhost.c      |  2 +-
> > >>   2 files changed, 11 insertions(+), 1 deletion(-)
> > >>
> > >> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > >> index 3c575a9a6e..10b445f64e 100644
> > >> --- a/hw/virtio/vhost-vdpa.c
> > >> +++ b/hw/virtio/vhost-vdpa.c
> > >> @@ -26,6 +26,7 @@
> > >>   #include "cpu.h"
> > >>   #include "trace.h"
> > >>   #include "qapi/error.h"
> > >> +#include "sysemu/runstate.h"
> > >>
> > >>   /*
> > >>    * Return one past the end of the end of section. Be careful with uint64_t
> > >> @@ -1391,6 +1392,15 @@ static int vhost_vdpa_get_vring_base(struct vhost_dev *dev,
> > >>       struct vhost_vdpa *v = dev->opaque;
> > >>       int ret;
> > >>
> > >> +    if (runstate_check(RUN_STATE_SHUTDOWN)) {
> > >> +        /*
> > >> +         * Some devices do not support this call properly,
> > >> +         * and we don't need to retrieve the indexes
> > >> +         * if it is shutting down
> > >> +         */
> > >> +        return 0;
> > > Checking runstate in the vhost code seems like a layer violation.
> > >
> > > What happens without this patch?
> > vhost tries to fetch the vring base;
> > vhost_vdpa needs to suspend the device before retrieving last_avail_idx.
> > However, not all devices support .suspend properly, so this call
> > may fail.
>
> I think this is where I'm lost. If the device doesn't support
> suspending, any reason we only try to fix the case of shutdown?
>
> Btw, the fail is intended:
>
>     if (!v->suspended) {
>         /*
>          * Cannot trust in value returned by device, let vhost recover used
>          * idx from guest.
>          */
>         return -1;
>     }
>
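
(Context, hedged sketch: the v->suspended flag checked above is set by
the backend's suspend path. Roughly, the backend first checks that the
device advertised the suspend capability and then issues the suspend
ioctl. The VHOST_VDPA_SUSPEND and VHOST_BACKEND_F_SUSPEND names come
from <linux/vhost.h>; the function itself is a simplified illustration,
not the verbatim QEMU code.)

    #include <linux/vhost.h>  /* VHOST_VDPA_SUSPEND, VHOST_BACKEND_F_SUSPEND */
    #include <sys/ioctl.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool vdpa_try_suspend(int device_fd, uint64_t backend_features)
    {
        /* Without the suspend capability the device keeps processing
         * buffers, so a vring base read afterwards is untrustworthy. */
        if (!(backend_features & (1ULL << VHOST_BACKEND_F_SUSPEND))) {
            return false;   /* caller leaves v->suspended unset */
        }
        if (ioctl(device_fd, VHOST_VDPA_SUSPEND)) {
            perror("VHOST_VDPA_SUSPEND");
            return false;   /* caller may fall back to a device reset */
        }
        return true;        /* i.e. v->suspended = true */
    }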

The failure is intended, but recovering the last used idx, either from
the device or from the guest, is only useful in the case of migration.

Note that we have userspace devices like VDUSE now, so it could be useful in the case of a VDUSE daemon crash or reconnect.
This code block is for the vhost_vdpa backend, and I think VDUSE is another code path.
Returning the guest used idx may be a good idea, but as Eugenio pointed out, that may duplicate the code.


I think the main problem is the error message, actually. But I think
there is no use in either recovering last_avail_idx or printing a
debug message if we're not migrating. Another solution would be to
recover it from the guest at vhost_vdpa_get_vring_base, but I don't
like the duplication.
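
(Hedged sketch of that alternative: recovering the index from the
guest inside vhost_vdpa_get_vring_base() itself, assuming it can reuse
the existing virtio_queue_restore_last_avail_idx() helper. This
mirrors the fallback that vhost_virtqueue_stop() already performs,
which is exactly the duplication objected to above.)

    if (!v->suspended) {
        /* Device state is untrusted; resync last_avail_idx from the
         * guest's used ring and report that value instead of -1. */
        virtio_queue_restore_last_avail_idx(dev->vdev, ring->index);
        ring->num = virtio_queue_get_last_avail_idx(dev->vdev,
                                                    ring->index);
        return 0;
    }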

> And if we return success here, will we go on to set an uninitialized
> last avail idx?
>

It will be either the default value (which is set to 0 at
__virtio_queue_reset) or the one received from a migration (at
virtio_load).

0 is even more sub-optimal than the used index. Anyhow, VHOST_DEBUG should not be enabled in production environments.
Returning 0 sounds like a queue reset. Yes, we can reset a queue if we failed to suspend it, but I am not sure whether
0 is better than the guest used idx.

I think we are not able to guarantee VHOST_DEBUG is disabled, because customers can build QEMU on their own.
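
(For reference, a hedged paraphrase of how the message is gated:
VHOST_OPS_DEBUG in hw/virtio/vhost.c only expands to a report when a
debug define is set at build time, so a downstream build controls it.
The sketch below is from memory, not the verbatim macro.)

    #ifdef _VHOST_DEBUG
    #define VHOST_OPS_DEBUG(retval, fmt, ...) \
        error_report(fmt ": %s (%d)", ## __VA_ARGS__, \
                     strerror(-(retval)), -(retval))
    #else
    #define VHOST_OPS_DEBUG(retval, fmt, ...) do { } while (0)
    #endif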

Thanks

Thanks
 

Thanks!

>     r = dev->vhost_ops->vhost_get_vring_base(dev, &state);
>     if (r < 0) {
>     ...
>     } else {
>         virtio_queue_set_last_avail_idx(vdev, idx, state.num);
>     }
>
> Thanks
>
> > Then vhost will print an error showing that something failed.
> >
> > The error msg is confusing: as stated in the commit log, restoring
> > last_avail_idx with the guest used idx
> > is a workaround rather than a failure. And there is no need to fetch last_avail_idx
> > at poweroff.
> >
> > Thanks
> > >
> > > Thanks
> > >
> > >> +    }
> > >> +
> > >>       if (v->shadow_vqs_enabled) {
> > >>           ring->num = virtio_queue_get_last_avail_idx(dev->vdev, ring->index);
> > >>           return 0;
> > >> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > >> index 82394331bf..7dd90cff3a 100644
> > >> --- a/hw/virtio/vhost.c
> > >> +++ b/hw/virtio/vhost.c
> > >> @@ -1262,7 +1262,7 @@ void vhost_virtqueue_stop(struct vhost_dev *dev,
> > >>
> > >>       r = dev->vhost_ops->vhost_get_vring_base(dev, &state);
> > >>       if (r < 0) {
> > >> -        VHOST_OPS_DEBUG(r, "vhost VQ %u ring restore failed: %d", idx, r);
> > >> +        VHOST_OPS_DEBUG(r, "sync last avail idx to the guest used idx for vhost VQ %u", idx);
> > >>           /* Connection to the backend is broken, so let's sync internal
> > >>            * last avail idx to the device used idx.
> > >>            */
> > >> --
> > >> 2.39.3
> > >>
> >
>

