From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <7a268b23-0832-8caf-f792-ee1b389d2b70@intel.com>
Date: Wed, 12 Jul 2023 18:14:13 +0800
From: "Zhu, Lingshan" <lingshan.zhu@intel.com>
To: Jason Wang <jasowang@redhat.com>
Cc: Eugenio Perez Martin <eperezma@redhat.com>, mst@redhat.com, qemu-devel@nongnu.org
Subject: Re: [PATCH V2] vhost_vdpa: no need to fetch vring base when poweroff
References: <20230710165333.17506-1-lingshan.zhu@intel.com> <23e1b6fe-2f87-47d3-b66c-71fa30e6421b@intel.com>

On 7/12/2023 3:22 PM, Jason Wang wrote:


On Wed, Jul 12, 2023 at 2:54 PM Zhu, Lingshan <lingshan.zhu@intel.com> wrote:


On 7/11/2023 3:34 PM, Jason Wang wrote:


On Tue, Jul 11, 2023 at 3:25 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
On Tue, Jul 11, 2023 at 9:05 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Jul 11, 2023 at 12:09 PM Zhu, Lingshan <lingshan.zhu@intel.com> wrote:
> >
> >
> >
> > On 7/11/2023 10:50 AM, Jason Wang wrote:
> > > On Mon, Jul 10, 2023 at 4:53 PM Zhu Lingshan <lingshan.zhu@intel.com> wrote:
> > >> In the poweroff routine, no need to fetch last available index.
> > >>
> > > This is because there's no concept of shutdown in the vhost layer, it
> > > only knows start and stop.
> > >
> > >> This commit also provides a better debug message in the vhost
> > >> caller vhost_virtqueue_stop,
> > > A separate patch is better.
> > OK
> > >
> > >> because if vhost does not fetch
> > >> the last avail idx successfully, the device may not have
> > >> suspended; vhost will sync the last avail idx to the vring used
> > >> idx as a workaround, not a failure.
> > > This only happens if we return a negative value?
> > Yes
> > >
> > >> Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
> > >> ---
> > >>   hw/virtio/vhost-vdpa.c | 10 ++++++++++
> > >>   hw/virtio/vhost.c      |  2 +-
> > >>   2 files changed, 11 insertions(+), 1 deletion(-)
> > >>
> > >> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > >> index 3c575a9a6e..10b445f64e 100644
> > >> --- a/hw/virtio/vhost-vdpa.c
> > >> +++ b/hw/virtio/vhost-vdpa.c
> > >> @@ -26,6 +26,7 @@
> > >>   #include "cpu.h"
> > >>   #include "trace.h"
> > >>   #include "qapi/error.h"
> > >> +#include "sysemu/runstate.h"
> > >>
> > >>   /*
> > >>    * Return one past the end of the end of section. Be careful with uint64_t
> > >> @@ -1391,6 +1392,15 @@ static int vhost_vdpa_get_vring_base(struct vhost_dev *dev,
> > >>       struct vhost_vdpa *v = dev->opaque;
> > >>       int ret;
> > >>
> > >> +    if (runstate_check(RUN_STATE_SHUTDOWN)) {
> > >> +        /*
> > >> +         * Some devices do not support this call properly,
> > >> +         * and we don't need to retrieve the indexes
> > >> +         * if it is shutting down
> > >> +         */
> > >> +        return 0;
> > > Checking runstate in the vhost code seems like a layer violation.
> > >
> > > What happens without this patch?
> > vhost tries to fetch the vring base;
> > vhost_vdpa needs to suspend the device before retrieving last_avail_idx.
> > However, not all devices support .suspend properly, so this call
> > may fail.
>
> I think this is where I'm lost. If the device doesn't support
> suspending, is there any reason we only try to fix the shutdown case?
>
> Btw, the fail is intended:
>
>     if (!v->suspended) {
>         /*
>          * Cannot trust in value returned by device, let vhost recover used
>          * idx from guest.
>          */
>         return -1;
>     }
>

The failure is intended, but recovering the last used idx, either from
the device or from the guest, is only useful in the case of migration.

Note that we have userspace devices like VDUSE now, so it could be useful in the case of a VDUSE daemon crash or reconnect.
This code block is for the vhost_vdpa backend, and I think VDUSE is another code path.

I'm not sure I understand here; I meant vhost_vdpa + VDUSE. It works similarly to vhost-user.
OK, so do you suggest we set the vring state to 0 and return 0 if we fail to suspend the device,
regardless of whether it is a shutdown or another case?
 
Returning the guest used idx may be a good idea, but as Eugenio pointed out, that may duplicate the code.


I think the main problem is the error message, actually. But I think
there is no use in either recovering last_avail_idx or printing a debug
message if we're not migrating. Another solution would be to recover
it from the guest at vhost_vdpa_get_vring_base, but I don't like the
duplication.

> And if we return success here, will we go on to set an uninitialized
> last avail idx?
>

It will be either the default value (set to 0 at
__virtio_queue_reset) or the one received from a migration (at
virtio_load).

0 is even more sub-optimal than the used index. Anyhow, VHOST_DEBUG should not be enabled in production environments.
Returning 0 sounds like a queue reset. Yes, we can reset a queue if we fail to suspend it, but I am not sure
whether 0 is better than the guest used idx.
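
As a tiny self-contained model of the two recovery candidates being compared (the types and fields below are illustrative stand-ins, not QEMU's real vring structures):

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy model of a vring; fields are illustrative only. */
struct toy_vring {
    uint16_t used_idx;       /* what the guest has seen consumed */
    uint16_t last_avail_idx; /* what the device claims it reached */
};

/*
 * Two candidate values to restore when the device could not be
 * suspended and its reported index cannot be trusted:
 * - 0 behaves like a queue reset and drops all progress;
 * - the guest used idx treats everything the guest has already seen
 *   as processed, losing only in-flight descriptors.
 */
static uint16_t recover_idx(const struct toy_vring *vr, bool reset_to_zero)
{
    return reset_to_zero ? 0 : vr->used_idx;
}
```

This is only a sketch of the trade-off, not the actual vhost-vdpa code path.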

I think we are not able to disable VHOST_DEBUG because customers can build QEMU on their own.

Well, disabling debug information is a common practice in any distribution.

Or if you worry about the default, let's have a patch to undef VHOST_DEBUG by default.

I can do this in the next version

Thanks
Thanks
 

Thanks

Thanks
 

Thanks!

>     r = dev->vhost_ops->vhost_get_vring_base(dev, &state);
>     if (r < 0) {
>     ...
>     } else {
>         virtio_queue_set_last_avail_idx(vdev, idx, state.num);
>     }
>
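As an inline aside, the caller pattern in the snippet above could be modeled end-to-end like this (a sketch with stand-in names, not QEMU's real signatures):

```c
#include <stdint.h>

/* Stand-ins for the real virtio state; illustrative only. */
static uint16_t g_last_avail_idx;
static const uint16_t g_guest_used_idx = 7;

static void set_last_avail_idx(uint16_t v)
{
    g_last_avail_idx = v;
}

/*
 * Caller pattern from the quoted suggestion: trust state.num only when
 * the backend call succeeded; on failure, sync the internal last avail
 * idx from the guest used idx (the existing workaround).
 */
static void stop_vring(int r, uint16_t state_num)
{
    if (r < 0) {
        set_last_avail_idx(g_guest_used_idx); /* workaround path */
    } else {
        set_last_avail_idx(state_num);        /* trusted device value */
    }
}
```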
> Thanks
>
> > Then vhost will print an error showing that something failed.
> >
> > The error msg is confusing: as stated in the commit log, restoring
> > last_avail_idx with the guest used idx
> > is a workaround rather than a failure. And there is no need to fetch
> > last_avail_idx on poweroff.
> >
> > Thanks
> > >
> > > Thanks
> > >
> > >> +    }
> > >> +
> > >>       if (v->shadow_vqs_enabled) {
> > >>           ring->num = virtio_queue_get_last_avail_idx(dev->vdev, ring->index);
> > >>           return 0;
> > >> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > >> index 82394331bf..7dd90cff3a 100644
> > >> --- a/hw/virtio/vhost.c
> > >> +++ b/hw/virtio/vhost.c
> > >> @@ -1262,7 +1262,7 @@ void vhost_virtqueue_stop(struct vhost_dev *dev,
> > >>
> > >>       r = dev->vhost_ops->vhost_get_vring_base(dev, &state);
> > >>       if (r < 0) {
> > >> -        VHOST_OPS_DEBUG(r, "vhost VQ %u ring restore failed: %d", idx, r);
> > >> +        VHOST_OPS_DEBUG(r, "sync last avail idx to the guest used idx for vhost VQ %u", idx);
> > >>           /* Connection to the backend is broken, so let's sync internal
> > >>            * last avail idx to the device used idx.
> > >>            */
> > >> --
> > >> 2.39.3
> > >>
> >
>


