From: Jason Wang <jasowang@redhat.com>
To: Eugenio Perez Martin <eperezma@redhat.com>
Cc: qemu-devel@nongnu.org, si-wei.liu@oracle.com,
	Liuxiangdong <liuxiangdong5@huawei.com>,
	Zhu Lingshan <lingshan.zhu@intel.com>,
	"Gonglei (Arei)" <arei.gonglei@huawei.com>,
	alvaro.karsz@solid-run.com, Shannon Nelson <snelson@pensando.io>,
	Laurent Vivier <lvivier@redhat.com>,
	Harpreet Singh Anand <hanand@xilinx.com>,
	Gautam Dawar <gdawar@xilinx.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>, Cindy Lu <lulu@redhat.com>,
	Eli Cohen <eli@mellanox.com>, Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Parav Pandit <parav@mellanox.com>
Subject: Re: [RFC v2 02/13] vdpa net: move iova tree creation from init to start
Date: Mon, 16 Jan 2023 11:05:16 +0800
Message-ID: <147cd500-abf3-8d2c-ae7a-02a9672ba61d@redhat.com>
In-Reply-To: <CAJaqyWdioFYZPJT+xfHUxgsZzwVVGKfSbmieuKvhEodgK99c=Q@mail.gmail.com>


On 2023/1/13 15:28, Eugenio Perez Martin wrote:
> On Fri, Jan 13, 2023 at 4:53 AM Jason Wang <jasowang@redhat.com> wrote:
>> On Fri, Jan 13, 2023 at 1:24 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>>> Only create iova_tree if and when it is needed.
>>>
>>> The cleanup remains the responsibility of the last VQ, but this change
>>> allows both cleanup functions to be merged.
>>>
>>> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
>>> ---
>>>   net/vhost-vdpa.c | 101 +++++++++++++++++++++++++++++++++--------------
>>>   1 file changed, 71 insertions(+), 30 deletions(-)
>>>
>>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>>> index de5ed8ff22..75cca497c8 100644
>>> --- a/net/vhost-vdpa.c
>>> +++ b/net/vhost-vdpa.c
>>> @@ -178,13 +178,9 @@ err_init:
>>>   static void vhost_vdpa_cleanup(NetClientState *nc)
>>>   {
>>>       VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
>>> -    struct vhost_dev *dev = &s->vhost_net->dev;
>>>
>>>       qemu_vfree(s->cvq_cmd_out_buffer);
>>>       qemu_vfree(s->status);
>>> -    if (dev->vq_index + dev->nvqs == dev->vq_index_end) {
>>> -        g_clear_pointer(&s->vhost_vdpa.iova_tree, vhost_iova_tree_delete);
>>> -    }
>>>       if (s->vhost_net) {
>>>           vhost_net_cleanup(s->vhost_net);
>>>           g_free(s->vhost_net);
>>> @@ -234,10 +230,64 @@ static ssize_t vhost_vdpa_receive(NetClientState *nc, const uint8_t *buf,
>>>       return size;
>>>   }
>>>
>>> +/** From any vdpa net client, get the netclient of first queue pair */
>>> +static VhostVDPAState *vhost_vdpa_net_first_nc_vdpa(VhostVDPAState *s)
>>> +{
>>> +    NICState *nic = qemu_get_nic(s->nc.peer);
>>> +    NetClientState *nc0 = qemu_get_peer(nic->ncs, 0);
>>> +
>>> +    return DO_UPCAST(VhostVDPAState, nc, nc0);
>>> +}
>>> +
>>> +static void vhost_vdpa_net_data_start_first(VhostVDPAState *s)
>>> +{
>>> +    struct vhost_vdpa *v = &s->vhost_vdpa;
>>> +
>>> +    if (v->shadow_vqs_enabled) {
>>> +        v->iova_tree = vhost_iova_tree_new(v->iova_range.first,
>>> +                                           v->iova_range.last);
>>> +    }
>>> +}
>>> +
>>> +static int vhost_vdpa_net_data_start(NetClientState *nc)
>>> +{
>>> +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
>>> +    struct vhost_vdpa *v = &s->vhost_vdpa;
>>> +
>>> +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
>>> +
>>> +    if (v->index == 0) {
>>> +        vhost_vdpa_net_data_start_first(s);
>>> +        return 0;
>>> +    }
>>> +
>>> +    if (v->shadow_vqs_enabled) {
>>> +        VhostVDPAState *s0 = vhost_vdpa_net_first_nc_vdpa(s);
>>> +        v->iova_tree = s0->vhost_vdpa.iova_tree;
>>> +    }
>> It looks to me like the logic here is more or less the same as
>> vhost_vdpa_net_cvq_start(); can we unify them?
>>
> It depends on what you mean by unify :). But we can explore it for sure.
>
> We can call vhost_vdpa_net_data_start, but the steps to take when
> s0->vhost_vdpa.iova_tree == NULL are different. Data queues must do
> nothing, but CVQ must create a new iova tree.
>
> So one possibility is to convert this part of vhost_vdpa_net_cvq_start:
>      s0 = vhost_vdpa_net_first_nc_vdpa(s);
>      if (s0->vhost_vdpa.iova_tree) {
>          /* SVQ is already configured for all virtqueues */
>          v->iova_tree = s0->vhost_vdpa.iova_tree;
>      } else {
>          v->iova_tree = vhost_iova_tree_new(v->iova_range.first,
>                                             v->iova_range.last);
>      }
>
> into:
>      vhost_vdpa_net_data_start(nc);
>      if (!v->iova_tree) {
>          v->iova_tree = vhost_iova_tree_new(v->iova_range.first,
>                                             v->iova_range.last);
>      }
>
> I'm OK with the change, but it's less clear in my opinion: it's not
> obvious to me that net_data_start is in charge of setting
> v->iova_tree.


Ok.


>
> Another possibility is to abstract something like
> first_nc_iova_tree(), but we need to check more fields of s0 later
> (shadow_data) so I'm not sure about the benefit.
>
> Is that what you have in mind?


Kind of, but I think we can leave the code as is.

In the future, as discussed, we need to introduce something like a
parent or opaque structure for the NetClientState structure. It can
simplify a lot of things: with one common parent shared by all queues,
there is no need for tricks like first_nc_iova_tree() and the like.
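
A rough sketch of what that parent could look like (all names below
are hypothetical; nothing in this series defines them):

/*
 * Hypothetical sketch only: one object allocated per device, with
 * every queue's NetClientState pointing at it.
 */
typedef struct VhostVDPAShared {
    VhostIOVATree *iova_tree;                /* single tree for all vqs */
    struct vhost_vdpa_iova_range iova_range;
    bool shadow_data;
} VhostVDPAShared;

typedef struct VhostVDPAState {
    NetClientState nc;
    struct vhost_vdpa vhost_vdpa;
    VhostVDPAShared *shared;                 /* same pointer in every queue */
    /* ... existing fields ... */
} VhostVDPAState;

/* Any queue could then reach the tree directly: */
static VhostIOVATree *net_iova_tree(VhostVDPAState *s)
{
    return s->shared->iova_tree;
}

With a shared parent like this, iova tree creation and deletion can be
tied to the parent's lifetime instead of being the job of the first or
last queue.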

Thanks

>
> Thanks!
>
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static void vhost_vdpa_net_client_stop(NetClientState *nc)
>>> +{
>>> +    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
>>> +    struct vhost_dev *dev;
>>> +
>>> +    assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
>>> +
>>> +    dev = s->vhost_vdpa.dev;
>>> +    if (dev->vq_index + dev->nvqs == dev->vq_index_end) {
>>> +        g_clear_pointer(&s->vhost_vdpa.iova_tree, vhost_iova_tree_delete);
>>> +    }
>>> +}
>>> +
>>>   static NetClientInfo net_vhost_vdpa_info = {
>>>           .type = NET_CLIENT_DRIVER_VHOST_VDPA,
>>>           .size = sizeof(VhostVDPAState),
>>>           .receive = vhost_vdpa_receive,
>>> +        .start = vhost_vdpa_net_data_start,
>>> +        .stop = vhost_vdpa_net_client_stop,
>>>           .cleanup = vhost_vdpa_cleanup,
>>>           .has_vnet_hdr = vhost_vdpa_has_vnet_hdr,
>>>           .has_ufo = vhost_vdpa_has_ufo,
>>> @@ -351,7 +401,7 @@ dma_map_err:
>>>
>>>   static int vhost_vdpa_net_cvq_start(NetClientState *nc)
>>>   {
>>> -    VhostVDPAState *s;
>>> +    VhostVDPAState *s, *s0;
>>>       struct vhost_vdpa *v;
>>>       uint64_t backend_features;
>>>       int64_t cvq_group;
>>> @@ -415,8 +465,6 @@ static int vhost_vdpa_net_cvq_start(NetClientState *nc)
>>>           return r;
>>>       }
>>>
>>> -    v->iova_tree = vhost_iova_tree_new(v->iova_range.first,
>>> -                                       v->iova_range.last);
>>>       v->shadow_vqs_enabled = true;
>>>       s->vhost_vdpa.address_space_id = VHOST_VDPA_NET_CVQ_ASID;
>>>
>>> @@ -425,6 +473,15 @@ out:
>>>           return 0;
>>>       }
>>>
>>> +    s0 = vhost_vdpa_net_first_nc_vdpa(s);
>>> +    if (s0->vhost_vdpa.iova_tree) {
>>> +        /* SVQ is already configured for all virtqueues */
>>> +        v->iova_tree = s0->vhost_vdpa.iova_tree;
>>> +    } else {
>>> +        v->iova_tree = vhost_iova_tree_new(v->iova_range.first,
>>> +                                           v->iova_range.last);
>>> +    }
>>> +
>>>       r = vhost_vdpa_cvq_map_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer,
>>>                                  vhost_vdpa_net_cvq_cmd_page_len(), false);
>>>       if (unlikely(r < 0)) {
>>> @@ -449,15 +506,9 @@ static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
>>>       if (s->vhost_vdpa.shadow_vqs_enabled) {
>>>           vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->cvq_cmd_out_buffer);
>>>           vhost_vdpa_cvq_unmap_buf(&s->vhost_vdpa, s->status);
>>> -        if (!s->always_svq) {
>>> -            /*
>>> -             * If only the CVQ is shadowed we can delete this safely.
>>> -             * If all the VQs are shadows this will be needed by the time the
>>> -             * device is started again to register SVQ vrings and similar.
>>> -             */
>>> -            g_clear_pointer(&s->vhost_vdpa.iova_tree, vhost_iova_tree_delete);
>>> -        }
>>>       }
>>> +
>>> +    vhost_vdpa_net_client_stop(nc);
>>>   }
>>>
>>>   static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s, size_t out_len,
>>> @@ -667,8 +718,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
>>>                                          int nvqs,
>>>                                          bool is_datapath,
>>>                                          bool svq,
>>> -                                       struct vhost_vdpa_iova_range iova_range,
>>> -                                       VhostIOVATree *iova_tree)
>>> +                                       struct vhost_vdpa_iova_range iova_range)
>>>   {
>>>       NetClientState *nc = NULL;
>>>       VhostVDPAState *s;
>>> @@ -690,7 +740,6 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
>>>       s->vhost_vdpa.shadow_vqs_enabled = svq;
>>>       s->vhost_vdpa.iova_range = iova_range;
>>>       s->vhost_vdpa.shadow_data = svq;
>>> -    s->vhost_vdpa.iova_tree = iova_tree;
>>>       if (!is_datapath) {
>>>           s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
>>>                                               vhost_vdpa_net_cvq_cmd_page_len());
>>> @@ -760,7 +809,6 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>>>       uint64_t features;
>>>       int vdpa_device_fd;
>>>       g_autofree NetClientState **ncs = NULL;
>>> -    g_autoptr(VhostIOVATree) iova_tree = NULL;
>>>       struct vhost_vdpa_iova_range iova_range;
>>>       NetClientState *nc;
>>>       int queue_pairs, r, i = 0, has_cvq = 0;
>>> @@ -812,12 +860,8 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>>>           goto err;
>>>       }
>>>
>>> -    if (opts->x_svq) {
>>> -        if (!vhost_vdpa_net_valid_svq_features(features, errp)) {
>>> -            goto err_svq;
>>> -        }
>>> -
>>> -        iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
>>> +    if (opts->x_svq && !vhost_vdpa_net_valid_svq_features(features, errp)) {
>>> +        goto err;
>>>       }
>>>
>>>       ncs = g_malloc0(sizeof(*ncs) * queue_pairs);
>>> @@ -825,7 +869,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>>>       for (i = 0; i < queue_pairs; i++) {
>>>           ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
>>>                                        vdpa_device_fd, i, 2, true, opts->x_svq,
>>> -                                     iova_range, iova_tree);
>>> +                                     iova_range);
>>>           if (!ncs[i])
>>>               goto err;
>>>       }
>>> @@ -833,13 +877,11 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>>>       if (has_cvq) {
>>>           nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
>>>                                    vdpa_device_fd, i, 1, false,
>>> -                                 opts->x_svq, iova_range, iova_tree);
>>> +                                 opts->x_svq, iova_range);
>>>           if (!nc)
>>>               goto err;
>>>       }
>>>
>>> -    /* iova_tree ownership belongs to last NetClientState */
>>> -    g_steal_pointer(&iova_tree);
>>>       return 0;
>>>
>>>   err:
>>> @@ -849,7 +891,6 @@ err:
>>>           }
>>>       }
>>>
>>> -err_svq:
>>>       qemu_close(vdpa_device_fd);
>>>
>>>       return -1;
>>> --
>>> 2.31.1
>>>



