netdev.vger.kernel.org archive mirror
From: Si-Wei Liu <siwliu.kernel@gmail.com>
To: Eli Cohen <elic@nvidia.com>
Cc: mst@redhat.com, virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	lulu@redhat.com
Subject: Re: [PATCH] vdpa/mlx5: Restore the hardware used index after change map
Date: Tue, 2 Feb 2021 09:14:02 -0800	[thread overview]
Message-ID: <CAPWQSg3Z1aCZc7kX2x_4NLtAzkrZ+eO5ABBF0bAQfaLc=++Y2Q@mail.gmail.com> (raw)
In-Reply-To: <20210202142901.7131-1-elic@nvidia.com>

On Tue, Feb 2, 2021 at 6:34 AM Eli Cohen <elic@nvidia.com> wrote:
>
> When a change of memory map occurs, the hardware resources are destroyed
> and then re-created again with the new memory map. In such case, we need
> to restore the hardware available and used indices. The driver failed to
> restore the used index which is added here.
>
> Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
> This patch is being sent again as a single patch that fixes hot memory
> addition to a qemu process.
>
>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 88dde3455bfd..839f57c64a6f 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -87,6 +87,7 @@ struct mlx5_vq_restore_info {
>         u64 device_addr;
>         u64 driver_addr;
>         u16 avail_index;
> +       u16 used_index;
>         bool ready;
>         struct vdpa_callback cb;
>         bool restore;
> @@ -121,6 +122,7 @@ struct mlx5_vdpa_virtqueue {
>         u32 virtq_id;
>         struct mlx5_vdpa_net *ndev;
>         u16 avail_idx;
> +       u16 used_idx;
>         int fw_state;
>
>         /* keep last in the struct */
> @@ -804,6 +806,7 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
>
>         obj_context = MLX5_ADDR_OF(create_virtio_net_q_in, in, obj_context);
>         MLX5_SET(virtio_net_q_object, obj_context, hw_available_index, mvq->avail_idx);
> +       MLX5_SET(virtio_net_q_object, obj_context, hw_used_index, mvq->used_idx);

The saved indexes will apply to the new virtqueue object whenever it
is created. Per the virtio spec, these indexes must reset back to zero
when the virtio device is reset, but I don't see how that is done
today. IOW, I don't see where avail_idx and used_idx get cleared from
the mvq on a device reset via set_status().

-Siwei


>         MLX5_SET(virtio_net_q_object, obj_context, queue_feature_bit_mask_12_3,
>                  get_features_12_3(ndev->mvdev.actual_features));
>         vq_ctx = MLX5_ADDR_OF(virtio_net_q_object, obj_context, virtio_q_context);
> @@ -1022,6 +1025,7 @@ static int connect_qps(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
>  struct mlx5_virtq_attr {
>         u8 state;
>         u16 available_index;
> +       u16 used_index;
>  };
>
>  static int query_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq,
> @@ -1052,6 +1056,7 @@ static int query_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueu
>         memset(attr, 0, sizeof(*attr));
>         attr->state = MLX5_GET(virtio_net_q_object, obj_context, state);
>         attr->available_index = MLX5_GET(virtio_net_q_object, obj_context, hw_available_index);
> +       attr->used_index = MLX5_GET(virtio_net_q_object, obj_context, hw_used_index);
>         kfree(out);
>         return 0;
>
> @@ -1610,6 +1615,7 @@ static int save_channel_info(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqu
>                 return err;
>
>         ri->avail_index = attr.available_index;
> +       ri->used_index = attr.used_index;
>         ri->ready = mvq->ready;
>         ri->num_ent = mvq->num_ent;
>         ri->desc_addr = mvq->desc_addr;
> @@ -1654,6 +1660,7 @@ static void restore_channels_info(struct mlx5_vdpa_net *ndev)
>                         continue;
>
>                 mvq->avail_idx = ri->avail_index;
> +               mvq->used_idx = ri->used_index;
>                 mvq->ready = ri->ready;
>                 mvq->num_ent = ri->num_ent;
>                 mvq->desc_addr = ri->desc_addr;
> --
> 2.29.2
>


Thread overview: 6+ messages
2021-02-02 14:29 [PATCH] vdpa/mlx5: Restore the hardware used index after change map Eli Cohen
2021-02-02 17:14 ` Si-Wei Liu [this message]
2021-02-03  6:48   ` Eli Cohen
2021-02-03 20:33     ` Si-Wei Liu
2021-02-04  7:06       ` Eli Cohen
2021-02-04  7:19       ` Eli Cohen
