virtualization.lists.linux-foundation.org archive mirror
* Re: [PATCH v3 2/2] vdpa/mlx5: Make VIRTIO_NET_F_MRG_RXBUF off by default
       [not found] ` <20230320114930.8457-3-elic@nvidia.com>
@ 2023-03-20 20:02   ` Michael S. Tsirkin
       [not found]     ` <ff359e29-8249-8a6f-7cd3-77c5c43ff96c@nvidia.com>
  0 siblings, 1 reply; 3+ messages in thread
From: Michael S. Tsirkin @ 2023-03-20 20:02 UTC (permalink / raw)
  To: Eli Cohen; +Cc: eperezma, parav, virtualization

On Mon, Mar 20, 2023 at 01:49:30PM +0200, Eli Cohen wrote:
> One can still enable it when creating the vdpa device using the vdpa tool by
> providing a device_features value that includes it.
> 
> For example:
> $ vdpa dev add name vdpa0 mgmtdev pci/0000:86:00.2 device_features 0x300cb982b
> 
> Please be aware that this feature was not supported before the previous
> patch in this series was introduced, so the user experience does not change.
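
For reference, a minimal userspace sketch (not part of the patch) showing how
the device_features mask in the example above encodes VIRTIO_NET_F_MRG_RXBUF,
which is virtio feature bit 15; BIT_ULL is redefined locally so the snippet
compiles standalone:

#include <stdint.h>
#include <stdio.h>

#define VIRTIO_NET_F_MRG_RXBUF	15		/* virtio-net feature bit number */
#define BIT_ULL(nr)		(1ULL << (nr))

int main(void)
{
	/* mask from the vdpa command line example above */
	uint64_t device_features = 0x300cb982bULL;

	printf("MRG_RXBUF %s\n",
	       (device_features & BIT_ULL(VIRTIO_NET_F_MRG_RXBUF)) ?
	       "requested" : "not requested");
	return 0;
}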

so I would say the patches have to be reordered to avoid a regression?

> Current firmware versions show degradation in packet rate when using
> MRG_RXBUF. Users who favor memory saving over packet rate could enable
> this feature but we want to keep it off by default.
> 
> Signed-off-by: Eli Cohen <elic@nvidia.com>

OK, and when future firmware (maybe) fixes this up, how
will you know it's OK to enable it by default?
Some version check, I guess?
It would be better if the firmware specified which flags to enable
by default ...
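
To illustrate that idea (purely hypothetical -- current firmware exposes no
such capability, and fw_recommended_defaults below is an assumed input, not
an existing field or query):

#include <stdint.h>

/*
 * Hypothetical sketch only: if the firmware advertised a "recommended
 * defaults" feature bitmap, the driver could intersect it with what it
 * supports instead of hard-coding per-feature exceptions such as MRG_RXBUF.
 */
uint64_t apply_fw_defaults(uint64_t driver_supported,
			   uint64_t fw_recommended_defaults)
{
	return driver_supported & fw_recommended_defaults;
}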


> ---
>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 5285dd76c793..24397a71d6f3 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -3146,6 +3146,8 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>  			return -EINVAL;
>  		}
>  		device_features &= add_config->device_features;
> +	} else {
> +		device_features &= ~BIT_ULL(VIRTIO_NET_F_MRG_RXBUF);
>  	}
>  	if (!(device_features & BIT_ULL(VIRTIO_F_VERSION_1) &&
>  	      device_features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM))) {
> -- 
> 2.38.1



* Re: [PATCH v3 1/2] vdpa/mlx5: Extend driver support for new features
       [not found] ` <20230320114930.8457-2-elic@nvidia.com>
@ 2023-03-21  4:49   ` Si-Wei Liu
  0 siblings, 0 replies; 3+ messages in thread
From: Si-Wei Liu @ 2023-03-21  4:49 UTC (permalink / raw)
  To: Eli Cohen, mst, jasowang, eperezma, virtualization; +Cc: parav



On 3/20/2023 4:49 AM, Eli Cohen wrote:
> Extend the list of features that can be supported by the firmware.
> Note that different versions of firmware may or may not support these
> features. The driver is made aware of them by querying the firmware.
>
> While doing this, improve the code so we use enum names instead of
> hard-coded numerical values.
>
> The new features supported by the driver are the following:
>
> VIRTIO_NET_F_MRG_RXBUF
> VIRTIO_NET_F_HOST_ECN
> VIRTIO_NET_F_GUEST_ECN
> VIRTIO_NET_F_GUEST_TSO6
> VIRTIO_NET_F_GUEST_TSO4
>
> Signed-off-by: Eli Cohen <elic@nvidia.com>
> ---
>   drivers/vdpa/mlx5/net/mlx5_vnet.c | 56 ++++++++++++++++++++++---------
>   1 file changed, 40 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 520646ae7fa0..5285dd76c793 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -778,12 +778,28 @@ static bool vq_is_tx(u16 idx)
>   	return idx % 2;
>   }
>   
> -static u16 get_features_12_3(u64 features)
> +enum {
> +	MLX5_VIRTIO_NET_F_MRG_RXBUF = 2,
> +	MLX5_VIRTIO_NET_F_HOST_ECN = 4,
> +	MLX5_VIRTIO_NET_F_GUEST_ECN = 6,
> +	MLX5_VIRTIO_NET_F_GUEST_TSO6 = 7,
> +	MLX5_VIRTIO_NET_F_GUEST_TSO4 = 8,
> +	MLX5_VIRTIO_NET_F_GUEST_CSUM = 9,
> +	MLX5_VIRTIO_NET_F_CSUM = 10,
> +	MLX5_VIRTIO_NET_F_HOST_TSO6 = 11,
> +	MLX5_VIRTIO_NET_F_HOST_TSO4 = 12,
> +};
> +
> +static u16 get_features(u64 features)
>   {
> -	return (!!(features & BIT_ULL(VIRTIO_NET_F_HOST_TSO4)) << 9) |
> -	       (!!(features & BIT_ULL(VIRTIO_NET_F_HOST_TSO6)) << 8) |
> -	       (!!(features & BIT_ULL(VIRTIO_NET_F_CSUM)) << 7) |
> -	       (!!(features & BIT_ULL(VIRTIO_NET_F_GUEST_CSUM)) << 6);
> +	return (!!(features & BIT_ULL(VIRTIO_NET_F_MRG_RXBUF)) << MLX5_VIRTIO_NET_F_MRG_RXBUF) |
> +	       (!!(features & BIT_ULL(VIRTIO_NET_F_HOST_ECN)) << MLX5_VIRTIO_NET_F_HOST_ECN) |
> +	       (!!(features & BIT_ULL(VIRTIO_NET_F_GUEST_ECN)) << MLX5_VIRTIO_NET_F_GUEST_ECN) |
> +	       (!!(features & BIT_ULL(VIRTIO_NET_F_GUEST_TSO6)) << MLX5_VIRTIO_NET_F_GUEST_TSO6) |
> +	       (!!(features & BIT_ULL(VIRTIO_NET_F_GUEST_TSO4)) << MLX5_VIRTIO_NET_F_GUEST_TSO4) |
> +	       (!!(features & BIT_ULL(VIRTIO_NET_F_CSUM)) << MLX5_VIRTIO_NET_F_CSUM) |
> +	       (!!(features & BIT_ULL(VIRTIO_NET_F_HOST_TSO6)) << MLX5_VIRTIO_NET_F_HOST_TSO6) |
> +	       (!!(features & BIT_ULL(VIRTIO_NET_F_HOST_TSO4)) << MLX5_VIRTIO_NET_F_HOST_TSO4);
>   }
>   
>   static bool counters_supported(const struct mlx5_vdpa_dev *mvdev)
> @@ -797,6 +813,7 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
>   	int inlen = MLX5_ST_SZ_BYTES(create_virtio_net_q_in);
>   	u32 out[MLX5_ST_SZ_DW(create_virtio_net_q_out)] = {};
>   	void *obj_context;
> +	u16 mlx_features;
>   	void *cmd_hdr;
>   	void *vq_ctx;
>   	void *in;
> @@ -812,6 +829,7 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
>   		goto err_alloc;
>   	}
>   
> +	mlx_features = get_features(ndev->mvdev.actual_features);
>   	cmd_hdr = MLX5_ADDR_OF(create_virtio_net_q_in, in, general_obj_in_cmd_hdr);
>   
>   	MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
> @@ -822,7 +840,9 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
>   	MLX5_SET(virtio_net_q_object, obj_context, hw_available_index, mvq->avail_idx);
>   	MLX5_SET(virtio_net_q_object, obj_context, hw_used_index, mvq->used_idx);
>   	MLX5_SET(virtio_net_q_object, obj_context, queue_feature_bit_mask_12_3,
> -		 get_features_12_3(ndev->mvdev.actual_features));
> +		 mlx_features >> 3);
> +	MLX5_SET(virtio_net_q_object, obj_context, queue_feature_bit_mask_2_0,
> +		 mlx_features & 7);
>   	vq_ctx = MLX5_ADDR_OF(virtio_net_q_object, obj_context, virtio_q_context);
>   	MLX5_SET(virtio_q, vq_ctx, virtio_q_type, get_queue_type(ndev));
>   
> @@ -2171,23 +2191,27 @@ static u32 mlx5_vdpa_get_vq_group(struct vdpa_device *vdev, u16 idx)
>   	return MLX5_VDPA_DATAVQ_GROUP;
>   }
>   
> -enum { MLX5_VIRTIO_NET_F_GUEST_CSUM = 1 << 9,
> -	MLX5_VIRTIO_NET_F_CSUM = 1 << 10,
> -	MLX5_VIRTIO_NET_F_HOST_TSO6 = 1 << 11,
> -	MLX5_VIRTIO_NET_F_HOST_TSO4 = 1 << 12,
> -};
> -
>   static u64 mlx_to_vritio_features(u16 dev_features)
>   {
>   	u64 result = 0;
>   
> -	if (dev_features & MLX5_VIRTIO_NET_F_GUEST_CSUM)
> +	if (dev_features & BIT_ULL(MLX5_VIRTIO_NET_F_MRG_RXBUF))
> +		result += BIT_ULL(VIRTIO_NET_F_MRG_RXBUF);
> +	if (dev_features & BIT_ULL(MLX5_VIRTIO_NET_F_HOST_ECN))
> +		result += BIT_ULL(VIRTIO_NET_F_HOST_ECN);
> +	if (dev_features & BIT_ULL(MLX5_VIRTIO_NET_F_GUEST_ECN))
> +		result += BIT_ULL(VIRTIO_NET_F_GUEST_ECN);
> +	if (dev_features & BIT_ULL(MLX5_VIRTIO_NET_F_GUEST_TSO6))
> +		result += BIT_ULL(VIRTIO_NET_F_GUEST_TSO6);
> +	if (dev_features & BIT_ULL(MLX5_VIRTIO_NET_F_GUEST_TSO4))
> +		result += BIT_ULL(VIRTIO_NET_F_GUEST_TSO4);
It would be more consistent to use OR rather than ADD.

With that,

Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>

> +	if (dev_features & BIT_ULL(MLX5_VIRTIO_NET_F_GUEST_CSUM))
>   		result |= BIT_ULL(VIRTIO_NET_F_GUEST_CSUM);
> -	if (dev_features & MLX5_VIRTIO_NET_F_CSUM)
> +	if (dev_features & BIT_ULL(MLX5_VIRTIO_NET_F_CSUM))
>   		result |= BIT_ULL(VIRTIO_NET_F_CSUM);
> -	if (dev_features & MLX5_VIRTIO_NET_F_HOST_TSO6)
> +	if (dev_features & BIT_ULL(MLX5_VIRTIO_NET_F_HOST_TSO6))
>   		result |= BIT_ULL(VIRTIO_NET_F_HOST_TSO6);
> -	if (dev_features & MLX5_VIRTIO_NET_F_HOST_TSO4)
> +	if (dev_features & BIT_ULL(MLX5_VIRTIO_NET_F_HOST_TSO4))
>   		result |= BIT_ULL(VIRTIO_NET_F_HOST_TSO4);
>   
>   	return result;
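
As a worked example (standalone sketch, not kernel code) of how the new
get_features() above packs virtio feature bits into the firmware bit
positions, and how create_virtqueue() then splits that mask across
queue_feature_bit_mask_12_3 and queue_feature_bit_mask_2_0:

#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(nr)	(1ULL << (nr))

/* virtio feature bit numbers (from include/uapi/linux/virtio_net.h) */
#define VIRTIO_NET_F_CSUM	0
#define VIRTIO_NET_F_MRG_RXBUF	15

/* firmware bit positions, matching the enum added by the patch */
#define MLX5_VIRTIO_NET_F_MRG_RXBUF	2
#define MLX5_VIRTIO_NET_F_CSUM		10

int main(void)
{
	uint64_t features = BIT_ULL(VIRTIO_NET_F_CSUM) |
			    BIT_ULL(VIRTIO_NET_F_MRG_RXBUF);
	uint16_t mlx_features = 0;

	/* same packing get_features() performs for these two bits */
	if (features & BIT_ULL(VIRTIO_NET_F_MRG_RXBUF))
		mlx_features |= 1 << MLX5_VIRTIO_NET_F_MRG_RXBUF;
	if (features & BIT_ULL(VIRTIO_NET_F_CSUM))
		mlx_features |= 1 << MLX5_VIRTIO_NET_F_CSUM;

	/* the firmware object context stores bits 12..3 and 2..0 separately */
	printf("mlx_features  = 0x%x\n", mlx_features);		/* 0x404 */
	printf("bit_mask_12_3 = 0x%x\n", mlx_features >> 3);	/* 0x80  */
	printf("bit_mask_2_0  = 0x%x\n", mlx_features & 7);	/* 0x4   */
	return 0;
}

With only CSUM and MRG_RXBUF negotiated, the 12_3 field carries CSUM
(firmware bit 10 becomes field bit 7) and the 2_0 field carries MRG_RXBUF
(firmware bit 2).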



* Re: [PATCH v3 2/2] vdpa/mlx5: Make VIRTIO_NET_F_MRG_RXBUF off by default
       [not found]     ` <ff359e29-8249-8a6f-7cd3-77c5c43ff96c@nvidia.com>
@ 2023-03-21  9:29       ` Michael S. Tsirkin
  0 siblings, 0 replies; 3+ messages in thread
From: Michael S. Tsirkin @ 2023-03-21  9:29 UTC (permalink / raw)
  To: Eli Cohen; +Cc: eperezma, parav, virtualization

On Tue, Mar 21, 2023 at 11:24:34AM +0200, Eli Cohen wrote:
> 
> On 20/03/2023 22:02, Michael S. Tsirkin wrote:
> > On Mon, Mar 20, 2023 at 01:49:30PM +0200, Eli Cohen wrote:
> > > One can still enable it when creating the vdpa device using the vdpa tool by
> > > providing a device_features value that includes it.
> > > 
> > > For example:
> > > $ vdpa dev add name vdpa0 mgmtdev pci/0000:86:00.2 device_features 0x300cb982b
> > > 
> > > Please be aware that this feature was not supported before the previous
> > > patch in this series was introduced, so the user experience does not change.
> > so I would say the patches have to be reordered to avoid a regression?
> Yes, I can do that.
> > 
> > > Current firmware versions show degradation in packet rate when using
> > > MRG_RXBUF. Users who favor memory saving over packet rate could enable
> > > this feature but we want to keep it off by default.
> > > 
> > > Signed-off-by: Eli Cohen <elic@nvidia.com>
> > OK, and when future firmware (maybe) fixes this up, how
> > will you know it's OK to enable it by default?
> > Some version check, I guess?
> > It would be better if the firmware specified which flags to enable
> > by default ...
> Currently there are no flags for that so let's just keep the default off.

Right, but I mean, why are mrg buffers slow? It's a firmware bug, right?

> > 
> > > ---
> > >   drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 ++
> > >   1 file changed, 2 insertions(+)
> > > 
> > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > index 5285dd76c793..24397a71d6f3 100644
> > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > @@ -3146,6 +3146,8 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
> > >   			return -EINVAL;
> > >   		}
> > >   		device_features &= add_config->device_features;
> > > +	} else {
> > > +		device_features &= ~BIT_ULL(VIRTIO_NET_F_MRG_RXBUF);
> > >   	}
> > >   	if (!(device_features & BIT_ULL(VIRTIO_F_VERSION_1) &&
> > >   	      device_features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM))) {
> > > -- 
> > > 2.38.1



