From: "Michael S. Tsirkin" <mst@redhat.com>
To: Eli Cohen <elic@nvidia.com>
Cc: eperezma@redhat.com, parav@mellanox.com,
	virtualization@lists.linux-foundation.org
Subject: Re: [PATCH v3 2/2] vdpa/mlx5: Make VIRTIO_NET_F_MRG_RXBUF off by default
Date: Mon, 20 Mar 2023 16:02:44 -0400
Message-ID: <20230320155938-mutt-send-email-mst@kernel.org>
In-Reply-To: <20230320114930.8457-3-elic@nvidia.com>

On Mon, Mar 20, 2023 at 01:49:30PM +0200, Eli Cohen wrote:
> One can still enable it when creating the vdpa device using the vdpa tool by
> providing features that include it.
> 
> For example:
> $ vdpa dev add name vdpa0 mgmtdev pci/0000:86:00.2 device_features 0x300cb982b
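
A side note for anyone decoding that mask: VIRTIO_NET_F_MRG_RXBUF is feature
bit 15, and 0x300cb982b does have bit 15 set, along with VIRTIO_F_VERSION_1
(bit 32) and VIRTIO_F_ACCESS_PLATFORM (bit 33). A minimal standalone sketch
to check, using the standard virtio bit numbers:

#include <stdio.h>
#include <stdint.h>

#define VIRTIO_NET_F_MRG_RXBUF   15	/* include/uapi/linux/virtio_net.h */
#define VIRTIO_F_VERSION_1       32	/* include/uapi/linux/virtio_config.h */
#define VIRTIO_F_ACCESS_PLATFORM 33	/* include/uapi/linux/virtio_config.h */

int main(void)
{
	uint64_t device_features = 0x300cb982bULL;	/* mask from the command above */

	printf("MRG_RXBUF:       %s\n",
	       (device_features >> VIRTIO_NET_F_MRG_RXBUF) & 1 ? "set" : "clear");
	printf("VERSION_1:       %s\n",
	       (device_features >> VIRTIO_F_VERSION_1) & 1 ? "set" : "clear");
	printf("ACCESS_PLATFORM: %s\n",
	       (device_features >> VIRTIO_F_ACCESS_PLATFORM) & 1 ? "set" : "clear");
	return 0;
}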
> 
> Please be aware that this feature was not supported before the previous
> patch in this series was introduced, so we don't change the user experience.

So I would say the patches have to be reordered to avoid a regression?

> Current firmware versions show degradation in packet rate when using
> MRG_RXBUF. Users who favor memory saving over packet rate could enable
> this feature, but we want to keep it off by default.
> 
> Signed-off-by: Eli Cohen <elic@nvidia.com>

OK, and when future firmware (maybe) fixes this up, how
will you know it's OK to enable it by default?
Some version check, I guess?
It would be better if the firmware specified which flags to enable
by default ...
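
Just to sketch the idea (purely hypothetical; query_fw_default_off_mask()
below is not an existing firmware or driver interface): if the device could
report a "keep these off by default" mask, the driver would apply it only
when the user did not pass device_features, instead of hard-coding
VIRTIO_NET_F_MRG_RXBUF. A small userspace mock-up of that logic:

#include <stdio.h>
#include <stdint.h>

#define BIT_ULL(n)             (1ULL << (n))
#define VIRTIO_NET_F_MRG_RXBUF 15

/* Stand-in for a hypothetical firmware query; nothing like this exists today. */
static uint64_t query_fw_default_off_mask(void)
{
	return BIT_ULL(VIRTIO_NET_F_MRG_RXBUF);
}

int main(void)
{
	uint64_t device_features = 0x300cb982bULL;	/* what the device offers */
	int user_gave_features = 0;			/* no device_features attribute given */

	/* Only mask out the firmware-suggested defaults when the user did not
	 * explicitly request a feature set. */
	if (!user_gave_features)
		device_features &= ~query_fw_default_off_mask();

	printf("effective device_features: 0x%llx\n",
	       (unsigned long long)device_features);
	return 0;
}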


> ---
>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 5285dd76c793..24397a71d6f3 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -3146,6 +3146,8 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>  			return -EINVAL;
>  		}
>  		device_features &= add_config->device_features;
> +	} else {
> +		device_features &= ~BIT_ULL(VIRTIO_NET_F_MRG_RXBUF);
>  	}
>  	if (!(device_features & BIT_ULL(VIRTIO_F_VERSION_1) &&
>  	      device_features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM))) {
> -- 
> 2.38.1
