virtualization.lists.linux-foundation.org archive mirror
* Re: [PATCH] mlx5_vdpa: offer VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK
From: Michael S. Tsirkin @ 2023-07-03 15:46 UTC (permalink / raw)
  To: Eugenio Pérez; +Cc: Xuan Zhuo, linux-kernel, virtualization, leiyang

On Mon, Jul 03, 2023 at 04:25:14PM +0200, Eugenio Pérez wrote:
> Offer this backend feature as mlx5 is compatible with it. It allows it
> to do live migration with CVQ, dynamically switching between passthrough
> and shadow virtqueue.
> 
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

Same comment.

> ---
>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 9138ef2fb2c8..5f309a16b9dc 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -7,6 +7,7 @@
>  #include <uapi/linux/virtio_net.h>
>  #include <uapi/linux/virtio_ids.h>
>  #include <uapi/linux/vdpa.h>
> +#include <uapi/linux/vhost_types.h>
>  #include <linux/virtio_config.h>
>  #include <linux/auxiliary_bus.h>
>  #include <linux/mlx5/cq.h>
> @@ -2499,6 +2500,11 @@ static void unregister_link_notifier(struct mlx5_vdpa_net *ndev)
>  		flush_workqueue(ndev->mvdev.wq);
>  }
>  
> +static u64 mlx5_vdpa_get_backend_features(const struct vdpa_device *vdpa)
> +{
> +	return BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK);
> +}
> +
>  static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
>  {
>  	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> @@ -3140,6 +3146,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
>  	.get_vq_align = mlx5_vdpa_get_vq_align,
>  	.get_vq_group = mlx5_vdpa_get_vq_group,
>  	.get_device_features = mlx5_vdpa_get_device_features,
> +	.get_backend_features = mlx5_vdpa_get_backend_features,
>  	.set_driver_features = mlx5_vdpa_set_driver_features,
>  	.get_driver_features = mlx5_vdpa_get_driver_features,
>  	.set_config_cb = mlx5_vdpa_set_config_cb,
> -- 
> 2.39.3
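
For context, the new callback only advertises the capability; acting on it is up to the vhost-vdpa userspace client. A rough sketch of how such a client could negotiate the bit and enable the control virtqueue only after DRIVER_OK (hypothetical code, not part of the patch; error handling and the rest of the device setup are omitted, and vdpa_fd/cvq_index are placeholders):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>
#include <linux/vhost_types.h>
#include <linux/virtio_config.h>

static int enable_cvq_after_driver_ok(int vdpa_fd, unsigned int cvq_index)
{
        uint64_t features;
        uint8_t status = VIRTIO_CONFIG_S_ACKNOWLEDGE | VIRTIO_CONFIG_S_DRIVER |
                         VIRTIO_CONFIG_S_FEATURES_OK | VIRTIO_CONFIG_S_DRIVER_OK;
        struct vhost_vring_state enable = { .index = cvq_index, .num = 1 };

        /* Accept the backend features the device offers, including the new bit. */
        if (ioctl(vdpa_fd, VHOST_GET_BACKEND_FEATURES, &features) ||
            !(features & (1ULL << VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK)) ||
            ioctl(vdpa_fd, VHOST_SET_BACKEND_FEATURES, &features))
                return -1;

        /* ... data virtqueues are configured and enabled here ... */

        /* Start the device first ... */
        if (ioctl(vdpa_fd, VHOST_VDPA_SET_STATUS, &status))
                return -1;

        /* ... and only then enable the CVQ, which the negotiated bit allows. */
        return ioctl(vdpa_fd, VHOST_VDPA_SET_VRING_ENABLE, &enable);
}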


* Re: [PATCH] mlx5_vdpa: offer VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK
From: Si-Wei Liu @ 2023-07-04  0:26 UTC (permalink / raw)
  To: Michael S. Tsirkin, Eugenio Pérez
  Cc: Xuan Zhuo, linux-kernel, virtualization, leiyang



On 7/3/2023 8:46 AM, Michael S. Tsirkin wrote:
> On Mon, Jul 03, 2023 at 04:25:14PM +0200, Eugenio Pérez wrote:
>> Offer this backend feature as mlx5 is compatible with it. It allows it
>> to do live migration with CVQ, dynamically switching between passthrough
>> and shadow virtqueue.
>>
>> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> Same comment.
to which?

-Siwei

>
>> ---
>>   drivers/vdpa/mlx5/net/mlx5_vnet.c | 7 +++++++
>>   1 file changed, 7 insertions(+)
>>
>> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>> index 9138ef2fb2c8..5f309a16b9dc 100644
>> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
>> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>> @@ -7,6 +7,7 @@
>>   #include <uapi/linux/virtio_net.h>
>>   #include <uapi/linux/virtio_ids.h>
>>   #include <uapi/linux/vdpa.h>
>> +#include <uapi/linux/vhost_types.h>
>>   #include <linux/virtio_config.h>
>>   #include <linux/auxiliary_bus.h>
>>   #include <linux/mlx5/cq.h>
>> @@ -2499,6 +2500,11 @@ static void unregister_link_notifier(struct mlx5_vdpa_net *ndev)
>>   		flush_workqueue(ndev->mvdev.wq);
>>   }
>>   
>> +static u64 mlx5_vdpa_get_backend_features(const struct vdpa_device *vdpa)
>> +{
>> +	return BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK);
>> +}
>> +
>>   static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
>>   {
>>   	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
>> @@ -3140,6 +3146,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
>>   	.get_vq_align = mlx5_vdpa_get_vq_align,
>>   	.get_vq_group = mlx5_vdpa_get_vq_group,
>>   	.get_device_features = mlx5_vdpa_get_device_features,
>> +	.get_backend_features = mlx5_vdpa_get_backend_features,
>>   	.set_driver_features = mlx5_vdpa_set_driver_features,
>>   	.get_driver_features = mlx5_vdpa_get_driver_features,
>>   	.set_config_cb = mlx5_vdpa_set_config_cb,
>> -- 
>> 2.39.3


* Re: [PATCH] mlx5_vdpa: offer VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK
From: Michael S. Tsirkin @ 2023-07-04 10:16 UTC (permalink / raw)
  To: Si-Wei Liu
  Cc: Xuan Zhuo, linux-kernel, virtualization, Eugenio Pérez,
	leiyang

On Mon, Jul 03, 2023 at 05:26:02PM -0700, Si-Wei Liu wrote:
> 
> 
> On 7/3/2023 8:46 AM, Michael S. Tsirkin wrote:
> > On Mon, Jul 03, 2023 at 04:25:14PM +0200, Eugenio Pérez wrote:
> > > Offer this backend feature as mlx5 is compatible with it. It allows it
> > > to do live migration with CVQ, dynamically switching between passthrough
> > > and shadow virtqueue.
> > > 
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > Same comment.
> to which?
> 
> -Siwei

VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK is too narrow a use-case to commit to it
as a kernel/userspace ABI: what if one wants to start rings in some
other specific order?
As was discussed on list, a better promise is not to access ring
until the 1st kick. vdpa can then do a kick when it wants
the device to start accessing rings.
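
Under that alternative, userspace would simply generate the first kick itself once it wants the device to start; roughly (a sketch, assuming kick_fd is the eventfd previously registered with VHOST_SET_VRING_KICK):

#include <stdint.h>
#include <unistd.h>

/* The device promises not to touch the ring before this first kick
 * arrives; writing to the kick eventfd is how userspace lifts that. */
static void first_kick(int kick_fd)
{
        uint64_t one = 1;

        (void)write(kick_fd, &one, sizeof(one));
}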

> > 
> > > ---
> > >   drivers/vdpa/mlx5/net/mlx5_vnet.c | 7 +++++++
> > >   1 file changed, 7 insertions(+)
> > > 
> > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > index 9138ef2fb2c8..5f309a16b9dc 100644
> > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > @@ -7,6 +7,7 @@
> > >   #include <uapi/linux/virtio_net.h>
> > >   #include <uapi/linux/virtio_ids.h>
> > >   #include <uapi/linux/vdpa.h>
> > > +#include <uapi/linux/vhost_types.h>
> > >   #include <linux/virtio_config.h>
> > >   #include <linux/auxiliary_bus.h>
> > >   #include <linux/mlx5/cq.h>
> > > @@ -2499,6 +2500,11 @@ static void unregister_link_notifier(struct mlx5_vdpa_net *ndev)
> > >   		flush_workqueue(ndev->mvdev.wq);
> > >   }
> > > +static u64 mlx5_vdpa_get_backend_features(const struct vdpa_device *vdpa)
> > > +{
> > > +	return BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK);
> > > +}
> > > +
> > >   static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
> > >   {
> > >   	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > > @@ -3140,6 +3146,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
> > >   	.get_vq_align = mlx5_vdpa_get_vq_align,
> > >   	.get_vq_group = mlx5_vdpa_get_vq_group,
> > >   	.get_device_features = mlx5_vdpa_get_device_features,
> > > +	.get_backend_features = mlx5_vdpa_get_backend_features,
> > >   	.set_driver_features = mlx5_vdpa_set_driver_features,
> > >   	.get_driver_features = mlx5_vdpa_get_driver_features,
> > >   	.set_config_cb = mlx5_vdpa_set_config_cb,
> > > -- 
> > > 2.39.3


* Re: [PATCH] mlx5_vdpa: offer VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK
From: Jason Wang @ 2023-07-05  5:11 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Xuan Zhuo, linux-kernel, virtualization, Eugenio Pérez,
	leiyang

On Tue, Jul 4, 2023 at 6:16 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Mon, Jul 03, 2023 at 05:26:02PM -0700, Si-Wei Liu wrote:
> >
> >
> > On 7/3/2023 8:46 AM, Michael S. Tsirkin wrote:
> > > On Mon, Jul 03, 2023 at 04:25:14PM +0200, Eugenio Pérez wrote:
> > > > Offer this backend feature as mlx5 is compatible with it. It allows it
> > > > to do live migration with CVQ, dynamically switching between passthrough
> > > > and shadow virtqueue.
> > > >
> > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > Same comment.
> > to which?
> >
> > -Siwei
>
> VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK is too narrow a use-case to commit to it
> as a kernel/userspace ABI: what if one wants to start rings in some
> other specific order?

Just enable a queue by writing e.g 1 to queue_enable in a specific order?
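
In virtio-pci terms that is the queue_enable field of the common configuration structure; schematically, and leaving out the MMIO accessors and endianness conversion that real code would use:

#include <linux/virtio_pci.h>   /* struct virtio_pci_common_cfg */

/* Schematic only: select a virtqueue, then write 1 to its enable
 * register; repeating this per queue gives whatever ordering the
 * driver wants. */
static void enable_vq(struct virtio_pci_common_cfg *cfg, unsigned int index)
{
        cfg->queue_select = index;      /* cpu_to_le16() in real code */
        cfg->queue_enable = 1;
}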

> As was discussed on list, a better promise is not to access ring
> until the 1st kick. vdpa can then do a kick when it wants
> the device to start accessing rings.

Rethinking ACCESS_AFTER_KICK, it sounds functionally equivalent to
allowing queue_enable after DRIVER_OK, but it seems to have
disadvantages:

A busy-polling software device may disable notifications and never
register any kick notifiers at all. ACCESS_AFTER_KICK would
introduce overhead to those implementations.
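
Illustratively, such a backend is just a poll loop with no kick path at all (hypothetical pseudocode; struct sw_vq and the vq_*()/stop_requested() helpers are made-up placeholders):

/* A busy-polling backend masks driver notifications up front and spins
 * over the rings, so it never registers a kick handler that a mandatory
 * "first kick" could be delivered to. */
static void poll_worker(struct sw_vq *vqs, int nvqs)
{
        int i;

        for (i = 0; i < nvqs; i++)
                vq_disable_notification(&vqs[i]);

        while (!stop_requested())
                for (i = 0; i < nvqs; i++)
                        if (vq_has_work(&vqs[i]))
                                vq_process(&vqs[i]);
}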

Thanks

>
> > >
> > > > ---
> > > >   drivers/vdpa/mlx5/net/mlx5_vnet.c | 7 +++++++
> > > >   1 file changed, 7 insertions(+)
> > > >
> > > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > index 9138ef2fb2c8..5f309a16b9dc 100644
> > > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > @@ -7,6 +7,7 @@
> > > >   #include <uapi/linux/virtio_net.h>
> > > >   #include <uapi/linux/virtio_ids.h>
> > > >   #include <uapi/linux/vdpa.h>
> > > > +#include <uapi/linux/vhost_types.h>
> > > >   #include <linux/virtio_config.h>
> > > >   #include <linux/auxiliary_bus.h>
> > > >   #include <linux/mlx5/cq.h>
> > > > @@ -2499,6 +2500,11 @@ static void unregister_link_notifier(struct mlx5_vdpa_net *ndev)
> > > >                   flush_workqueue(ndev->mvdev.wq);
> > > >   }
> > > > +static u64 mlx5_vdpa_get_backend_features(const struct vdpa_device *vdpa)
> > > > +{
> > > > + return BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK);
> > > > +}
> > > > +
> > > >   static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
> > > >   {
> > > >           struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > > > @@ -3140,6 +3146,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
> > > >           .get_vq_align = mlx5_vdpa_get_vq_align,
> > > >           .get_vq_group = mlx5_vdpa_get_vq_group,
> > > >           .get_device_features = mlx5_vdpa_get_device_features,
> > > > + .get_backend_features = mlx5_vdpa_get_backend_features,
> > > >           .set_driver_features = mlx5_vdpa_set_driver_features,
> > > >           .get_driver_features = mlx5_vdpa_get_driver_features,
> > > >           .set_config_cb = mlx5_vdpa_set_config_cb,
> > > > --
> > > > 2.39.3
>


* Re: [PATCH] mlx5_vdpa: offer VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK
From: Michael S. Tsirkin @ 2023-07-05  5:31 UTC (permalink / raw)
  To: Jason Wang
  Cc: Xuan Zhuo, linux-kernel, virtualization, Eugenio Pérez,
	leiyang

On Wed, Jul 05, 2023 at 01:11:37PM +0800, Jason Wang wrote:
> On Tue, Jul 4, 2023 at 6:16 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Mon, Jul 03, 2023 at 05:26:02PM -0700, Si-Wei Liu wrote:
> > >
> > >
> > > On 7/3/2023 8:46 AM, Michael S. Tsirkin wrote:
> > > > On Mon, Jul 03, 2023 at 04:25:14PM +0200, Eugenio Pérez wrote:
> > > > > Offer this backend feature as mlx5 is compatible with it. It allows it
> > > > > to do live migration with CVQ, dynamically switching between passthrough
> > > > > and shadow virtqueue.
> > > > >
> > > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > Same comment.
> > > to which?
> > >
> > > -Siwei
> >
> > VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK is too narrow a use-case to commit to it
> > as a kernel/userspace ABI: what if one wants to start rings in some
> > other specific order?
> 
> Just enable a queue by writing e.g 1 to queue_enable in a specific order?


But then at DRIVER_OK time we don't know how many queues there are.

> > As was discussed on list, a better promise is not to access ring
> > until the 1st kick. vdpa can then do a kick when it wants
> > the device to start accessing rings.
> 
> Rethinking ACCESS_AFTER_KICK, it sounds functionally equivalent to
> allowing queue_enable after DRIVER_OK, but it seems to have
> disadvantages:
>
> A busy-polling software device may disable notifications and never
> register any kick notifiers at all. ACCESS_AFTER_KICK would
> introduce overhead to those implementations.
> 
> Thanks

It's just the 1st kick, then you can disable. No?
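
In other words, after servicing that single kick the device can suppress further notifications and go back to polling; for a split ring without VIRTIO_F_EVENT_IDX that is roughly (kernel-side sketch, vring pointing at the ring in question):

#include <linux/virtio_ring.h>

/* After the first kick: ask the driver to stop notifying, then the
 * backend can unhook its kick handler and keep polling. */
static void stop_further_kicks(struct vring *vring)
{
        vring->used->flags = cpu_to_le16(VRING_USED_F_NO_NOTIFY);
}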

> >
> > > >
> > > > > ---
> > > > >   drivers/vdpa/mlx5/net/mlx5_vnet.c | 7 +++++++
> > > > >   1 file changed, 7 insertions(+)
> > > > >
> > > > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > index 9138ef2fb2c8..5f309a16b9dc 100644
> > > > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > @@ -7,6 +7,7 @@
> > > > >   #include <uapi/linux/virtio_net.h>
> > > > >   #include <uapi/linux/virtio_ids.h>
> > > > >   #include <uapi/linux/vdpa.h>
> > > > > +#include <uapi/linux/vhost_types.h>
> > > > >   #include <linux/virtio_config.h>
> > > > >   #include <linux/auxiliary_bus.h>
> > > > >   #include <linux/mlx5/cq.h>
> > > > > @@ -2499,6 +2500,11 @@ static void unregister_link_notifier(struct mlx5_vdpa_net *ndev)
> > > > >                   flush_workqueue(ndev->mvdev.wq);
> > > > >   }
> > > > > +static u64 mlx5_vdpa_get_backend_features(const struct vdpa_device *vdpa)
> > > > > +{
> > > > > + return BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK);
> > > > > +}
> > > > > +
> > > > >   static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
> > > > >   {
> > > > >           struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > > > > @@ -3140,6 +3146,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
> > > > >           .get_vq_align = mlx5_vdpa_get_vq_align,
> > > > >           .get_vq_group = mlx5_vdpa_get_vq_group,
> > > > >           .get_device_features = mlx5_vdpa_get_device_features,
> > > > > + .get_backend_features = mlx5_vdpa_get_backend_features,
> > > > >           .set_driver_features = mlx5_vdpa_set_driver_features,
> > > > >           .get_driver_features = mlx5_vdpa_get_driver_features,
> > > > >           .set_config_cb = mlx5_vdpa_set_config_cb,
> > > > > --
> > > > > 2.39.3
> >


* Re: [PATCH] mlx5_vdpa: offer VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK
From: Jason Wang @ 2023-07-05  5:47 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Xuan Zhuo, linux-kernel, virtualization, Eugenio Pérez,
	leiyang

On Wed, Jul 5, 2023 at 1:31 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Wed, Jul 05, 2023 at 01:11:37PM +0800, Jason Wang wrote:
> > On Tue, Jul 4, 2023 at 6:16 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > >
> > > On Mon, Jul 03, 2023 at 05:26:02PM -0700, Si-Wei Liu wrote:
> > > >
> > > >
> > > > On 7/3/2023 8:46 AM, Michael S. Tsirkin wrote:
> > > > > On Mon, Jul 03, 2023 at 04:25:14PM +0200, Eugenio Pérez wrote:
> > > > > > Offer this backend feature as mlx5 is compatible with it. It allows it
> > > > > > to do live migration with CVQ, dynamically switching between passthrough
> > > > > > and shadow virtqueue.
> > > > > >
> > > > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > > Same comment.
> > > > to which?
> > > >
> > > > -Siwei
> > >
> > > VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK is too narrow a use-case to commit to it
> > > as a kernel/userspace ABI: what if one wants to start rings in some
> > > other specific order?
> >
> > Just enable a queue by writing e.g 1 to queue_enable in a specific order?
>
>
> But then at DRIVER_OK time we don't know how many queues there are.

There should be a device-specific interface for this, for example,
num_queue_pairs, so the device knows at most how many queues there
are. Or is there anything I'm missing?
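
For virtio-net that device-specific interface is the max_virtqueue_pairs config field; a sketch of reading it through vhost-vdpa (assumes VIRTIO_NET_F_MQ was offered; error handling and the little-endian conversion of the result are omitted):

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>
#include <linux/virtio_net.h>   /* struct virtio_net_config */

static int read_max_vq_pairs(int vdpa_fd, uint16_t *max_pairs)
{
        struct vhost_vdpa_config *cfg;
        int ret;

        /* Variable-length ioctl argument: header plus the bytes to read. */
        cfg = calloc(1, sizeof(*cfg) + sizeof(*max_pairs));
        if (!cfg)
                return -1;
        cfg->off = offsetof(struct virtio_net_config, max_virtqueue_pairs);
        cfg->len = sizeof(*max_pairs);
        ret = ioctl(vdpa_fd, VHOST_VDPA_GET_CONFIG, cfg);
        if (!ret)
                memcpy(max_pairs, cfg->buf, sizeof(*max_pairs));
        free(cfg);
        return ret;
}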

>
> > > As was discussed on list, a better promise is not to access ring
> > > until the 1st kick. vdpa can then do a kick when it wants
> > > the device to start accessing rings.
> >
> > Rethinking ACCESS_AFTER_KICK, it sounds functionally equivalent to
> > allowing queue_enable after DRIVER_OK, but it seems to have
> > disadvantages:
> >
> > A busy-polling software device may disable notifications and never
> > register any kick notifiers at all. ACCESS_AFTER_KICK would
> > introduce overhead to those implementations.
> >
> > Thanks
>
> It's just the 1st kick, then you can disable. No?

Yes, but:

1) adding hooks for queue_enable
2) adding new code to register an event notifier and toggle the notifier

1) seems much easier? And for most devices, it already behaves like this.
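
Option 1) maps onto the per-virtqueue enable hook that vDPA parent drivers already implement (paraphrased from struct vdpa_config_ops in include/linux/vdpa.h):

/* queue_enable reaches the parent driver through this callback; option
 * 1) just means allowing it to be called after DRIVER_OK as well,
 * instead of adding a separate kick-notifier path. */
void (*set_vq_ready)(struct vdpa_device *vdev, u16 idx, bool ready);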

Thanks

>
> > >
> > > > >
> > > > > > ---
> > > > > >   drivers/vdpa/mlx5/net/mlx5_vnet.c | 7 +++++++
> > > > > >   1 file changed, 7 insertions(+)
> > > > > >
> > > > > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > index 9138ef2fb2c8..5f309a16b9dc 100644
> > > > > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > @@ -7,6 +7,7 @@
> > > > > >   #include <uapi/linux/virtio_net.h>
> > > > > >   #include <uapi/linux/virtio_ids.h>
> > > > > >   #include <uapi/linux/vdpa.h>
> > > > > > +#include <uapi/linux/vhost_types.h>
> > > > > >   #include <linux/virtio_config.h>
> > > > > >   #include <linux/auxiliary_bus.h>
> > > > > >   #include <linux/mlx5/cq.h>
> > > > > > @@ -2499,6 +2500,11 @@ static void unregister_link_notifier(struct mlx5_vdpa_net *ndev)
> > > > > >                   flush_workqueue(ndev->mvdev.wq);
> > > > > >   }
> > > > > > +static u64 mlx5_vdpa_get_backend_features(const struct vdpa_device *vdpa)
> > > > > > +{
> > > > > > + return BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK);
> > > > > > +}
> > > > > > +
> > > > > >   static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
> > > > > >   {
> > > > > >           struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > > > > > @@ -3140,6 +3146,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
> > > > > >           .get_vq_align = mlx5_vdpa_get_vq_align,
> > > > > >           .get_vq_group = mlx5_vdpa_get_vq_group,
> > > > > >           .get_device_features = mlx5_vdpa_get_device_features,
> > > > > > + .get_backend_features = mlx5_vdpa_get_backend_features,
> > > > > >           .set_driver_features = mlx5_vdpa_set_driver_features,
> > > > > >           .get_driver_features = mlx5_vdpa_get_driver_features,
> > > > > >           .set_config_cb = mlx5_vdpa_set_config_cb,
> > > > > > --
> > > > > > 2.39.3
> > >
>


* Re: [PATCH] mlx5_vdpa: offer VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK
From: Michael S. Tsirkin @ 2023-07-05  6:15 UTC (permalink / raw)
  To: Jason Wang
  Cc: Xuan Zhuo, linux-kernel, virtualization, Eugenio Pérez,
	leiyang

On Wed, Jul 05, 2023 at 01:47:44PM +0800, Jason Wang wrote:
> On Wed, Jul 5, 2023 at 1:31 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Wed, Jul 05, 2023 at 01:11:37PM +0800, Jason Wang wrote:
> > > On Tue, Jul 4, 2023 at 6:16 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > > >
> > > > On Mon, Jul 03, 2023 at 05:26:02PM -0700, Si-Wei Liu wrote:
> > > > >
> > > > >
> > > > > On 7/3/2023 8:46 AM, Michael S. Tsirkin wrote:
> > > > > > On Mon, Jul 03, 2023 at 04:25:14PM +0200, Eugenio Pérez wrote:
> > > > > > > Offer this backend feature as mlx5 is compatible with it. It allows it
> > > > > > > to do live migration with CVQ, dynamically switching between passthrough
> > > > > > > and shadow virtqueue.
> > > > > > >
> > > > > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > > > Same comment.
> > > > > to which?
> > > > >
> > > > > -Siwei
> > > >
> > > > VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK is too narrow a use-case to commit to it
> > > > as a kernel/userspace ABI: what if one wants to start rings in some
> > > > other specific order?
> > >
> > > Just enable a queue by writing e.g 1 to queue_enable in a specific order?
> >
> >
> > But then at DRIVER_OK time we don't know how many queues there are.
> 
> There should be a device-specific interface for this, for example,
> num_queue_pairs, so the device knows at most how many queues there
> are. Or is there anything I'm missing?

That's a device limitation. It does not tell the device how many queues are actually used.

> >
> > > > As was discussed on list, a better promise is not to access ring
> > > > until the 1st kick. vdpa can then do a kick when it wants
> > > > the device to start accessing rings.
> > >
> > > Rethinking ACCESS_AFTER_KICK, it sounds functionally equivalent to
> > > allowing queue_enable after DRIVER_OK, but it seems to have
> > > disadvantages:
> > >
> > > A busy-polling software device may disable notifications and never
> > > register any kick notifiers at all. ACCESS_AFTER_KICK would
> > > introduce overhead to those implementations.
> > >
> > > Thanks
> >
> > It's just the 1st kick, then you can disable. No?
> 
> Yes, but:
> 
> 1) adding hooks for queue_enable
> 2) adding new code to register an event notifier and toggle the notifier
>
> 1) seems much easier? And for most devices, it already behaves like this.
> 
> Thanks

Well, libvhost-user checks enabled queues at DRIVER_OK, does it not?

> >
> > > >
> > > > > >
> > > > > > > ---
> > > > > > >   drivers/vdpa/mlx5/net/mlx5_vnet.c | 7 +++++++
> > > > > > >   1 file changed, 7 insertions(+)
> > > > > > >
> > > > > > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > > index 9138ef2fb2c8..5f309a16b9dc 100644
> > > > > > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > > @@ -7,6 +7,7 @@
> > > > > > >   #include <uapi/linux/virtio_net.h>
> > > > > > >   #include <uapi/linux/virtio_ids.h>
> > > > > > >   #include <uapi/linux/vdpa.h>
> > > > > > > +#include <uapi/linux/vhost_types.h>
> > > > > > >   #include <linux/virtio_config.h>
> > > > > > >   #include <linux/auxiliary_bus.h>
> > > > > > >   #include <linux/mlx5/cq.h>
> > > > > > > @@ -2499,6 +2500,11 @@ static void unregister_link_notifier(struct mlx5_vdpa_net *ndev)
> > > > > > >                   flush_workqueue(ndev->mvdev.wq);
> > > > > > >   }
> > > > > > > +static u64 mlx5_vdpa_get_backend_features(const struct vdpa_device *vdpa)
> > > > > > > +{
> > > > > > > + return BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK);
> > > > > > > +}
> > > > > > > +
> > > > > > >   static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
> > > > > > >   {
> > > > > > >           struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > > > > > > @@ -3140,6 +3146,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
> > > > > > >           .get_vq_align = mlx5_vdpa_get_vq_align,
> > > > > > >           .get_vq_group = mlx5_vdpa_get_vq_group,
> > > > > > >           .get_device_features = mlx5_vdpa_get_device_features,
> > > > > > > + .get_backend_features = mlx5_vdpa_get_backend_features,
> > > > > > >           .set_driver_features = mlx5_vdpa_set_driver_features,
> > > > > > >           .get_driver_features = mlx5_vdpa_get_driver_features,
> > > > > > >           .set_config_cb = mlx5_vdpa_set_config_cb,
> > > > > > > --
> > > > > > > 2.39.3
> > > >
> >


* Re: [PATCH] mlx5_vdpa: offer VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK
From: Jason Wang @ 2023-07-05  7:32 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Xuan Zhuo, linux-kernel, virtualization, Eugenio Pérez,
	leiyang

On Wed, Jul 5, 2023 at 2:16 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Wed, Jul 05, 2023 at 01:47:44PM +0800, Jason Wang wrote:
> > On Wed, Jul 5, 2023 at 1:31 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > >
> > > On Wed, Jul 05, 2023 at 01:11:37PM +0800, Jason Wang wrote:
> > > > On Tue, Jul 4, 2023 at 6:16 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > >
> > > > > On Mon, Jul 03, 2023 at 05:26:02PM -0700, Si-Wei Liu wrote:
> > > > > >
> > > > > >
> > > > > > On 7/3/2023 8:46 AM, Michael S. Tsirkin wrote:
> > > > > > > On Mon, Jul 03, 2023 at 04:25:14PM +0200, Eugenio Pérez wrote:
> > > > > > > > Offer this backend feature as mlx5 is compatible with it. It allows it
> > > > > > > > to do live migration with CVQ, dynamically switching between passthrough
> > > > > > > > and shadow virtqueue.
> > > > > > > >
> > > > > > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > > > > Same comment.
> > > > > > to which?
> > > > > >
> > > > > > -Siwei
> > > > >
> > > > > VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK is too narrow a use-case to commit to it
> > > > > as a kernel/userspace ABI: what if one wants to start rings in some
> > > > > other specific order?
> > > >
> > > > Just enable a queue by writing e.g 1 to queue_enable in a specific order?
> > >
> > >
> > > But then at DRIVER_OK time we don't know how many queues there are.
> >
> > There should be a device-specific interface for this, for example,
> > num_queue_pairs, so the device knows at most how many queues there
> > are. Or is there anything I'm missing?
>
> That's a device limitation. It does not tell the device how many queues are actually used.

I think I'm missing something: how does kick differ from queue_enable in this respect?

>
> > >
> > > > > As was discussed on list, a better promise is not to access ring
> > > > > until the 1st kick. vdpa can then do a kick when it wants
> > > > > the device to start accessing rings.
> > > >
> > > > Rethinking ACCESS_AFTER_KICK, it sounds functionally equivalent to
> > > > allowing queue_enable after DRIVER_OK, but it seems to have
> > > > disadvantages:
> > > >
> > > > A busy-polling software device may disable notifications and never
> > > > register any kick notifiers at all. ACCESS_AFTER_KICK would
> > > > introduce overhead to those implementations.
> > > >
> > > > Thanks
> > >
> > > It's just the 1st kick, then you can disable. No?
> >
> > Yes, but:
> >
> > 1) adding hooks for queue_enable
> > 2) adding new code to register an event notifier and toggle the notifier
> >
> > 1) seems much easier? And for most devices, it already behaves like this.
> >
> > Thanks
>
> Well, libvhost-user checks enabled queues at DRIVER_OK, does it not?

Probably, but I meant:

1) This behaviour has been supported by some devices (e.g. mlx5)
2) This is the current behaviour of QEMU for vhost-net devices:

static void virtio_net_queue_enable(VirtIODevice *vdev, uint32_t queue_index)
{
    VirtIONet *n = VIRTIO_NET(vdev);
    NetClientState *nc;
    int r;

    ....

    if (get_vhost_net(nc->peer) &&
        nc->peer->info->type == NET_CLIENT_DRIVER_TAP) {
        r = vhost_net_virtqueue_restart(vdev, nc, queue_index);
        if (r < 0) {
            error_report("unable to restart vhost net virtqueue: %d, "
                            "when resetting the queue", queue_index);
        }
    }
}

Thanks

>
> > >
> > > > >
> > > > > > >
> > > > > > > > ---
> > > > > > > >   drivers/vdpa/mlx5/net/mlx5_vnet.c | 7 +++++++
> > > > > > > >   1 file changed, 7 insertions(+)
> > > > > > > >
> > > > > > > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > > > index 9138ef2fb2c8..5f309a16b9dc 100644
> > > > > > > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > > > @@ -7,6 +7,7 @@
> > > > > > > >   #include <uapi/linux/virtio_net.h>
> > > > > > > >   #include <uapi/linux/virtio_ids.h>
> > > > > > > >   #include <uapi/linux/vdpa.h>
> > > > > > > > +#include <uapi/linux/vhost_types.h>
> > > > > > > >   #include <linux/virtio_config.h>
> > > > > > > >   #include <linux/auxiliary_bus.h>
> > > > > > > >   #include <linux/mlx5/cq.h>
> > > > > > > > @@ -2499,6 +2500,11 @@ static void unregister_link_notifier(struct mlx5_vdpa_net *ndev)
> > > > > > > >                   flush_workqueue(ndev->mvdev.wq);
> > > > > > > >   }
> > > > > > > > +static u64 mlx5_vdpa_get_backend_features(const struct vdpa_device *vdpa)
> > > > > > > > +{
> > > > > > > > + return BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK);
> > > > > > > > +}
> > > > > > > > +
> > > > > > > >   static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
> > > > > > > >   {
> > > > > > > >           struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > > > > > > > @@ -3140,6 +3146,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
> > > > > > > >           .get_vq_align = mlx5_vdpa_get_vq_align,
> > > > > > > >           .get_vq_group = mlx5_vdpa_get_vq_group,
> > > > > > > >           .get_device_features = mlx5_vdpa_get_device_features,
> > > > > > > > + .get_backend_features = mlx5_vdpa_get_backend_features,
> > > > > > > >           .set_driver_features = mlx5_vdpa_set_driver_features,
> > > > > > > >           .get_driver_features = mlx5_vdpa_get_driver_features,
> > > > > > > >           .set_config_cb = mlx5_vdpa_set_config_cb,
> > > > > > > > --
> > > > > > > > 2.39.3
> > > > >
> > >
>


* Re: [PATCH] mlx5_vdpa: offer VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK
From: Michael S. Tsirkin @ 2023-10-04 16:27 UTC (permalink / raw)
  To: Eugenio Perez Martin; +Cc: Xuan Zhuo, linux-kernel, virtualization, leiyang

On Wed, Oct 04, 2023 at 02:56:53PM +0200, Eugenio Perez Martin wrote:
> On Tue, Jul 4, 2023 at 12:16 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Mon, Jul 03, 2023 at 05:26:02PM -0700, Si-Wei Liu wrote:
> > >
> > >
> > > On 7/3/2023 8:46 AM, Michael S. Tsirkin wrote:
> > > > On Mon, Jul 03, 2023 at 04:25:14PM +0200, Eugenio Pérez wrote:
> > > > > Offer this backend feature as mlx5 is compatible with it. It allows it
> > > > > to do live migration with CVQ, dynamically switching between passthrough
> > > > > and shadow virtqueue.
> > > > >
> > > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > Same comment.
> > > to which?
> > >
> > > -Siwei
> >
> > VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK is too narrow a use-case to commit to it
> > as a kernel/userspace ABI: what if one wants to start rings in some
> > other specific order?
> > As was discussed on list, a better promise is not to access ring
> > until the 1st kick. vdpa can then do a kick when it wants
> > the device to start accessing rings.
> >
> 
> Friendly ping about this series,
> 
> Now that VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK has been merged for
> vdpa_sim, does it make sense for mlx too?
> 
> Thanks!

For sure. I was just busy with a qemu pull, will handle this next.

> > > >
> > > > > ---
> > > > >   drivers/vdpa/mlx5/net/mlx5_vnet.c | 7 +++++++
> > > > >   1 file changed, 7 insertions(+)
> > > > >
> > > > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > index 9138ef2fb2c8..5f309a16b9dc 100644
> > > > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > @@ -7,6 +7,7 @@
> > > > >   #include <uapi/linux/virtio_net.h>
> > > > >   #include <uapi/linux/virtio_ids.h>
> > > > >   #include <uapi/linux/vdpa.h>
> > > > > +#include <uapi/linux/vhost_types.h>
> > > > >   #include <linux/virtio_config.h>
> > > > >   #include <linux/auxiliary_bus.h>
> > > > >   #include <linux/mlx5/cq.h>
> > > > > @@ -2499,6 +2500,11 @@ static void unregister_link_notifier(struct mlx5_vdpa_net *ndev)
> > > > >                   flush_workqueue(ndev->mvdev.wq);
> > > > >   }
> > > > > +static u64 mlx5_vdpa_get_backend_features(const struct vdpa_device *vdpa)
> > > > > +{
> > > > > + return BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK);
> > > > > +}
> > > > > +
> > > > >   static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
> > > > >   {
> > > > >           struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > > > > @@ -3140,6 +3146,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
> > > > >           .get_vq_align = mlx5_vdpa_get_vq_align,
> > > > >           .get_vq_group = mlx5_vdpa_get_vq_group,
> > > > >           .get_device_features = mlx5_vdpa_get_device_features,
> > > > > + .get_backend_features = mlx5_vdpa_get_backend_features,
> > > > >           .set_driver_features = mlx5_vdpa_set_driver_features,
> > > > >           .get_driver_features = mlx5_vdpa_get_driver_features,
> > > > >           .set_config_cb = mlx5_vdpa_set_config_cb,
> > > > > --
> > > > > 2.39.3
> >

