qemu-devel.nongnu.org archive mirror
* RE: VIRTIO_NET_F_MTU not negotiated
       [not found]               ` <CAJaqyWczrvaaookrQE5=6mTABS-VmJKF6iY+aO3ZD8OB4FumRA@mail.gmail.com>
@ 2022-07-27  6:51                 ` Eli Cohen
  2022-07-27  7:25                   ` Michael S. Tsirkin
  2022-07-28  2:09                   ` Jason Wang
  0 siblings, 2 replies; 14+ messages in thread
From: Eli Cohen @ 2022-07-27  6:51 UTC (permalink / raw)
  To: Eugenio Perez Martin, qemu-devel@nongnu.org
  Cc: Michael S. Tsirkin, Jason Wang,
	virtualization@lists.linux-foundation.org

I found out that the reason I could not enforce the MTU is that I did not configure a max MTU for the net device (e.g. through libvirt's <mtu size="9000"/>).
Libvirt does not allow this configuration for vdpa devices, probably for a reason: the vdpa backend driver is free to set it using its own copy of virtio_net_config.

The code in qemu that controls whether the device's MTU restriction is taken into account is here:

static void virtio_net_device_realize(DeviceState *dev, Error **errp)
{
    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
    VirtIONet *n = VIRTIO_NET(dev);
    NetClientState *nc;
    int i;

    if (n->net_conf.mtu) {
        n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
    }

The above code can be interpreted as follows:
if the qemu command line indicates that the MTU should be limited, then we read the MTU limitation from the device (the actual value given on the command line is ignored).

I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, this only indicates that we should read the actual limitation from the device.
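
For clarity, the workaround I tested locally amounts to this in virtio_net_device_realize() (shown only to illustrate the idea, not as the proposed patch):

    /* workaround: always advertise VIRTIO_NET_F_MTU so that the actual
     * limit is read from the device's config space */
    n->host_features |= (1ULL << VIRTIO_NET_F_MTU);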

If this makes sense I can send a patch to fix this.


* Re: VIRTIO_NET_F_MTU not negotiated
  2022-07-27  6:51                 ` VIRTIO_NET_F_MTU not negotiated Eli Cohen
@ 2022-07-27  7:25                   ` Michael S. Tsirkin
  2022-07-27  9:04                     ` Eli Cohen
  2022-07-28  2:09                   ` Jason Wang
  1 sibling, 1 reply; 14+ messages in thread
From: Michael S. Tsirkin @ 2022-07-27  7:25 UTC (permalink / raw)
  To: Eli Cohen
  Cc: Eugenio Perez Martin, qemu-devel@nongnu.org, Jason Wang,
	virtualization@lists.linux-foundation.org

On Wed, Jul 27, 2022 at 06:51:56AM +0000, Eli Cohen wrote:
> I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device (e.g. through libvirt <mtu size="9000"/>).
> Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do it using its copy of virtio_net_config.
> 
> The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> 
> static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> {
>     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
>     VirtIONet *n = VIRTIO_NET(dev);
>     NetClientState *nc;
>     int i;
> 
>     if (n->net_conf.mtu) {
>         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
>     }
> 
> The above code can be interpreted as follows:
> if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the device (that actual value is ignored).
> 
> I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that we should read the actual limitation for the device.
> 
> If this makes sense I can send a patch to fix this.

Well it will then either have to be for vdpa only, or have
compat machinery to avoid breaking migration.

-- 
MST




* RE: VIRTIO_NET_F_MTU not negotiated
  2022-07-27  7:25                   ` Michael S. Tsirkin
@ 2022-07-27  9:04                     ` Eli Cohen
  2022-07-27  9:34                       ` Michael S. Tsirkin
  0 siblings, 1 reply; 14+ messages in thread
From: Eli Cohen @ 2022-07-27  9:04 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Eugenio Perez Martin, qemu-devel@nongnu.org, Jason Wang,
	virtualization@lists.linux-foundation.org

> -----Original Message-----
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: Wednesday, July 27, 2022 10:25 AM
> To: Eli Cohen <elic@nvidia.com>
> Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> virtualization@lists.linux-foundation.org
> Subject: Re: VIRTIO_NET_F_MTU not negotiated
> 
> On Wed, Jul 27, 2022 at 06:51:56AM +0000, Eli Cohen wrote:
> > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device
> (e.g. through libvirt <mtu size="9000"/>).
> > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do
> it using its copy of virtio_net_config.
> >
> > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> >
> > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > {
> >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> >     VirtIONet *n = VIRTIO_NET(dev);
> >     NetClientState *nc;
> >     int i;
> >
> >     if (n->net_conf.mtu) {
> >         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> >     }
> >
> > The above code can be interpreted as follows:
> > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the
> device (that actual value is ignored).
> >
> > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that
> we should read the actual limitation for the device.
> >
> > If this makes sense I can send a patch to fix this.
> 
> Well it will then either have to be for vdpa only, or have
> compat machinery to avoid breaking migration.
> 

How about this one:

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 1067e72b3975..e464e4645c79 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -3188,6 +3188,7 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
 static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
 {
     virtio_add_feature(&host_features, VIRTIO_NET_F_MAC);
+    virtio_add_feature(&host_features, VIRTIO_NET_F_MTU);

     n->config_size = virtio_feature_get_config_size(feature_sizes,
                                                     host_features);
@@ -3512,6 +3513,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)

    if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
         struct virtio_net_config netcfg = {};
+        n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
         memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
         vhost_net_set_config(get_vhost_net(nc->peer),
             (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);




* Re: VIRTIO_NET_F_MTU not negotiated
  2022-07-27  9:04                     ` Eli Cohen
@ 2022-07-27  9:34                       ` Michael S. Tsirkin
  2022-07-27 10:16                         ` Eli Cohen
  0 siblings, 1 reply; 14+ messages in thread
From: Michael S. Tsirkin @ 2022-07-27  9:34 UTC (permalink / raw)
  To: Eli Cohen
  Cc: Eugenio Perez Martin, qemu-devel@nongnu.org, Jason Wang,
	virtualization@lists.linux-foundation.org

On Wed, Jul 27, 2022 at 09:04:47AM +0000, Eli Cohen wrote:
> > -----Original Message-----
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: Wednesday, July 27, 2022 10:25 AM
> > To: Eli Cohen <elic@nvidia.com>
> > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > virtualization@lists.linux-foundation.org
> > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > 
> > On Wed, Jul 27, 2022 at 06:51:56AM +0000, Eli Cohen wrote:
> > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device
> > (e.g. through libvirt <mtu size="9000"/>).
> > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do
> > it using its copy of virtio_net_config.
> > >
> > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > >
> > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > {
> > >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > >     VirtIONet *n = VIRTIO_NET(dev);
> > >     NetClientState *nc;
> > >     int i;
> > >
> > >     if (n->net_conf.mtu) {
> > >         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > >     }
> > >
> > > The above code can be interpreted as follows:
> > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the
> > device (that actual value is ignored).
> > >
> > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that
> > we should read the actual limitation for the device.
> > >
> > > If this makes sense I can send a patch to fix this.
> > 
> > Well it will then either have to be for vdpa only, or have
> > compat machinery to avoid breaking migration.
> > 
> 
> How about this one:
> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 1067e72b3975..e464e4645c79 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -3188,6 +3188,7 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
>  static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
>  {
>      virtio_add_feature(&host_features, VIRTIO_NET_F_MAC);
> +    virtio_add_feature(&host_features, VIRTIO_NET_F_MTU);
> 
>      n->config_size = virtio_feature_get_config_size(feature_sizes,
>                                                      host_features);

Seems to increase config size unconditionally?

> @@ -3512,6 +3513,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> 
>     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
>          struct virtio_net_config netcfg = {};
> +        n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
>          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
>          vhost_net_set_config(get_vhost_net(nc->peer),
>              (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);

And the point is vdpa does not support migration anyway ATM, right?

-- 
MST




* RE: VIRTIO_NET_F_MTU not negotiated
  2022-07-27  9:34                       ` Michael S. Tsirkin
@ 2022-07-27 10:16                         ` Eli Cohen
  2022-07-27 15:44                           ` Michael S. Tsirkin
  0 siblings, 1 reply; 14+ messages in thread
From: Eli Cohen @ 2022-07-27 10:16 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Eugenio Perez Martin, qemu-devel@nongnu.org, Jason Wang,
	virtualization@lists.linux-foundation.org

> -----Original Message-----
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: Wednesday, July 27, 2022 12:35 PM
> To: Eli Cohen <elic@nvidia.com>
> Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> virtualization@lists.linux-foundation.org
> Subject: Re: VIRTIO_NET_F_MTU not negotiated
> 
> On Wed, Jul 27, 2022 at 09:04:47AM +0000, Eli Cohen wrote:
> > > -----Original Message-----
> > > From: Michael S. Tsirkin <mst@redhat.com>
> > > Sent: Wednesday, July 27, 2022 10:25 AM
> > > To: Eli Cohen <elic@nvidia.com>
> > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > > virtualization@lists.linux-foundation.org
> > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > >
> > > On Wed, Jul 27, 2022 at 06:51:56AM +0000, Eli Cohen wrote:
> > > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net
> device
> > > (e.g. through libvirt <mtu size="9000"/>).
> > > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom
> to do
> > > it using its copy of virtio_net_config.
> > > >
> > > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > > >
> > > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > {
> > > >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > >     VirtIONet *n = VIRTIO_NET(dev);
> > > >     NetClientState *nc;
> > > >     int i;
> > > >
> > > >     if (n->net_conf.mtu) {
> > > >         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > >     }
> > > >
> > > > The above code can be interpreted as follows:
> > > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the
> > > device (that actual value is ignored).
> > > >
> > > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates
> that
> > > we should read the actual limitation for the device.
> > > >
> > > > If this makes sense I can send a patch to fix this.
> > >
> > > Well it will then either have to be for vdpa only, or have
> > > compat machinery to avoid breaking migration.
> > >
> >
> > How about this one:
> >
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index 1067e72b3975..e464e4645c79 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -3188,6 +3188,7 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
> >  static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
> >  {
> >      virtio_add_feature(&host_features, VIRTIO_NET_F_MAC);
> > +    virtio_add_feature(&host_features, VIRTIO_NET_F_MTU);
> >
> >      n->config_size = virtio_feature_get_config_size(feature_sizes,
> >                                                      host_features);
> 
> Seems to increase config size unconditionally?

Right, but all you pay is reading two more bytes. Is that such a high price to pay?

> 
> > @@ -3512,6 +3513,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> >
> >     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> >          struct virtio_net_config netcfg = {};
> > +        n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> >          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
> >          vhost_net_set_config(get_vhost_net(nc->peer),
> >              (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
> 
> And the point is vdpa does not support migration anyway ATM, right?
> 

I don't see how this can affect vdpa live migration. Am I missing something?



* Re: VIRTIO_NET_F_MTU not negotiated
  2022-07-27 10:16                         ` Eli Cohen
@ 2022-07-27 15:44                           ` Michael S. Tsirkin
  2022-07-28  5:51                             ` Eli Cohen
  0 siblings, 1 reply; 14+ messages in thread
From: Michael S. Tsirkin @ 2022-07-27 15:44 UTC (permalink / raw)
  To: Eli Cohen
  Cc: Eugenio Perez Martin, qemu-devel@nongnu.org, Jason Wang,
	virtualization@lists.linux-foundation.org

On Wed, Jul 27, 2022 at 10:16:19AM +0000, Eli Cohen wrote:
> > -----Original Message-----
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: Wednesday, July 27, 2022 12:35 PM
> > To: Eli Cohen <elic@nvidia.com>
> > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > virtualization@lists.linux-foundation.org
> > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > 
> > On Wed, Jul 27, 2022 at 09:04:47AM +0000, Eli Cohen wrote:
> > > > -----Original Message-----
> > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > Sent: Wednesday, July 27, 2022 10:25 AM
> > > > To: Eli Cohen <elic@nvidia.com>
> > > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > > > virtualization@lists.linux-foundation.org
> > > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > > >
> > > > On Wed, Jul 27, 2022 at 06:51:56AM +0000, Eli Cohen wrote:
> > > > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net
> > device
> > > > (e.g. through libvirt <mtu size="9000"/>).
> > > > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom
> > to do
> > > > it using its copy of virtio_net_config.
> > > > >
> > > > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > > > >
> > > > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > > {
> > > > >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > >     VirtIONet *n = VIRTIO_NET(dev);
> > > > >     NetClientState *nc;
> > > > >     int i;
> > > > >
> > > > >     if (n->net_conf.mtu) {
> > > > >         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > > >     }
> > > > >
> > > > > The above code can be interpreted as follows:
> > > > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the
> > > > device (that actual value is ignored).
> > > > >
> > > > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates
> > that
> > > > we should read the actual limitation for the device.
> > > > >
> > > > > If this makes sense I can send a patch to fix this.
> > > >
> > > > Well it will then either have to be for vdpa only, or have
> > > > compat machinery to avoid breaking migration.
> > > >
> > >
> > > How about this one:
> > >
> > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > index 1067e72b3975..e464e4645c79 100644
> > > --- a/hw/net/virtio-net.c
> > > +++ b/hw/net/virtio-net.c
> > > @@ -3188,6 +3188,7 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
> > >  static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
> > >  {
> > >      virtio_add_feature(&host_features, VIRTIO_NET_F_MAC);
> > > +    virtio_add_feature(&host_features, VIRTIO_NET_F_MTU);
> > >
> > >      n->config_size = virtio_feature_get_config_size(feature_sizes,
> > >                                                      host_features);
> > 
> > Seems to increase config size unconditionally?
> 
> Right but you pay for reading two more bytes. Is that such a high price to pay?


That's not a performance question. The issue is compatibility: the config size
should not change for a given machine type.
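
To spell out what I mean by compat machinery: the usual pattern is to gate the new behaviour behind a device property and turn it off for older machine types, so the feature set and config size stay stable for a given machine type. Roughly like this (the property and field names are invented for illustration, this is not existing code):

    /* hw/net/virtio-net.c, in virtio_net_properties[]: new knob,
     * default on for new machine types */
    DEFINE_PROP_BOOL("x-force-mtu-feature", VirtIONet,
                     force_mtu_feature, true),

    /* hw/core/machine.c, in the hw_compat array for the last released
     * machine type: keep the old behaviour there */
    { "virtio-net-device", "x-force-mtu-feature", "off" },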


> > 
> > > @@ -3512,6 +3513,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > >
> > >     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> > >          struct virtio_net_config netcfg = {};
> > > +        n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > >          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
> > >          vhost_net_set_config(get_vhost_net(nc->peer),
> > >              (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
> > 
> > And the point is vdpa does not support migration anyway ATM, right?
> > 
> 
> I don't see how this can affect vdpa live migration. Am I missing something?

Config size affects things like the PCI BAR size. This must not change
during migration.

-- 
MST




* Re: VIRTIO_NET_F_MTU not negotiated
  2022-07-27  6:51                 ` VIRTIO_NET_F_MTU not negotiated Eli Cohen
  2022-07-27  7:25                   ` Michael S. Tsirkin
@ 2022-07-28  2:09                   ` Jason Wang
  2022-07-28  5:39                     ` Eli Cohen
  1 sibling, 1 reply; 14+ messages in thread
From: Jason Wang @ 2022-07-28  2:09 UTC (permalink / raw)
  To: Eli Cohen
  Cc: Eugenio Perez Martin, qemu-devel@nongnu.org, Michael S. Tsirkin,
	virtualization@lists.linux-foundation.org

On Wed, Jul 27, 2022 at 2:52 PM Eli Cohen <elic@nvidia.com> wrote:
>
> I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device (e.g. through libvirt <mtu size="9000"/>).
> Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do it using its copy of virtio_net_config.
>
> The code in qemu that is responsible to allow to consider the device MTU restriction is here:
>
> static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> {
>     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
>     VirtIONet *n = VIRTIO_NET(dev);
>     NetClientState *nc;
>     int i;
>
>     if (n->net_conf.mtu) {
>         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
>     }
>
> The above code can be interpreted as follows:
> if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the device (that actual value is ignored).
>
> I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that we should read the actual limitation for the device.
>
> If this makes sense I can send a patch to fix this.

I wonder whether it's worth bothering:

1) mgmt (above libvirt) should have the knowledge to prepare the correct XML
2) it's not specific to MTU; other features work the same way, for
example multiqueue?

Thanks




* RE: VIRTIO_NET_F_MTU not negotiated
  2022-07-28  2:09                   ` Jason Wang
@ 2022-07-28  5:39                     ` Eli Cohen
  2022-07-28  5:51                       ` Jason Wang
  0 siblings, 1 reply; 14+ messages in thread
From: Eli Cohen @ 2022-07-28  5:39 UTC (permalink / raw)
  To: Jason Wang
  Cc: Eugenio Perez Martin, qemu-devel@nongnu.org, Michael S. Tsirkin,
	virtualization@lists.linux-foundation.org

> From: Jason Wang <jasowang@redhat.com>
> Sent: Thursday, July 28, 2022 5:09 AM
> To: Eli Cohen <elic@nvidia.com>
> Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Michael S. Tsirkin <mst@redhat.com>;
> virtualization@lists.linux-foundation.org
> Subject: Re: VIRTIO_NET_F_MTU not negotiated
> 
> On Wed, Jul 27, 2022 at 2:52 PM Eli Cohen <elic@nvidia.com> wrote:
> >
> > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device
> (e.g. through libvirt <mtu size="9000"/>).
> > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do
> it using its copy of virtio_net_config.
> >
> > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> >
> > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > {
> >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> >     VirtIONet *n = VIRTIO_NET(dev);
> >     NetClientState *nc;
> >     int i;
> >
> >     if (n->net_conf.mtu) {
> >         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> >     }
> >
> > The above code can be interpreted as follows:
> > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the
> device (that actual value is ignored).
> >
> > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that
> we should read the actual limitation for the device.
> >
> > If this makes sense I can send a patch to fix this.
> 
> I wonder whether it's worth to bother:
> 
> 1) mgmt (above libvirt) should have the knowledge to prepare the correct XML
> 2) it's not specific to MTU, we had other features work like, for
> example, the multiqueue?
> 


Currently libvirt does not recognize setting the MTU through XML for vdpa devices. So you mean the fix should go to libvirt?
Furthermore, even if libvirt supported MTU configuration for a vdpa device, the actual value provided would be ignored and the limitation would be taken from what the vdpa device publishes in its virtio_net_config structure. That makes the XML configuration effectively a binary on/off switch.

> Thanks



* RE: VIRTIO_NET_F_MTU not negotiated
  2022-07-27 15:44                           ` Michael S. Tsirkin
@ 2022-07-28  5:51                             ` Eli Cohen
  2022-07-28  6:46                               ` Michael S. Tsirkin
  0 siblings, 1 reply; 14+ messages in thread
From: Eli Cohen @ 2022-07-28  5:51 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Eugenio Perez Martin, qemu-devel@nongnu.org, Jason Wang,
	virtualization@lists.linux-foundation.org

> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: Wednesday, July 27, 2022 6:44 PM
> To: Eli Cohen <elic@nvidia.com>
> Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> virtualization@lists.linux-foundation.org
> Subject: Re: VIRTIO_NET_F_MTU not negotiated
> 
> On Wed, Jul 27, 2022 at 10:16:19AM +0000, Eli Cohen wrote:
> > > -----Original Message-----
> > > From: Michael S. Tsirkin <mst@redhat.com>
> > > Sent: Wednesday, July 27, 2022 12:35 PM
> > > To: Eli Cohen <elic@nvidia.com>
> > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > > virtualization@lists.linux-foundation.org
> > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > >
> > > On Wed, Jul 27, 2022 at 09:04:47AM +0000, Eli Cohen wrote:
> > > > > -----Original Message-----
> > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > Sent: Wednesday, July 27, 2022 10:25 AM
> > > > > To: Eli Cohen <elic@nvidia.com>
> > > > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > > > > virtualization@lists.linux-foundation.org
> > > > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > > > >
> > > > > On Wed, Jul 27, 2022 at 06:51:56AM +0000, Eli Cohen wrote:
> > > > > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net
> > > device
> > > > > (e.g. through libvirt <mtu size="9000"/>).
> > > > > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the
> freedom
> > > to do
> > > > > it using its copy of virtio_net_config.
> > > > > >
> > > > > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > > > > >
> > > > > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > > > {
> > > > > >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > > >     VirtIONet *n = VIRTIO_NET(dev);
> > > > > >     NetClientState *nc;
> > > > > >     int i;
> > > > > >
> > > > > >     if (n->net_conf.mtu) {
> > > > > >         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > > > >     }
> > > > > >
> > > > > > The above code can be interpreted as follows:
> > > > > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from
> the
> > > > > device (that actual value is ignored).
> > > > > >
> > > > > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates
> > > that
> > > > > we should read the actual limitation for the device.
> > > > > >
> > > > > > If this makes sense I can send a patch to fix this.
> > > > >
> > > > > Well it will then either have to be for vdpa only, or have
> > > > > compat machinery to avoid breaking migration.
> > > > >
> > > >
> > > > How about this one:
> > > >
> > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > index 1067e72b3975..e464e4645c79 100644
> > > > --- a/hw/net/virtio-net.c
> > > > +++ b/hw/net/virtio-net.c
> > > > @@ -3188,6 +3188,7 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
> > > >  static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
> > > >  {
> > > >      virtio_add_feature(&host_features, VIRTIO_NET_F_MAC);
> > > > +    virtio_add_feature(&host_features, VIRTIO_NET_F_MTU);
> > > >
> > > >      n->config_size = virtio_feature_get_config_size(feature_sizes,
> > > >                                                      host_features);
> > >
> > > Seems to increase config size unconditionally?
> >
> > Right but you pay for reading two more bytes. Is that such a high price to pay?
> 
> 
> That's not a performance question. The issue compatibility, size
> should not change for a given machine type.
> 

Did you mean it should not change for virtio-net PCI devices?
Can't the management layer controlling the live migration process take care of this?

> 
> > >
> > > > @@ -3512,6 +3513,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > >
> > > >     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> > > >          struct virtio_net_config netcfg = {};
> > > > +        n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > >          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
> > > >          vhost_net_set_config(get_vhost_net(nc->peer),
> > > >              (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
> > >
> > > And the point is vdpa does not support migration anyway ATM, right?
> > >
> >
> > I don't see how this can affect vdpa live migration. Am I missing something?
> 
> config size affects things like pci BAR size. This must not change
> during migration.
> 

Why should this change during live migration?

> --
> MST




* Re: VIRTIO_NET_F_MTU not negotiated
  2022-07-28  5:39                     ` Eli Cohen
@ 2022-07-28  5:51                       ` Jason Wang
  2022-07-28  6:47                         ` Michael S. Tsirkin
  2022-08-01 10:02                         ` Eugenio Perez Martin
  0 siblings, 2 replies; 14+ messages in thread
From: Jason Wang @ 2022-07-28  5:51 UTC (permalink / raw)
  To: Eli Cohen
  Cc: Eugenio Perez Martin, qemu-devel@nongnu.org, Michael S. Tsirkin,
	virtualization@lists.linux-foundation.org

On Thu, Jul 28, 2022 at 1:39 PM Eli Cohen <elic@nvidia.com> wrote:
>
> > From: Jason Wang <jasowang@redhat.com>
> > Sent: Thursday, July 28, 2022 5:09 AM
> > To: Eli Cohen <elic@nvidia.com>
> > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Michael S. Tsirkin <mst@redhat.com>;
> > virtualization@lists.linux-foundation.org
> > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> >
> > On Wed, Jul 27, 2022 at 2:52 PM Eli Cohen <elic@nvidia.com> wrote:
> > >
> > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device
> > (e.g. through libvirt <mtu size="9000"/>).
> > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do
> > it using its copy of virtio_net_config.
> > >
> > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > >
> > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > {
> > >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > >     VirtIONet *n = VIRTIO_NET(dev);
> > >     NetClientState *nc;
> > >     int i;
> > >
> > >     if (n->net_conf.mtu) {
> > >         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > >     }
> > >
> > > The above code can be interpreted as follows:
> > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the
> > device (that actual value is ignored).
> > >
> > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that
> > we should read the actual limitation for the device.
> > >
> > > If this makes sense I can send a patch to fix this.
> >
> > I wonder whether it's worth to bother:
> >
> > 1) mgmt (above libvirt) should have the knowledge to prepare the correct XML
> > 2) it's not specific to MTU, we had other features work like, for
> > example, the multiqueue?
> >
>
>
> Currently libvirt does not recognize setting the mtu through XML for vdpa device. So you mean the fix should go to libvirt?

Probably.

> Furthermore, even if libvirt supports MTU configuration for a vdpa device, the actual value provided will be ignored and the limitation will be taken from what the vdpa device published in its virtio_net_config structure. That makes the XML configuration binary.

Yes, we suffer from a similar issue for "queues=". I think we should
fix qemu by failing the initialization if the value provided on the
command line doesn't match what is read from the config space.

E.g. when mtu=9000 was set on the command line but the device's actual
mtu is 1500.
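
Something along these lines, perhaps (an untested sketch only; the helper
name, where it gets called from and the error handling are assumptions,
and endianness of the config fields is glossed over):

    /* sketch: read the backend's config and reject a command-line MTU
     * that doesn't match what the device reports */
    static void virtio_net_vdpa_check_mtu(VirtIONet *n, NetClientState *nc,
                                          Error **errp)
    {
        struct virtio_net_config netcfg = {};

        vhost_net_get_config(get_vhost_net(nc->peer), (uint8_t *)&netcfg,
                             sizeof(netcfg));
        if (n->net_conf.mtu && n->net_conf.mtu != netcfg.mtu) {
            error_setg(errp, "mtu=%u from the command line does not match "
                       "the device mtu %u", n->net_conf.mtu, netcfg.mtu);
        }
    }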

Thanks

>
> > Thanks
>




* Re: VIRTIO_NET_F_MTU not negotiated
  2022-07-28  5:51                             ` Eli Cohen
@ 2022-07-28  6:46                               ` Michael S. Tsirkin
  0 siblings, 0 replies; 14+ messages in thread
From: Michael S. Tsirkin @ 2022-07-28  6:46 UTC (permalink / raw)
  To: Eli Cohen
  Cc: Eugenio Perez Martin, qemu-devel@nongnu.org, Jason Wang,
	virtualization@lists.linux-foundation.org

On Thu, Jul 28, 2022 at 05:51:32AM +0000, Eli Cohen wrote:
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: Wednesday, July 27, 2022 6:44 PM
> > To: Eli Cohen <elic@nvidia.com>
> > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > virtualization@lists.linux-foundation.org
> > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > 
> > On Wed, Jul 27, 2022 at 10:16:19AM +0000, Eli Cohen wrote:
> > > > -----Original Message-----
> > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > Sent: Wednesday, July 27, 2022 12:35 PM
> > > > To: Eli Cohen <elic@nvidia.com>
> > > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > > > virtualization@lists.linux-foundation.org
> > > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > > >
> > > > On Wed, Jul 27, 2022 at 09:04:47AM +0000, Eli Cohen wrote:
> > > > > > -----Original Message-----
> > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > Sent: Wednesday, July 27, 2022 10:25 AM
> > > > > > To: Eli Cohen <elic@nvidia.com>
> > > > > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > > > > > virtualization@lists.linux-foundation.org
> > > > > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > > > > >
> > > > > > On Wed, Jul 27, 2022 at 06:51:56AM +0000, Eli Cohen wrote:
> > > > > > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net
> > > > device
> > > > > > (e.g. through libvirt <mtu size="9000"/>).
> > > > > > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the
> > freedom
> > > > to do
> > > > > > it using its copy of virtio_net_config.
> > > > > > >
> > > > > > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > > > > > >
> > > > > > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > > > > {
> > > > > > >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > > > >     VirtIONet *n = VIRTIO_NET(dev);
> > > > > > >     NetClientState *nc;
> > > > > > >     int i;
> > > > > > >
> > > > > > >     if (n->net_conf.mtu) {
> > > > > > >         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > > > > >     }
> > > > > > >
> > > > > > > The above code can be interpreted as follows:
> > > > > > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from
> > the
> > > > > > device (that actual value is ignored).
> > > > > > >
> > > > > > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates
> > > > that
> > > > > > we should read the actual limitation for the device.
> > > > > > >
> > > > > > > If this makes sense I can send a patch to fix this.
> > > > > >
> > > > > > Well it will then either have to be for vdpa only, or have
> > > > > > compat machinery to avoid breaking migration.
> > > > > >
> > > > >
> > > > > How about this one:
> > > > >
> > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > index 1067e72b3975..e464e4645c79 100644
> > > > > --- a/hw/net/virtio-net.c
> > > > > +++ b/hw/net/virtio-net.c
> > > > > @@ -3188,6 +3188,7 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
> > > > >  static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
> > > > >  {
> > > > >      virtio_add_feature(&host_features, VIRTIO_NET_F_MAC);
> > > > > +    virtio_add_feature(&host_features, VIRTIO_NET_F_MTU);
> > > > >
> > > > >      n->config_size = virtio_feature_get_config_size(feature_sizes,
> > > > >                                                      host_features);
> > > >
> > > > Seems to increase config size unconditionally?
> > >
> > > Right but you pay for reading two more bytes. Is that such a high price to pay?
> > 
> > 
> > That's not a performance question. The issue compatibility, size
> > should not change for a given machine type.
> > 
> 
> Did you mean it should not change for virtio_net pci devices?

yes

> Can't management controlling the live migration process take care of this?

Management does what it always did, which is to set the flags consistently.
If we tweak them with virtio_add_feature, it can do nothing about it.

> > 
> > > >
> > > > > @@ -3512,6 +3513,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > >
> > > > >     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> > > > >          struct virtio_net_config netcfg = {};
> > > > > +        n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > > >          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
> > > > >          vhost_net_set_config(get_vhost_net(nc->peer),
> > > > >              (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
> > > >
> > > > And the point is vdpa does not support migration anyway ATM, right?
> > > >
> > >
> > > I don't see how this can affect vdpa live migration. Am I missing something?
> > 
> > config size affects things like pci BAR size. This must not change
> > during migration.
> > 
> 
> Why should this change during live migration?

Simply put, features need to match on both ends.

> > --
> > MST




* Re: VIRTIO_NET_F_MTU not negotiated
  2022-07-28  5:51                       ` Jason Wang
@ 2022-07-28  6:47                         ` Michael S. Tsirkin
  2022-07-28  6:57                           ` Jason Wang
  2022-08-01 10:02                         ` Eugenio Perez Martin
  1 sibling, 1 reply; 14+ messages in thread
From: Michael S. Tsirkin @ 2022-07-28  6:47 UTC (permalink / raw)
  To: Jason Wang
  Cc: Eli Cohen, Eugenio Perez Martin, qemu-devel@nongnu.org,
	virtualization@lists.linux-foundation.org

On Thu, Jul 28, 2022 at 01:51:37PM +0800, Jason Wang wrote:
> On Thu, Jul 28, 2022 at 1:39 PM Eli Cohen <elic@nvidia.com> wrote:
> >
> > > From: Jason Wang <jasowang@redhat.com>
> > > Sent: Thursday, July 28, 2022 5:09 AM
> > > To: Eli Cohen <elic@nvidia.com>
> > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Michael S. Tsirkin <mst@redhat.com>;
> > > virtualization@lists.linux-foundation.org
> > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > >
> > > On Wed, Jul 27, 2022 at 2:52 PM Eli Cohen <elic@nvidia.com> wrote:
> > > >
> > > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device
> > > (e.g. through libvirt <mtu size="9000"/>).
> > > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do
> > > it using its copy of virtio_net_config.
> > > >
> > > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > > >
> > > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > {
> > > >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > >     VirtIONet *n = VIRTIO_NET(dev);
> > > >     NetClientState *nc;
> > > >     int i;
> > > >
> > > >     if (n->net_conf.mtu) {
> > > >         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > >     }
> > > >
> > > > The above code can be interpreted as follows:
> > > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the
> > > device (that actual value is ignored).
> > > >
> > > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that
> > > we should read the actual limitation for the device.
> > > >
> > > > If this makes sense I can send a patch to fix this.
> > >
> > > I wonder whether it's worth to bother:
> > >
> > > 1) mgmt (above libvirt) should have the knowledge to prepare the correct XML
> > > 2) it's not specific to MTU, we had other features work like, for
> > > example, the multiqueue?
> > >
> >
> >
> > Currently libvirt does not recognize setting the mtu through XML for vdpa device. So you mean the fix should go to libvirt?
> 
> Probably.
> 
> > Furthermore, even if libvirt supports MTU configuration for a vdpa device, the actual value provided will be ignored and the limitation will be taken from what the vdpa device published in its virtio_net_config structure. That makes the XML configuration binary.
> 
> Yes, we suffer from a similar issue for "queues=". I think we should
> fix qemu by failing the initialization if the value provided by cli
> doesn't match what is read from config space.
> 
> E.g when mtu=9000 was set by cli but the actual mtu is 1500.
> 
> Thanks


Jason, most features are passthrough now, no?
Why do you want to make this one special?



> >
> > > Thanks
> >




* Re: VIRTIO_NET_F_MTU not negotiated
  2022-07-28  6:47                         ` Michael S. Tsirkin
@ 2022-07-28  6:57                           ` Jason Wang
  0 siblings, 0 replies; 14+ messages in thread
From: Jason Wang @ 2022-07-28  6:57 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Eli Cohen, Eugenio Perez Martin, qemu-devel@nongnu.org,
	virtualization@lists.linux-foundation.org

On Thu, Jul 28, 2022 at 2:48 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Thu, Jul 28, 2022 at 01:51:37PM +0800, Jason Wang wrote:
> > On Thu, Jul 28, 2022 at 1:39 PM Eli Cohen <elic@nvidia.com> wrote:
> > >
> > > > From: Jason Wang <jasowang@redhat.com>
> > > > Sent: Thursday, July 28, 2022 5:09 AM
> > > > To: Eli Cohen <elic@nvidia.com>
> > > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Michael S. Tsirkin <mst@redhat.com>;
> > > > virtualization@lists.linux-foundation.org
> > > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > > >
> > > > On Wed, Jul 27, 2022 at 2:52 PM Eli Cohen <elic@nvidia.com> wrote:
> > > > >
> > > > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device
> > > > (e.g. through libvirt <mtu size="9000"/>).
> > > > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do
> > > > it using its copy of virtio_net_config.
> > > > >
> > > > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > > > >
> > > > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > > {
> > > > >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > >     VirtIONet *n = VIRTIO_NET(dev);
> > > > >     NetClientState *nc;
> > > > >     int i;
> > > > >
> > > > >     if (n->net_conf.mtu) {
> > > > >         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > > >     }
> > > > >
> > > > > The above code can be interpreted as follows:
> > > > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the
> > > > device (that actual value is ignored).
> > > > >
> > > > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that
> > > > we should read the actual limitation for the device.
> > > > >
> > > > > If this makes sense I can send a patch to fix this.
> > > >
> > > > I wonder whether it's worth to bother:
> > > >
> > > > 1) mgmt (above libvirt) should have the knowledge to prepare the correct XML
> > > > 2) it's not specific to MTU, we had other features work like, for
> > > > example, the multiqueue?
> > > >
> > >
> > >
> > > Currently libvirt does not recognize setting the mtu through XML for vdpa device. So you mean the fix should go to libvirt?
> >
> > Probably.
> >
> > > Furthermore, even if libvirt supports MTU configuration for a vdpa device, the actual value provided will be ignored and the limitation will be taken from what the vdpa device published in its virtio_net_config structure. That makes the XML configuration binary.
> >
> > Yes, we suffer from a similar issue for "queues=". I think we should
> > fix qemu by failing the initialization if the value provided by cli
> > doesn't match what is read from config space.
> >
> > E.g when mtu=9000 was set by cli but the actual mtu is 1500.
> >
> > Thanks
>
>
> Jason most features are passthrough now, no?
> Why do you want to make this one special?

I don't want to make anything special, but I couldn't find a better approach.

MTU is not the only thing. It applies to all the other features whose
default value is false (MQ, RSS, etc).

Thanks

>
>
>
> > >
> > > > Thanks
> > >
>




* Re: VIRTIO_NET_F_MTU not negotiated
  2022-07-28  5:51                       ` Jason Wang
  2022-07-28  6:47                         ` Michael S. Tsirkin
@ 2022-08-01 10:02                         ` Eugenio Perez Martin
  1 sibling, 0 replies; 14+ messages in thread
From: Eugenio Perez Martin @ 2022-08-01 10:02 UTC (permalink / raw)
  To: Jason Wang
  Cc: Eli Cohen, qemu-devel@nongnu.org, Michael S. Tsirkin,
	virtualization@lists.linux-foundation.org

On Thu, Jul 28, 2022 at 7:51 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Thu, Jul 28, 2022 at 1:39 PM Eli Cohen <elic@nvidia.com> wrote:
> >
> > > From: Jason Wang <jasowang@redhat.com>
> > > Sent: Thursday, July 28, 2022 5:09 AM
> > > To: Eli Cohen <elic@nvidia.com>
> > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Michael S. Tsirkin <mst@redhat.com>;
> > > virtualization@lists.linux-foundation.org
> > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > >
> > > On Wed, Jul 27, 2022 at 2:52 PM Eli Cohen <elic@nvidia.com> wrote:
> > > >
> > > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device
> > > (e.g. through libvirt <mtu size="9000"/>).
> > > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do
> > > it using its copy of virtio_net_config.
> > > >
> > > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > > >
> > > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > {
> > > >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > >     VirtIONet *n = VIRTIO_NET(dev);
> > > >     NetClientState *nc;
> > > >     int i;
> > > >
> > > >     if (n->net_conf.mtu) {
> > > >         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > >     }
> > > >
> > > > The above code can be interpreted as follows:
> > > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the
> > > device (that actual value is ignored).
> > > >
> > > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that
> > > we should read the actual limitation for the device.
> > > >
> > > > If this makes sense I can send a patch to fix this.
> > >
> > > I wonder whether it's worth to bother:
> > >
> > > 1) mgmt (above libvirt) should have the knowledge to prepare the correct XML
> > > 2) it's not specific to MTU, we had other features work like, for
> > > example, the multiqueue?
> > >
> >
> >
> > Currently libvirt does not recognize setting the mtu through XML for vdpa device. So you mean the fix should go to libvirt?
>
> Probably.
>
> > Furthermore, even if libvirt supports MTU configuration for a vdpa device, the actual value provided will be ignored and the limitation will be taken from what the vdpa device published in its virtio_net_config structure. That makes the XML configuration binary.
>
> Yes, we suffer from a similar issue for "queues=". I think we should
> fix qemu by failing the initialization if the value provided by cli
> doesn't match what is read from config space.
>
> E.g when mtu=9000 was set by cli but the actual mtu is 1500.
>

Maybe we can be less strict, since the config space mtu is the maximum
value the device accepts?

So setting mtu=9000 on the command line while the device mtu is 1500
should fail, but not the reverse: setting a command-line mtu <= the
device mtu should work, in my opinion.
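
In code, that would only relax the comparison in the kind of check
sketched earlier in the thread (same caveats, same assumed names):

    /* relaxed variant: only reject a command-line MTU larger than the
     * device's maximum */
    if (n->net_conf.mtu && n->net_conf.mtu > netcfg.mtu) {
        error_setg(errp, "mtu=%u exceeds the device maximum of %u",
                   n->net_conf.mtu, netcfg.mtu);
    }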

If libvirt has the knowledge to set the max mtu properly, it can do so.
If not (or if libvirt is not being used), it should be possible to pass
through the device's value.

Other features may need to do the same.

Thanks!



