* Re: VIRTIO_NET_F_MTU not negotiated
[not found] <DM8PR12MB5400869D5921E28CE2DC7263AB8F9@DM8PR12MB5400.namprd12.prod.outlook.com>
@ 2022-07-19 13:40 ` Michael S. Tsirkin
[not found] ` <DM8PR12MB5400F967A710B1151AD5132CAB8F9@DM8PR12MB5400.namprd12.prod.outlook.com>
0 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2022-07-19 13:40 UTC (permalink / raw)
To: Eli Cohen; +Cc: Eugenio Perez Martin, virtualization@lists.linux-foundation.org
On Tue, Jul 19, 2022 at 01:25:42PM +0000, Eli Cohen wrote:
> Hi,
>
>
>
> mlx5_vdpa is offering VIRTIO_NET_F_MTU. However, the driver (is it qemu's
> responsibility?) does not accept it, and the feature ends up not negotiated.
qemu is responsible for passing features to the driver.
>
>
> I don't see how the driver can refuse to negotiate this. What if the hardware
> has a limitation with respect to mtu?
Then it can fail FEATURES_OK
>
>
> I noticed this when I created the device with an mtu of 1000. I expected the
> netdev in the VM to have an mtu of 1000 and any attempt to go beyond it to fail,
> but that's not the case.
>
>
>
> Comments?
Any chance mtu is too small?
	if (virtio_has_feature(vdev, VIRTIO_NET_F_MTU)) {
		int mtu = virtio_cread16(vdev,
					 offsetof(struct virtio_net_config,
						  mtu));
		if (mtu < MIN_MTU)
			__virtio_clear_bit(vdev, VIRTIO_NET_F_MTU);
	}
any chance it's on power or another BE system?
--
MST
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
* Re: VIRTIO_NET_F_MTU not negotiated
[not found] ` <DM8PR12MB54005AB1DE4617493645D2CBAB8E9@DM8PR12MB5400.namprd12.prod.outlook.com>
@ 2022-07-20 10:14 ` Michael S. Tsirkin
[not found] ` <CAJaqyWczrvaaookrQE5=6mTABS-VmJKF6iY+aO3ZD8OB4FumRA@mail.gmail.com>
1 sibling, 0 replies; 10+ messages in thread
From: Michael S. Tsirkin @ 2022-07-20 10:14 UTC (permalink / raw)
To: Eli Cohen; +Cc: Eugenio Perez Martin, virtualization@lists.linux-foundation.org
On Wed, Jul 20, 2022 at 08:17:29AM +0000, Eli Cohen wrote:
> > From: Eugenio Perez Martin <eperezma@redhat.com>
> > Sent: Wednesday, July 20, 2022 10:47 AM
> > To: Eli Cohen <elic@nvidia.com>
> > Cc: Michael S. Tsirkin <mst@redhat.com>; Jason Wang <jasowang@redhat.com>; virtualization@lists.linux-foundation.org
> > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> >
> > On Wed, Jul 20, 2022 at 8:30 AM Eli Cohen <elic@nvidia.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Eugenio Perez Martin <eperezma@redhat.com>
> > > > Sent: Tuesday, July 19, 2022 5:51 PM
> > > > To: Eli Cohen <elic@nvidia.com>
> > > > Cc: Michael S. Tsirkin <mst@redhat.com>; Jason Wang <jasowang@redhat.com>; virtualization@lists.linux-foundation.org
> > > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > > >
> > > > On Tue, Jul 19, 2022 at 4:02 PM Eli Cohen <elic@nvidia.com> wrote:
> > > > >
> > > > > > From: Eli Cohen
> > > > > > Sent: Tuesday, July 19, 2022 4:53 PM
> > > > > > To: Michael S. Tsirkin <mst@redhat.com>
> > > > > > Cc: Jason Wang <jasowang@redhat.com>; Eugenio Perez Martin <eperezma@redhat.com>; virtualization@lists.linux-
> > > > foundation.org
> > > > > > Subject: RE: VIRTIO_NET_F_MTU not negotiated
> > > > > >
> > > > > > >
> > > > > > > On Tue, Jul 19, 2022 at 01:25:42PM +0000, Eli Cohen wrote:
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > mlx5_vdpa is offering VIRTIO_NET_F_MTU. However the driver (is it qemu
> > > > > > > > responsibility?) does not accept and it ends up not negotiated.
> > > > > > >
> > > > > > > qemu is responsible for passing features to driver.
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > I don't see how can the driver refuse to negotiate this. What if the hardware
> > > > > > > > has a limitation with respect to mtu?
> > > > > > >
> > > > > > > Then it can fail FEATURES_OK
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > I noticed this when I created the device with mtu of 1000. I expected the
> > > > > > > > netdev at the vm to have mtu of 1000 and any attempt to go beyond should fail
> > > > > > > > but that's not the case.
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Comments?
> > > > > > >
> > > > > > >
> > > > > > > Any chance mtu is too small?
> > > > > > >
> > > > > >
> > > > > > MIN_MTU is 68 bytes and I was trying 1000. I tried also 1300 but same thing.
> > > > > >
> > > > > > > 	if (virtio_has_feature(vdev, VIRTIO_NET_F_MTU)) {
> > > > > > > 		int mtu = virtio_cread16(vdev,
> > > > > > > 					 offsetof(struct virtio_net_config,
> > > > > > > 						  mtu));
> > > > > > > 		if (mtu < MIN_MTU)
> > > > > > > 			__virtio_clear_bit(vdev, VIRTIO_NET_F_MTU);
> > > > > > > 	}
> > > > > > >
> > > > > > > any chance it's on power or another BE system?
> > > > > > >
> > > > > >
> > > > > > No, it's x86_64.
> > > > > >
> > > > > > I will test also vdpa device running on the host.
> > > > >
> > > > > vdpa running on the host (using virtio_vdpa) behaves as expected.
> > > > > Is there a quick way to check if qemu fails to pass the information to the driver on the guest?
> > > > >
> > > >
> > > > Can you trace with "-trace 'vhost_vdpa_set_features' -trace
> > > > 'vhost_vdpa_set_features'"? They're parameters of qemu.
> > >
> > > I get this:
> > > vhost_vdpa_set_features dev: 0x5595ae9751a0 features: 0x3008b1803
> > >
> > > VIRTIO_NET_F_MTU is bit 3 and it seems to be off.
> > >
> >
> > Sorry, I put trace on vhost_vdpa *_set_* features twice in my message.
> >
> > Could you try tracing vhost_vdpa_get_features too? That way we know if
> > qemu offers it to the guest.
> >
>
> Sure.
> I get these two successive prints right as the VM starts booting:
> vhost_vdpa_get_features dev: 0x55c87e4651e0 features: 0x300cb182b
> vhost_vdpa_get_features dev: 0x55c87e4627a0 features: 0x300cb182b
>
> Later on I get this:
> vhost_vdpa_set_features dev: 0x55c87e4651e0 features: 0x3008b1803
>
> # qemu-system-x86_64 --version
> QEMU emulator version 7.0.0 (v7.0.0)
> Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers
>
>
>
> > Thanks!
>
so it's there, but the driver does not ack it.
--
MST
* Re: VIRTIO_NET_F_MTU not negotiated
[not found] ` <DM8PR12MB54001D7DFB29EF048CCD04CCAB979@DM8PR12MB5400.namprd12.prod.outlook.com>
@ 2022-07-27 7:25 ` Michael S. Tsirkin
[not found] ` <DM8PR12MB5400E307D34E8B0C5D224813AB979@DM8PR12MB5400.namprd12.prod.outlook.com>
2022-07-28 2:09 ` Jason Wang
1 sibling, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2022-07-27 7:25 UTC (permalink / raw)
To: Eli Cohen
Cc: Eugenio Perez Martin, qemu-devel@nongnu.org,
virtualization@lists.linux-foundation.org
On Wed, Jul 27, 2022 at 06:51:56AM +0000, Eli Cohen wrote:
> I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device (e.g. through libvirt <mtu size="9000"/>).
> Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do it using its copy of virtio_net_config.
>
> The code in qemu that is responsible for honoring the device's MTU restriction is here:
>
> static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> {
>     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
>     VirtIONet *n = VIRTIO_NET(dev);
>     NetClientState *nc;
>     int i;
>
>     if (n->net_conf.mtu) {
>         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
>     }
>
> The above code can be interpreted as follows:
> if qemu's command line arguments indicate that the mtu should be limited, then the mtu limitation is read from the device (the actual value given on the command line is ignored).
>
> I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that we should read the actual limitation from the device.
>
> If this makes sense I can send a patch to fix this.
Well it will then either have to be for vdpa only, or have
compat machinery to avoid breaking migration.
--
MST
* Re: VIRTIO_NET_F_MTU not negotiated
[not found] ` <DM8PR12MB5400E307D34E8B0C5D224813AB979@DM8PR12MB5400.namprd12.prod.outlook.com>
@ 2022-07-27 9:34 ` Michael S. Tsirkin
[not found] ` <DM8PR12MB5400AE8A41758321E0348764AB979@DM8PR12MB5400.namprd12.prod.outlook.com>
0 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2022-07-27 9:34 UTC (permalink / raw)
To: Eli Cohen
Cc: Eugenio Perez Martin, qemu-devel@nongnu.org,
virtualization@lists.linux-foundation.org
On Wed, Jul 27, 2022 at 09:04:47AM +0000, Eli Cohen wrote:
> > -----Original Message-----
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: Wednesday, July 27, 2022 10:25 AM
> > To: Eli Cohen <elic@nvidia.com>
> > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > virtualization@lists.linux-foundation.org
> > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> >
> > On Wed, Jul 27, 2022 at 06:51:56AM +0000, Eli Cohen wrote:
> > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device
> > (e.g. through libvirt <mtu size="9000"/>).
> > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do
> > it using its copy of virtio_net_config.
> > >
> > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > >
> > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > {
> > > VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > VirtIONet *n = VIRTIO_NET(dev);
> > > NetClientState *nc;
> > > int i;
> > >
> > > if (n->net_conf.mtu) {
> > > n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > }
> > >
> > > The above code can be interpreted as follows:
> > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the
> > device (that actual value is ignored).
> > >
> > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that
> > we should read the actual limitation for the device.
> > >
> > > If this makes sense I can send a patch to fix this.
> >
> > Well it will then either have to be for vdpa only, or have
> > compat machinery to avoid breaking migration.
> >
>
> How about this one:
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 1067e72b3975..e464e4645c79 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -3188,6 +3188,7 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
>  static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
>  {
>      virtio_add_feature(&host_features, VIRTIO_NET_F_MAC);
> +    virtio_add_feature(&host_features, VIRTIO_NET_F_MTU);
>
>      n->config_size = virtio_feature_get_config_size(feature_sizes,
>                                                      host_features);
Seems to increase config size unconditionally?
> @@ -3512,6 +3513,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>
>      if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
>          struct virtio_net_config netcfg = {};
> +        n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
>          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
>          vhost_net_set_config(get_vhost_net(nc->peer),
>                               (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
And the point is vdpa does not support migration anyway ATM, right?
--
MST
* Re: VIRTIO_NET_F_MTU not negotiated
[not found] ` <DM8PR12MB5400AE8A41758321E0348764AB979@DM8PR12MB5400.namprd12.prod.outlook.com>
@ 2022-07-27 15:44 ` Michael S. Tsirkin
[not found] ` <DM8PR12MB5400E23C3E7DA97E034F02BEAB969@DM8PR12MB5400.namprd12.prod.outlook.com>
0 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2022-07-27 15:44 UTC (permalink / raw)
To: Eli Cohen
Cc: Eugenio Perez Martin, qemu-devel@nongnu.org,
virtualization@lists.linux-foundation.org
On Wed, Jul 27, 2022 at 10:16:19AM +0000, Eli Cohen wrote:
> > -----Original Message-----
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: Wednesday, July 27, 2022 12:35 PM
> > To: Eli Cohen <elic@nvidia.com>
> > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > virtualization@lists.linux-foundation.org
> > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> >
> > On Wed, Jul 27, 2022 at 09:04:47AM +0000, Eli Cohen wrote:
> > > > -----Original Message-----
> > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > Sent: Wednesday, July 27, 2022 10:25 AM
> > > > To: Eli Cohen <elic@nvidia.com>
> > > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > > > virtualization@lists.linux-foundation.org
> > > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > > >
> > > > On Wed, Jul 27, 2022 at 06:51:56AM +0000, Eli Cohen wrote:
> > > > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net
> > device
> > > > (e.g. through libvirt <mtu size="9000"/>).
> > > > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom
> > to do
> > > > it using its copy of virtio_net_config.
> > > > >
> > > > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > > > >
> > > > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > > {
> > > > > VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > > VirtIONet *n = VIRTIO_NET(dev);
> > > > > NetClientState *nc;
> > > > > int i;
> > > > >
> > > > > if (n->net_conf.mtu) {
> > > > > n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > > > }
> > > > >
> > > > > The above code can be interpreted as follows:
> > > > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the
> > > > device (that actual value is ignored).
> > > > >
> > > > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates
> > that
> > > > we should read the actual limitation for the device.
> > > > >
> > > > > If this makes sense I can send a patch to fix this.
> > > >
> > > > Well it will then either have to be for vdpa only, or have
> > > > compat machinery to avoid breaking migration.
> > > >
> > >
> > > How about this one:
> > >
> > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > index 1067e72b3975..e464e4645c79 100644
> > > --- a/hw/net/virtio-net.c
> > > +++ b/hw/net/virtio-net.c
> > > @@ -3188,6 +3188,7 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
> > > static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
> > > {
> > > virtio_add_feature(&host_features, VIRTIO_NET_F_MAC);
> > > + virtio_add_feature(&host_features, VIRTIO_NET_F_MTU);
> > >
> > > n->config_size = virtio_feature_get_config_size(feature_sizes,
> > > host_features);
> >
> > Seems to increase config size unconditionally?
>
> Right but you pay for reading two more bytes. Is that such a high price to pay?
That's not a performance question. The issue is compatibility: the config size
should not change for a given machine type.
> >
> > > @@ -3512,6 +3513,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > >
> > > if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> > > struct virtio_net_config netcfg = {};
> > > + n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
> > > vhost_net_set_config(get_vhost_net(nc->peer),
> > > (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
> >
> > And the point is vdpa does not support migration anyway ATM, right?
> >
>
> I don't see how this can affect vdpa live migration. Am I missing something?
config size affects things like pci BAR size. This must not change
during migration.
--
MST
* Re: VIRTIO_NET_F_MTU not negotiated
[not found] ` <DM8PR12MB54001D7DFB29EF048CCD04CCAB979@DM8PR12MB5400.namprd12.prod.outlook.com>
2022-07-27 7:25 ` Michael S. Tsirkin
@ 2022-07-28 2:09 ` Jason Wang
[not found] ` <DM8PR12MB5400DBF0BFFF104940BB4A45AB969@DM8PR12MB5400.namprd12.prod.outlook.com>
1 sibling, 1 reply; 10+ messages in thread
From: Jason Wang @ 2022-07-28 2:09 UTC (permalink / raw)
To: Eli Cohen
Cc: Eugenio Perez Martin, virtualization@lists.linux-foundation.org,
qemu-devel@nongnu.org, Michael S. Tsirkin
On Wed, Jul 27, 2022 at 2:52 PM Eli Cohen <elic@nvidia.com> wrote:
>
> I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device (e.g. through libvirt <mtu size="9000"/>).
> Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do it using its copy of virtio_net_config.
>
> The code in qemu that is responsible to allow to consider the device MTU restriction is here:
>
> static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> {
> VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> VirtIONet *n = VIRTIO_NET(dev);
> NetClientState *nc;
> int i;
>
> if (n->net_conf.mtu) {
> n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> }
>
> The above code can be interpreted as follows:
> if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the device (that actual value is ignored).
>
> I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that we should read the actual limitation for the device.
>
> If this makes sense I can send a patch to fix this.
I wonder whether it's worth the bother:
1) mgmt (above libvirt) should have the knowledge to prepare the correct XML
2) it's not specific to MTU; we have other features that work like this, for
example multiqueue?
Thanks
* Re: VIRTIO_NET_F_MTU not negotiated
[not found] ` <DM8PR12MB5400DBF0BFFF104940BB4A45AB969@DM8PR12MB5400.namprd12.prod.outlook.com>
@ 2022-07-28 5:51 ` Jason Wang
2022-07-28 6:47 ` Michael S. Tsirkin
0 siblings, 1 reply; 10+ messages in thread
From: Jason Wang @ 2022-07-28 5:51 UTC (permalink / raw)
To: Eli Cohen
Cc: Eugenio Perez Martin, virtualization@lists.linux-foundation.org,
qemu-devel@nongnu.org, Michael S. Tsirkin
On Thu, Jul 28, 2022 at 1:39 PM Eli Cohen <elic@nvidia.com> wrote:
>
> > From: Jason Wang <jasowang@redhat.com>
> > Sent: Thursday, July 28, 2022 5:09 AM
> > To: Eli Cohen <elic@nvidia.com>
> > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Michael S. Tsirkin <mst@redhat.com>;
> > virtualization@lists.linux-foundation.org
> > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> >
> > On Wed, Jul 27, 2022 at 2:52 PM Eli Cohen <elic@nvidia.com> wrote:
> > >
> > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device
> > (e.g. through libvirt <mtu size="9000"/>).
> > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do
> > it using its copy of virtio_net_config.
> > >
> > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > >
> > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > {
> > > VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > VirtIONet *n = VIRTIO_NET(dev);
> > > NetClientState *nc;
> > > int i;
> > >
> > > if (n->net_conf.mtu) {
> > > n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > }
> > >
> > > The above code can be interpreted as follows:
> > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the
> > device (that actual value is ignored).
> > >
> > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that
> > we should read the actual limitation for the device.
> > >
> > > If this makes sense I can send a patch to fix this.
> >
> > I wonder whether it's worth to bother:
> >
> > 1) mgmt (above libvirt) should have the knowledge to prepare the correct XML
> > 2) it's not specific to MTU, we had other features work like, for
> > example, the multiqueue?
> >
>
>
> Currently libvirt does not recognize setting the mtu through XML for a vdpa device. So you mean the fix should go to libvirt?
Probably.
> Furthermore, even if libvirt supports MTU configuration for a vdpa device, the actual value provided will be ignored and the limitation will be taken from what the vdpa device published in its virtio_net_config structure. That effectively makes the XML mtu configuration a binary on/off switch.
Yes, we suffer from a similar issue for "queues=". I think we should
fix qemu by failing the initialization if the value provided by cli
doesn't match what is read from config space.
E.g when mtu=9000 was set by cli but the actual mtu is 1500.
Thanks
>
> > Thanks
>
* Re: VIRTIO_NET_F_MTU not negotiated
[not found] ` <DM8PR12MB5400E23C3E7DA97E034F02BEAB969@DM8PR12MB5400.namprd12.prod.outlook.com>
@ 2022-07-28 6:46 ` Michael S. Tsirkin
0 siblings, 0 replies; 10+ messages in thread
From: Michael S. Tsirkin @ 2022-07-28 6:46 UTC (permalink / raw)
To: Eli Cohen
Cc: Eugenio Perez Martin, qemu-devel@nongnu.org,
virtualization@lists.linux-foundation.org
On Thu, Jul 28, 2022 at 05:51:32AM +0000, Eli Cohen wrote:
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: Wednesday, July 27, 2022 6:44 PM
> > To: Eli Cohen <elic@nvidia.com>
> > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > virtualization@lists.linux-foundation.org
> > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> >
> > On Wed, Jul 27, 2022 at 10:16:19AM +0000, Eli Cohen wrote:
> > > > -----Original Message-----
> > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > Sent: Wednesday, July 27, 2022 12:35 PM
> > > > To: Eli Cohen <elic@nvidia.com>
> > > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > > > virtualization@lists.linux-foundation.org
> > > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > > >
> > > > On Wed, Jul 27, 2022 at 09:04:47AM +0000, Eli Cohen wrote:
> > > > > > -----Original Message-----
> > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > Sent: Wednesday, July 27, 2022 10:25 AM
> > > > > > To: Eli Cohen <elic@nvidia.com>
> > > > > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Jason Wang <jasowang@redhat.com>;
> > > > > > virtualization@lists.linux-foundation.org
> > > > > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > > > > >
> > > > > > On Wed, Jul 27, 2022 at 06:51:56AM +0000, Eli Cohen wrote:
> > > > > > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net
> > > > device
> > > > > > (e.g. through libvirt <mtu size="9000"/>).
> > > > > > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the
> > freedom
> > > > to do
> > > > > > it using its copy of virtio_net_config.
> > > > > > >
> > > > > > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > > > > > >
> > > > > > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > > > > {
> > > > > > > VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > > > > VirtIONet *n = VIRTIO_NET(dev);
> > > > > > > NetClientState *nc;
> > > > > > > int i;
> > > > > > >
> > > > > > > if (n->net_conf.mtu) {
> > > > > > > n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > > > > > }
> > > > > > >
> > > > > > > The above code can be interpreted as follows:
> > > > > > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from
> > the
> > > > > > device (that actual value is ignored).
> > > > > > >
> > > > > > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates
> > > > that
> > > > > > we should read the actual limitation for the device.
> > > > > > >
> > > > > > > If this makes sense I can send a patch to fix this.
> > > > > >
> > > > > > Well it will then either have to be for vdpa only, or have
> > > > > > compat machinery to avoid breaking migration.
> > > > > >
> > > > >
> > > > > How about this one:
> > > > >
> > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > index 1067e72b3975..e464e4645c79 100644
> > > > > --- a/hw/net/virtio-net.c
> > > > > +++ b/hw/net/virtio-net.c
> > > > > @@ -3188,6 +3188,7 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
> > > > > static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
> > > > > {
> > > > > virtio_add_feature(&host_features, VIRTIO_NET_F_MAC);
> > > > > + virtio_add_feature(&host_features, VIRTIO_NET_F_MTU);
> > > > >
> > > > > n->config_size = virtio_feature_get_config_size(feature_sizes,
> > > > > host_features);
> > > >
> > > > Seems to increase config size unconditionally?
> > >
> > > Right but you pay for reading two more bytes. Is that such a high price to pay?
> >
> >
> > That's not a performance question. The issue compatibility, size
> > should not change for a given machine type.
> >
>
> Did you mean it should not change for virtio_net pci devices?
yes
> Can't management controlling the live migration process take care of this?
Management does what it always did, which is to set flags consistently.
If we tweak them with virtio_add_feature, it can do nothing.
> >
> > > >
> > > > > @@ -3512,6 +3513,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > >
> > > > > if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> > > > > struct virtio_net_config netcfg = {};
> > > > > + n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > > > memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
> > > > > vhost_net_set_config(get_vhost_net(nc->peer),
> > > > > (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
> > > >
> > > > And the point is vdpa does not support migration anyway ATM, right?
> > > >
> > >
> > > I don't see how this can affect vdpa live migration. Am I missing something?
> >
> > config size affects things like pci BAR size. This must not change
> > during migration.
> >
>
> Why should this change during live migration?
Simply put features need to match on both ends.
> > --
> > MST
* Re: VIRTIO_NET_F_MTU not negotiated
2022-07-28 5:51 ` Jason Wang
@ 2022-07-28 6:47 ` Michael S. Tsirkin
2022-07-28 6:57 ` Jason Wang
0 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2022-07-28 6:47 UTC (permalink / raw)
To: Jason Wang
Cc: Eugenio Perez Martin, Eli Cohen, qemu-devel@nongnu.org,
virtualization@lists.linux-foundation.org
On Thu, Jul 28, 2022 at 01:51:37PM +0800, Jason Wang wrote:
> On Thu, Jul 28, 2022 at 1:39 PM Eli Cohen <elic@nvidia.com> wrote:
> >
> > > From: Jason Wang <jasowang@redhat.com>
> > > Sent: Thursday, July 28, 2022 5:09 AM
> > > To: Eli Cohen <elic@nvidia.com>
> > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Michael S. Tsirkin <mst@redhat.com>;
> > > virtualization@lists.linux-foundation.org
> > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > >
> > > On Wed, Jul 27, 2022 at 2:52 PM Eli Cohen <elic@nvidia.com> wrote:
> > > >
> > > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device
> > > (e.g. through libvirt <mtu size="9000"/>).
> > > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do
> > > it using its copy of virtio_net_config.
> > > >
> > > > The code in qemu that is responsible to allow to consider the device MTU restriction is here:
> > > >
> > > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > {
> > > > VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > VirtIONet *n = VIRTIO_NET(dev);
> > > > NetClientState *nc;
> > > > int i;
> > > >
> > > > if (n->net_conf.mtu) {
> > > > n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > > }
> > > >
> > > > The above code can be interpreted as follows:
> > > > if the command line arguments of qemu indicates that mtu should be limited, then we would read this mtu limitation from the
> > > device (that actual value is ignored).
> > > >
> > > > I worked around this limitation by unconditionally setting VIRTIO_NET_F_MTU in the host features. As said, it only indicates that
> > > we should read the actual limitation for the device.
> > > >
> > > > If this makes sense I can send a patch to fix this.
> > >
> > > I wonder whether it's worth to bother:
> > >
> > > 1) mgmt (above libvirt) should have the knowledge to prepare the correct XML
> > > 2) it's not specific to MTU, we had other features work like, for
> > > example, the multiqueue?
> > >
> >
> >
> > Currently libvirt does not recognize setting the mtu through XML for vdpa device. So you mean the fix should go to libvirt?
>
> Probably.
>
> > Furthermore, even if libvirt supports MTU configuration for a vdpa device, the actual value provided will be ignored and the limitation will be taken from what the vdpa device published in its virtio_net_config structure. That makes the XML configuration binary.
>
> Yes, we suffer from a similar issue for "queues=". I think we should
> fix qemu by failing the initialization if the value provided by cli
> doesn't match what is read from config space.
>
> E.g when mtu=9000 was set by cli but the actual mtu is 1500.
>
> Thanks
Jason, most features are passthrough now, no?
Why do you want to make this one special?
> >
> > > Thanks
> >
* Re: VIRTIO_NET_F_MTU not negotiated
2022-07-28 6:47 ` Michael S. Tsirkin
@ 2022-07-28 6:57 ` Jason Wang
0 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2022-07-28 6:57 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Eugenio Perez Martin, Eli Cohen, qemu-devel@nongnu.org,
virtualization@lists.linux-foundation.org
On Thu, Jul 28, 2022 at 2:48 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Thu, Jul 28, 2022 at 01:51:37PM +0800, Jason Wang wrote:
> > On Thu, Jul 28, 2022 at 1:39 PM Eli Cohen <elic@nvidia.com> wrote:
> > >
> > > > From: Jason Wang <jasowang@redhat.com>
> > > > Sent: Thursday, July 28, 2022 5:09 AM
> > > > To: Eli Cohen <elic@nvidia.com>
> > > > Cc: Eugenio Perez Martin <eperezma@redhat.com>; qemu-devel@nongnu.org; Michael S. Tsirkin <mst@redhat.com>;
> > > > virtualization@lists.linux-foundation.org
> > > > Subject: Re: VIRTIO_NET_F_MTU not negotiated
> > > >
> > > > On Wed, Jul 27, 2022 at 2:52 PM Eli Cohen <elic@nvidia.com> wrote:
> > > > >
> > > > > I found out that the reason why I could not enforce the mtu stems from the fact that I did not configure max mtu for the net device
> > > > (e.g. through libvirt <mtu size="9000"/>).
> > > > > Libvirt does not allow this configuration for vdpa devices and probably for a reason. The vdpa backend driver has the freedom to do
> > > > it using its copy of virtio_net_config.
> > > > >
> > > > > The code in qemu that is responsible for honoring the device's MTU restriction is here:
> > > > >
> > > > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > > {
> > > > >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > >     VirtIONet *n = VIRTIO_NET(dev);
> > > > >     NetClientState *nc;
> > > > >     int i;
> > > > >
> > > > >     if (n->net_conf.mtu) {
> > > > >         n->host_features |= (1ULL << VIRTIO_NET_F_MTU);
> > > > >     }
> > > > >
> > > > > The above code can be interpreted as follows: if qemu's command line
> > > > > arguments indicate that the mtu should be limited, then we read the
> > > > > mtu limitation from the device (the actual value is ignored).
> > > > >
> > > > > I worked around this limitation by unconditionally setting
> > > > > VIRTIO_NET_F_MTU in the host features. As said, the bit only indicates
> > > > > that we should read the actual limitation from the device.
> > > > >
> > > > > If this makes sense I can send a patch to fix this.
> > > >
> > > > I wonder whether it's worth bothering with:
> > > >
> > > > 1) mgmt (above libvirt) should have the knowledge to prepare the correct XML
> > > > 2) it's not specific to MTU; other features work the same way, for
> > > > example multiqueue
> > > >
> > >
> > >
> > > Currently libvirt does not recognize setting the mtu through XML for vdpa devices. So you mean the fix should go to libvirt?
> >
> > Probably.
> >
> > > Furthermore, even if libvirt supports MTU configuration for a vdpa device, the actual value provided will be ignored and the limitation will be taken from what the vdpa device published in its virtio_net_config structure. That makes the XML configuration effectively a boolean switch.
> >
> > > Yes, we suffer from a similar issue with "queues=". I think we should
> > > fix qemu by failing initialization if the value provided on the cli
> > > doesn't match what is read from config space.
> >
> > > E.g. when mtu=9000 was set on the cli but the actual mtu is 1500.
> >
> > Thanks
>
>
> Jason, most features are passthrough now, no?
> Why do you want to make this one special?
I don't want to make anything special, but I couldn't find a better approach.
MTU is not the only thing; the same applies to all the other features whose
default value is false (MQ, RSS, etc.).
Thanks
>
>
>
> > >
> > > > Thanks
> > >
>
end of thread, other threads:[~2022-07-28 6:57 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <DM8PR12MB5400869D5921E28CE2DC7263AB8F9@DM8PR12MB5400.namprd12.prod.outlook.com>
2022-07-19 13:40 ` VIRTIO_NET_F_MTU not negotiated Michael S. Tsirkin
[not found] ` <DM8PR12MB5400F967A710B1151AD5132CAB8F9@DM8PR12MB5400.namprd12.prod.outlook.com>
[not found] ` <DM8PR12MB5400AB08EE51E6BF05EEBDE2AB8F9@DM8PR12MB5400.namprd12.prod.outlook.com>
[not found] ` <CAJaqyWc0M4O8Rr2jR4L_myPd_VmxkYjHTnwdxQFAf3N_hZw_3g@mail.gmail.com>
[not found] ` <DM8PR12MB540033DA1293BA23E29148EAAB8E9@DM8PR12MB5400.namprd12.prod.outlook.com>
[not found] ` <CAJaqyWfOS9nCBNeborhTdOXAnmZX9XwRF=2E0aphuHbqr352CA@mail.gmail.com>
[not found] ` <DM8PR12MB54005AB1DE4617493645D2CBAB8E9@DM8PR12MB5400.namprd12.prod.outlook.com>
2022-07-20 10:14 ` Michael S. Tsirkin
[not found] ` <CAJaqyWczrvaaookrQE5=6mTABS-VmJKF6iY+aO3ZD8OB4FumRA@mail.gmail.com>
[not found] ` <DM8PR12MB54001D7DFB29EF048CCD04CCAB979@DM8PR12MB5400.namprd12.prod.outlook.com>
2022-07-27 7:25 ` Michael S. Tsirkin
[not found] ` <DM8PR12MB5400E307D34E8B0C5D224813AB979@DM8PR12MB5400.namprd12.prod.outlook.com>
2022-07-27 9:34 ` Michael S. Tsirkin
[not found] ` <DM8PR12MB5400AE8A41758321E0348764AB979@DM8PR12MB5400.namprd12.prod.outlook.com>
2022-07-27 15:44 ` Michael S. Tsirkin
[not found] ` <DM8PR12MB5400E23C3E7DA97E034F02BEAB969@DM8PR12MB5400.namprd12.prod.outlook.com>
2022-07-28 6:46 ` Michael S. Tsirkin
2022-07-28 2:09 ` Jason Wang
[not found] ` <DM8PR12MB5400DBF0BFFF104940BB4A45AB969@DM8PR12MB5400.namprd12.prod.outlook.com>
2022-07-28 5:51 ` Jason Wang
2022-07-28 6:47 ` Michael S. Tsirkin
2022-07-28 6:57 ` Jason Wang