From: Peter Xu <peterx@redhat.com>
To: Akihiko Odaki <akihiko.odaki@daynix.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>,
	"Thomas Huth" <thuth@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Yuri Benditovich" <yuri.benditovich@daynix.com>,
	eduardo@habkost.net, marcel.apfelbaum@gmail.com,
	philmd@linaro.org, wangyanan55@huawei.com,
	dmitry.fleytman@gmail.com, jasowang@redhat.com,
	sriram.yagnaraman@est.tech, sw@weilnetz.de,
	qemu-devel@nongnu.org, yan@daynix.com,
	"Fabiano Rosas" <farosas@suse.de>,
	devel@lists.libvirt.org
Subject: Re: [PATCH v2 4/4] virtio-net: Add support for USO features
Date: Sun, 4 Aug 2024 09:08:05 -0400
Message-ID: <Zq99NcMkDMkDKBLv@x1n>
In-Reply-To: <8ad96f43-83ae-49ae-abc1-1e17ee15f24d@daynix.com>

On Sun, Aug 04, 2024 at 03:49:45PM +0900, Akihiko Odaki wrote:
> On 2024/08/03 1:26, Peter Xu wrote:
> > On Sat, Aug 03, 2024 at 12:54:51AM +0900, Akihiko Odaki wrote:
> > > > > > I'm not sure if I read it right.  Perhaps you meant something more generic
> > > > > > than -platform but similar?
> > > > > > 
> > > > > > For example, a "-profile [PROFILE]" qemu cmdline, where PROFILE can be
> > > > > > either "perf" or "compat", defaulting to "compat"?
> > > > > 
> > > > > "perf" would cover 4) and "compat" will cover 1). However neither of them
> > > > > will cover 2) because an enum is not enough to know about all hosts. I
> > > > > presented a design that will cover 2) in:
> > > > > https://lore.kernel.org/r/2da4ebcd-2058-49c3-a4ec-8e60536e5cbb@daynix.com
> > > > 
> > > > "-merge-platform" shouldn't be a QEMU parameter, but should be something
> > > > separate.
> > > 
> > > Do you mean merging platform dumps should be done with another command? I
> > > think we will want to know which QOM tree is in use when implementing
> > > -merge-platform. For example, you cannot define a "platform" when e.g., you
> > > don't know what netdev backend (e.g., user, vhost-net, vhost-vdpa) is
> > > connected to virtio-net devices. Of course we can include that information
> > > in the dumps, but we don't do so for VMState.
> > 
> > What I was thinking is that the generated platform dump shouldn't care
> > about what is used as the backend: it should try to probe whatever is
> > specified in the qemu cmdline, and it's the user's job to make sure the
> > exact same qemu cmdline is used on the other hosts to dump this information.
> > 
> > IOW, the dump will only contain information based on the qemu cmdline.
> > E.g., if it doesn't include a virtio device at all, and if we only support
> > such dumps for virtio, it should dump nothing.
> > 
> > Then the -merge-platform will expect all dumps to look the same too,
> > merging them with AND on each field.
> 
> I think we will still need the QOM tree in that case. I think the platform
> information will look somewhat similar to VMState, which requires the QOM
> tree to interpret.

Ah yes, I assume you meant the case where multiple devices can report
different things even with the same frontend / device type.  A QOM path
should work, or anything that can identify a device, e.g. an id /
instance_id attached along with the device class.
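
Just to make that concrete, here is a minimal sketch (in C, with made-up
structure and function names, nothing that exists in QEMU) of what "identify
each device, then AND each field across hosts" could look like:

    /* Hypothetical layout of a per-device platform dump entry; the names
     * and fields are illustrative only. */
    #include <stdbool.h>
    #include <string.h>

    #define MAX_FEATURE_BITS 64

    typedef struct PlatformDevEntry {
        const char *qom_path;              /* identifies the device instance */
        const char *device_class;          /* e.g. "virtio-net-pci" */
        bool feature[MAX_FEATURE_BITS];    /* ON/OFF as probed on one host */
    } PlatformDevEntry;

    /*
     * Merge one host's entry (b) into the accumulated result (a): a feature
     * stays ON only if every host reported it ON.  Both dumps are expected
     * to come from the same qemu cmdline, hence the same set of devices.
     */
    static void merge_platform_entry(PlatformDevEntry *a,
                                     const PlatformDevEntry *b)
    {
        if (strcmp(a->qom_path, b->qom_path) != 0) {
            return;   /* mismatched dumps; same cmdline expected everywhere */
        }
        for (int i = 0; i < MAX_FEATURE_BITS; i++) {
            a->feature[i] = a->feature[i] && b->feature[i];
        }
    }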

One thing I still don't know is how this interacts with new hosts being
added.

This idea is based on the assumption that the cluster is known before
starting any VM.  However, in reality it can happen that VMs are started on
a small cluster which is later extended, after -merge-platform has already
been done on the smaller set.

> 
> > 
> > That said, I actually am still not clear on how / whether it would work in
> > the end.  At least my previous concern (1) doesn't have a good answer yet,
> > on what we do when the profile collides with the qemu cmdline.  So far I
> > still think it more straightforward that in migration we handshake on these
> > capabilities if possible.
> > 
> > And that's why I was wondering (and I totally agree with you on this)
> > whether we should settle on a short-term plan first, to be on the safe
> > side: start with migration always being compatible, then figure out the
> > other approach.  That seems easier to me, and it's also a matter of whether
> > we want to do something for 9.1, or leave that to 9.2 for USO*.
> 
> I suggest disabling all offload features of virtio-net with 9.2.
> 
> I want to keep things consistent, so I want to disable them all at once. This
> change will be very uncomfortable for us, who are implementing offload
> features, but I hope it will motivate us to implement a proper solution.
> 
> That said, it will surely be a breaking change, so we should wait until after
> 9.1 before making it.

Personally I don't worry too much about the other offload bits besides USO*
so far, since we have had them ON for a long time.  My hope is that they are
good old kernel features supported pretty much everywhere QEMU runs, in
which case we're good.

And I definitely worry about future offload features, or any feature that
may probe the host like this and auto-switch to OFF: I hope we can handle
those on the safe side starting from day 1.

So I don't know whether we should do that for USO* only or for all of them.
But I agree with you that doing all of them would definitely be cleaner.

On the details of how to turn them off properly: take the example where we
want to turn off all the offload features by default (or, alternatively,
only USO*).

Upstream machine types need to work with all kinds of kernels, so we may not
want to regress anyone using an existing machine type even on performance,
especially if we turn all of them off.

In that case we may need one more knob (I'm assuming this is a virtio-net
specific issue, but maybe not; using it as an example) to make sure the old
machine types perform just as well:

  - x-virtio-net-offload-enforce

    When set, the offload features with value ON are enforced: if the host
    doesn't support an offload feature, QEMU will fail to start, reporting an
    error that the specific offload feature is not supported by the virtio
    backend.

    When clear, the offload features with value ON are not enforced: such
    features can be automatically turned OFF when the backend is detected not
    to support them.  This gives the best performance but carries the risk of
    breaking migration.
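
To illustrate the two behaviours, here is a rough sketch of what the check
could look like at feature-negotiation time, using the existing
virtio_has_feature()/virtio_clear_feature() helpers and a backend probe
along the lines of virtio-net's peer_has_uso(); the "offload_enforce" field
stands for the proposed property, and none of this is the actual virtio-net
code:

  /* Sketch only: "offload_enforce" represents the proposed
   * x-virtio-net-offload-enforce property, not an existing field. */
  static uint64_t virtio_net_apply_uso_policy(VirtIONet *n, uint64_t features,
                                              Error **errp)
  {
      if (peer_has_uso(n)) {
          return features;        /* backend supports USO, nothing to do */
      }
      if (n->offload_enforce &&
          virtio_has_feature(features, VIRTIO_NET_F_HOST_USO)) {
          /* enforcing: refuse to start instead of silently degrading */
          error_setg(errp, "USO is not supported by the virtio backend");
          return features;
      }
      /* non-enforcing: drop the bits for best perf on this host, at the
       * risk of breaking migration to hosts that kept them ON */
      virtio_clear_feature(&features, VIRTIO_NET_F_HOST_USO);
      virtio_clear_feature(&features, VIRTIO_NET_F_GUEST_USO4);
      virtio_clear_feature(&features, VIRTIO_NET_F_GUEST_USO6);
      return features;
  }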

With that,

  - On old machine types (compat properties):

    - set "x-virtio-net-offload-enforce" OFF
    - set all offload features ON

  - On new machine types (the default values):

    - set "x-virtio-net-offload-enforce" ON
    - set all offload features OFF
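
In compat-property form that would presumably look something like the sketch
below (the array name and exact entries are illustrative; the
x-virtio-net-offload-enforce knob is the proposal above, while
guest_uso4/guest_uso6/host_uso are the existing virtio-net properties):

  /* Illustrative only: entries an old machine type could carry in its
   * hw_compat_* table to keep today's behaviour. */
  static GlobalProperty hw_compat_offload_example[] = {
      { "virtio-net-device", "x-virtio-net-offload-enforce", "off" },
      { "virtio-net-device", "guest_uso4", "on" },
      { "virtio-net-device", "guest_uso6", "on" },
      { "virtio-net-device", "host_uso",   "on" },
      /* ... and similarly for the other offload features ... */
  };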

And yes, we can leave that until 9.2, but with the above even 9.1 should be
safe to do.  9.2 might still be easier, just to think everything through
again; after all, USO was only introduced in 8.2, so it is not a regression
in 9.1.

> 
> By the way, I am wondering whether the "no-cross-migrate" scenario can be
> implemented relatively easily in a way similar to compatibility properties.
> The idea is to add a "no-cross-migrate" property to machines. If the
> property is set to "on", all offload features of virtio-net will be set to
> "auto". virtio-net will then probe the offload features and enable the ones
> that are available.

If it becomes a device property, there's still the tricky concern that
no-cross-migrate could conflict with an offload feature that was selected
explicitly by the user (e.g. no-cross-migrate=ON + uso=OFF).
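
For instance, with a cmdline roughly like the following (hypothetical, since
no-cross-migrate is only a proposal at this point; guest_uso4 is one of the
existing virtio-net offload properties):

  -machine q35,no-cross-migrate=on -device virtio-net-pci,guest_uso4=off

it is not obvious whether the machine-level "auto" or the explicit
per-device OFF should win.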

Thanks,

-- 
Peter Xu


