From: "Michael S. Tsirkin" <mst@redhat.com>
To: Si-Wei Liu <si-wei.liu@oracle.com>
Cc: Joao Martins <joao.m.martins@oracle.com>,
qemu-devel@nongnu.org, Jason Wang <jasowang@redhat.com>,
Boris Ostrovsky <boris.ostrovsky@oracle.com>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [PATCH] vhost: Perform memory section dirty scans once per iteration
Date: Wed, 18 Oct 2023 01:55:46 -0400
Message-ID: <20231018015401-mutt-send-email-mst@kernel.org>
In-Reply-To: <454ea1c5-7e77-41b5-b6e6-88efb6c437a3@oracle.com>
On Tue, Oct 17, 2023 at 05:32:34PM -0700, Si-Wei Liu wrote:
>
>
> On 10/6/2023 2:48 AM, Michael S. Tsirkin wrote:
> > On Fri, Oct 06, 2023 at 09:58:30AM +0100, Joao Martins wrote:
> > > On 03/10/2023 15:01, Michael S. Tsirkin wrote:
> > > > On Wed, Sep 27, 2023 at 12:14:28PM +0100, Joao Martins wrote:
> > > > > On setups with one or more virtio-net devices with vhost on,
> > > > > the cost of each dirty tracking iteration increases with the
> > > > > number of queues set up; e.g. on an idle guest migration the
> > > > > following is observed with virtio-net with vhost=on:
> > > > >
> > > > > 48 queues -> 78.11% [.] vhost_dev_sync_region.isra.13
> > > > > 8 queues -> 40.50% [.] vhost_dev_sync_region.isra.13
> > > > > 1 queue -> 6.89% [.] vhost_dev_sync_region.isra.13
> > > > > 2 devices, 1 queue -> 18.60% [.] vhost_dev_sync_region.isra.14
> > > > >
> > > > > With high memory dirty rates the symptom is lack of convergence
> > > > > as soon as there is a vhost device with a sufficiently high number
> > > > > of queues, or a sufficient number of vhost devices.
> > > > >
> > > > > On every migration iteration (every 100 msecs) we redundantly
> > > > > query the *shared log* once per queue configured with vhost in
> > > > > the guest. For the virtqueue data this is necessary, but not for
> > > > > the memory sections, which are the same for every queue. So
> > > > > essentially we end up scanning the dirty log too often.
> > > > >
> > > > > To fix that, select one vhost device to be responsible for scanning
> > > > > the log with regard to memory sections dirty tracking. It is selected
> > > > > when we enable the logger (during migration) and cleared when we
> > > > > disable the logger.
> > > > >
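To sketch what the commit message describes (names here are illustrative
only, not necessarily what the patch uses): the first device to enable
logging becomes the memory-section logger; everyone else skips the memory
sections in log_sync and only scans their own virtqueues.

    /* Illustrative sketch, not the actual patch. */
    static struct vhost_dev *vhost_mem_logger; /* assumed global */

    static void vhost_dev_elect_mem_logger(struct vhost_dev *hdev, bool enable)
    {
        if (enable && !vhost_mem_logger) {
            /* first device to enable logging becomes the logger */
            vhost_mem_logger = hdev;
        } else if (!enable && vhost_mem_logger == hdev) {
            /* cleared when the logger device stops logging */
            vhost_mem_logger = NULL;
        }
    }

    static bool vhost_dev_should_log_sections(struct vhost_dev *hdev)
    {
        return vhost_mem_logger == hdev;
    }
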
> > > > > The real problem, however, is exactly that: a device per vhost worker/qp,
> > > > > when there should be a device representing a netdev (for N vhost workers).
> > > > > Given this problem exists in any QEMU these days, a simpler solution
> > > > > seems better to increase coverage in stable trees; thus this patch does
> > > > > not change the device model of software vhost to fix this "over log
> > > > > scan" issue.
> > > > >
> > > > > Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> > > > > ---
> > > > > I am not fully sure the heuristic captures the myriad of different vhost
> > > > > devices -- I think it does. IIUC, the log is always shared; it's just a
> > > > > question of whether it lives in QEMU heap memory or in /dev/shm when
> > > > > other processes want to access it.
> > > > Thanks for working on this.
> > > >
> > > > I don't think this works as is, because different types of vhost
> > > > devices have different regions - see e.g. vhost_region_add_section.
> > > > I am also not sure all devices are running at the same time - e.g.
> > > > some could be disconnected, and vhost_sync_dirty_bitmap takes this
> > > > into account.
> > > >
> > > Good point. But this all means the logic selecting the 'logger' should
> > > take into consideration whether vhost_dev::log_enabled or
> > > vhost_dev::started is set, right?
> > >
> > > With respect to regions, it seems like this can only change depending on
> > > whether one of the vhost devices' backend_type is VHOST_BACKEND_TYPE_USER
> > > *and* whether the backend sets vhost_backend_can_merge?
> > >
> > > With respect to 'could be disconnected': devices can't be added or
> > > removed during migration, so that might not be something that occurs
> > > here. I placed this in log_sync exactly to cover just migration, unless
> > > there's some other way that disconnects the vhost and changes these
> > > variables during migration.
> > The *frontend* can't be added or removed (ATM - this is just because we lack
> > good ways to describe devices that can be migrated, so all we
> > came up with is passing same command line on both sides,
> > and this breaks if you add/remove things in the process).
> > We really shouldn't bake this assumption into code if we can
> > help it though.
> >
> > But I digress.
> >
> > The *backend* can disconnect at any time as this is not guest visible.
> >
> > > > But the idea is I think a good one - I just feel more refactoring is
> > > > needed.
> > > Can you expand on what refactoring you were thinking for this fix?
> > Better to separate the idea of logging from the device. Then we can
> > have a single logger that collects data from devices to decide
> > what needs to be logged.
> Discussion. I think the troublemaker here is the vhost-user clients, which
> attempt to round down & up to (huge) page boundaries and then have to merge
> adjacent sections, leading to differing views between vhost devices. While I
> agree it is a great idea to separate logging from the device, it isn't clear
> to me how that can help the case where there is a mix of both vhost-user
> and vhost-kernel clients in the same QEMU process, in which case it would
> need at least 2 separate vhost loggers, one per vhost type? Or do you
> think there's value in unifying the two distinct subsystems with one
> single vhost logger facility?
Yes - I think we need a logger per backend type. Reference-count them, too.
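
Roughly something like this - a sketch only, the names and layout are mine,
not an existing QEMU API (VHOST_BACKEND_TYPE_MAX and g_new0/g_free do exist
in QEMU/GLib):

    /* Sketch: one reference-counted logger per vhost backend type,
     * shared by all vhost devices of that type. */
    typedef struct VhostLogger {
        int refcount;
        /* shared dirty log state for this backend type would live here */
    } VhostLogger;

    static VhostLogger *vhost_loggers[VHOST_BACKEND_TYPE_MAX];

    static VhostLogger *vhost_logger_get(VhostBackendType type)
    {
        if (!vhost_loggers[type]) {
            vhost_loggers[type] = g_new0(VhostLogger, 1);
        }
        vhost_loggers[type]->refcount++;
        return vhost_loggers[type];
    }

    static void vhost_logger_put(VhostBackendType type)
    {
        if (vhost_loggers[type] && --vhost_loggers[type]->refcount == 0) {
            g_free(vhost_loggers[type]);
            vhost_loggers[type] = NULL;
        }
    }
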
> Note that the vhost logging interface (vhost
> kernel or vhost userspace) doesn't support logging memory buffer sections
> separately from those for VQs; all QEMU can rely on is the
> various sections in the memory table, and basically a single dirty bitmap
> for both guest buffers and VQs is indistinctly shared by all vhost devices.
> How it helps to just refactor the QEMU side of the code on top of today's
> vhost backend interface, I am not sure.
>
> Regardless, IMHO from a stable-fix p.o.v. it might be less risky, and still
> valuable, to just limit the fix to the vhost-kernel case (to be more
> precise, non-vhost-user types without vhost_backend_can_merge defined),
> my 2c.
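
For illustration, that gating could look roughly like this - a sketch, not
the actual patch, assuming the usual QEMU layout where backend_type and
vhost_backend_can_merge live in dev->vhost_ops:

    /* Sketch: only let a device participate in shared memory-section
     * log scanning if its backend is not vhost-user and doesn't define
     * vhost_backend_can_merge, i.e. backends guaranteed to share one
     * view of the memory sections. */
    static bool vhost_dev_can_share_log_scan(struct vhost_dev *dev)
    {
        return dev->vhost_ops->backend_type != VHOST_BACKEND_TYPE_USER &&
               !dev->vhost_ops->vhost_backend_can_merge;
    }
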
>
>
> Regards,
> -Siwei
> >
> > > My thinking on this bug was mostly to address the inefficiency with the
> > > least intrusive fix (if at all possible!) given that virtually all
> > > multiqueue-vhost-supporting QEMUs have this problem. And then move to a
> > > 'vhost device for all queues', as it feels like the problem here is that
> > > the 'device per queue pair' model doesn't scale.
> > >
> > > At the end of the day the problem here is the vhost object model in
> > > log_sync not scaling with the number of queues. But you could also argue
> > > that if the log is shared you can just scan it once for all, plus once
> > > more for each deviation from normal behaviour, like the points you made
> > > in the earlier paragraph, and thus the thinking behind this patch would
> > > still apply?
> > The thinking is good, but not the implementation.
> >