From: Tobias Huschle <huschle@linux.ibm.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Abel Wu <wuyun.abel@bytedance.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Linux Kernel <linux-kernel@vger.kernel.org>,
	kvm@vger.kernel.org, virtualization@lists.linux.dev,
	netdev@vger.kernel.org, jasowang@redhat.com
Subject: Re: Re: Re: EEVDF/vhost regression (bisected to 86bfbb7ce4f6 sched/fair: Add lag based placement)
Date: Fri, 8 Dec 2023 12:41:38 +0100
Message-ID: <ZXMA8r+T86Is8Ohv@DESKTOP-2CCOB1S.>
In-Reply-To: <20231208052150-mutt-send-email-mst@kernel.org>

On Fri, Dec 08, 2023 at 05:31:18AM -0500, Michael S. Tsirkin wrote:
> On Fri, Dec 08, 2023 at 10:24:16AM +0100, Tobias Huschle wrote:
> > On Thu, Dec 07, 2023 at 01:48:40AM -0500, Michael S. Tsirkin wrote:
> > > On Thu, Dec 07, 2023 at 07:22:12AM +0100, Tobias Huschle wrote:
> > > > 3. vhost looping endlessly, waiting for kworker to be scheduled
> > > > 
> > > > I dug a little deeper into what the vhost is doing. I'm not an expert on
> > > > virtio whatsoever, so these are just educated guesses that maybe
> > > > someone can verify/correct. Please bear with me if I mess up
> > > > the terminology.
> > > > 
> > > > - vhost is looping through available queues.
> > > > - vhost wants to wake up a kworker to process a found queue.
> > > > - kworker does something with that queue and terminates quickly.
> > > > 
> > > > What I found by throwing in some very noisy trace statements was that,
> > > > if the kworker is not woken up, the vhost just keeps looping across
> > > > all available queues (and seems to repeat itself). So it essentially
> > > > relies on the scheduler to schedule the kworker fast enough. Otherwise
> > > > it will just keep on looping until it is migrated off the CPU.
> > > 
> > > 
> > > Normally it takes the buffers off the queue and is done with it.
> > > I am guessing that at the same time the guest is running on some other
> > > CPU and keeps adding available buffers?
> > > 
> > 
> > It seems to do just that; there are multiple other vhost instances
> > involved which might keep filling up those queues.
> > 
> 
> No, vhost is only ever draining queues. The guest is filling them.
> 
> > Unfortunately, this makes the problematic vhost instance stay on
> > the CPU and prevents said kworker from getting scheduled. The kworker is
> > explicitly woken up by vhost, so vhost clearly wants it to do something.
> > 
> > At this point it seems that there is an assumption about the scheduler
> > in place which is no longer fulfilled by EEVDF. From the discussion so
> > far, it seems like EEVDF does what it is intended to do.
> > 
> > Shouldn't there be a more explicit mechanism in place that allows the
> > kworker to be scheduled ahead of the vhost?
> > 
> > It is also concerning that the vhost seemingly cannot be preempted by the
> > scheduler while executing that loop.
> 
> 
> Which loop is that, exactly?

The loop continuously passes through translate_desc in drivers/vhost/vhost.c.
That's where I put the trace statements.
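
The instrumentation itself was nothing fancy, roughly along these lines
(paraphrased, not the exact lines I added; the format string and the use of
translate_desc's addr/len parameters are just for illustration):

	/* inside translate_desc() in drivers/vhost/vhost.c, debug only */
	trace_printk("translate_desc: addr=0x%llx len=%u\n",
		     (unsigned long long)addr, len);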

The overall sequence seems to be (top to bottom):

handle_rx
get_rx_bufs
vhost_get_vq_desc
vhost_get_avail_head
vhost_get_avail
__vhost_get_user_slow
translate_desc               << trace statement in here
vhost_iotlb_itree_first

These functions show up as having increased overhead in perf.

There are multiple loops going on in there.
Again, the disclaimer: I'm not familiar with that code at all.
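
If it helps to picture the dependency, below is a toy userspace model of what
I think is going on (entirely my own construction, it shares nothing with the
actual vhost code): both threads are pinned to one CPU, the "vhost" thread
spins for as long as work is pending, and the "kworker" thread only needs a
tiny slice of CPU to clear it. How quickly the worker makes progress is then
entirely up to the scheduler, which is the assumption that seems to have
changed with EEVDF.

/*
 * Toy model, not vhost: a busy "vhost" thread and a short-running "kworker"
 * thread competing for the same CPU. Build with: gcc -O2 -pthread model.c
 * (error handling omitted, affinity failures are silently ignored)
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int pending = 1;   /* "the queue still has work" */
static atomic_long spins;        /* how long the busy thread kept looping */

static void pin_to_cpu0(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *kworker_thread(void *arg)
{
	pin_to_cpu0();
	/* The worker only needs one short run to finish the work. */
	atomic_store(&pending, 0);
	return NULL;
}

static void *vhost_thread(void *arg)
{
	pin_to_cpu0();
	/* Keep "finding" work until the worker has run; no voluntary yield. */
	while (atomic_load(&pending))
		atomic_fetch_add(&spins, 1);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, vhost_thread, NULL);
	pthread_create(&b, NULL, kworker_thread, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("busy thread spun %ld times before the worker ran\n",
	       atomic_load(&spins));
	return 0;
}

The spin count is essentially a measure of how long the scheduler lets the
busy thread run before giving the woken worker a turn, which, if my reading
is right, is the latency that got worse here.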
