public inbox for kvm@vger.kernel.org
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Abel Gordon <ABELG@il.ibm.com>
Cc: Jason Wang <jasowang@redhat.com>,
	abel.gordon@gmail.com, anthony@codemonkey.ws, asias@redhat.com,
	bsd@redhat.com, digitaleric@google.com,
	Eran Raichstein <ERANRA@il.ibm.com>,
	gleb@redhat.com, Joel Nider <JOELN@il.ibm.com>,
	kvm@vger.kernel.org, pbonzini@redhat.com,
	Razya Ladelsky <RAZYA@il.ibm.com>
Subject: Re: Elvis upstreaming plan
Date: Wed, 27 Nov 2013 12:37:01 +0200	[thread overview]
Message-ID: <20131127103701.GE29446@redhat.com> (raw)
In-Reply-To: <OFAA78B27D.94D20A3F-ONC2257C30.00368C6B-C2257C30.0038A89C@il.ibm.com>

On Wed, Nov 27, 2013 at 12:18:51PM +0200, Abel Gordon wrote:
> 
> 
> Jason Wang <jasowang@redhat.com> wrote on 27/11/2013 04:49:20 AM:
> 
> >
> > On 11/24/2013 05:22 PM, Razya Ladelsky wrote:
> > > Hi all,
> > >
> > > I am Razya Ladelsky. I work in the IBM Haifa virtualization team, which
> > > developed Elvis, presented by Abel Gordon at the last KVM Forum:
> > > ELVIS video:  https://www.youtube.com/watch?v=9EyweibHfEs
> > > ELVIS slides: https://drive.google.com/file/d/0BzyAwvVlQckeQmpnOHM5SnB5UVE
> > >
> > >
> > > According to the discussions that took place at the forum, upstreaming
> > > some of the Elvis approaches seems to be a good idea, which we would
> > > like to pursue.
> > >
> > > Our plan for the first patches is the following:
> > >
> > > 1. Shared vhost thread between multiple devices
> > > This patch creates a worker thread and worker queue shared across
> > > multiple virtio devices.
> > > We would like to modify the patch posted in
> > > https://github.com/abelg/virtual_io_acceleration/commit/3dc6a3ce7bcbe87363c2df8a6b6fee0c14615766
> > > to limit a vhost thread to serving multiple devices only if they belong
> > > to the same VM, as Paolo suggested, to avoid isolation or cgroups
> > > concerns.
> > >
> > > Another modification is related to the creation and removal of vhost
> > > threads, which will be discussed next.
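
As a rough userspace sketch of the shared-worker idea, with one worker per VM as suggested above (all names and the threading model here are illustrative; this is not the vhost kernel code):

```python
import queue
import threading

class SharedWorker:
    """One worker thread draining a work queue fed by multiple devices,
    analogous to one vhost thread serving several virtio devices."""

    def __init__(self):
        self.work = queue.Queue()
        self.results = []
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            item = self.work.get()
            if item is None:              # shutdown sentinel
                return
            device, payload = item
            self.results.append((device, payload))   # "handle" the request

    def queue_work(self, device, payload):
        self.work.put((device, payload))

    def stop(self):
        self.work.put(None)
        self.thread.join()

# Devices of the same VM share one worker, so a single thread never
# serves devices across cgroup/isolation boundaries.
workers_by_vm = {}

def worker_for(vm_id):
    if vm_id not in workers_by_vm:
        workers_by_vm[vm_id] = SharedWorker()
    return workers_by_vm[vm_id]
```

The per-VM lookup is the policy point: requests from all of a VM's devices funnel through that VM's single worker, never a neighbor's.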
> > >
> > > 2. Sysfs mechanism to add and remove vhost threads
> > > This patch allows us to add and remove vhost threads dynamically.
> > >
> > > A simpler way to control the creation of vhost threads is statically
> > > determining the maximum number of virtio devices per worker via a kernel
> > > module parameter (which is the way the previously mentioned patch is
> > > currently implemented).
> >
> > Any chance we can re-use cmwq (the concurrency-managed workqueue)
> > instead of inventing another mechanism? It looks like there's a lot of
> > function duplication here. Bandan has an RFC to do this.
> 
> Thanks for the suggestion. We should certainly take a look at Bandan's
> patches which I guess are:
> 
> http://www.mail-archive.com/kvm@vger.kernel.org/msg96603.html
> 
> My only concern here is that we may not be able to easily implement
> our polling mechanism and heuristics on top of cmwq.

It's not so hard: to poll, you just requeue the work item to make sure it's
re-invoked.
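
That "requeue to re-invoke" point can be sketched in userspace like this (the tiny workqueue, the names, and the poll flag are all illustrative; none of this is the kernel workqueue API):

```python
from collections import deque

def run_workqueue(wq, max_steps=100):
    """Tiny single-threaded 'workqueue': pop and invoke items in order.
    max_steps bounds the run so a self-requeueing item terminates."""
    steps = 0
    while wq and steps < max_steps:
        fn = wq.popleft()
        fn(wq)
        steps += 1
    return steps

def make_poll_work(vq, handled):
    """Build a work item that drains a virtqueue and, while polling is
    enabled, requeues itself so it gets re-invoked."""
    def poll(wq):
        while vq["buffers"]:
            handled.append(vq["buffers"].pop(0))
        if vq["poll"]:
            wq.append(poll)      # requeue: keep polling this queue
    return poll
```

With polling on, the item keeps reappearing on the queue until the budget runs out; with polling off, it runs once and stops, which is the behavior a cmwq-based implementation would get for free.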

> > >
> > > I'd like to ask for advice here about the more preferable way to go:
> > > Although the sysfs mechanism provides more flexibility, it may be a
> > > good idea to start with a simple static parameter and keep the first
> > > patches as simple as possible. What do you think?
> > >
> > > 3. Add virtqueue polling mode to vhost
> > > Have the vhost thread poll the virtqueues with a high I/O rate for new
> > > buffers, and avoid asking the guest to kick us.
> > > https://github.com/abelg/virtual_io_acceleration/commit/26616133fafb7855cc80fac070b0572fd1aaf5d0
> >
> > Maybe we can make poll_stop_idle adaptive, which may help the light-load
> > case. Consider that the guest is often slower than vhost; if we have just
> > one or two VMs, polling too much may waste CPU in this case.
> 
> Yes, making polling adaptive based on the amount of wasted cycles (cycles
> we spent polling but didn't find new work) and the I/O rate is a very good
> idea. Note we already measure and expose these values, but we do not use
> them to adapt the polling mechanism.
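
One way such adaptation could look, scaling a poll window by the ratio of wasted to useful polling cycles (the thresholds, scaling factors, and bounds below are made up for illustration; a real policy would be tuned against the exposed statistics):

```python
def adapt_poll_window(window, wasted, found, min_w=64, max_w=65536):
    """Shrink the polling window when almost all polled cycles were
    wasted (no new work found); grow it when polling kept paying off."""
    total = wasted + found
    if total == 0:
        return window
    waste_ratio = wasted / total
    if waste_ratio > 0.9:          # mostly idle polling: back off
        return max(min_w, window // 2)
    if waste_ratio < 0.5:          # polling is productive: poll longer
        return min(max_w, window * 2)
    return window                  # in between: leave it alone
```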
> 
> Having said that, note that adaptive polling may be a bit tricky.
> Remember that the cycles we waste polling in the vhost thread actually
> improve the performance of the vcpu threads, because the guest is no longer
> required to kick (pio == exit) the host while vhost is polling. So even if
> we waste cycles in the vhost thread, we are saving cycles in the vcpu
> threads and improving performance.


So my suggestion would be:

- the guest performs some kicks
- it measures how long each took, e.g. kick = T cycles
- it sends this info to the host

The host then polls for at most fraction * T cycles.
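
The budget computation itself is trivial; as a sketch (the function name, the default fraction, and the guest-to-host reporting path are all assumptions for illustration):

```python
def poll_budget(kick_cost_cycles, fraction=0.5):
    """Cap on cycles the host should spend polling one virtqueue.

    kick_cost_cycles: T, the kick (pio exit) cost as measured and
    reported by the guest in this scheme.
    fraction: tunable knob; polling longer than the exit it avoids
    costs more than it saves, so stay well below T.
    """
    return int(fraction * kick_cost_cycles)
```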


> > > 4. vhost statistics
> > > This patch introduces a set of statistics to monitor different
> > > performance metrics of vhost and our polling and I/O scheduling
> > > mechanisms. The statistics are exposed using debugfs and can be easily
> > > displayed with a Python script (vhost_stat, based on the old kvm_stat).
> > > https://github.com/abelg/virtual_io_acceleration/commit/ac14206ea56939ecc3608dc5f978b86fa322e7b0
> >
> > How about using tracepoints instead? Besides statistics, they can also
> > help with debugging.
> 
> Yep, we just had a discussion with Gleb about this :)
> 
> > >
> > > 5. Add heuristics to improve I/O scheduling
> > > This patch enhances the round-robin mechanism with a set of heuristics
> > > to decide when to leave a virtqueue and proceed to the next.
> > > https://github.com/abelg/virtual_io_acceleration/commit/f6a4f1a5d6b82dc754e8af8af327b8d0f043dc4d
> > >
> > > This patch improves the handling of requests by the vhost thread, but
> > > could perhaps be delayed to a later time and not submitted as one of
> > > the first Elvis patches. I'd love to hear some comments about whether
> > > this patch needs to be part of the first submission.
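
As a rough userspace sketch of round-robin with a "when to leave a virtqueue" heuristic (here simply an empty queue or an exhausted per-queue quota; the real patch's conditions are richer, and the quota value is invented):

```python
def service_round(vqs, per_vq_quota=4):
    """One round-robin pass over virtqueues: leave a queue when it is
    empty or its per-round quota is exhausted, then move to the next."""
    handled = []
    for name, buffers in vqs.items():
        served = 0
        while buffers and served < per_vq_quota:
            handled.append((name, buffers.pop(0)))
            served += 1
    return handled
```

The quota is what keeps one busy queue from starving the others within a round; the leftover buffers are picked up on the next pass.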
> > >
> > > Any other feedback on this plan will be appreciated,
> > > Thank you,
> > > Razya
> > >
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe kvm" in
> > > the body of a message to majordomo@vger.kernel.org
> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> >

Thread overview: 39+ messages
2013-11-24  9:22 Elvis upstreaming plan Razya Ladelsky
2013-11-24 10:26 ` Michael S. Tsirkin
2013-11-25 11:06   ` Razya Ladelsky
2013-11-26 15:50 ` Stefan Hajnoczi
2013-11-26 18:05 ` Anthony Liguori
2013-11-26 18:53   ` Abel Gordon
2013-11-26 21:11     ` Michael S. Tsirkin
2013-11-27  7:43       ` Joel Nider
2013-11-27 10:27         ` Michael S. Tsirkin
2013-11-27 10:41           ` Abel Gordon
2013-11-27 10:59             ` Michael S. Tsirkin
2013-11-27 11:02               ` Abel Gordon
2013-11-27 11:36                 ` Michael S. Tsirkin
2013-11-27 22:33             ` Anthony Liguori
2013-11-28  8:25               ` Abel Gordon
2013-11-27 15:00         ` Stefan Hajnoczi
2013-11-27 15:30           ` Michael S. Tsirkin
2013-11-28  7:24           ` Joel Nider
2013-11-28  7:31           ` Abel Gordon
2013-11-28 11:01             ` Michael S. Tsirkin
2013-12-02 15:11             ` Stefan Hajnoczi
2013-11-27  9:03       ` Abel Gordon
2013-11-27  9:21         ` Michael S. Tsirkin
2013-11-27  9:49           ` Abel Gordon
2013-11-27 10:29             ` Michael S. Tsirkin
2013-11-27 10:55               ` Abel Gordon
2013-11-27 11:03                 ` Michael S. Tsirkin
2013-11-27 11:05                   ` Abel Gordon
2013-11-27 11:40                     ` Michael S. Tsirkin
2013-11-26 22:27 ` Bandan Das
2013-11-27  2:49 ` Jason Wang
2013-11-27  7:35   ` Gleb Natapov
2013-11-27  7:45     ` Joel Nider
2013-11-27  9:18     ` Abel Gordon
2013-11-27  9:21       ` Gleb Natapov
2013-11-27  9:33         ` Abel Gordon
2013-11-27  9:48           ` Gleb Natapov
2013-11-27 10:18   ` Abel Gordon
2013-11-27 10:37     ` Michael S. Tsirkin [this message]
