From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next 0/3] vhost: accelerate metadata access through vmap()
Date: Fri, 14 Dec 2018 07:52:30 -0500
Message-ID: <20181214073709-mutt-send-email-mst@kernel.org>
In-Reply-To: <836932fc-9266-b73d-2ee5-645f399e1a54@redhat.com>
On Fri, Dec 14, 2018 at 12:29:54PM +0800, Jason Wang wrote:
>
> > On 2018/12/14 4:12 AM, Michael S. Tsirkin wrote:
> > On Thu, Dec 13, 2018 at 06:10:19PM +0800, Jason Wang wrote:
> > > Hi:
> > >
> > > This series tries to access virtqueue metadata through kernel virtual
> > > addresses instead of the copy_user() friends, since those have too
> > > much overhead: checks, speculation barriers, or even hardware feature
> > > toggling.
> > >
> > > Testing shows about a 24% improvement in TX PPS. It should benefit
> > > other cases as well.
> > >
> > > Please review
> > I think the idea of speeding up userspace access is a good one.
> > However, I think that moving all the checks to the start is way too
> > aggressive.
>
>
> So did packet sockets and AF_XDP. Anyway, sharing the address space and
> accessing it directly is the fastest way. Performance is the major
> consideration when people choose a backend. Compared to a userspace
> implementation, vhost does not have security advantages at any level.
> If vhost is still slow, people will start to develop backends based on
> e.g. AF_XDP.
>
Let them - what's wrong with that?
> > Instead, let's batch things up but let's not keep them
> > around forever.
> > Here are some ideas:
> >
> >
> > 1. Disable preemption, process a small number of small packets
> > directly in an atomic context. This should cut latency
> > down significantly; the tricky part is to only do it
> > under light load and to disable it
> > for the streaming case, otherwise it's unfair.
> > This might fail; if it does, just bounce things out to
> > a thread.
>
>
> I'm not sure what context you meant here. Is this for the TX path of
> TUN? But a fundamental difference is that my series targets extremely
> heavy load, not light load: 100% CPU for vhost is expected.
Interesting. You only shared a TCP RR result though.
What's the performance gain in a heavy load case?
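Also, to make idea 1 concrete, here's roughly the shape I have in mind
(just a sketch, untested; handle_tx_one() and the budget value are made
up for illustration):

/*
 * Idea 1 sketch: process a small budget of packets inline with
 * preemption disabled, and punt to the vhost worker thread as
 * usual when the budget runs out or we can't make progress.
 */
static void vhost_net_tx_inline(struct vhost_net *net)
{
	struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
	int budget = 4;	/* small, to keep the atomic section short */

	preempt_disable();
	while (budget--) {
		/* hypothetical helper: handle one descriptor, return
		 * false if the ring is empty or it would have to sleep */
		if (!handle_tx_one(nvq))
			break;
	}
	preempt_enable();

	/* streaming/heavy load: bounce the rest out to the worker */
	if (budget < 0)
		vhost_poll_queue(&nvq->vq.poll);
}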
>
> >
> > 2. Switch to unsafe_put_user/unsafe_get_user,
> > and batch up multiple accesses.
>
>
> As I said, this only helps if we can batch accesses across two of the
> three different places - avail, descriptor, and used. It won't help for
> batching accesses to a single place like used. I'm not even sure this
> can be done; consider the case of the packed virtqueue, where we have a
> single descriptor ring.
So that's one of the reasons packed should be faster: a single access
and you get the descriptor, no messy redirects. Somehow your
benchmarking so far didn't show a gain with vhost and
packed though - do you know what's wrong?
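FWIW, by batching with the unsafe accessors I mean something of this
shape - a sketch only: the exact user_access_begin() arguments differ
between kernel versions, endianness conversion is omitted, and
fetch_avail_heads() is a name I just made up:

/*
 * Idea 2 sketch: one user_access_begin() window and a single fixup
 * label covering a whole batch of reads - the single fixup point is
 * also what idea 3 below is about.
 */
static int fetch_avail_heads(struct vhost_virtqueue *vq,
			     u16 start, u16 n, u16 *heads)
{
	u16 i;

	if (!user_access_begin(vq->avail, sizeof(*vq->avail) +
			       vq->num * sizeof(vq->avail->ring[0])))
		return -EFAULT;
	for (i = 0; i < n; i++)
		/* one fixup label for the whole batch */
		unsafe_get_user(heads[i],
				&vq->avail->ring[(start + i) & (vq->num - 1)],
				efault);
	user_access_end();
	return 0;

efault:
	user_access_end();
	return -EFAULT;
}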
> Batching through the
> unsafe helpers may not help in this case, since it's equivalent to the
> safe ones. And this requires non-trivial refactoring of vhost, and such
> refactoring may itself have a noticeable impact (e.g. it may lead to
> regressions).
>
>
> >
> > 3. Allow adding a fixup point manually,
> > such that multiple independent get_user accesses
> > can get a single fixup (will allow better compiler
> > optimizations).
> >
>
> So for metadata access, I don't see how what you suggest here can help
> in the case of a heavy workload.
>
> For data access this may help, but I've experimented with batching the
> data copies to reduce SMAP/speculation barriers in vhost-net, and I
> didn't see a performance improvement.
>
> Thanks
So how about we try to figure out what's actually going on?
Can you drop the barriers and show the same gain?
E.g. vmap does not use huge pages IIRC, so in fact it
can be slower than direct access. It's not a magic
faster way.
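And for reference, my understanding of what vmap() buys you is just
this (a sketch from memory, not the actual patch; the
get_user_pages_fast() arguments also vary by kernel version):

/*
 * The vmap() approach under discussion: pin the userspace pages
 * backing the metadata and map them contiguously in kernel virtual
 * address space. Note these are regular PAGE_SIZE mappings - no
 * huge pages - hence the point above about it not being magically
 * faster than direct access.
 */
static void *map_metadata(unsigned long uaddr, size_t size,
			  struct page **pages)
{
	int n = DIV_ROUND_UP(size + (uaddr & ~PAGE_MASK), PAGE_SIZE);
	int got = get_user_pages_fast(uaddr & PAGE_MASK, n,
				      FOLL_WRITE, pages);
	void *vaddr;

	if (got != n || !(vaddr = vmap(pages, n, VM_MAP, PAGE_KERNEL))) {
		/* unpin whatever we managed to pin */
		while (got > 0)
			put_page(pages[--got]);
		return NULL;
	}
	/* accesses now go through vaddr: no uaccess checks/barriers */
	return vaddr + (uaddr & ~PAGE_MASK);
}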
>
> >
> >
> >
> > > Jason Wang (3):
> > > vhost: generalize adding used elem
> > > vhost: fine grain userspace memory accessors
> > > vhost: access vq metadata through kernel virtual address
> > >
> > > drivers/vhost/vhost.c | 281 ++++++++++++++++++++++++++++++++++++++----
> > > drivers/vhost/vhost.h | 11 ++
> > > 2 files changed, 266 insertions(+), 26 deletions(-)
> > >
> > > --
> > > 2.17.1