From: Avi Kivity <avi@redhat.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
mtosatti@redhat.com, xiaoguangrong@cn.fujitsu.com
Subject: Re: [RFC PATCH 0/3] Weight-balanced binary tree + KVM growable memory slots using wbtree
Date: Wed, 02 Mar 2011 15:31:32 +0200 [thread overview]
Message-ID: <4D6E46B4.7030909@redhat.com> (raw)
In-Reply-To: <1299003632.4177.66.camel@x201>
On 03/01/2011 08:20 PM, Alex Williamson wrote:
> > > It seems like we need a good mixed workload benchmark. So far we've
> > > only tested worst case, with a pure emulated I/O test, and best case,
> > > with a pure memory test. Ordering an array only helps the latter, and
> > > only barely beats the tree, so I suspect overall performance would be
> > > better with a tree.
> >
> > But if we cache the missed-all-memslots result in the spte, we eliminate
> > the worst case, and are left with just the best case.
>
> There's potentially a lot of entries between best case and worst case.
The mid case is where we have a lot of small slots which are
continuously flushed. That would be (ept=0 && new mappings continuously
established) || (lots of small mappings && lots of host paging
activity). I don't know of any guests that continuously reestablish BAR
mappings; and host paging activity doesn't apply to device assignment.
What are we left with?
> >
> > The problem here is that all workloads will cache all memslots very
> > quickly into sptes and all lookups will be misses. There are two cases
> > where we have lookups that hit the memslots structure: ept=0, and host
> > swap. Neither are things we want to optimize too heavily.
>
> Which seems to suggest that:
>
> A. making those misses fast = win
> B. making those misses fast + caching misses = win++
> C. we don't care if the sorted array is subtly faster for ept=0
>
> Sound right? So is the question whether cached misses alone get us 99%
> of the improvement, since hits are already getting cached in sptes for
> the cases we care about?
Yes, that's my feeling. Caching those misses is a lot more important
than speeding them up, since the cache will stay valid for long periods,
and since the hit rate will be very high.
  cache + anything = O(1)
  no cache + tree  = O(log n)
  no cache + array = O(n)
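To make that comparison concrete, here is a rough userspace sketch -- not
the in-tree KVM code -- of a lookup that consults a cached
"missed all memslots" result before falling back to a scan of the slot
array. The names (gfn_t, struct memslot, slot_contains(), struct
slot_cache, lookup_slot()) are illustrative assumptions, and the miss
cache is a single entry here, whereas in KVM the miss would be remembered
in the spte:

/*
 * Rough sketch only.  A single-entry miss cache stands in for the
 * per-spte "missed all memslots" marker discussed above.
 */
#include <stdbool.h>
#include <stddef.h>

typedef unsigned long long gfn_t;

struct memslot {
	gfn_t base_gfn;
	unsigned long npages;
};

struct slot_cache {
	gfn_t miss_gfn;		/* last gfn that missed all memslots */
	bool miss_valid;
};

static bool slot_contains(const struct memslot *s, gfn_t gfn)
{
	return gfn >= s->base_gfn && gfn < s->base_gfn + s->npages;
}

static struct memslot *lookup_slot(struct memslot *slots, size_t nslots,
				   struct slot_cache *cache, gfn_t gfn)
{
	size_t i;

	/* cache + anything = O(1): repeated MMIO accesses stop here */
	if (cache->miss_valid && cache->miss_gfn == gfn)
		return NULL;

	/* no cache + array = O(n); a wbtree would make this O(log n) */
	for (i = 0; i < nslots; i++)
		if (slot_contains(&slots[i], gfn))
			return &slots[i];

	/* remember the miss so the next lookup for this gfn is O(1) */
	cache->miss_gfn = gfn;
	cache->miss_valid = true;
	return NULL;
}

Once the miss is cached, the cost of the fallback search only matters for
the first access to each unmapped gfn, which is why caching dominates the
choice of array vs. tree.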
--
error compiling committee.c: too many arguments to function
Thread overview: 42+ messages
2011-02-22 8:08 [PATCH 0/7] KVM: optimize memslots searching and cache GPN to GFN Xiao Guangrong
2011-02-22 8:09 ` [PATCH 1/7] KVM: cleanup memslot_id function Xiao Guangrong
2011-02-22 8:10 ` [PATCH 2/7] KVM: introduce KVM_MEM_SLOTS_NUM macro Xiao Guangrong
2011-02-22 8:11 ` [PATCH 3/7] KVM: introduce memslots_updated function Xiao Guangrong
2011-02-22 8:12 ` [PATCH 4/7] KVM: sort memslots and use binary search to search the right slot Xiao Guangrong
2011-02-22 14:25 ` Avi Kivity
2011-02-22 14:54 ` Alex Williamson
2011-02-22 18:54 ` [RFC PATCH 0/3] Weight-balanced binary tree + KVM growable memory slots using wbtree Alex Williamson
2011-02-22 18:55 ` [RFC PATCH 1/3] Weight-balanced tree Alex Williamson
2011-02-23 13:09 ` Avi Kivity
2011-02-23 17:02 ` Alex Williamson
2011-02-23 17:08 ` Avi Kivity
2011-02-23 20:19 ` Alex Williamson
2011-02-24 23:04 ` Andrew Morton
2011-02-22 18:55 ` [RFC PATCH 2/3] kvm: Allow memory slot array to grow on demand Alex Williamson
2011-02-24 10:39 ` Avi Kivity
2011-02-24 18:08 ` Alex Williamson
2011-02-27 9:44 ` Avi Kivity
2011-02-22 18:55 ` [RFC PATCH 3/3] kvm: Use weight-balanced tree for memory slot management Alex Williamson
2011-02-22 18:59 ` [RFC PATCH 0/3] Weight-balanced binary tree + KVM growable memory slots using wbtree Alex Williamson
2011-02-23 1:56 ` Alex Williamson
2011-02-23 13:12 ` Avi Kivity
2011-02-23 18:06 ` Alex Williamson
2011-02-23 19:28 ` Alex Williamson
2011-02-24 10:06 ` Avi Kivity
2011-02-24 17:35 ` Alex Williamson
2011-02-27 9:54 ` Avi Kivity
2011-02-28 23:04 ` Alex Williamson
2011-03-01 15:03 ` Avi Kivity
2011-03-01 18:20 ` Alex Williamson
2011-03-02 13:31 ` Avi Kivity [this message]
2011-03-01 19:47 ` Marcelo Tosatti
2011-03-02 13:34 ` Avi Kivity
2011-02-24 10:04 ` Avi Kivity
2011-02-23 1:30 ` [PATCH 4/7] KVM: sort memslots and use binary search to search the right slot Xiao Guangrong
2011-02-22 8:13 ` [PATCH 5/7] KVM: cache the last used slot Xiao Guangrong
2011-02-22 14:26 ` Avi Kivity
2011-02-22 8:15 ` [PATCH 6/7] KVM: cleanup traversal used slots Xiao Guangrong
2011-02-22 8:16 ` [PATCH 7/7] KVM: MMU: cache guest page number to guest frame number Xiao Guangrong
2011-02-22 14:32 ` Avi Kivity
2011-02-23 1:38 ` Xiao Guangrong
2011-02-23 9:28 ` Avi Kivity