From: Avi Kivity <avi@redhat.com>
To: Takuya Yoshikawa <takuya.yoshikawa@gmail.com>
Cc: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>,
mtosatti@redhat.com, agraf@suse.de, paulus@samba.org,
aarcange@redhat.com, kvm@vger.kernel.org,
kvm-ppc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/4] KVM: MMU: Make kvm_handle_hva() handle range of addresses
Date: Thu, 21 Jun 2012 11:24:59 +0300
Message-ID: <4FE2DA5B.4080706@redhat.com>
In-Reply-To: <20120619224648.bbb360c6eeae9887de7b1f93@gmail.com>
On 06/19/2012 04:46 PM, Takuya Yoshikawa wrote:
> On Mon, 18 Jun 2012 15:11:42 +0300
> Avi Kivity <avi@redhat.com> wrote:
>
>> Potential for improvement: don't do 512 iterations on same large page.
>>
>> Something like
>>
>> if ((gfn ^ prev_gfn) & mask(level))
>> ret |= handler(...)
>>
>> with clever selection of the first prev_gfn so it always matches (~gfn
>> maybe).
>
>
> I thought up a better solution:
>
> 1. Separate rmap_pde from lpage_info->write_count and
> make this a simple array. (I once tried this.)
>
This has the potential to increase cache misses, but I don't think it's
a killer. The separation can simplify other things as well.
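
For concreteness, a rough sketch of what that separation could look like
on the memslot side (illustration only, not a concrete proposal; the
field layout here is only a placeholder):

	/*
	 * Sketch: the per-level rmaps become plain arrays indexed by
	 * gfn_to_index(), while write_count stays behind in lpage_info.
	 */
	struct kvm_arch_memory_slot {
		unsigned long *rmap[KVM_NR_PAGE_SIZES];
		struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
	};

With something like that, the rmap entry for any gfn at any level is one
gfn_to_index() computation away, which is what the loop in 2. below
relies on.
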
> 2. Use gfn_to_index() and loop over rmap array:
>     ...
>     /* intersection check */
>     start = max(start, memslot->userspace_addr);
>     end = min(end, memslot->userspace_addr +
>               (memslot->npages << PAGE_SHIFT));
>     if (start > end)
>             continue;
>
>     /* hva to gfn conversion */
>     gfn_start = hva_to_gfn_memslot(start);
>     gfn_end = hva_to_gfn_memslot(end);
>
>     /* main part */
>     for each level {
>             rmapp = __gfn_to_rmap(gfn_start, level, memslot);
>             for (idx = gfn_to_index(gfn_start, memslot->base_gfn, level);
>                  idx < gfn_to_index(gfn_end, memslot->base_gfn, level); idx++) {
>                     ...
>                     /* loop over rmap array */
>                     ret |= handler(kvm, rmapp + idx, data);
>             }
>     }
>
Probably want idx <= gfn_to_index(gfn_end - 1, ...), otherwise we fail on
small slots: when gfn_end is still inside the same large page as
gfn_start, the '<' comparison sees equal indices, the loop body never
runs, and that large page is skipped.
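
Something like the following, as a sketch only (keeping the helpers from
your pseudocode, not compiled; note rmapp is advanced directly instead of
re-adding idx, since __gfn_to_rmap(gfn_start, ...) already points at
gfn_start's entry):

	for (level = PT_PAGE_TABLE_LEVEL;
	     level < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; level++) {
		unsigned long idx, idx_end;
		unsigned long *rmapp;

		/*
		 * Treat the range as the inclusive gfn set
		 * {gfn_start, ..., gfn_end - 1}; indexing by gfn_end - 1
		 * keeps the last entry reachable even when the whole
		 * range fits inside one large page.
		 */
		idx = gfn_to_index(gfn_start, memslot->base_gfn, level);
		idx_end = gfn_to_index(gfn_end - 1, memslot->base_gfn, level);

		rmapp = __gfn_to_rmap(gfn_start, level, memslot);

		for (; idx <= idx_end; idx++)
			ret |= handler(kvm, rmapp++, data);
	}
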
--
error compiling committee.c: too many arguments to function