From: Avi Kivity <avi@redhat.com>
To: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
LKML <linux-kernel@vger.kernel.org>,
KVM list <kvm@vger.kernel.org>
Subject: Re: [PATCH v5 6/9] KVM: MMU: introduce pte_prefetch_topup_memory_cache()
Date: Mon, 12 Jul 2010 15:26:37 +0300 [thread overview]
Message-ID: <4C3B09FD.3060307@redhat.com> (raw)
In-Reply-To: <4C3A8694.1000401@cn.fujitsu.com>
On 07/12/2010 06:05 AM, Xiao Guangrong wrote:
>
> Avi Kivity wrote:
>
>> On 07/06/2010 01:49 PM, Xiao Guangrong wrote:
>>
>>> Introduce this function to topup prefetch cache
>>>
>>>
>>>
>>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>>> index 3dcd55d..cda4587 100644
>>> --- a/arch/x86/kvm/mmu.c
>>> +++ b/arch/x86/kvm/mmu.c
>>> @@ -89,6 +89,8 @@ module_param(oos_shadow, bool, 0644);
>>> }
>>> #endif
>>>
>>> +#define PTE_PREFETCH_NUM 16
>>>
>>>
>> Let's make it 8 to start with... It's frightening enough.
>>
>> (8 = one cache line in both guest and host)
>>
> Umm, before posting this patchset, I did a rough performance test with
> different prefetch distances, and it showed that 16 is the distance that
> gives the highest performance.
>
What's the difference between 8 and 16?

I'm worried that there are workloads that don't benefit from prefetch,
and we may regress there. So I'd like to limit it, at least at first.

btw, what about dirty logging? Will prefetch cause pages to be marked dirty?
We may need to instantiate prefetched pages with spte.d=0 and examine it
when tearing down the spte.
>>> +static int pte_prefetch_topup_memory_cache(struct kvm_vcpu *vcpu)
>>> +{
>>> +	return __mmu_topup_memory_cache(&vcpu->arch.mmu_rmap_desc_cache,
>>> +					rmap_desc_cache, PTE_PREFETCH_NUM,
>>> +					PTE_PREFETCH_NUM, GFP_ATOMIC);
>>> +}
>>> +
>>>
>>>
>> Just make the ordinary topup sufficient for prefetch. If we allocate
>> too much, we don't lose anything; the memory remains for the next time
>> around.
>>
>>
>>
> Umm, but in the worst case we would need to allocate 40 items for the
> rmap; that's heavy for a GFP_ATOMIC allocation while holding mmu_lock.
>
>
Why use GFP_ATOMIC at all? Make mmu_topup_memory_caches() always assume
we'll be prefetching.

Why 40? I think all we need is PTE_PREFETCH_NUM rmap entries.
--
error compiling committee.c: too many arguments to function