From: Avi Kivity <avi@redhat.com>
To: Alexander Graf <agraf@suse.de>
Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
KVM list <kvm@vger.kernel.org>,
"kvm-ppc@vger.kernel.org" <kvm-ppc@vger.kernel.org>
Subject: Re: [PATCH] KVM: PPC: Add generic hpte management functions
Date: Mon, 28 Jun 2010 13:01:25 +0300
Message-ID: <4C2872F5.20501@redhat.com>
In-Reply-To: <4C2871A8.1060706@suse.de>
On 06/28/2010 12:55 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>
>> On 06/28/2010 12:27 PM, Alexander Graf wrote:
>>
>>>> Am I looking at old code?
>>>>
>>>
>>> Apparently. Check book3s_mmu_*.c
>>>
>> I don't have that pattern.
>>
> It's in this patch.
>
Yes. Silly me.
>> +static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
>> +{
>> +        dprintk_mmu("KVM: Flushing SPT: 0x%lx (0x%llx) -> 0x%llx\n",
>> +                    pte->pte.eaddr, pte->pte.vpage, pte->host_va);
>> +
>> +        /* Different for 32 and 64 bit */
>> +        kvmppc_mmu_invalidate_pte(vcpu, pte);
>> +
>> +        if (pte->pte.may_write)
>> +                kvm_release_pfn_dirty(pte->pfn);
>> +        else
>> +                kvm_release_pfn_clean(pte->pfn);
>> +
>> +        list_del(&pte->list_pte);
>> +        list_del(&pte->list_vpte);
>> +        list_del(&pte->list_vpte_long);
>> +        list_del(&pte->list_all);
>> +
>> +        kmem_cache_free(vcpu->arch.hpte_cache, pte);
>> +}
>> +
>>
(that's the old one with list_all - better check what's going on here)
>>>> (another difference is using struct hlist_head instead of list_head,
>>>> which I recommend since it saves space)
>>>>
>>> Hrm. I thought about this quite a bit before too, but that makes
>>> invalidation more complicated, no? We always need to remember the
>>> previous entry in a list.
>>>
>> hlist_for_each_entry_safe() does that.
>>
> Oh - very nice. So all I need to do is pass the previous list entry to
> invalidate_pte too and I'm good. I guess I'll give it a shot.
>
No, just the for_each cursor.
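
To make that concrete, a minimal sketch of what a bucket flush could look
like with hlist_for_each_entry_safe() (struct kvm_vcpu, struct hpte_cache and
the list_pte member come from the patch above; the helper name is
illustrative, the three-cursor form matches the macro as it stands today, and
invalidate_pte() is assumed to switch from list_del() to hlist_del()):

#include <linux/list.h>

/* Sketch only: tear down every entry in one hash bucket.  The "tmp"
 * cursor caches the next node, so invalidate_pte() can unhash the
 * current entry without breaking the walk. */
static void kvmppc_mmu_pte_flush_bucket(struct kvm_vcpu *vcpu,
                                        struct hlist_head *bucket)
{
        struct hpte_cache *pte;
        struct hlist_node *node, *tmp;

        hlist_for_each_entry_safe(pte, node, tmp, bucket, list_pte)
                invalidate_pte(vcpu, pte);
}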
>> Less and simpler code, better reporting through slabtop, less wastage
>> of partially allocated slab pages.
>>
> But it also means that one VM can spill the global slab cache and kill
> another VM's mm performance, no?
>
What do you mean by spill?
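
(For illustration, a sketch of the single module-wide cache suggested above --
the cache name, the helper names and the init/exit hook points are
placeholders, not taken from the patch:)

#include <linux/slab.h>

/* Sketch only: one global slab shared by all VMs instead of a per-vcpu
 * kmem_cache.  The name is what would show up in slabtop. */
static struct kmem_cache *hpte_cache_slab;

int kvmppc_mmu_hpte_sysinit(void)
{
        hpte_cache_slab = kmem_cache_create("kvm_hpte_cache",
                                            sizeof(struct hpte_cache),
                                            0, 0, NULL);
        return hpte_cache_slab ? 0 : -ENOMEM;
}

void kvmppc_mmu_hpte_sysexit(void)
{
        kmem_cache_destroy(hpte_cache_slab);
}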
btw, in the midst of the nit-picking frenzy I forgot to ask how the
individual hash chain lengths as well as the per-vm allocation were limited.
On x86 we have a per-vm limit and we allow the mm shrinker to reduce
shadow mmu data structures dynamically.
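
Very roughly, the arrangement looks like the sketch below (condensed and not
the real x86 code: the two kvmppc_* helpers are hypothetical placeholders for
whatever eviction/counting logic the Book3S side would grow, and the shrinker
callback is shown in its old single-callback form, whose signature has varied
across kernel versions):

#include <linux/mm.h>

/* Sketch only: let the core mm trim shadow MMU state under memory
 * pressure via a registered shrinker. */
static int hpte_shrink(int nr_to_scan, gfp_t gfp_mask)
{
        if (nr_to_scan)
                kvmppc_mmu_hpte_evict(nr_to_scan);   /* hypothetical */
        return kvmppc_mmu_hpte_count();              /* hypothetical */
}

static struct shrinker hpte_shrinker = {
        .shrink = hpte_shrink,
        .seeks  = DEFAULT_SEEKS,
};

/* register_shrinker(&hpte_shrinker) at module init,
 * unregister_shrinker(&hpte_shrinker) at module exit. */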
--
error compiling committee.c: too many arguments to function