From: Avi Kivity <avi@redhat.com>
To: Alexander Graf <agraf@suse.de>
Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
KVM list <kvm@vger.kernel.org>,
"kvm-ppc@vger.kernel.org" <kvm-ppc@vger.kernel.org>
Subject: Re: [PATCH] KVM: PPC: Add generic hpte management functions
Date: Mon, 28 Jun 2010 12:34:16 +0300
Message-ID: <4C286C98.8060903@redhat.com>
In-Reply-To: <1A0E0E54-D055-4333-B5EC-DE2F71382AB7@suse.de>
On 06/28/2010 12:27 PM, Alexander Graf wrote:
>> Am I looking at old code?
>
> Apparently. Check book3s_mmu_*.c
I don't have that pattern.
>
>>
>> (another difference is using struct hlist_head instead of list_head,
>> which I recommend since it saves space)
>
> Hrm. I thought about this quite a bit before too, but that makes
> invalidation more complicated, no? We always need to remember the
> previous entry in a list.
hlist_for_each_entry_safe() does that.
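
Untested sketch, to illustrate (the hash array and member names here are
invented, not taken from your patch):

	struct hpte_cache *pte;
	struct hlist_node *node, *tmp;

	/* the _safe variant keeps the next pointer in 'tmp', so entries
	 * can be unlinked while walking -- no manual 'previous' tracking */
	hlist_for_each_entry_safe(pte, node, tmp,
				  &vcpu->arch.hpte_hash[i], list) {
		hlist_del(&pte->list);
		kmem_cache_free(vcpu->arch.hpte_cache, pte);
	}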
>>
>>>>> +int kvmppc_mmu_hpte_init(struct kvm_vcpu *vcpu)
>>>>> +{
>>>>> +	char kmem_name[128];
>>>>> +
>>>>> +	/* init hpte slab cache */
>>>>> +	snprintf(kmem_name, 128, "kvm-spt-%p", vcpu);
>>>>> +	vcpu->arch.hpte_cache = kmem_cache_create(kmem_name,
>>>>> +		sizeof(struct hpte_cache), sizeof(struct hpte_cache),
>>>>> +		0, NULL);
>>>>>
>>>> Why not one global cache?
>>>>
>>> You mean over all vcpus? Or over all VMs?
>>
>> Totally global. As in 'static struct kmem_cache *kvm_hpte_cache;'.
>
> What would be the benefit?
Less and simpler code, better reporting through slabtop, less wastage of
partially allocated slab pages.
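
Something like this (untested sketch; the init/exit function names are
just for illustration):

	static struct kmem_cache *kvm_hpte_cache;

	int kvmppc_mmu_hpte_sysinit(void)
	{
		/* one cache shared by all VMs and all vcpus */
		kvm_hpte_cache = kmem_cache_create("kvm-spt",
				sizeof(struct hpte_cache),
				sizeof(struct hpte_cache), 0, NULL);
		return kvm_hpte_cache ? 0 : -ENOMEM;
	}

	void kvmppc_mmu_hpte_sysexit(void)
	{
		kmem_cache_destroy(kvm_hpte_cache);
	}

A single cache name also means slabtop shows one meaningful line for all
hptes instead of one per vcpu.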
>>> Because this way they don't interfere. An operation on one vCPU
>>> doesn't affect another. There's also no locking necessary this
>>> way.
>>>
>>
>> The slab writers have solved this for everyone, not just us.
>> kmem_cache_alloc() will usually allocate from a per-cpu cache, so no
>> interference and/or locking. See ____cache_alloc().
>>
>> If there's a problem in kmem_cache_alloc(), solve it there, don't
>> introduce workarounds.
>
> So you would still keep different hash arrays and everything, just
> allocate the objects from a global pool?
Yes.
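
e.g. (sketch again; field names invented):

	struct hpte_cache *pte;

	/* global pool -- kmem_cache_zalloc() normally hits a per-cpu
	 * cache, so there's no cross-vcpu locking on this path */
	pte = kmem_cache_zalloc(kvm_hpte_cache, GFP_KERNEL);
	if (!pte)
		return NULL;

	/* the hash arrays themselves stay per-vcpu, as in your patch */
	hlist_add_head(&pte->list, &vcpu->arch.hpte_hash[hash]);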
> I still fail to see how that benefits anyone.
See above.
--
error compiling committee.c: too many arguments to function