From: Hollis Blanchard
Subject: Re: [PATCH 0/2] Patches to further split kvm_init
Date: Fri, 30 Nov 2007 11:29:42 -0600
To: Avi Kivity
Cc: kvm-devel, Christian Ehrhardt, carsteno, "Zhang, Xiantao"

On Fri, 2007-11-30 at 18:03 +0800, Zhang, Xiantao wrote:
> Avi Kivity wrote:
> > Zhang, Xiantao wrote:
> >>> Ah, I see. It isn't just the alignment. How do you allocate
> >>> kvm_vcpu, then?
> >>
> >> For every VM, we allocate a big chunk of memory for structure
> >> allocation. Through our allocation mechanism, a vcpu is always
> >> 64KB-aligned, so we don't have to worry about its alignment. :)
> >
> > I see. Can you explain why you do that? Do you have a special
> > allocator in your guest-resident vmm module?
>
> Our VMM module and the KVM module share the kvm and vcpu structures,
> but the VMM module runs in a different address space, so we have to
> use a fixed allocation scheme to handle that sharing. For example, in
> the kvm module we allocate 1MB of memory (1MB-aligned) for each VM
> for this purpose: the first 64KB is used for the guest's first vcpu,
> the second 64KB for the second vcpu, and so on for the remaining
> vcpus. You can call it a special allocator or whatever you like. :)
> This is dictated by the IA64 virtualization architecture and is hard
> to work around in this host-based VM model. :(

We're doing something similar with very large allocations. Currently,
PowerPC's "vcpu" is actually a copy of the exception handlers, plus the
real vcpu data structure at a higher offset. Since our exception
handlers can't span 64KB regions, we allocate a full 64KB for each
vcpu. I'm not sure what benefit a kmem_cache would have in this
situation...

-- 
Hollis Blanchard
IBM Linux Technology Center
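For concreteness, here is a rough sketch of the fixed-offset scheme
described above: one naturally aligned 1MB block per VM, carved into
64KB vcpu slots. The names (VM_AREA_SIZE, VCPU_STRIDE,
kvm_arch_alloc_vm_area, kvm_arch_vcpu_slot) are invented for
illustration; this is not the actual IA64 or PowerPC code. It leans on
the fact that the kernel's buddy allocator returns blocks naturally
aligned to their own size, which is also what keeps each 64KB slot
from crossing a 64KB boundary.

#include <linux/gfp.h>
#include <linux/mm.h>

#define VM_AREA_SIZE    (1UL << 20)     /* 1MB chunk per VM */
#define VCPU_STRIDE     (1UL << 16)     /* 64KB slot per vcpu */

struct kvm_vcpu;        /* opaque here; defined elsewhere in KVM */

/*
 * Allocate the per-VM area.  alloc_pages() hands back blocks aligned
 * to their own size, so an order-8 (1MB with 4KB pages) allocation is
 * 1MB-aligned, and every 64KB slot inside it is 64KB-aligned.
 */
static void *kvm_arch_alloc_vm_area(void)
{
        struct page *page;

        page = alloc_pages(GFP_KERNEL | __GFP_ZERO,
                           get_order(VM_AREA_SIZE));
        return page ? page_address(page) : NULL;
}

/*
 * vcpu N lives at a fixed 64KB offset from the start of the VM area,
 * so a module in a different address space can locate it knowing only
 * the base address and the vcpu id.
 */
static struct kvm_vcpu *kvm_arch_vcpu_slot(void *vm_area, int vcpu_id)
{
        return (struct kvm_vcpu *)((char *)vm_area +
                                   vcpu_id * VCPU_STRIDE);
}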