From: Hollis Blanchard
Subject: Re: [PATCH 1/2] kvm/e500v2: Remove shadow tlb
Date: Wed, 08 Sep 2010 09:06:41 -0700
Message-ID: <4C87B491.2050002@mentor.com>
In-Reply-To: <1283938806-2981-2-git-send-email-yu.liu@freescale.com>
To: Liu Yu
Cc: kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, kvm-ppc-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, agraf-l3A5Bk7waGM@public.gmane.org

On 09/08/2010 02:40 AM, Liu Yu wrote:
> It is unnecessary to keep a shadow TLB.
> First, the shadow TLB keeps fixed values in shadow, which makes things inflexible.
> Second, removing the shadow TLB saves a lot of memory.
>
> This patch removes the shadow TLB and calculates the shadow TLB entry value
> before we write it to hardware.
>
> We also use a new struct tlbe_ref to track the relation
> between guest TLB entries and pages.

Did you look at the performance impact? Back in the day, we did essentially 
the same thing on 440. However, rather than discarding the whole TLB when 
context switching away from the host (to be demand-faulted when the guest 
is resumed), we found a noticeable performance improvement by preserving a 
shadow TLB across context switches. We only use it in the 
vcpu_put/vcpu_load path.

Of course, our TLB was much smaller (64 entries), so the usage model may 
not be the same at all (e.g. it takes longer to restore a full guest TLB 
working set, but maybe it's not really possible to use all 1024 TLB0 
entries in one timeslice anyway).

-- 
Hollis Blanchard
Mentor Graphics, Embedded Systems Division
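To make the tradeoff concrete, here is a minimal C sketch of the two context-switch strategies being compared. This is not the actual KVM code: the struct layout, the counters, and the helper names (vcpu_put_preserve, vcpu_put_discard, vcpu_touch) are invented for illustration, and only loosely mirror the real vcpu_put/vcpu_load hooks.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define SHADOW_TLB_SIZE 64   /* a 440-class TLB, per the discussion above */

/* Simplified TLB entry: just a valid bit for this model. */
struct tlbe { bool valid; };

/* Hypothetical vcpu model: a stand-in for the hardware TLB, plus an
 * optional shadow copy preserved across context switches. */
struct vcpu {
    struct tlbe hw_tlb[SHADOW_TLB_SIZE];
    struct tlbe shadow_tlb[SHADOW_TLB_SIZE];
    int faults;   /* counts demand faults taken after resume */
};

/* Approach A (440-style): snapshot the TLB when switching away... */
static void vcpu_put_preserve(struct vcpu *v)
{
    memcpy(v->shadow_tlb, v->hw_tlb, sizeof(v->shadow_tlb));
    memset(v->hw_tlb, 0, sizeof(v->hw_tlb));  /* host reclaims the TLB */
}

/* ...and restore it wholesale on switch-in: no faults on resume. */
static void vcpu_load_preserve(struct vcpu *v)
{
    memcpy(v->hw_tlb, v->shadow_tlb, sizeof(v->hw_tlb));
}

/* Approach B (the patch under review): discard on switch-away... */
static void vcpu_put_discard(struct vcpu *v)
{
    memset(v->hw_tlb, 0, sizeof(v->hw_tlb));
}

/* ...and refill one entry per TLB miss as the guest touches memory. */
static void vcpu_touch(struct vcpu *v, int idx)
{
    if (!v->hw_tlb[idx].valid) {
        v->faults++;              /* one fault per working-set entry */
        v->hw_tlb[idx].valid = true;
    }
}
```

The cost difference is the fault count: with a preserved shadow TLB the guest resumes with its working set intact, while with discard-and-demand-fault it pays one fault per entry it re-touches, which matters more the larger the working set (hence the 64-entry vs 1024-entry TLB0 caveat).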