From: Tim Deegan
Subject: Re: [PATCH 03/10] xen: arm: reduce instruction cache and tlb flushes to inner-shareable.
Date: Thu, 4 Jul 2013 12:07:01 +0100
Message-ID: <20130704110701.GC40611@ocelot.phlegethon.org>
To: Ian Campbell
Cc: julien.grall@citrix.com, stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org

At 17:10 +0100 on 28 Jun (1372439449), Ian Campbell wrote:
> Now that Xen maps memory and performs pagetable walks as inner shareable, we
> don't need to push updates down so far when modifying page tables etc.
>
> Signed-off-by: Ian Campbell
> --- a/xen/include/asm-arm/arm32/page.h
> +++ b/xen/include/asm-arm/arm32/page.h
> @@ -39,8 +39,8 @@ static inline void flush_xen_text_tlb(void)
>      asm volatile (
>          "isb;"                       /* Ensure synchronization with previous changes to text */
>          STORE_CP32(0, TLBIALLH)      /* Flush hypervisor TLB */
> -        STORE_CP32(0, ICIALLU)       /* Flush I-cache */
> -        STORE_CP32(0, BPIALL)        /* Flush branch predictor */
> +        STORE_CP32(0, ICIALLUIS)     /* Flush I-cache */
> +        STORE_CP32(0, BPIALLIS)      /* Flush branch predictor */
>          "dsb;"                       /* Ensure completion of TLB+BP flush */
>          "isb;"
>          : : "r" (r0) /*dummy*/ : "memory");
> @@ -54,7 +54,7 @@ static inline void flush_xen_data_tlb(void)
>  {
>      register unsigned long r0 asm ("r0");
>      asm volatile("dsb;" /* Ensure preceding are visible */
> -                 STORE_CP32(0, TLBIALLH)
> +                 STORE_CP32(0, TLBIALLHIS)
>                   "dsb;" /* Ensure completion of the TLB flush */
>                   "isb;"
>                   : : "r" (r0) /* dummy */: "memory");
> @@ -69,7 +69,7 @@ static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long s
>      unsigned long end = va + size;
>      dsb(); /* Ensure preceding are visible */
>      while ( va < end ) {
> -        asm volatile(STORE_CP32(0, TLBIMVAH)
> +        asm volatile(STORE_CP32(0, TLBIMVAHIS)
>                       : : "r" (va) : "memory");
>          va += PAGE_SIZE;
>      }

That's OK for actual Xen data mappings, map_domain_page() &c., but now
set_fixmap() and clear_fixmap() need to use a stronger flush whenever
they map device memory.  The same goes for create_xen_entries() when
ai != WRITEALLOC.  (A rough sketch of the sort of thing I mean is at
the end of this mail.)

> diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
> index d0535a0..3a6d2cb 100644
> --- a/xen/include/asm-arm/arm64/flushtlb.h
> +++ b/xen/include/asm-arm/arm64/flushtlb.h
> @@ -6,7 +6,7 @@ static inline void flush_tlb_local(void)
>  {
>      asm volatile(
>          "dsb sy;"
> -        "tlbi vmalle1;"
> +        "tlbi vmalle1is;"
>          "dsb sy;"
>          "isb;"
>          : : : "memory");
> @@ -17,7 +17,7 @@ static inline void flush_tlb_all_local(void)
>  {
>      asm volatile(
>          "dsb sy;"
> -        "tlbi alle1;"
> +        "tlbi alle1is;"
>          "dsb sy;"
>          "isb;"
>          : : : "memory");

Might these need to be stronger if we're using them on context switch
and guests have MMIO/outer-shareable mappings?  (Again, see the sketch
below.)
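To illustrate what I mean by a "stronger flush" on the arm32 side --
this is only a sketch, not a tested patch; the helper name is my
invention, and exactly which barriers/ops are required is the question
at hand -- the pre-patch non-IS encoding could be kept as a separate
helper for device/outer-shareable mappings:

/*
 * Sketch only: flush a range of VAs using the non-inner-shareable
 * encoding, for mappings where the IS broadcast may not be enough.
 * Note that TLBIMVAH only affects the local CPU, so for mappings
 * that other CPUs may have walked this would additionally need
 * issuing on every CPU (e.g. by IPI).
 */
static inline void flush_xen_data_tlb_range_va_strong(unsigned long va,
                                                      unsigned long size)
{
    unsigned long end = va + size;
    dsb();                 /* Full-system barrier, not just inner-shareable */
    while ( va < end ) {
        asm volatile(STORE_CP32(0, TLBIMVAH)   /* non-IS flush by VA */
                     : : "r" (va) : "memory");
        va += PAGE_SIZE;
    }
    dsb();                 /* Ensure completion of the TLB flush */
    isb();
}

set_fixmap()/clear_fixmap() and create_xen_entries() would then pick
their flush based on the attribute index: the IS version for
WRITEALLOC, the helper above otherwise.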
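And on the arm64 side, if the answer to the context-switch question is
yes, the heavyweight fallback would be to run the pre-patch, purely
local flush on every online CPU rather than relying on the IS domain.
Again just a sketch -- the function names are mine, it assumes the
usual smp_call_function() from xen/smp.h, and it's surely too
expensive to do on every context switch:

/* Sketch only: the pre-patch local flush, as an IPI callback. */
static void flush_tlb_all_local_cb(void *unused)
{
    asm volatile(
        "dsb sy;"
        "tlbi alle1;"      /* non-IS: all EL1&0 entries, this PE only */
        "dsb sy;"
        "isb;"
        : : : "memory");
}

/* Sketch only: flush on the local CPU, then IPI all the others. */
static void flush_tlb_all_cpus(void)
{
    flush_tlb_all_local_cb(NULL);
    smp_call_function(flush_tlb_all_local_cb, NULL, 1);
}

Tim.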