From: Julien Grall
Subject: Re: [PATCH] xen: arm: invalidate caches after map_domain_page done
Date: Fri, 01 Aug 2014 18:49:51 +0100
Message-ID: <53DBD33F.6000808@linaro.org>
To: Andrii Tseglytskyi
Cc: Tim Deegan, Stefano Stabellini, Ian Campbell, xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

On 01/08/14 16:06, Andrii Tseglytskyi wrote:
> Looks like I see where the issue is:
> After the mapping is done, the kernel driver calls flush_tlb_all(),
> which just invalidates the cache; it issues a command similar to the
> following Xen macro:

flush_tlb_all() doesn't invalidate the cache but the TLB.

> #define DTLBIALL        p15,0,c8,c6,0   /* Invalidate data TLB */
>
> Then, after the mapping is done, remoteproc_iommu starts translation
> and calls map_domain_page() -> flush_xen_data_tlb_range_va_local(),
> which is described by the following macro:
>
> #define TLBIMVAH        p15,4,c8,c7,1   /* Invalidate Unified Hyp. TLB by MVA */
>
> So I get two invalidates and no clean. When I switched to
> clean_and_invalidate_xen_dcache_va_range() I got both:
>
> #define DCCIMVAC        p15,0,c7,c14,1  /* Data cache clean and invalidate by MVA */
>
> I need both: clean and invalidate. Without the clean, data may still
> sit in the cache without being written back to RAM, so I see stale
> data after the map_domain_page() call.

You seem to be mixing up the TLB and the cache in your mail. If the page
has been mapped with cacheable attributes (which should be the case for
kmalloc), then there should not be any issue in Xen.

Your patch removes the TLB flush, and you are very lucky that Xen is
still working correctly...
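To make the distinction concrete, here is a rough sketch (not the actual
Xen helper; the function name and the CACHE_LINE value are placeholders,
and real code would derive the line size from the Cache Type Register)
of what a clean & invalidate by MVA over a range boils down to, using
the same DCCIMVAC operation quoted above:

#define CACHE_LINE 64UL   /* assumed line size; real code reads CTR */

/* Clean *and* invalidate the data cache over [p, p + size). */
static inline void clean_and_inval_dcache_range(const void *p,
                                                unsigned long size)
{
    const char *va  = (const char *)((unsigned long)p & ~(CACHE_LINE - 1));
    const char *end = (const char *)p + size;

    asm volatile ("dsb sy" ::: "memory");   /* drain outstanding writes */
    for ( ; va < end; va += CACHE_LINE )
        /* DCCIMVAC (p15,0,c7,c14,1): clean and invalidate line by MVA */
        asm volatile ("mcr p15, 0, %0, c7, c14, 1" : : "r" (va) : "memory");
    asm volatile ("dsb sy" ::: "memory");   /* wait for completion */
}

A TLB flush such as TLBIMVAH only drops stale VA->PA translations; it
never writes dirty cache lines back to RAM, so it cannot replace the
sequence above.

Regards,

-- 
Julien Grall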