From mboxrd@z Thu Jan 1 00:00:00 1970
From: James Bottomley
Subject: Re: [RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
Date: Tue, 12 Mar 2019 14:19:15 -0700
Message-ID: <1552425555.14432.14.camel@HansenPartnership.com>
References: <56374231-7ba7-0227-8d6d-4d968d71b4d6@redhat.com>
	<20190311095405-mutt-send-email-mst@kernel.org>
	<20190311.111413.1140896328197448401.davem@davemloft.net>
	<6b6dcc4a-2f08-ba67-0423-35787f3b966c@redhat.com>
	<20190311235140-mutt-send-email-mst@kernel.org>
	<76c353ed-d6de-99a9-76f9-f258074c1462@redhat.com>
	<20190312075033-mutt-send-email-mst@kernel.org>
	<1552405610.3083.17.camel@HansenPartnership.com>
	<20190312200450.GA25147@redhat.com>
	<1552424017.14432.11.camel@HansenPartnership.com>
	<20190312211117.GB25147@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
Cc: "Michael S. Tsirkin", Jason Wang, David Miller, hch@infradead.org,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	peterx@redhat.com, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org, linux-parisc@vger.kernel.org
To: Andrea Arcangeli
Return-path:
In-Reply-To: <20190312211117.GB25147@redhat.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

I think we might be talking past each other.  Let me try the double
flush first.

On Tue, 2019-03-12 at 17:11 -0400, Andrea Arcangeli wrote:
> On Tue, Mar 12, 2019 at 01:53:37PM -0700, James Bottomley wrote:
> > > Which means after we fix vhost to add the flush_dcache_page after
> > > kunmap, Parisc will get a double hit (but it also means Parisc
> > > was the only one of those archs needed explicit cache flushes,
> > > where vhost worked correctly so far.. so it kinds of proofs your
> > > point of giving up being the safe choice).
> >
> > What double hit?  If there's no cache to flush then cache flush is
> > a no-op.
> > It's also a highly pipelinable no-op because the CPU has the L1
> > cache within easy reach.  The only event when flush takes a large
> > amount of time is if we actually have dirty data to write back to
> > main memory.
>
> The double hit is in parisc copy_to_user_page:
>
> #define copy_to_user_page(vma, page, vaddr, dst, src, len)	\
> do {								\
> 	flush_cache_page(vma, vaddr, page_to_pfn(page));	\
> 	memcpy(dst, src, len);					\
> 	flush_kernel_dcache_range_asm((unsigned long)dst,	\
> 				      (unsigned long)dst + len); \
> } while (0)
>
> That is executed just before kunmap:
>
> static inline void kunmap(struct page *page)
> {
> 	flush_kernel_dcache_page_addr(page_address(page));
> }

I mean in the sequence

	flush_dcache_page(page);
	flush_dcache_page(page);

The first flush_dcache_page did all the work and the second is a
tightly pipelined no-op.  That's what I mean by there not really being
a double hit.

James