From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755935AbdEIRB1 (ORCPT );
	Tue, 9 May 2017 13:01:27 -0400
Received: from imap0.codethink.co.uk ([185.43.218.159]:32866 "EHLO
	imap0.codethink.co.uk" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1755809AbdEIRBW (ORCPT );
	Tue, 9 May 2017 13:01:22 -0400
Message-ID: <1494349243.3965.21.camel@codethink.co.uk>
Subject: Re: [PATCH 4.4 26/28] x86, pmem: fix broken __copy_user_nocache
 cache-bypass assumptions
From: Ben Hutchings
To: Dan Williams, Ross Zwisler, Toshi Kani
Cc: linux-kernel@vger.kernel.org, stable@vger.kernel.org, x86@kernel.org,
	Jan Kara, Jeff Moyer, Ingo Molnar, Christoph Hellwig,
	"H. Peter Anvin", Al Viro, Thomas Gleixner, Matthew Wilcox,
	Greg Kroah-Hartman
Date: Tue, 09 May 2017 18:00:43 +0100
In-Reply-To: <20170425150816.086596201@linuxfoundation.org>
References: <20170425150814.719042460@linuxfoundation.org>
	 <20170425150816.086596201@linuxfoundation.org>
Organization: Codethink Ltd.
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.12.9-1+b1
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 2017-04-25 at 16:08 +0100, Greg Kroah-Hartman wrote:
> 4.4-stable review patch.  If anyone has any objections, please let me know.
>
> ------------------
>
> From: Dan Williams
>
> commit 11e63f6d920d6f2dfd3cd421e939a4aec9a58dcd upstream.
[...]
> +	if (iter_is_iovec(i)) {
> +		unsigned long flushed, dest = (unsigned long) addr;
> +
> +		if (bytes < 8) {
> +			if (!IS_ALIGNED(dest, 4) || (bytes != 4))
> +				__arch_wb_cache_pmem(addr, 1);
[...]

What if the write crosses a cache line boundary?  I think you need the
following fix-up (untested, I don't have this kind of hardware).

Ben.
---
From: Ben Hutchings
Subject: x86, pmem: Fix cache flushing for iovec write < 8 bytes

Commit 11e63f6d920d added cache flushing for unaligned writes from an
iovec, covering the first and last cache line of a >= 8 byte write and
the first cache line of a < 8 byte write.  But an unaligned write of
2-7 bytes can still cover two cache lines, so make sure we flush both
in that case.

Fixes: 11e63f6d920d ("x86, pmem: fix broken __copy_user_nocache ...")
Signed-off-by: Ben Hutchings
---
 arch/x86/include/asm/pmem.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h
index d5a22bac9988..0ff8fe71b255 100644
--- a/arch/x86/include/asm/pmem.h
+++ b/arch/x86/include/asm/pmem.h
@@ -98,7 +98,7 @@ static inline size_t arch_copy_from_iter_pmem(void *addr, size_t bytes,
 
 		if (bytes < 8) {
 			if (!IS_ALIGNED(dest, 4) || (bytes != 4))
-				arch_wb_cache_pmem(addr, 1);
+				arch_wb_cache_pmem(addr, bytes);
 		} else {
 			if (!IS_ALIGNED(dest, 8)) {
 				dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
-- 
Ben Hutchings
Software Developer, Codethink Ltd.