From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <stable-owner@vger.kernel.org>
Received: from mail.linuxfoundation.org ([140.211.169.12]:48786 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S932358AbdERKuN (ORCPT );
	Thu, 18 May 2017 06:50:13 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
	Ben Hutchings, Dan Williams
Subject: [PATCH 4.11 025/114] x86, pmem: Fix cache flushing for iovec write < 8 bytes
Date: Thu, 18 May 2017 12:45:35 +0200
Message-Id: <20170518103608.523684300@linuxfoundation.org>
In-Reply-To: <20170518103604.736737251@linuxfoundation.org>
References: <20170518103604.736737251@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: stable-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>

4.11-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Ben Hutchings

commit 8376efd31d3d7c44bd05be337adde023cc531fa1 upstream.

Commit 11e63f6d920d added cache flushing for unaligned writes from an
iovec, covering the first and last cache line of a >= 8 byte write and
the first cache line of a < 8 byte write.  But an unaligned write of
2-7 bytes can still cover two cache lines, so make sure we flush both
in that case.

Fixes: 11e63f6d920d ("x86, pmem: fix broken __copy_user_nocache ...")
Signed-off-by: Ben Hutchings
Signed-off-by: Dan Williams
Signed-off-by: Greg Kroah-Hartman

---
 arch/x86/include/asm/pmem.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/include/asm/pmem.h
+++ b/arch/x86/include/asm/pmem.h
@@ -103,7 +103,7 @@ static inline size_t arch_copy_from_iter
 
 		if (bytes < 8) {
 			if (!IS_ALIGNED(dest, 4) || (bytes != 4))
-				arch_wb_cache_pmem(addr, 1);
+				arch_wb_cache_pmem(addr, bytes);
 		} else {
 			if (!IS_ALIGNED(dest, 8)) {
 				dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
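
For anyone reviewing the arithmetic behind the fix: a write of 2-7 bytes
that starts near the end of a cache line dirties two lines, so a
write-back of length 1 only reaches the first line, while a write-back
of length "bytes" reaches both.  The userspace sketch below works that
out; it is illustrative only, not kernel code.  CACHE_LINE,
lines_spanned() and the example address are invented for this note; the
kernel takes the real line size from boot_cpu_data.x86_clflush_size.

/*
 * Illustrative sketch: count how many cache lines an (addr, bytes)
 * range touches, assuming a 64-byte line as on x86.
 */
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE 64UL	/* assumed x86 cache line size */

/* How many cache lines does the byte range [addr, addr + bytes) touch? */
static unsigned long lines_spanned(uintptr_t addr, size_t bytes)
{
	uintptr_t first = addr & ~(CACHE_LINE - 1);
	uintptr_t last  = (addr + bytes - 1) & ~(CACHE_LINE - 1);

	return (last - first) / CACHE_LINE + 1;
}

int main(void)
{
	/* A 4-byte write starting 2 bytes before a 64-byte boundary. */
	uintptr_t addr = 0x1000 + 62;

	printf("4-byte write at %#lx dirties %lu line(s); "
	       "flush of length 1 covers %lu, flush of length 4 covers %lu\n",
	       (unsigned long)addr,
	       lines_spanned(addr, 4),
	       lines_spanned(addr, 1),
	       lines_spanned(addr, 4));
	return 0;
}

Run as-is, this reports that the 4-byte write dirties 2 lines while a
length-1 flush covers only 1 of them, which is exactly the case the
one-line change above (arch_wb_cache_pmem(addr, bytes)) closes.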