From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from verein.lst.de ([213.95.11.211]:43642 "EHLO newverein.lst.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751924AbdJSPMQ
	(ORCPT ); Thu, 19 Oct 2017 11:12:16 -0400
Date: Thu, 19 Oct 2017 17:12:14 +0200
From: Christoph Hellwig
To: Marek Szyprowski
Cc: Huacai Chen, Christoph Hellwig, Robin Murphy, Andrew Morton,
	Fuxin Zhang, linux-kernel@vger.kernel.org, Ralf Baechle, James Hogan,
	linux-mips@linux-mips.org, "James E . J . Bottomley",
	"Martin K . Petersen", linux-scsi@vger.kernel.org, Tejun Heo,
	linux-ide@vger.kernel.org, stable@vger.kernel.org
Subject: Re: [PATCH V8 4/5] libsas: Align SMP req/resp to dma_get_cache_alignment()
Message-ID: <20171019151214.GD24204@lst.de>
References: <1508227542-13165-1-git-send-email-chenhc@lemote.com>
	<1508227542-13165-4-git-send-email-chenhc@lemote.com>
	<286bf838-4d2f-a25f-baf9-ef3ac9b37d93@samsung.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <286bf838-4d2f-a25f-baf9-ef3ac9b37d93@samsung.com>
Sender: stable-owner@vger.kernel.org
List-ID:

On Tue, Oct 17, 2017 at 01:55:43PM +0200, Marek Szyprowski wrote:
> If I remember correctly, the kernel guarantees that each kmalloced buffer
> is always at least aligned to the CPU cache line, so the CPU cache can be
> invalidated on the allocated buffer without corrupting anything else.

Yes, from slab.h:

/*
 * Some archs want to perform DMA into kmalloc caches and need a guaranteed
 * alignment larger than the alignment of a 64-bit integer.
 * Setting ARCH_KMALLOC_MINALIGN in arch headers allows that.
 */
#if defined(ARCH_DMA_MINALIGN) && ARCH_DMA_MINALIGN > 8
#define ARCH_KMALLOC_MINALIGN ARCH_DMA_MINALIGN
#define KMALLOC_MIN_SIZE ARCH_DMA_MINALIGN
#define KMALLOC_SHIFT_LOW ilog2(ARCH_DMA_MINALIGN)
#else
#define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
#endif

MIPS sets this for a few subarchitectures, but it seems like you need to
set it for yours as well.