From mboxrd@z Thu Jan  1 00:00:00 1970
From: Abbott Liu <liuwenliang@huawei.com>
Subject: [PATCH 06/11] change memory_is_poisoned_16 for aligned error
Date: Wed, 11 Oct 2017 16:22:22 +0800
Message-ID: <20171011082227.20546-7-liuwenliang@huawei.com>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20171011082227.20546-1-liuwenliang@huawei.com>
References: <20171011082227.20546-1-liuwenliang@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

Because the ARM instruction set does not support accesses to unaligned
addresses, memory_is_poisoned_16() must be changed for ARM: the generic
version loads two shadow bytes through a u16 pointer that is not
guaranteed to be 2-byte aligned, so the ARM version reads the two shadow
bytes individually instead.

Cc: Andrey Ryabinin
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
---
 mm/kasan/kasan.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 12749da..e0e152b 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -149,6 +149,25 @@ static __always_inline bool memory_is_poisoned_2_4_8(unsigned long addr,
 	return memory_is_poisoned_1(addr + size - 1);
 }
 
+#ifdef CONFIG_ARM
+static __always_inline bool memory_is_poisoned_16(unsigned long addr)
+{
+	u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
+
+	if (unlikely(shadow_addr[0] || shadow_addr[1])) return true;
+	else {
+		/*
+		 * If two shadow bytes covers 16-byte access, we don't
+		 * need to do anything more. Otherwise, test the last
+		 * shadow byte.
+		 */
+		if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
+			return false;
+		return memory_is_poisoned_1(addr + 15);
+	}
+}
+
+#else
 static __always_inline bool memory_is_poisoned_16(unsigned long addr)
 {
 	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
@@ -159,6 +178,7 @@ static __always_inline bool memory_is_poisoned_16(unsigned long addr)
 
 	return *shadow_addr;
 }
+#endif
 
 static __always_inline unsigned long bytes_is_nonzero(const u8 *start,
 						size_t size)
-- 
2.9.0
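
For readers outside the kernel tree, here is a minimal userspace C sketch
(not the kernel code) of the byte-wise 16-byte check the patch adds for
ARM. shadow[], mem_to_shadow(), byte_is_poisoned() and the poison values
are local stand-ins for the real KASAN machinery, assuming the usual 8:1
shadow scale; the point is that two single-byte shadow loads replace the
generic version's single (possibly unaligned) u16 load, and a third shadow
byte is consulted only when the address is not 8-byte aligned.

/* Userspace model of the ARM variant of memory_is_poisoned_16(). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SHADOW_SCALE_SHIFT 3                     /* 1 shadow byte per 8 bytes */
#define SHADOW_SCALE_SIZE  (1UL << SHADOW_SCALE_SHIFT)
#define MEM_SIZE           128

static uint8_t shadow[MEM_SIZE >> SHADOW_SCALE_SHIFT];

static uint8_t *mem_to_shadow(unsigned long addr)
{
	return &shadow[addr >> SHADOW_SCALE_SHIFT];
}

/* Simplification: any nonzero shadow byte counts as poisoned here. */
static bool byte_is_poisoned(unsigned long addr)
{
	return *mem_to_shadow(addr) != 0;
}

/*
 * Byte-wise check mirroring the CONFIG_ARM variant: load the first two
 * shadow bytes separately, and only when addr is not 8-byte aligned
 * (so the 16-byte access spans three granules) test the last byte too.
 */
static bool is_poisoned_16_bytewise(unsigned long addr)
{
	uint8_t *s = mem_to_shadow(addr);

	if (s[0] || s[1])
		return true;
	if (addr & (SHADOW_SCALE_SIZE - 1))
		return byte_is_poisoned(addr + 15);
	return false;
}

int main(void)
{
	memset(shadow, 0, sizeof(shadow));
	shadow[2] = 0xff;                        /* poison bytes 16..23 */

	/* 16-byte access at 4 covers bytes 4..19 -> shadow[0..2]: poisoned */
	printf("addr 4:  %d\n", is_poisoned_16_bytewise(4));
	/* 16-byte access at 24 covers bytes 24..39 -> shadow[3..4]: clean */
	printf("addr 24: %d\n", is_poisoned_16_bytewise(24));
	return 0;
}

Because every shadow access above is a single-byte load, nothing in this
path requires an unaligned halfword load, which is the situation the patch
is trying to avoid on ARM.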