From: Dmitry Vyukov
Date: Thu, 1 Jun 2017 19:05:54 +0200
Subject: Re: [PATCH 3/4] arm64/kasan: don't allocate extra shadow memory
To: Andrey Ryabinin
Cc: Mark Rutland, Andrew Morton, Catalin Marinas, Will Deacon, LKML,
    kasan-dev, "linux-mm@kvack.org", Alexander Potapenko,
    linux-arm-kernel@lists.infradead.org, Yuri Gribov

On Thu, Jun 1, 2017 at 7:00 PM, Andrey Ryabinin wrote:
>
> On 06/01/2017 07:59 PM, Andrey Ryabinin wrote:
>>
>> On 06/01/2017 07:52 PM, Mark Rutland wrote:
>>> On Thu, Jun 01, 2017 at 06:45:32PM +0200, Dmitry Vyukov wrote:
>>>> On Thu, Jun 1, 2017 at 6:34 PM, Mark Rutland wrote:
>>>>> On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
>>>>>> We used to read several bytes of the shadow memory in advance.
>>>>>> Therefore, additional shadow memory was mapped to prevent a crash if
>>>>>> a speculative load happened near the end of the mapped shadow memory.
>>>>>>
>>>>>> Now we don't have such speculative loads, so we no longer need to map
>>>>>> additional shadow memory.
>>>>>
>>>>> I see that patch 1 fixed up the Linux helpers for outline
>>>>> instrumentation.
>>>>>
>>>>> Just to check, is it also true that the inline instrumentation never
>>>>> performs unaligned accesses to the shadow memory?
>>>>
>> Correct, inline instrumentation assumes that all accesses are properly
>> aligned, as required by the C standard. I knew that the kernel violates
>> this rule in many places, so I decided to add checks for unaligned
>> accesses in the outline case.
>>
>>>> Inline instrumentation generally accesses only a single byte.
>>>
>>> Sorry to be a little pedantic, but does that mean we'll never access the
>>> additional shadow, or does that mean it's very unlikely that we will?
>>>
>>> I'm guessing/hoping it's the former!
>>
>> Outline will never access an additional shadow byte:
>> https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#unaligned-accesses
>
> s/Outline/inline, of course.

I suspect that actual implementations have diverged from that description.
I'm trying to follow asan_expand_check_ifn in
https://gcc.gnu.org/viewcvs/gcc/trunk/gcc/asan.c?revision=246703&view=markup
but it's not trivial.

+Yuri, maybe you know off the top of your head whether the asan
instrumentation in gcc ever accesses an off-by-one shadow byte (i.e. one
byte past the actual object's end)?
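
For reference, here is a minimal sketch in C of the inline check that wiki
page describes for an aligned N-byte access (N <= 8). The shadow offset
constant, the helper name, and report_error() are placeholders for
illustration, not the symbols GCC actually emits:

#include <stddef.h>

/* Example shadow offset for the sketch; the real value is target-specific. */
#define SHADOW_OFFSET_SKETCH 0xdffffc0000000000UL

extern void report_error(unsigned long addr, size_t size);

static inline void asan_check_sketch(unsigned long addr, size_t size)
{
	/* Shadow byte covering the 8-byte granule that addr falls in;
	 * this is the only shadow byte the check ever loads. */
	signed char k = *(signed char *)((addr >> 3) + SHADOW_OFFSET_SKETCH);

	if (size == 8) {
		if (k)		/* any poison in the granule is a bug */
			report_error(addr, size);
	} else if (k && (signed char)((addr & 7) + size - 1) >= k) {
		/* partial granule: the last accessed byte falls into the
		 * poisoned tail described by k */
		report_error(addr, size);
	}
}

Since only the shadow byte for addr itself is loaded, an aligned access of
up to 8 bytes never touches the shadow byte that follows it.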