Subject: Re: [PATCH v1 00/11] mm/kasan: support per-page shadow memory to reduce memory consumption
From: Vladimir Murzin
To: Dmitry Vyukov
Cc: Joonsoo Kim, Andrew Morton, Andrey Ryabinin, Alexander Potapenko,
 kasan-dev, linux-mm@kvack.org, LKML, Thomas Gleixner, Ingo Molnar,
 H. Peter Anvin, kernel-team@lge.com
Date: Tue, 30 May 2017 09:40:52 +0100
Message-ID: <2d35bbe9-e833-1bf3-ecd0-a02da63b381a@arm.com>
References: <1494897409-14408-1-git-send-email-iamjoonsoo.kim@lge.com>
 <20170516062318.GC16015@js1304-desktop>
 <20170524074539.GA9697@js1304-desktop>
 <20170525004104.GA21336@js1304-desktop>
 <1131ff71-eb7a-8396-9a72-211f7077e5ec@arm.com>

On 30/05/17 09:31, Vladimir Murzin wrote:
> On 30/05/17 09:15, Dmitry Vyukov wrote:
>> On Tue, May 30, 2017 at 9:58 AM, Vladimir Murzin wrote:
>>> On 29/05/17 16:29, Dmitry Vyukov wrote:
>>>> I have an alternative proposal. It should be conceptually simpler and
>>>> also less arch-dependent, but I don't know if I'm missing something
>>>> important that would render it non-working.
>>>> Namely, we add a pointer to the shadow to the page struct. Then we
>>>> create a slab allocator for 512B shadow blocks and attach/detach these
>>>> shadow blocks to page structs as necessary. It should lead to even
>>>> smaller memory consumption, because we won't need a whole shadow page
>>>> when only 1 out of the 8 corresponding kernel pages is used (we will
>>>> need just a single 512B block). I guess with some fragmentation we
>>>> need lots of excess shadow with the currently proposed patch.
>>>> This does not depend on the TLB in any way and does not require
>>>> hooking into the buddy allocator.
>>>> The main downside is that we will need to be careful not to assume
>>>> that the shadow is contiguous. In particular, this means that this
>>>> mode will work only with outline instrumentation and will need some
>>>> ifdefs. It will also be slower due to the additional indirection when
>>>> accessing the shadow, but that's meant as a "small but slow" mode as
>>>> far as I understand.
>>>>
>>>> The main win as I see it is that this is basically complete support
>>>> for 32-bit arches. People do ask about arm32 support:
>>>> https://groups.google.com/d/msg/kasan-dev/Sk6BsSPMRRc/Gqh4oD_wAAAJ
>>>> https://groups.google.com/d/msg/kasan-dev/B22vOFp-QWg/EVJPbrsgAgAJ
>>>> and probably mips32 is relevant as well.
>>>> Such a mode does not require a huge contiguous address space range,
>>>> has minimal memory consumption and requires minimal arch-dependent
>>>> code. It works only with outline instrumentation, but I think that's
>>>> a reasonable compromise.
>>>
>>> ... or you can just keep the shadow in the page extension.
>>> It was suggested back in 2015 [1], but it seems that the lack of stack
>>> instrumentation was a "no-way"...
>>>
>>> [1] https://lkml.org/lkml/2015/8/24/573
>>
>> Right. It describes basically the same idea.
>>
>> How is page_ext better than adding data to the page struct?
>
> page_ext is already here along with some other debug options ;)
>
>> It seems that memory for all page_ext is preallocated along with the
>> page structs; it is just the lookup that is slower.
>>
>
> Yup. The lookup would look like this (based on v4.0):
>
> ...
>         page_ext = lookup_page_ext_begin(virt_to_page(start));
>
>         do {
>                 page_ext->shadow[idx++] = value;
>         } while (idx < bound);
>
>         lookup_page_ext_end((void *)page_ext);
>
> ...

Correction: please ignore that *_{begin,end} stuff - in mainline, only
lookup_page_ext() is used.

Cheers
Vladimir
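
P.S. For illustration, a minimal sketch of what the corrected page_ext-based
shadow write could look like against mainline, where only lookup_page_ext()
is available. The shadow[] member of struct page_ext and the helper name are
hypothetical - they stand in for what such a proposal would have to add and
are not existing kernel API:

#include <linux/mm.h>
#include <linux/page_ext.h>

#define SHADOW_SCALE_SHIFT 3            /* KASAN: one shadow byte per 8 bytes */

/*
 * Hypothetical helper, mirroring the shape of the quoted snippet; assumes
 * size > 0 and that [start, start + size) stays within a single page.
 */
static void page_ext_poison_shadow(const void *start, size_t size, u8 value)
{
        unsigned long offset = (unsigned long)start & ~PAGE_MASK;
        unsigned int idx = offset >> SHADOW_SCALE_SHIFT;
        unsigned int bound = (offset + size) >> SHADOW_SCALE_SHIFT;
        struct page_ext *page_ext = lookup_page_ext(virt_to_page(start));

        if (!page_ext)                  /* page_ext may be unavailable, e.g. early in boot */
                return;

        do {
                page_ext->shadow[idx++] = value;        /* hypothetical per-page shadow storage */
        } while (idx < bound);
}

Compared with the per-page shadow pointer backed by 512B slab blocks that
Dmitry sketched above, the page_ext route preallocates the storage for every
page, so the extra cost is only the lookup, at the price of the shadow memory
always being resident.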