Date: Wed, 20 Mar 2019 18:16:57 +0000
From: Catalin Marinas <catalin.marinas@arm.com>
To: Michael Ellerman
Subject: Re: [PATCH v2] kmemleak: skip scanning holes in the .bss section
Message-ID: <20190320181656.GB38229@arrakis.emea.arm.com>
References: <20190313145717.46369-1-cai@lca.pw>
 <20190319115747.GB59586@arrakis.emea.arm.com>
 <87lg19y9dp.fsf@concordia.ellerman.id.au>
In-Reply-To: <87lg19y9dp.fsf@concordia.ellerman.id.au>
Cc: linux-kernel@vger.kernel.org, kvm-ppc@vger.kernel.org, linux-mm@kvack.org,
 Qian Cai <cai@lca.pw>, akpm@linux-foundation.org, linuxppc-dev@lists.ozlabs.org

On Thu, Mar 21, 2019 at 12:15:46AM +1100, Michael Ellerman wrote:
> Catalin Marinas <catalin.marinas@arm.com> writes:
> > On Wed, Mar 13, 2019 at 10:57:17AM -0400, Qian Cai wrote:
> >> @@ -1531,7 +1547,14 @@ static void kmemleak_scan(void)
> >>  
> >>  	/* data/bss scanning */
> >>  	scan_large_block(_sdata, _edata);
> >> -	scan_large_block(__bss_start, __bss_stop);
> >> +
> >> +	if (bss_hole_start) {
> >> +		scan_large_block(__bss_start, bss_hole_start);
> >> +		scan_large_block(bss_hole_stop, __bss_stop);
> >> +	} else {
> >> +		scan_large_block(__bss_start, __bss_stop);
> >> +	}
> >> +
> >>  	scan_large_block(__start_ro_after_init, __end_ro_after_init);
> >
> > I'm not a fan of this approach but I couldn't come up with anything
> > better. I was hoping we could check for PageReserved() in scan_block()
> > but on arm64 it ends up not scanning the .bss at all.
> >
> > Until another user appears, I'm ok with this patch.
> >
> > Acked-by: Catalin Marinas <catalin.marinas@arm.com>
>
> I actually would like to rework this kvm_tmp thing to not be in bss at
> all. It's a bit of a hack and is incompatible with strict RWX.
>
> If we size it a bit more conservatively we can hopefully just reserve
> some space in the text section for it.
>
> I'm not going to have time to work on that immediately though, so if
> people want this fixed now then this patch could go in as a temporary
> solution.

I think I have a simpler idea. Kmemleak allows punching holes in
allocated objects, so just turn the data/bss sections into dedicated
kmemleak objects. This happens when kmemleak is initialised, before the
initcalls are invoked. kvm_free_tmp() would then simply free the
corresponding part of the .bss object.

Patch below, only tested briefly on arm64. Qian, could you give it a try
on powerpc? Thanks.

--------8<------------------------------
diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index 683b5b3805bd..c4b8cb3c298d 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -712,6 +712,8 @@ static void kvm_use_magic_page(void)
 
 static __init void kvm_free_tmp(void)
 {
+	kmemleak_free_part(&kvm_tmp[kvm_tmp_index],
+			   ARRAY_SIZE(kvm_tmp) - kvm_tmp_index);
 	free_reserved_area(&kvm_tmp[kvm_tmp_index],
 			   &kvm_tmp[ARRAY_SIZE(kvm_tmp)], -1, NULL);
 }
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 707fa5579f66..0f6adcbfc2c7 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1529,11 +1529,6 @@ static void kmemleak_scan(void)
 	}
 	rcu_read_unlock();
 
-	/* data/bss scanning */
-	scan_large_block(_sdata, _edata);
-	scan_large_block(__bss_start, __bss_stop);
-	scan_large_block(__start_ro_after_init, __end_ro_after_init);
-
 #ifdef CONFIG_SMP
 	/* per-cpu sections scanning */
 	for_each_possible_cpu(i)
@@ -2071,6 +2066,15 @@ void __init kmemleak_init(void)
 	}
 	local_irq_restore(flags);
 
+	/* register the data/bss sections */
+	create_object((unsigned long)_sdata, _edata - _sdata,
+		      KMEMLEAK_GREY, GFP_ATOMIC);
+	create_object((unsigned long)__bss_start, __bss_stop - __bss_start,
+		      KMEMLEAK_GREY, GFP_ATOMIC);
+	create_object((unsigned long)__start_ro_after_init,
+		      __end_ro_after_init - __start_ro_after_init,
+		      KMEMLEAK_GREY, GFP_ATOMIC);
+
 	/*
 	 * This is the point where tracking allocations is safe. Automatic
 	 * scanning is started during the late initcall. Add the early logged
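
For readers following along, here is a minimal, self-contained sketch of
the "punch a hole" pattern the patch relies on. kmemleak_free_part() and
free_reserved_area() are the real kernel APIs; the my_tmp/my_tmp_index
buffer and the late_initcall hook are hypothetical stand-ins for
kvm_tmp/kvm_tmp_index, and freeing part of the kernel image like this
assumes the architecture permits it, as powerpc does for kvm_tmp:

#include <linux/init.h>
#include <linux/kernel.h>	/* ARRAY_SIZE() */
#include <linux/kmemleak.h>	/* kmemleak_free_part() */
#include <linux/mm.h>		/* free_reserved_area() */

static char my_tmp[1024 * 1024];	/* sits in .bss, so it is covered by
					 * the .bss kmemleak object above */
static int my_tmp_index;		/* bytes consumed during early boot */

static int __init my_free_tmp(void)
{
	/*
	 * Punch a hole in the .bss kmemleak object before handing the
	 * unused tail back to the page allocator, so kmemleak does not
	 * keep scanning pages that have been freed.
	 */
	kmemleak_free_part(&my_tmp[my_tmp_index],
			   ARRAY_SIZE(my_tmp) - my_tmp_index);
	free_reserved_area(&my_tmp[my_tmp_index],
			   &my_tmp[ARRAY_SIZE(my_tmp)], -1, NULL);
	return 0;
}
late_initcall(my_free_tmp);

With the patch above, all of .bss becomes a single kmemleak object
created in kmemleak_init(), so kmemleak_free_part() simply shrinks that
object; subsequent scans skip the freed range with no arch-specific
knowledge needed in kmemleak_scan().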