From: Uladzislau Rezki
Date: Mon, 7 Oct 2019 10:02:09 +0200
To: Daniel Axtens
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
	aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
	linux-kernel@vger.kernel.org, mark.rutland@arm.com, dvyukov@google.com,
	christophe.leroy@c-s.fr, linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com
Subject: Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory
Message-ID: <20191007080209.GA22997@pc636>
References: <20191001065834.8880-1-dja@axtens.net> <20191001065834.8880-2-dja@axtens.net>
In-Reply-To: <20191001065834.8880-2-dja@axtens.net>

> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index a3c70e275f4e..9fb7a16f42ae 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -690,8 +690,19 @@ merge_or_add_vmap_area(struct vmap_area *va,
> 	struct list_head *next;
> 	struct rb_node **link;
> 	struct rb_node *parent;
> +	unsigned long orig_start, orig_end;
> 	bool merged = false;
> 
> +	/*
> +	 * To manage KASAN vmalloc memory usage, we use this opportunity to
> +	 * clean up the shadow memory allocated to back this allocation.
> +	 * Because a vmalloc shadow page covers several pages, the start or end
> +	 * of an allocation might not align with a shadow page. Use the merging
> +	 * opportunities to try to extend the region we can release.
> +	 */
> +	orig_start = va->va_start;
> +	orig_end = va->va_end;
> +
> 	/*
> 	 * Find a place in the tree where VA potentially will be
> 	 * inserted, unless it is merged with its sibling/siblings.
> @@ -741,6 +752,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
> 		if (sibling->va_end == va->va_start) {
> 			sibling->va_end = va->va_end;
> 
> +			kasan_release_vmalloc(orig_start, orig_end,
> +					      sibling->va_start,
> +					      sibling->va_end);
> +
> 			/* Check and update the tree if needed. */
> 			augment_tree_propagate_from(sibling);
> 
> @@ -754,6 +769,8 @@ merge_or_add_vmap_area(struct vmap_area *va,
> 	}
> 
> insert:
> +	kasan_release_vmalloc(orig_start, orig_end, va->va_start, va->va_end);
> +
> 	if (!merged) {
> 		link_va(va, root, parent, link, head);
> 		augment_tree_propagate_from(va);

Hello, Daniel.

Looking at it once more, I think the part above is a bit wrong and should be
separated from the merge_or_add_vmap_area() logic. The reason is to keep that
function simple and have it do only what it is supposed to do: merging or
adding. Also, kasan_release_vmalloc() gets called twice there, which looks
like duplication.

Apart from that, merge_or_add_vmap_area() can be called via a recovery path
when the vmap(s) are not even set up. See the percpu allocator.

I guess your part could be moved directly to __purge_vmap_area_lazy(), where
all vmaps are lazily freed. To do so, we also need to modify
merge_or_add_vmap_area() to return the merged area:

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e92ff5f7dd8b..fecde4312d68 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -683,7 +683,7 @@ insert_vmap_area_augment(struct vmap_area *va,
  * free area is inserted. If VA has been merged, it is
  * freed.
  */
-static __always_inline void
+static __always_inline struct vmap_area *
 merge_or_add_vmap_area(struct vmap_area *va,
 	struct rb_root *root, struct list_head *head)
 {
@@ -750,7 +750,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
 			/* Free vmap_area object. */
 			kmem_cache_free(vmap_area_cachep, va);
-			return;
+
+			/* Point to the new merged area. */
+			va = sibling;
+			merged = true;
 		}
 	}
 
@@ -759,6 +762,8 @@ merge_or_add_vmap_area(struct vmap_area *va,
 		link_va(va, root, parent, link, head);
 		augment_tree_propagate_from(va);
 	}
+
+	return va;
 }
 
 static __always_inline bool
@@ -1172,7 +1177,7 @@ static void __free_vmap_area(struct vmap_area *va)
 	/*
 	 * Merge VA with its neighbors, otherwise just add it.
 	 */
-	merge_or_add_vmap_area(va,
+	(void) merge_or_add_vmap_area(va,
 		&free_vmap_area_root, &free_vmap_area_list);
 }
 
@@ -1279,15 +1284,20 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	spin_lock(&vmap_area_lock);
 	llist_for_each_entry_safe(va, n_va, valist, purge_list) {
 		unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
+		unsigned long orig_start = va->va_start;
+		unsigned long orig_end = va->va_end;
 
 		/*
 		 * Finally insert or merge lazily-freed area. It is
 		 * detached and there is no need to "unlink" it from
 		 * anything.
 		 */
-		merge_or_add_vmap_area(va,
+		va = merge_or_add_vmap_area(va,
 			&free_vmap_area_root, &free_vmap_area_list);
 
+		kasan_release_vmalloc(orig_start,
+			orig_end, va->va_start, va->va_end);
+
 		atomic_long_sub(nr, &vmap_lazy_nr);
 
 		if (atomic_long_read(&vmap_lazy_nr) < resched_threshold)

--
Vlad Rezki