From mboxrd@z Thu Jan 1 00:00:00 1970
From: Uladzislau Rezki
Date: Tue, 1 Oct 2019 12:17:07 +0200
To: Daniel Axtens
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
 aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
 linux-kernel@vger.kernel.org, mark.rutland@arm.com, dvyukov@google.com,
 christophe.leroy@c-s.fr, linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com
Subject: Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory
Message-ID: <20191001101707.GA21929@pc636>
References: <20191001065834.8880-1-dja@axtens.net>
 <20191001065834.8880-2-dja@axtens.net>
In-Reply-To: <20191001065834.8880-2-dja@axtens.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.10.1 (2018-07-13)

Hello, Daniel.

> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index a3c70e275f4e..9fb7a16f42ae 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -690,8 +690,19 @@ merge_or_add_vmap_area(struct vmap_area *va,
>  	struct list_head *next;
>  	struct rb_node **link;
>  	struct rb_node *parent;
> +	unsigned long orig_start, orig_end;
Shouldn't that be wrapped in #ifdef CONFIG_KASAN_VMALLOC?
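
Just to illustrate the idea, a rough and untested sketch of what I mean
(the guard placement is only an example; the kasan_release_vmalloc()
calls further down could either get the same treatment or rely on a
no-op stub when CONFIG_KASAN_VMALLOC is not set):

        struct rb_node **link;
        struct rb_node *parent;
#ifdef CONFIG_KASAN_VMALLOC
        unsigned long orig_start, orig_end;
#endif
        bool merged = false;

#ifdef CONFIG_KASAN_VMALLOC
        /*
         * Remember the original boundaries; the shadow memory backing
         * them is released only after the VA has been merged/inserted.
         */
        orig_start = va->va_start;
        orig_end = va->va_end;
#endif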

>  	bool merged = false;
>
> +	/*
> +	 * To manage KASAN vmalloc memory usage, we use this opportunity to
> +	 * clean up the shadow memory allocated to back this allocation.
> +	 * Because a vmalloc shadow page covers several pages, the start or end
> +	 * of an allocation might not align with a shadow page. Use the merging
> +	 * opportunities to try to extend the region we can release.
> +	 */
> +	orig_start = va->va_start;
> +	orig_end = va->va_end;
> +
The same.

>  	/*
>  	 * Find a place in the tree where VA potentially will be
>  	 * inserted, unless it is merged with its sibling/siblings.
> @@ -741,6 +752,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
>  		if (sibling->va_end == va->va_start) {
>  			sibling->va_end = va->va_end;
>
> +			kasan_release_vmalloc(orig_start, orig_end,
> +					      sibling->va_start,
> +					      sibling->va_end);
> +
The same.

>  			/* Check and update the tree if needed. */
>  			augment_tree_propagate_from(sibling);
>
> @@ -754,6 +769,8 @@ merge_or_add_vmap_area(struct vmap_area *va,
>  	}
>
>  insert:
> +	kasan_release_vmalloc(orig_start, orig_end, va->va_start, va->va_end);
> +
The same + all further changes in this file.

>  	if (!merged) {
>  		link_va(va, root, parent, link, head);
>  		augment_tree_propagate_from(va);
> @@ -2068,6 +2085,22 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
>
>  	setup_vmalloc_vm(area, va, flags, caller);
>
> +	/*
> +	 * For KASAN, if we are in vmalloc space, we need to cover the shadow
> +	 * area with real memory. If we come here through VM_ALLOC, this is
> +	 * done by a higher level function that has access to the true size,
> +	 * which might not be a full page.
> +	 *
> +	 * We assume module space comes via VM_ALLOC path.
> +	 */
> +	if (is_vmalloc_addr(area->addr) && !(area->flags & VM_ALLOC)) {
> +		if (kasan_populate_vmalloc(area->size, area)) {
> +			unmap_vmap_area(va);
> +			kfree(area);
> +			return NULL;
> +		}
> +	}
> +
>  	return area;
>  }
>
> @@ -2245,6 +2278,9 @@ static void __vunmap(const void *addr, int deallocate_pages)
>  	debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
>  	debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
>
> +	if (area->flags & VM_KASAN)
> +		kasan_poison_vmalloc(area->addr, area->size);
> +
>  	vm_remove_mappings(area, deallocate_pages);
>
>  	if (deallocate_pages) {
> @@ -2497,6 +2533,9 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
>  	if (!addr)
>  		return NULL;
>
> +	if (kasan_populate_vmalloc(real_size, area))
> +		return NULL;
> +
>  	/*
>  	 * In this function, newly allocated vm_struct has VM_UNINITIALIZED
>  	 * flag. It means that vm_struct is not fully initialized.
> @@ -3351,10 +3390,14 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
>  	spin_unlock(&vmap_area_lock);
>
>  	/* insert all vm's */
> -	for (area = 0; area < nr_vms; area++)
> +	for (area = 0; area < nr_vms; area++) {
>  		setup_vmalloc_vm(vms[area], vas[area], VM_ALLOC,
>  				 pcpu_get_vm_areas);
>
> +		/* assume success here */
> +		kasan_populate_vmalloc(sizes[area], vms[area]);
> +	}
> +
>  	kfree(vas);
>  	return vms;

--
Vlad Rezki