From: Joel Stanley
Date: Tue, 21 Sep 2021 23:50:08 +0000
Subject: Re: [PATCH] mm: Remove HARDENED_USERCOPY_FALLBACK
To: Stephen Kitt
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, James Morris, "Serge E. Hallyn",
	Kees Cook, Linux Kernel Mailing List, linux-mm@kvack.org,
	linux-security-module@vger.kernel.org, linux-hardening@vger.kernel.org,
	linuxppc-dev
In-Reply-To: <20210921061149.1091163-1-steve@sk2.org>
References: <20210921061149.1091163-1-steve@sk2.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, 21 Sept 2021 at 09:50, Stephen Kitt wrote:
>
> This has served its purpose and is no longer used. All usercopy
> violations appear to have been handled by now; any remaining
> instances (or new bugs) will cause copies to be rejected.
>
> This isn't a direct revert of commit 2d891fbc3bb6 ("usercopy: Allow
> strict enforcement of whitelists"); since usercopy_fallback is
> effectively 0, the fallback handling is removed too.
>
> This also removes the usercopy_fallback module parameter on
> slab_common.
>
> Link: https://github.com/KSPP/linux/issues/153
> Signed-off-by: Stephen Kitt
> Suggested-by: Kees Cook
> ---
>  arch/powerpc/configs/skiroot_defconfig |  1 -

For the defconfig change:

Reviewed-by: Joel Stanley

Cheers,

Joel

>  include/linux/slab.h                   |  2 --
>  mm/slab.c                              | 13 -------------
>  mm/slab_common.c                       |  8 --------
>  mm/slub.c                              | 14 --------------
>  security/Kconfig                       | 14 --------------
>  6 files changed, 52 deletions(-)
>
> diff --git a/arch/powerpc/configs/skiroot_defconfig b/arch/powerpc/configs/skiroot_defconfig
> index b806a5d3a695..c3ba614c973d 100644
> --- a/arch/powerpc/configs/skiroot_defconfig
> +++ b/arch/powerpc/configs/skiroot_defconfig
> @@ -275,7 +275,6 @@ CONFIG_NLS_UTF8=y
>  CONFIG_ENCRYPTED_KEYS=y
>  CONFIG_SECURITY=y
>  CONFIG_HARDENED_USERCOPY=y
> -# CONFIG_HARDENED_USERCOPY_FALLBACK is not set
>  CONFIG_HARDENED_USERCOPY_PAGESPAN=y
>  CONFIG_FORTIFY_SOURCE=y
>  CONFIG_SECURITY_LOCKDOWN_LSM=y
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 0c97d788762c..5b21515afae0 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -142,8 +142,6 @@ struct mem_cgroup;
>  void __init kmem_cache_init(void);
>  bool slab_is_available(void);
>
> -extern bool usercopy_fallback;
> -
>  struct kmem_cache *kmem_cache_create(const char *name, unsigned int size,
> 			unsigned int align, slab_flags_t flags,
> 			void (*ctor)(void *));
> diff --git a/mm/slab.c b/mm/slab.c
> index d0f725637663..4d826394ffcb 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -4207,19 +4207,6 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
> 	    n <= cachep->useroffset - offset + cachep->usersize)
> 		return;
>
> -	/*
> -	 * If the copy is still within the allocated object, produce
> -	 * a warning instead of rejecting the copy. This is intended
> -	 * to be a temporary method to find any missing usercopy
> -	 * whitelists.
> -	 */
> -	if (usercopy_fallback &&
> -	    offset <= cachep->object_size &&
> -	    n <= cachep->object_size - offset) {
> -		usercopy_warn("SLAB object", cachep->name, to_user, offset, n);
> -		return;
> -	}
> -
> 	usercopy_abort("SLAB object", cachep->name, to_user, offset, n);
>  }
>  #endif /* CONFIG_HARDENED_USERCOPY */
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index a4a571428c51..925b00c1d4e8 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -37,14 +37,6 @@ LIST_HEAD(slab_caches);
>  DEFINE_MUTEX(slab_mutex);
>  struct kmem_cache *kmem_cache;
>
> -#ifdef CONFIG_HARDENED_USERCOPY
> -bool usercopy_fallback __ro_after_init =
> -		IS_ENABLED(CONFIG_HARDENED_USERCOPY_FALLBACK);
> -module_param(usercopy_fallback, bool, 0400);
> -MODULE_PARM_DESC(usercopy_fallback,
> -		"WARN instead of reject usercopy whitelist violations");
> -#endif
> -
>  static LIST_HEAD(slab_caches_to_rcu_destroy);
>  static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work);
>  static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
> diff --git a/mm/slub.c b/mm/slub.c
> index 3f96e099817a..77f53e76a3c3 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4125,7 +4125,6 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
>  {
> 	struct kmem_cache *s;
> 	unsigned int offset;
> -	size_t object_size;
> 	bool is_kfence = is_kfence_address(ptr);
>
> 	ptr = kasan_reset_tag(ptr);
> @@ -4158,19 +4157,6 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
> 	    n <= s->useroffset - offset + s->usersize)
> 		return;
>
> -	/*
> -	 * If the copy is still within the allocated object, produce
> -	 * a warning instead of rejecting the copy. This is intended
> -	 * to be a temporary method to find any missing usercopy
> -	 * whitelists.
> -	 */
> -	object_size = slab_ksize(s);
> -	if (usercopy_fallback &&
> -	    offset <= object_size && n <= object_size - offset) {
> -		usercopy_warn("SLUB object", s->name, to_user, offset, n);
> -		return;
> -	}
> -
> 	usercopy_abort("SLUB object", s->name, to_user, offset, n);
>  }
>  #endif /* CONFIG_HARDENED_USERCOPY */
> diff --git a/security/Kconfig b/security/Kconfig
> index 0ced7fd33e4d..d9698900c9b7 100644
> --- a/security/Kconfig
> +++ b/security/Kconfig
> @@ -163,20 +163,6 @@ config HARDENED_USERCOPY
> 	  or are part of the kernel text. This kills entire classes
> 	  of heap overflow exploits and similar kernel memory exposures.
>
> -config HARDENED_USERCOPY_FALLBACK
> -	bool "Allow usercopy whitelist violations to fallback to object size"
> -	depends on HARDENED_USERCOPY
> -	default y
> -	help
> -	  This is a temporary option that allows missing usercopy whitelists
> -	  to be discovered via a WARN() to the kernel log, instead of
> -	  rejecting the copy, falling back to non-whitelisted hardened
> -	  usercopy that checks the slab allocation size instead of the
> -	  whitelist size. This option will be removed once it seems like
> -	  all missing usercopy whitelists have been identified and fixed.
> -	  Booting with "slab_common.usercopy_fallback=Y/N" can change
> -	  this setting.
> -
>  config HARDENED_USERCOPY_PAGESPAN
> 	bool "Refuse to copy allocations that span multiple pages"
> 	depends on HARDENED_USERCOPY
>
> base-commit: 368094df48e680fa51cedb68537408cfa64b788e
> --
> 2.30.2
>
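
For context, with the fallback gone the slab-level hardened usercopy check
becomes a plain accept-or-abort decision: a copy either fits entirely inside
the cache's whitelisted useroffset/usersize window or it is rejected. Below is
a minimal sketch of that remaining logic; the struct and function names here
are simplified stand-ins for illustration only (not the real struct kmem_cache
or __check_heap_object()), with the field names taken from the hunks quoted
above.

/* Simplified stand-in for the whitelist fields of a slab cache. */
struct cache_whitelist {
	unsigned long useroffset;	/* start of the whitelisted region */
	unsigned long usersize;		/* length of the whitelisted region */
};

/*
 * Return nonzero only if the copy [offset, offset + n) lies entirely
 * inside [useroffset, useroffset + usersize). There is no longer a
 * warn-and-continue path based on the allocated object size.
 */
static int copy_allowed(const struct cache_whitelist *c,
			unsigned long offset, unsigned long n)
{
	return offset >= c->useroffset &&
	       offset - c->useroffset <= c->usersize &&
	       n <= c->useroffset - offset + c->usersize;
}

Anything falling outside that window now goes straight to usercopy_abort(),
which is why any remaining missing whitelists will show up as rejected copies
rather than warnings.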