From: Uladzislau Rezki <urezki@gmail.com>
Date: Mon, 17 Jun 2024 20:42:09 +0200
To: Vlastimil Babka
Cc: "Jason A. Donenfeld", kvm@vger.kernel.org, Neil Brown,
 kernel-janitors@vger.kernel.org, Olga Kornievskaia, kasan-dev, Dai Ngo,
 Christophe Leroy, coreteam@netfilter.org, "Naveen N. Rao", Jakub Kicinski,
 linux-trace-kernel@vger.kernel.org, paulmck@kernel.org,
 bridge@lists.linux.dev, ecryptfs@vger.kernel.org, Nicholas Piggin,
 linux-can@vger.kernel.org, linux-block@vger.kernel.org, Mathieu Desnoyers,
 Tom Talpey, linux-nfs@vger.kernel.org, netdev@vger.kernel.org,
 Lai Jiangshan, linux-kernel@vger.kernel.org, Julia Lawall,
 "Uladzislau Rezki (Sony)", netfilter-devel@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, wireguard@lists.zx2c4.com
Subject: Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback
References: <20240609082726.32742-1-Julia.Lawall@inria.fr>
 <20240612143305.451abf58@kernel.org>
 <08ee7eb2-8d08-4f1f-9c46-495a544b8c0e@paulmck-laptop>
 <3b6fe525-626c-41fb-8625-3925ca820d8e@paulmck-laptop>
 <6711935d-20b5-41c1-8864-db3fc7d7823d@suse.cz>
In-Reply-To: <6711935d-20b5-41c1-8864-db3fc7d7823d@suse.cz>

On Mon, Jun 17, 2024 at 07:23:36PM +0200, Vlastimil Babka wrote:
> On 6/17/24 6:12 PM, Paul E. McKenney wrote:
> > On Mon, Jun 17, 2024 at 05:10:50PM +0200, Vlastimil Babka wrote:
> >> On 6/13/24 2:22 PM, Jason A. Donenfeld wrote:
> >> > On Wed, Jun 12, 2024 at 08:38:02PM -0700, Paul E. McKenney wrote:
> >> >> o	Make the current kmem_cache_destroy() asynchronously wait for
> >> >>	all memory to be returned, then complete the destruction.
> >> >>	(This gets rid of a valuable debugging technique because
> >> >>	in normal use, it is a bug to attempt to destroy a kmem_cache
> >> >>	that has objects still allocated.)
> >>
> >> This seems like the best option to me. As Jason already said, the
> >> debugging technique is not affected significantly if the warning just
> >> occurs asynchronously later. The module can already be unloaded at that
> >> point; the leak is never checked programmatically anyway to control
> >> further execution, it's just a splat in dmesg.
> >
> > Works for me!
>
> Great. So this is what a prototype could look like, hopefully? The kunit
> test does generate the splat for me, which should be because the
> rcu_barrier() in the implementation (marked to be replaced with the real
> thing) is really insufficient. Note the test itself passes, as this kind
> of error isn't wired up properly.
>
> Another thing to resolve is the marked comment about kasan_cache_shutdown()
> with potential kfree_rcu()'s in flight.
>
> Also you need CONFIG_SLUB_DEBUG enabled, otherwise node_nr_slabs() is a
> no-op and it might fail to notice the pending slabs. This will need to
> change.
>
> ----8<----
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index e6667a28c014..e3e4d0ca40b7 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -5,6 +5,7 @@
>  #include <linux/slab.h>
>  #include <linux/module.h>
>  #include <linux/kernel.h>
> +#include <linux/rcupdate.h>
>  #include "../mm/slab.h"
>  
>  static struct kunit_resource resource;
> @@ -157,6 +158,26 @@ static void test_kmalloc_redzone_access(struct kunit *test)
>  	kmem_cache_destroy(s);
>  }
>  
> +struct test_kfree_rcu_struct {
> +	struct rcu_head rcu;
> +};
> +
> +static void test_kfree_rcu(struct kunit *test)
> +{
> +	struct kmem_cache *s = test_kmem_cache_create("TestSlub_kfree_rcu",
> +				sizeof(struct test_kfree_rcu_struct),
> +				SLAB_NO_MERGE);
> +	struct test_kfree_rcu_struct *p = kmem_cache_alloc(s, GFP_KERNEL);
> +
> +	kasan_disable_current();
> +
> +	KUNIT_EXPECT_EQ(test, 0, slab_errors);
> +
> +	kasan_enable_current();
> +	kfree_rcu(p, rcu);
> +	kmem_cache_destroy(s);
> +}
> +
>  static int test_init(struct kunit *test)
>  {
>  	slab_errors = 0;
> @@ -177,6 +198,7 @@ static struct kunit_case test_cases[] = {
>  
>  	KUNIT_CASE(test_clobber_redzone_free),
>  	KUNIT_CASE(test_kmalloc_redzone_access),
> +	KUNIT_CASE(test_kfree_rcu),
>  	{}
>  };
>  
> diff --git a/mm/slab.h b/mm/slab.h
> index b16e63191578..a0295600af92 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -277,6 +277,8 @@ struct kmem_cache {
>  	unsigned int red_left_pad;	/* Left redzone padding size */
>  	const char *name;		/* Name (only for display!) */
>  	struct list_head list;		/* List of slab caches */
> +	struct work_struct async_destroy_work;
> +
>  #ifdef CONFIG_SYSFS
>  	struct kobject kobj;		/* For sysfs */
>  #endif
> @@ -474,7 +476,7 @@ static inline bool is_kmalloc_cache(struct kmem_cache *s)
>  			      SLAB_NO_USER_FLAGS)
>  
>  bool __kmem_cache_empty(struct kmem_cache *);
> -int __kmem_cache_shutdown(struct kmem_cache *);
> +int __kmem_cache_shutdown(struct kmem_cache *, bool);
>  void __kmem_cache_release(struct kmem_cache *);
>  int __kmem_cache_shrink(struct kmem_cache *);
>  void slab_kmem_cache_release(struct kmem_cache *);
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 5b1f996bed06..c5c356d0235d 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -44,6 +44,8 @@ static LIST_HEAD(slab_caches_to_rcu_destroy);
>  static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work);
>  static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
>  		    slab_caches_to_rcu_destroy_workfn);
> +static void kmem_cache_kfree_rcu_destroy_workfn(struct work_struct *work);
> +
>  
>  /*
>   * Set of flags that will prevent slab merging
> @@ -234,6 +236,7 @@ static struct kmem_cache *create_cache(const char *name,
>  
>  	s->refcount = 1;
>  	list_add(&s->list, &slab_caches);
> +	INIT_WORK(&s->async_destroy_work, kmem_cache_kfree_rcu_destroy_workfn);
>  	return s;
>  
>  out_free_cache:
> @@ -449,12 +452,16 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
>  	}
>  }
>  
> -static int shutdown_cache(struct kmem_cache *s)
> +static int shutdown_cache(struct kmem_cache *s, bool warn_inuse)
>  {
>  	/* free asan quarantined objects */
> +	/*
> +	 * XXX: is it ok to call this multiple times? and what happens with a
> +	 * kfree_rcu() in flight that finishes after or in parallel with this?
> +	 */
>  	kasan_cache_shutdown(s);
>  
> -	if (__kmem_cache_shutdown(s) != 0)
> +	if (__kmem_cache_shutdown(s, warn_inuse) != 0)
>  		return -EBUSY;
>  
>  	list_del(&s->list);
> @@ -477,6 +484,32 @@ void slab_kmem_cache_release(struct kmem_cache *s)
>  	kmem_cache_free(kmem_cache, s);
>  }
>  
> +static void kmem_cache_kfree_rcu_destroy_workfn(struct work_struct *work)
> +{
> +	struct kmem_cache *s;
> +	int err = -EBUSY;
> +	bool rcu_set;
> +
> +	s = container_of(work, struct kmem_cache, async_destroy_work);
> +
> +	// XXX use the real kmem_cache_free_barrier() or similar thing here

It implies that we need to introduce kfree_rcu_barrier(), a new API, which I
wanted to avoid initially. Since you do it asynchronously, can we just repeat
and wait until the cache is fully freed? I am asking because inventing a new
kfree_rcu_barrier() might not be so straightforward.

--
Uladzislau Rezki
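For illustration, a minimal sketch of the "repeat and wait" alternative
suggested above: instead of introducing kfree_rcu_barrier(), the destroy work
re-queues itself until the cache is empty. This is an editorial sketch, not
the posted prototype. It assumes hypothetical members "struct delayed_work
async_destroy_dwork" and "int destroy_retries" in struct kmem_cache (the
prototype uses a plain work_struct), uses an arbitrary retry bound, and omits
slab_mutex locking and the final release. It is written against the
mm/slab_common.c context above, where __kmem_cache_empty() and the modified
shutdown_cache() are in scope.

#define KMEM_DESTROY_MAX_RETRIES 10	/* arbitrary bound before warning */

static void kmem_cache_async_destroy_workfn(struct work_struct *work)
{
	struct kmem_cache *s = container_of(to_delayed_work(work),
					    struct kmem_cache,
					    async_destroy_dwork);

	/*
	 * Wait for RCU callbacks queued so far. Objects still buffered in
	 * kvfree_rcu() batches can be freed after this returns, which is
	 * why a single rcu_barrier() is insufficient and we poll instead.
	 */
	rcu_barrier();

	if (!__kmem_cache_empty(s) &&
	    s->destroy_retries++ < KMEM_DESTROY_MAX_RETRIES) {
		/* Not empty yet: check again in roughly a second. */
		queue_delayed_work(system_wq, &s->async_destroy_dwork, HZ);
		return;
	}

	/* Empty, or we gave up waiting: warn about any leaked objects. */
	shutdown_cache(s, true);
}

The trade-off this makes is exactly the open question in the thread: polling
avoids a new kfree_rcu_barrier() API, but it delays the debugging splat by up
to KMEM_DESTROY_MAX_RETRIES retry periods and never gives a hard guarantee
that all in-flight kfree_rcu() callbacks have finished.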