From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sun, 3 Dec 2023 18:26:20 +0800
Subject: Re: [PATCH v5 7/9] slub: Optimize deactivate_slab()
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: vbabka@suse.cz,
 cl@linux.com, penberg@kernel.org, rientjes@google.com,
 iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
 roman.gushchin@linux.dev, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Chengming Zhou
References: <20231102032330.1036151-1-chengming.zhou@linux.dev>
 <20231102032330.1036151-8-chengming.zhou@linux.dev>
From: Chengming Zhou
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 2023/12/3 17:23, Hyeonggon Yoo wrote:
> On Thu, Nov 2, 2023 at 12:25 PM wrote:
>>
>> From: Chengming Zhou
>>
>> Since the introduction of unfrozen slabs on the cpu partial list, we
>> don't need to synchronize the slab frozen state under the node
>> list_lock.
>>
>> The caller of deactivate_slab() and the caller of __slab_free() won't
>> manipulate the slab list concurrently.
>>
>> So we can take the node list_lock only in the last stage, if we
>> really need to manipulate the slab list in this path.
>>
>> Signed-off-by: Chengming Zhou
>> Reviewed-by: Vlastimil Babka
>> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> ---
>>  mm/slub.c | 79 ++++++++++++++++++-------------------------------------
>>  1 file changed, 26 insertions(+), 53 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index bcb5b2c4e213..d137468fe4b9 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2468,10 +2468,8 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
>>  static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
>>                              void *freelist)
>>  {
>> -        enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST };
>>          struct kmem_cache_node *n = get_node(s, slab_nid(slab));
>>          int free_delta = 0;
>> -        enum slab_modes mode = M_NONE;
>>          void *nextfree, *freelist_iter, *freelist_tail;
>>          int tail = DEACTIVATE_TO_HEAD;
>>          unsigned long flags = 0;
>> @@ -2509,65 +2507,40 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
>>          /*
>>           * Stage two: Unfreeze the slab while splicing the per-cpu
>>           * freelist to the head of slab's freelist.
>> -         *
>> -         * Ensure that the slab is unfrozen while the list presence
>> -         * reflects the actual number of objects during unfreeze.
>> -         *
>> -         * We first perform cmpxchg holding lock and insert to list
>> -         * when it succeed. If there is mismatch then the slab is not
>> -         * unfrozen and number of objects in the slab may have changed.
>> -         * Then release lock and retry cmpxchg again.
>>           */
>> -redo:
>> -
>> -        old.freelist = READ_ONCE(slab->freelist);
>> -        old.counters = READ_ONCE(slab->counters);
>> -        VM_BUG_ON(!old.frozen);
>> -
>> -        /* Determine target state of the slab */
>> -        new.counters = old.counters;
>> -        if (freelist_tail) {
>> -                new.inuse -= free_delta;
>> -                set_freepointer(s, freelist_tail, old.freelist);
>> -                new.freelist = freelist;
>> -        } else
>> -                new.freelist = old.freelist;
>> -
>> -        new.frozen = 0;
>> +        do {
>> +                old.freelist = READ_ONCE(slab->freelist);
>> +                old.counters = READ_ONCE(slab->counters);
>> +                VM_BUG_ON(!old.frozen);
>> +
>> +                /* Determine target state of the slab */
>> +                new.counters = old.counters;
>> +                new.frozen = 0;
>> +                if (freelist_tail) {
>> +                        new.inuse -= free_delta;
>> +                        set_freepointer(s, freelist_tail, old.freelist);
>> +                        new.freelist = freelist;
>> +                } else {
>> +                        new.freelist = old.freelist;
>> +                }
>> +        } while (!slab_update_freelist(s, slab,
>> +                                old.freelist, old.counters,
>> +                                new.freelist, new.counters,
>> +                                "unfreezing slab"));
>>
>> +        /*
>> +         * Stage three: Manipulate the slab list based on the updated state.
>> +         */
>
> deactivate_slab() might inadvertently put empty slabs into the partial
> list, like:
>
> deactivate_slab()               __slab_free()
> cmpxchg(), slab's not empty
>                                 cmpxchg(), slab's empty
>                                 and unfrozen

Hi,

Sorry, but I don't get how __slab_free() can see the slab empty here,
since the slab is not empty on the deactivate_slab() path, and it can't
be used by any CPU at that time?

Thanks for the review!

>                                 spin_lock(&n->list_lock)
>                                 (slab's empty but not
>                                  on partial list,
>
>                                  spin_unlock(&n->list_lock)
>                                  and return)
> spin_lock(&n->list_lock)
> put slab into partial list
> spin_unlock(&n->list_lock)
>
> IMHO it should be fine in the real world, but I just wanted to mention
> it as it doesn't seem to be intentional.
>
> Otherwise it looks good to me!
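
To make the window concrete, here is a minimal two-CPU sketch of the
interleaving Hyeonggon describes (illustrative C-style pseudocode
condensed from the hunks in this patch, not the actual mm/slub.c code):

        /* CPU 0: deactivate_slab(), end of stage two */
        slab_update_freelist(...);   /* unfreezes; CPU 0 saw inuse > 0 */

        /*
         * Window: the slab is now unfrozen but on no list, and CPU 0
         * has not taken n->list_lock yet.
         */

        /* CPU 1: __slab_free() of the remaining objects */
        /* its cmpxchg succeeds and the slab becomes empty */
        spin_lock_irqsave(&n->list_lock, flags);
        /* empty, but on no list: nothing to remove, so just return */
        spin_unlock_irqrestore(&n->list_lock, flags);

        /* CPU 0: stage three, still based on its pre-window new.inuse */
        spin_lock_irqsave(&n->list_lock, flags);
        add_partial(n, slab, tail);  /* an empty slab joins the partial list */
        spin_unlock_irqrestore(&n->list_lock, flags);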
>
>>          if (!new.inuse && n->nr_partial >= s->min_partial) {
>> -                mode = M_FREE;
>> +                stat(s, DEACTIVATE_EMPTY);
>> +                discard_slab(s, slab);
>> +                stat(s, FREE_SLAB);
>>          } else if (new.freelist) {
>> -                mode = M_PARTIAL;
>> -                /*
>> -                 * Taking the spinlock removes the possibility that
>> -                 * acquire_slab() will see a slab that is frozen
>> -                 */
>>                  spin_lock_irqsave(&n->list_lock, flags);
>> -        } else {
>> -                mode = M_FULL_NOLIST;
>> -        }
>> -
>> -
>> -        if (!slab_update_freelist(s, slab,
>> -                                old.freelist, old.counters,
>> -                                new.freelist, new.counters,
>> -                                "unfreezing slab")) {
>> -                if (mode == M_PARTIAL)
>> -                        spin_unlock_irqrestore(&n->list_lock, flags);
>> -                goto redo;
>> -        }
>> -
>> -
>> -        if (mode == M_PARTIAL) {
>>                  add_partial(n, slab, tail);
>>                  spin_unlock_irqrestore(&n->list_lock, flags);
>>                  stat(s, tail);
>> -        } else if (mode == M_FREE) {
>> -                stat(s, DEACTIVATE_EMPTY);
>> -                discard_slab(s, slab);
>> -                stat(s, FREE_SLAB);
>> -        } else if (mode == M_FULL_NOLIST) {
>> +        } else {
>>                  stat(s, DEACTIVATE_FULL);
>>          }
>>  }
>> --
>> 2.20.1
>>
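
For readers skimming the thread, the net shape of deactivate_slab()
after this patch is roughly the following (an illustrative outline
condensed from the hunks above, not the verbatim mm/slub.c code; stage
one, which detaches the per-cpu freelist, is unchanged and omitted):

        static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
                                    void *freelist)
        {
                /* Stage one: detach the per-cpu freelist (unchanged, not shown) */

                /* Stage two: lockless unfreeze, retried until the cmpxchg wins */
                do {
                        /* read old freelist/counters, build new state, new.frozen = 0 */
                } while (!slab_update_freelist(s, slab, ...));

                /* Stage three: take n->list_lock only if a list is manipulated */
                if (!new.inuse && n->nr_partial >= s->min_partial) {
                        discard_slab(s, slab);          /* empty: free the slab */
                } else if (new.freelist) {
                        spin_lock_irqsave(&n->list_lock, flags);
                        add_partial(n, slab, tail);     /* has free objects left */
                        spin_unlock_irqrestore(&n->list_lock, flags);
                } else {
                        stat(s, DEACTIVATE_FULL);       /* full: no list change needed */
                }
        }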