Date: Wed, 25 Oct 2023 10:18:16 +0800
Subject: Re: [RFC PATCH v3 6/7] slub: Delay freezing of partial slabs
From: Chengming Zhou <chengming.zhou@linux.dev>
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
 vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Chengming Zhou <chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>
 <20231024093345.3676493-7-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-7-chengming.zhou@linux.dev>

On 2023/10/24 17:33, chengming.zhou@linux.dev wrote:
> From: Chengming Zhou <chengming.zhou@linux.dev>
> 
> Now we freeze slabs when moving them out of the node partial list to
> the cpu partial list; this approach needs two cmpxchg_double operations:
> 
> 1. freeze the slab (acquire_slab()) under the node list_lock
> 2. get_freelist() when the slab is picked for use in ___slab_alloc()
> 
> Actually we don't need to freeze when moving slabs out of the node
> partial list; we can delay freezing until the slab's freelist is used
> in ___slab_alloc(), saving one cmpxchg_double().
> 
> There are other benefits as well:
> 
> - Moving slabs between the node partial list and the cpu partial list
>   becomes simpler, since we don't need to freeze or unfreeze at all.
> 
> - Contention on the node list_lock is reduced, since we no longer
>   freeze any slab under the node list_lock.
> 
> We can achieve this because no concurrent path manipulates the partial
> slab list except __slab_free(), which is now serialized.
> 
> Since the slab returned by the get_partial() interfaces is no longer
> frozen and no freelist is put in the partial_context, we need to use
> the newly introduced freeze_slab() to freeze it and get its freelist.
> 
> Similarly, the slabs on the cpu partial list are no longer frozen, so
> we need to call freeze_slab() on them before use.
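For readers jumping into the middle of the series: freeze_slab() was
introduced by an earlier patch and is not shown in this diff. Below is a
minimal sketch of it, inferred from the description above (the body is
my assumption, not quoted from the series): a single cmpxchg_double-based
update that sets the frozen bit and takes the whole freelist, replacing
the old acquire_slab() + get_freelist() pair.

	/*
	 * Sketch of freeze_slab(): atomically zap the freelist and set
	 * the frozen bit in one retry loop, returning the old freelist
	 * as the per-cpu allocation list.
	 */
	static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab)
	{
		struct slab new;
		unsigned long counters;
		void *freelist;

		do {
			freelist = slab->freelist;
			counters = slab->counters;

			new.counters = counters;
			VM_BUG_ON(new.frozen);

			/* hand all objects to the cpu slab and zap the freelist */
			new.inuse = slab->objects;
			new.frozen = 1;

		} while (!slab_update_freelist(s, slab,
			freelist, counters,
			NULL, new.counters,
			"freeze_slab"));

		return freelist;
	}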
> 
> Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
> ---
>  mm/slub.c | 111 +++++++++++-------------------------------------
>  1 file changed, 21 insertions(+), 90 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 5b428648021f..486d44421432 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2215,51 +2215,6 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s,
>  	return object;
>  }
>  
> -/*
> - * Remove slab from the partial list, freeze it and
> - * return the pointer to the freelist.
> - *
> - * Returns a list of objects or NULL if it fails.
> - */
> -static inline void *acquire_slab(struct kmem_cache *s,
> -		struct kmem_cache_node *n, struct slab *slab,
> -		int mode)
> -{
> -	void *freelist;
> -	unsigned long counters;
> -	struct slab new;
> -
> -	lockdep_assert_held(&n->list_lock);
> -
> -	/*
> -	 * Zap the freelist and set the frozen bit.
> -	 * The old freelist is the list of objects for the
> -	 * per cpu allocation list.
> -	 */
> -	freelist = slab->freelist;
> -	counters = slab->counters;
> -	new.counters = counters;
> -	if (mode) {
> -		new.inuse = slab->objects;
> -		new.freelist = NULL;
> -	} else {
> -		new.freelist = freelist;
> -	}
> -
> -	VM_BUG_ON(new.frozen);
> -	new.frozen = 1;
> -
> -	if (!__slab_update_freelist(s, slab,
> -			freelist, counters,
> -			new.freelist, new.counters,
> -			"acquire_slab"))
> -		return NULL;
> -
> -	remove_partial(n, slab);
> -	WARN_ON(!freelist);
> -	return freelist;
> -}
> -
>  #ifdef CONFIG_SLUB_CPU_PARTIAL
>  static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain);
>  #else
> @@ -2276,7 +2231,6 @@ static struct slab *get_partial_node(struct kmem_cache *s,
>  				      struct partial_context *pc)
>  {
>  	struct slab *slab, *slab2, *partial = NULL;
> -	void *object = NULL;
>  	unsigned long flags;
>  	unsigned int partial_slabs = 0;
>  
> @@ -2295,7 +2249,7 @@ static struct slab *get_partial_node(struct kmem_cache *s,
>  			continue;
>  
>  		if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
> -			object = alloc_single_from_partial(s, n, slab,
> +			void *object = alloc_single_from_partial(s, n, slab,
>  							pc->orig_size);
>  			if (object) {
>  				partial = slab;
> @@ -2305,13 +2259,10 @@ static struct slab *get_partial_node(struct kmem_cache *s,
>  			continue;
>  		}
>  
> -		object = acquire_slab(s, n, slab, object == NULL);
> -		if (!object)
> -			break;
> +		remove_partial(n, slab);
>  
>  		if (!partial) {
>  			partial = slab;
> -			pc->object = object;
>  			stat(s, ALLOC_FROM_PARTIAL);
>  		} else {
>  			put_cpu_partial(s, slab, 0);
> @@ -2610,9 +2561,6 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
>  	unsigned long flags = 0;
>  
>  	while (partial_slab) {
> -		struct slab new;
> -		struct slab old;
> -
>  		slab = partial_slab;
>  		partial_slab = slab->next;
>  
> @@ -2625,23 +2573,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
>  			spin_lock_irqsave(&n->list_lock, flags);
>  		}
>  
> -		do {
> -
> -			old.freelist = slab->freelist;
> -			old.counters = slab->counters;
> -			VM_BUG_ON(!old.frozen);
> -
> -			new.counters = old.counters;
> -			new.freelist = old.freelist;
> -
> -			new.frozen = 0;
> -
> -		} while (!__slab_update_freelist(s, slab,
> -				old.freelist, old.counters,
> -				new.freelist, new.counters,
> -				"unfreezing slab"));
> -
> -		if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
> +		if (unlikely(!slab->inuse && n->nr_partial >= s->min_partial)) {
>  			slab->next = slab_to_discard;
>  			slab_to_discard = slab;
>  		} else {
> @@ -3148,7 +3080,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  			node = NUMA_NO_NODE;
>  		goto new_slab;
>  	}
> -redo:
>  
>  	if (unlikely(!node_match(slab, node))) {
>  		/*
> @@ -3224,7 +3155,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  
>  new_slab:
>  
> -	if (slub_percpu_partial(c)) {
> +	while (slub_percpu_partial(c)) {
>  		local_lock_irqsave(&s->cpu_slab->lock, flags);
>  		if (unlikely(c->slab)) {
>  			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
> @@ -3236,11 +3167,20 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  			goto new_objects;
>  		}
>  
> -		slab = c->slab = slub_percpu_partial(c);
> +		slab = slub_percpu_partial(c);
>  		slub_set_percpu_partial(c, slab);
>  		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
>  		stat(s, CPU_PARTIAL_ALLOC);
> -		goto redo;
> +
> +		if (unlikely(!node_match(slab, node) ||
> +			     !pfmemalloc_match(slab, gfpflags))) {
> +			slab->next = NULL;
> +			__unfreeze_partials(s, slab);
> +			continue;
> +		}
> +
> +		freelist = freeze_slab(s, slab);
> +		goto retry_load_slab;
>  	}

Oops, this while (slub_percpu_partial(c)) loop block should be put inside
#ifdef CONFIG_SLUB_CPU_PARTIAL, since slab->next and __unfreeze_partials()
are only defined when CONFIG_SLUB_CPU_PARTIAL is enabled (see the sketch
at the end of this mail). And I should append a cleanup patch to rename
all the *unfreeze_partials* functions to *put_partials*, since there is
no "unfreeze" left in these functions anymore.

Will do in the next version. Thanks.

> 
> new_objects:
> 
> @@ -3249,8 +3189,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  	pc.orig_size = orig_size;
>  	slab = get_partial(s, node, &pc);
>  	if (slab) {
> -		freelist = pc.object;
>  		if (kmem_cache_debug(s)) {
> +			freelist = pc.object;
>  			/*
>  			 * For debug caches here we had to go through
>  			 * alloc_single_from_partial() so just store the
> @@ -3262,6 +3202,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  			return freelist;
>  		}
>  
> +		freelist = freeze_slab(s, slab);
>  		goto retry_load_slab;
>  	}
> 
> @@ -3663,18 +3604,8 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  		was_frozen = new.frozen;
>  		new.inuse -= cnt;
>  		if ((!new.inuse || !prior) && !was_frozen) {
> -
> -			if (kmem_cache_has_cpu_partial(s) && !prior) {
> -
> -				/*
> -				 * Slab was on no list before and will be
> -				 * partially empty
> -				 * We can defer the list move and instead
> -				 * freeze it.
> -				 */
> -				new.frozen = 1;
> -
> -			} else { /* Needs to be taken off a list */
> +			/* Needs to be taken off a list */
> +			if (!kmem_cache_has_cpu_partial(s) || prior) {
>  
>  				n = get_node(s, slab_nid(slab));
>  				/*
> @@ -3704,9 +3635,9 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  		 * activity can be necessary.
>  		 */
>  		stat(s, FREE_FROZEN);
> -	} else if (new.frozen) {
> +	} else if (kmem_cache_has_cpu_partial(s) && !prior) {
>  		/*
> -		 * If we just froze the slab then put it onto the
> +		 * If we started with a full slab then put it onto the
>  		 * per cpu partial list.
>  		 */
>  		put_cpu_partial(s, slab, 1);
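To make the #ifdef fix mentioned above concrete, here is roughly how the
loop would be guarded. Treat this as a sketch of the intended next
version, not the final patch: the context lines elided from the quoted
hunks are filled in from my reading of ___slab_alloc(), and the exact
placement of the guards is an assumption.

	new_slab:

	#ifdef CONFIG_SLUB_CPU_PARTIAL
		/*
		 * slab->next and __unfreeze_partials() only exist when
		 * CONFIG_SLUB_CPU_PARTIAL is enabled, so the whole loop
		 * must be compiled out otherwise.
		 */
		while (slub_percpu_partial(c)) {
			local_lock_irqsave(&s->cpu_slab->lock, flags);
			if (unlikely(c->slab)) {
				local_unlock_irqrestore(&s->cpu_slab->lock, flags);
				goto reread_slab;
			}
			if (unlikely(!slub_percpu_partial(c))) {
				local_unlock_irqrestore(&s->cpu_slab->lock, flags);
				/* we were preempted and partial list got empty */
				goto new_objects;
			}

			/* take the first slab off the cpu partial list */
			slab = slub_percpu_partial(c);
			slub_set_percpu_partial(c, slab);
			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
			stat(s, CPU_PARTIAL_ALLOC);

			/* unsuitable slab: put it back on a node partial list */
			if (unlikely(!node_match(slab, node) ||
				     !pfmemalloc_match(slab, gfpflags))) {
				slab->next = NULL;
				__unfreeze_partials(s, slab);
				continue;
			}

			/* delayed freeze happens only here, at first use */
			freelist = freeze_slab(s, slab);
			goto retry_load_slab;
		}
	#endif /* CONFIG_SLUB_CPU_PARTIAL */

	new_objects: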