Date: Thu, 2 Apr 2026 13:53:57 +0900
From: "Harry Yoo (Oracle)"
To: Hao Li
Cc: hu.shengming@zte.com.cn, vbabka@kernel.org, akpm@linux-foundation.org,
	cl@gentwo.org, rientjes@google.com, roman.gushchin@linux.dev,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, zhang.run@zte.com.cn,
	xu.xin16@zte.com.cn, yang.tao172@zte.com.cn, yang.yang29@zte.com.cn
Subject: Re: [PATCH v2] mm/slub: skip freelist construction for whole-slab bulk refill
References: <202604011257259669oAdDsdnKx6twdafNZsF5@zte.com.cn>

On Wed, Apr 01, 2026 at 02:55:23PM +0800, Hao Li wrote:
> On Wed, Apr 01, 2026 at 12:57:25PM +0800, hu.shengming@zte.com.cn wrote:
> > @@ -4395,6 +4458,48 @@ static unsigned int alloc_from_new_slab(struct kmem_cache *s, struct slab *slab,
> >  	return allocated;
> >  }
> >  
> > +static unsigned int alloc_whole_from_new_slab(struct kmem_cache *s,
> > +		struct slab *slab, void **p, bool allow_spin)
> > +{
> > +
> > +	unsigned int allocated = 0;
> > +	void *object, *start;
> > +
> > +	if (alloc_whole_from_new_slab_random(s, slab, p, allow_spin,
> > +					     &allocated)) {
> > +		goto done;
> > +	}
> > +
> > +	start = fixup_red_left(s, slab_address(slab));
> > +	object = setup_object(s, start);
> > +
> > +	while (allocated < slab->objects - 1) {
> > +		p[allocated] = object;
> > +		maybe_wipe_obj_freeptr(s, object);
> > +
> > +		allocated++;
> > +		object += s->size;
> > +		object = setup_object(s, object);
> > +	}
>
> Also, I feel the current patch contains some duplicated code like this loop.
>
> Would it make sense to split allocate_slab() into two functions?
> For example, the first part could be called allocate_slab_meta_setup() (just
> an example name), and the second part could be allocate_slab_objects_setup(),
> with the core logic being the loop over objects. Then
> allocate_slab_objects_setup() could support two modes: one called
> BUILD_FREELIST, which builds the freelist, and another called EMIT_OBJECTS,
> which skips building the freelist and directly places the objects into the
> target array.

Something similar, but with a little more thought to unify the code
(**regardless of CONFIG_SLAB_FREELIST_RANDOM**) and avoid treating "the whole
slab->freelist fits into the sheaf" as a special case:

- allocate_slab() no longer builds the freelist. The freelist is built only
  when there are objects left after allocating objects from the new slab.

- new_slab() allocates a new slab AND builds the freelist, to keep existing
  behaviour.

- refill_objects() allocates a slab using allocate_slab(), and passes it to
  alloc_from_new_slab(). alloc_from_new_slab() consumes some objects in
  random order, and then builds the freelist with the objects left (if any).

We could actually abstract "iterating free objects in random order" into an
API, and there would be two users of the API:

- Building the freelist
- Filling objects into the sheaf (without building the freelist!)

Something like this... (names here are just examples, I'm not good at naming
things!)

struct freelist_iter {
	int pos;
	int freelist_count;
	int page_limit;
	void *start;
};

/* note: handling !allow_spin nicely is tricky :-) */
alloc_from_new_slab(...)
{
	struct freelist_iter fit;

	prep_freelist_iter(s, slab, &fit, allow_spin);

	while (slab->inuse < min(count, slab->objects))
		p[slab->inuse++] = next_freelist_entry(s, &fit);

	if (slab->inuse < slab->objects)
		build_freelist(s, slab, &fit);
}

build_freelist(s, slab, fit)
{
	size = slab->objects - slab->inuse;

	cur = next_freelist_entry(s, fit);
	cur = setup_object(s, cur);
	slab->freelist = cur;

	for (i = 1; i < size; i++) {
		next = next_freelist_entry(s, fit);
		next = setup_object(s, next);
		set_freepointer(s, cur, next);
		cur = next;
	}
}

#ifdef CONFIG_SLAB_FREELIST_RANDOM
prep_freelist_iter(s, slab, fit, allow_spin)
{
	fit->freelist_count = oo_objects(s->oo);
	fit->page_limit = slab->objects * s->size;
	fit->start = fixup_red_left(s, slab_address(slab));

	if (slab->objects < 2 || !s->random_seq) {
		fit->pos = 0;
	} else if (allow_spin) {
		fit->pos = get_random_u32_below(fit->freelist_count);
	} else {
		struct rnd_state *state;

		/*
		 * An interrupt or NMI handler might interrupt and change
		 * the state in the middle, but that's safe.
		 */
		state = &get_cpu_var(slab_rnd_state);
		fit->pos = prandom_u32_state(state) % fit->freelist_count;
		put_cpu_var(slab_rnd_state);
	}
}

next_freelist_entry(s, fit)
{
	/*
	 * If the target page allocation failed, the number of objects on the
	 * page might be smaller than the usual size defined by the cache.
	 */
	do {
		idx = s->random_seq[fit->pos];
		fit->pos += 1;
		if (fit->pos >= fit->freelist_count)
			fit->pos = 0;
	} while (unlikely(idx >= fit->page_limit));

	return (char *)fit->start + idx;
}
#else
prep_freelist_iter(s, slab, fit, allow_spin)
{
	fit->pos = 0;
	fit->start = fixup_red_left(s, slab_address(slab));
}

next_freelist_entry(s, fit)
{
	void *next = fit->start + fit->pos * s->size;

	fit->pos++;
	return next;
}
#endif

-- 
Cheers,
Harry / Hyeonggon