Date: Sat, 15 Oct 2022 04:47:33 -0700
From: Guenter Roeck
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/slab: use kmalloc_node() for off slab freelist_idx_t array allocation
Message-ID: <20221015114733.GA2931132@roeck-us.net>

On Sat, Oct 15, 2022 at 01:34:29PM +0900, Hyeonggon Yoo wrote:
> After commit d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than
> order-1 page to page allocator"), SLAB passes large (> PAGE_SIZE * 2)
> requests to the buddy allocator, like SLUB does.
> 
> SLAB has been using kmalloc caches to allocate the freelist_idx_t array
> for off-slab caches. But after that commit, freelist_size can be bigger
> than KMALLOC_MAX_CACHE_SIZE.
> 
> Instead of keeping a pointer to a kmalloc cache, use kmalloc_node() and
> only check whether the kmalloc cache is off-slab during
> calculate_slab_order(). If freelist_size > KMALLOC_MAX_CACHE_SIZE, the
> looping condition cannot occur, because the freelist_idx_t array is then
> allocated directly from the buddy allocator.
> 
> Reported-by: Guenter Roeck
> Fixes: d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than order-1 page to page allocator")
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
> 
> @Guenter:
> This fixes the issue in my emulation setup.
> Can you please test this on your environment?

Yes, that fixes the problem for me.
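For reference, the arithmetic that makes the old scheme fall over is easy to
reproduce with a quick userspace sketch (not kernel code; it assumes 4 KiB
pages, the order-1 kmalloc cap from d6a71648dbc0, and 2-byte freelist
indices, and pages_for() is a made-up helper):

/*
 * Userspace sketch only -- not kernel code.  Assumes 4 KiB pages, the
 * order-1 kmalloc cap (8 KiB) from d6a71648dbc0, and 2-byte freelist
 * indices; pages_for() is a made-up helper.
 */
#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE              4096UL
#define KMALLOC_MAX_CACHE_SIZE (PAGE_SIZE * 2)   /* order-1 cap */

typedef unsigned short freelist_idx_t;           /* assumed index width */

/* Smallest power-of-two number of pages covering 'size' (buddy-style rounding). */
static unsigned long pages_for(size_t size)
{
        unsigned long pages = 1;

        while (pages * PAGE_SIZE < size)
                pages <<= 1;
        return pages;
}

int main(void)
{
        const unsigned int objs_per_slab[] = { 512, 4096, 8192, 16384 };

        for (size_t i = 0; i < sizeof(objs_per_slab) / sizeof(objs_per_slab[0]); i++) {
                size_t freelist_size = objs_per_slab[i] * sizeof(freelist_idx_t);

                printf("%5u objects: freelist %6zu bytes -> %s (%lu page(s))\n",
                       objs_per_slab[i], freelist_size,
                       freelist_size <= KMALLOC_MAX_CACHE_SIZE ?
                               "fits a kmalloc cache" : "must come from buddy",
                       pages_for(freelist_size));
        }
        return 0;
}

In the last two cases there is no kmalloc cache of that size any more, which
is where the old freelist_cache pointer scheme breaks down; with the patch
those arrays simply go through kmalloc_node()/kfree().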
Tested-by: Guenter Roeck

Thanks,
Guenter

> 
>  include/linux/slab_def.h |  1 -
>  mm/slab.c                | 37 +++++++++++++++++++------------------
>  2 files changed, 19 insertions(+), 19 deletions(-)
> 
> diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
> index e24c9aff6fed..f0ffad6a3365 100644
> --- a/include/linux/slab_def.h
> +++ b/include/linux/slab_def.h
> @@ -33,7 +33,6 @@ struct kmem_cache {
> 
>          size_t colour;                  /* cache colouring range */
>          unsigned int colour_off;        /* colour offset */
> -        struct kmem_cache *freelist_cache;
>          unsigned int freelist_size;
> 
>          /* constructor func */
> diff --git a/mm/slab.c b/mm/slab.c
> index a5486ff8362a..d1f6e2c64c2e 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1619,7 +1619,7 @@ static void slab_destroy(struct kmem_cache *cachep, struct slab *slab)
>           * although actual page can be freed in rcu context
>           */
>          if (OFF_SLAB(cachep))
> -                kmem_cache_free(cachep->freelist_cache, freelist);
> +                kfree(freelist);
>  }
> 
>  /*
> @@ -1671,21 +1671,27 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
>                  if (flags & CFLGS_OFF_SLAB) {
>                          struct kmem_cache *freelist_cache;
>                          size_t freelist_size;
> +                        size_t freelist_cache_size;
> 
>                          freelist_size = num * sizeof(freelist_idx_t);
> -                        freelist_cache = kmalloc_slab(freelist_size, 0u);
> -                        if (!freelist_cache)
> -                                continue;
> -
> -                        /*
> -                         * Needed to avoid possible looping condition
> -                         * in cache_grow_begin()
> -                         */
> -                        if (OFF_SLAB(freelist_cache))
> -                                continue;
> +                        if (freelist_size > KMALLOC_MAX_CACHE_SIZE) {
> +                                freelist_cache_size = PAGE_SIZE << get_order(freelist_size);
> +                        } else {
> +                                freelist_cache = kmalloc_slab(freelist_size, 0u);
> +                                if (!freelist_cache)
> +                                        continue;
> +                                freelist_cache_size = freelist_cache->size;
> +
> +                                /*
> +                                 * Needed to avoid possible looping condition
> +                                 * in cache_grow_begin()
> +                                 */
> +                                if (OFF_SLAB(freelist_cache))
> +                                        continue;
> +                        }
> 
>                          /* check if off slab has enough benefit */
> -                        if (freelist_cache->size > cachep->size / 2)
> +                        if (freelist_cache_size > cachep->size / 2)
>                                  continue;
>                  }
> 
> @@ -2061,11 +2067,6 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
>                  cachep->flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
>  #endif
> 
> -        if (OFF_SLAB(cachep)) {
> -                cachep->freelist_cache =
> -                        kmalloc_slab(cachep->freelist_size, 0u);
> -        }
> -
>          err = setup_cpu_cache(cachep, gfp);
>          if (err) {
>                  __kmem_cache_release(cachep);
> @@ -2292,7 +2293,7 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
>                  freelist = NULL;
>          else if (OFF_SLAB(cachep)) {
>                  /* Slab management obj is off-slab. */
> -                freelist = kmem_cache_alloc_node(cachep->freelist_cache,
> +                freelist = kmalloc_node(cachep->freelist_size,
>                                                   local_flags, nodeid);
>          } else {
>                  /* We will use last bytes at the slab for freelist */
> -- 
> 2.32.0
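For anyone following along, the new decision in calculate_slab_order() boils
down to roughly the following (again a userspace sketch; kmalloc_cache_size()
and buddy_alloc_size() are illustrative stand-ins for kmalloc_slab()->size and
PAGE_SIZE << get_order(), assuming 4 KiB pages, power-of-two size classes and
2-byte freelist indices):

/*
 * Userspace sketch only -- not kernel code.  kmalloc_cache_size() and
 * buddy_alloc_size() are made-up stand-ins for kmalloc_slab()->size and
 * PAGE_SIZE << get_order(); assumes 4 KiB pages, power-of-two kmalloc
 * size classes and 2-byte freelist indices.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE              4096UL
#define KMALLOC_MAX_CACHE_SIZE (PAGE_SIZE * 2)

typedef unsigned short freelist_idx_t;

/* Smallest power-of-two kmalloc size class that can hold 'size'. */
static size_t kmalloc_cache_size(size_t size)
{
        size_t class_size = 8;

        while (class_size < size)
                class_size <<= 1;
        return class_size;
}

/* Bytes the buddy allocator would hand out for 'size'. */
static size_t buddy_alloc_size(size_t size)
{
        size_t alloc = PAGE_SIZE;

        while (alloc < size)
                alloc <<= 1;
        return alloc;
}

/* Mirrors the "check if off slab has enough benefit" test in the patched hunk. */
static bool off_slab_worthwhile(unsigned int num, size_t object_size)
{
        size_t freelist_size = num * sizeof(freelist_idx_t);
        size_t freelist_cache_size;

        if (freelist_size > KMALLOC_MAX_CACHE_SIZE)
                freelist_cache_size = buddy_alloc_size(freelist_size);
        else
                freelist_cache_size = kmalloc_cache_size(freelist_size);

        return freelist_cache_size <= object_size / 2;
}

int main(void)
{
        printf("  64 objs of 4096 bytes: %s\n",
               off_slab_worthwhile(64, 4096) ? "off-slab pays off" : "keep freelist on-slab");
        printf("8192 objs of   64 bytes: %s\n",
               off_slab_worthwhile(8192, 64) ? "off-slab pays off" : "keep freelist on-slab");
        return 0;
}

In the real function this sizing comparison feeds the "continue" that rejects
a candidate gfporder rather than a standalone yes/no, but the idea is the
same: the off-slab freelist is only kept when the memory actually backing it,
whether from a kmalloc cache or straight from the buddy allocator, costs no
more than half an object.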