Date: Thu, 24 Mar 2022 11:06:03 +0000
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
    Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v1 01/15] mm/slab: cleanup slab_alloc() and slab_alloc_node()
Message-ID: <20220324110603.GA2112827@odroid>
In-Reply-To: <5833607a-4444-206d-db4f-9f958653c5b0@suse.cz>

Vlastimil wrote:
> On 3/8/22 12:41, Hyeonggon Yoo wrote:
> > +
> >  static __always_inline void *
> > -slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller)
> > +slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
> > +                unsigned long caller)
> >  {
> >          unsigned long save_flags;
> > -        void *objp;
> > +        void *ptr;
> > +        int slab_node = numa_mem_id();
> >          struct obj_cgroup *objcg = NULL;
> >          bool init = false;
> >
> > @@ -3299,21 +3255,49 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned lo
> >          if (unlikely(!cachep))
> >                  return NULL;
> >
> > -        objp = kfence_alloc(cachep, orig_size, flags);
> > -        if (unlikely(objp))
> > -                goto out;
> > +        ptr = kfence_alloc(cachep, orig_size, flags);
> > +        if (unlikely(ptr))
> > +                goto out_hooks;
> >
> >          cache_alloc_debugcheck_before(cachep, flags);
> >          local_irq_save(save_flags);
> > -        objp = __do_cache_alloc(cachep, flags);
>
> Looks like after this patch, slab_alloc() (without a node specified)
> will not end up in __do_cache_alloc() anymore, so there's no more
> possibility of alternate_node_alloc(), which looks like a functional
> regression?
>

Ah, that was not intended. Thank you for catching this!
Will fix in v2 (a rough sketch of the fix is at the end of this mail).

Thank you so much.

> > +
> > +        if (node_match(nodeid, slab_node)) {
> > +                /*
> > +                 * Use the locally cached objects if possible.
> > +                 * However ____cache_alloc does not allow fallback
> > +                 * to other nodes. It may fail while we still have
> > +                 * objects on other nodes available.
> > +                 */
> > +                ptr = ____cache_alloc(cachep, flags);
> > +                if (ptr)
> > +                        goto out;
> > +        }
> > +#ifdef CONFIG_NUMA
> > +        else if (unlikely(!get_node(cachep, nodeid))) {
> > +                /* Node not bootstrapped yet */
> > +                ptr = fallback_alloc(cachep, flags);
> > +                goto out;
> > +        }
> > +
> > +        /* ___cache_alloc_node can fall back to other nodes */
> > +        ptr = ____cache_alloc_node(cachep, flags, nodeid);
> > +#endif
> > +out:
> >          local_irq_restore(save_flags);
> > -        objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
> > -        prefetchw(objp);
> > +        ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
> > +        prefetchw(ptr);
> >          init = slab_want_init_on_alloc(flags, cachep);
> >
> > -out:
> > -        slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
> > -        return objp;
> > +out_hooks:
> > +        slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init);
> > +        return ptr;
> > +}
> > +
> > +static __always_inline void *
> > +slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller)
> > +{
> > +        return slab_alloc_node(cachep, flags, NUMA_NO_NODE, orig_size, caller);
> >  }
> >
> >  /*
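
Concretely, something like the following is what I have in mind for v2.
This is a completely untested sketch just to show the direction, not the
final patch: it keeps the node-handling in __do_cache_alloc() so that
NUMA_NO_NODE allocations can still reach alternate_node_alloc(). It ignores
the !CONFIG_NUMA variant for brevity; alternate_node_alloc(),
fallback_alloc(), ____cache_alloc(), ____cache_alloc_node(),
cpuset_do_slab_mem_spread() and numa_mem_id() are the existing mm/slab.c
helpers:

        /* Untested sketch: nodeid may be NUMA_NO_NODE here */
        static __always_inline void *
        __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid)
        {
                void *objp = NULL;
                int slab_node = numa_mem_id();

                if (nodeid == NUMA_NO_NODE) {
                        /* Keep mempolicy/cpuset spreading for node-unspecified allocs */
                        if (current->mempolicy || cpuset_do_slab_mem_spread()) {
                                objp = alternate_node_alloc(cachep, flags);
                                if (objp)
                                        goto out;
                        }
                        nodeid = slab_node;
                }

                if (nodeid == slab_node) {
                        /*
                         * Use the locally cached objects if possible.
                         * ____cache_alloc() does not fall back to other nodes,
                         * so it may fail while objects are still available there.
                         */
                        objp = ____cache_alloc(cachep, flags);
                } else if (unlikely(!get_node(cachep, nodeid))) {
                        /* Node not bootstrapped yet */
                        objp = fallback_alloc(cachep, flags);
                        goto out;
                }

                /* ____cache_alloc_node() can fall back to other nodes */
                if (!objp)
                        objp = ____cache_alloc_node(cachep, flags, nodeid);
        out:
                return objp;
        }

With this shape, slab_alloc() would pass NUMA_NO_NODE and still get the
alternate_node_alloc() behavior when a mempolicy or cpuset spreading is
active, while slab_alloc_node() keeps its explicit-node semantics.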
--
Thank you, You are awesome!
Hyeonggon :-)