Date: Thu, 24 Mar 2022 09:59:43 +0000
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Marco Elver, Matthew WilCox,
	Roman Gushchin, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v1 15/15] mm/sl[au]b: check if large object is valid in __ksize()
Message-ID: <20220324095943.GB2108184@odroid>
References: <20220308114142.1744229-1-42.hyeyoo@gmail.com> <20220308114142.1744229-16-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-16-42.hyeyoo@gmail.com>
Vlastimil wrote:
> On 3/8/22 12:41, Hyeonggon Yoo wrote:
> > +
> >  static __always_inline void *
> > -slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller)
> > +slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
> > +		unsigned long caller)
> >  {
> >  	unsigned long save_flags;
> > -	void *objp;
> > +	void *ptr;
> > +	int slab_node = numa_mem_id();
> >  	struct obj_cgroup *objcg = NULL;
> >  	bool init = false;
> > 
> > @@ -3299,21 +3255,49 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned lo
> >  	if (unlikely(!cachep))
> >  		return NULL;
> > 
> > -	objp = kfence_alloc(cachep, orig_size, flags);
> > -	if (unlikely(objp))
> > -		goto out;
> > +	ptr = kfence_alloc(cachep, orig_size, flags);
> > +	if (unlikely(ptr))
> > +		goto out_hooks;
> > 
> >  	cache_alloc_debugcheck_before(cachep, flags);
> >  	local_irq_save(save_flags);
> > -	objp = __do_cache_alloc(cachep, flags);
> 
> Looks like after this patch, slab_alloc() (without a node specified)
> will not end up in __do_cache_alloc() anymore, so there's no more
> possibility of alternate_node_alloc(), which looks like a functional
> regression?
> 

Ah, that was not intended. Thank you for catching this!
Will fix in v2. Thank you so much.

> > +
> > +	if (node_match(nodeid, slab_node)) {
> > +		/*
> > +		 * Use the locally cached objects if possible.
> > +		 * However ____cache_alloc does not allow fallback
> > +		 * to other nodes. It may fail while we still have
> > +		 * objects on other nodes available.
> > +		 */
> > +		ptr = ____cache_alloc(cachep, flags);
> > +		if (ptr)
> > +			goto out;
> > +	}
> > +#ifdef CONFIG_NUMA
> > +	else if (unlikely(!get_node(cachep, nodeid))) {
> > +		/* Node not bootstrapped yet */
> > +		ptr = fallback_alloc(cachep, flags);
> > +		goto out;
> > +	}
> > +
> > +	/* ___cache_alloc_node can fall back to other nodes */
> > +	ptr = ____cache_alloc_node(cachep, flags, nodeid);
> > +#endif
> > +out:
> >  	local_irq_restore(save_flags);
> > -	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
> > -	prefetchw(objp);
> > +	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
> > +	prefetchw(ptr);
> >  	init = slab_want_init_on_alloc(flags, cachep);
> > 
> > -out:
> > -	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
> > -	return objp;
> > +out_hooks:
> > +	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init);
> > +	return ptr;
> > +}
> > +
> > +static __always_inline void *
> > +slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller)
> > +{
> > +	return slab_alloc_node(cachep, flags, NUMA_NO_NODE, orig_size, caller);
> > +}
> > 
> >  /*

-- 
Thank you, You are awesome!
Hyeonggon :-)