From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 16 Jan 2023 11:59:53 +0000
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Rong Tao
Cc: cl@linux.com, sdf@google.com, yhs@fb.com, Rong Tao, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
	Roman Gushchin, "open list:SLAB ALLOCATOR", open list
Subject: Re: [PATCH] mm: Functions used internally should not be put into
	slub_def.h
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Mon, Jan 16, 2023 at 04:50:05PM +0800, Rong Tao wrote:
> From: Rong Tao
> 
> commit 40f3bf0cb04c ("mm: Convert struct page to struct slab in functions
> used by other subsystems") introduced 'slab_address()' and 'struct slab'
> in slab_def.h (CONFIG_SLAB) and slub_def.h (CONFIG_SLUB). When referencing
> either header file from a module or from BPF code, 'slab_address()' and
> 'struct slab' are not recognized, resulting in incomplete-type and
> undefined errors (see the bcc slabratetop.py error [0]).
> 

Hello Rong,

IMO sl*b_def.h is not intended to be used externally, and I'm not
sure this is worth a -stable release either.
IIUC, the reason slabratetop.py relies on sl*b_def.h is to read
cachep->cache and cachep->size. I think this can be solved if you
use a tool that supports BPF Type Format?

> Moving the function definitions that reference struct slab and
> slab_address(), i.e. nearest_obj(), obj_to_index(), and objs_per_slab(),
> to the internal header file slab.h solves this problem.
> 
> [0] https://github.com/iovisor/bcc/issues/4438
> 
> Signed-off-by: Rong Tao
> ---
>  include/linux/slab_def.h | 33 --------------------
>  include/linux/slub_def.h | 32 -------------------
>  mm/slab.h                | 66 ++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 66 insertions(+), 65 deletions(-)
> 
> diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
> index 5834bad8ad78..5658b5fddf9b 100644
> --- a/include/linux/slab_def.h
> +++ b/include/linux/slab_def.h
> @@ -88,37 +88,4 @@ struct kmem_cache {
>  	struct kmem_cache_node *node[MAX_NUMNODES];
>  };
>  
> -static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
> -				void *x)
> -{
> -	void *object = x - (x - slab->s_mem) % cache->size;
> -	void *last_object = slab->s_mem + (cache->num - 1) * cache->size;
> -
> -	if (unlikely(object > last_object))
> -		return last_object;
> -	else
> -		return object;
> -}
> -
> -/*
> - * We want to avoid an expensive divide : (offset / cache->size)
> - *   Using the fact that size is a constant for a particular cache,
> - *   we can replace (offset / cache->size) by
> - *   reciprocal_divide(offset, cache->reciprocal_buffer_size)
> - */
> -static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> -					const struct slab *slab, void *obj)
> -{
> -	u32 offset = (obj - slab->s_mem);
> -	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
> -}
> -
> -static inline int objs_per_slab(const struct kmem_cache *cache,
> -				const struct slab *slab)
> -{
> -	if (is_kfence_address(slab_address(slab)))
> -		return 1;
> -	return cache->num;
> -}
> -
>  #endif /* _LINUX_SLAB_DEF_H */
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index aa0ee1678d29..660fd6b2a748 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -163,36 +163,4 @@ static inline void sysfs_slab_release(struct kmem_cache *s)
>  
>  void *fixup_red_left(struct kmem_cache *s, void *p);
>  
> -static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
> -				void *x) {
> -	void *object = x - (x - slab_address(slab)) % cache->size;
> -	void *last_object = slab_address(slab) +
> -		(slab->objects - 1) * cache->size;
> -	void *result = (unlikely(object > last_object)) ? last_object : object;
> -
> -	result = fixup_red_left(cache, result);
> -	return result;
> -}
> -
> -/* Determine object index from a given position */
> -static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
> -					  void *addr, void *obj)
> -{
> -	return reciprocal_divide(kasan_reset_tag(obj) - addr,
> -				 cache->reciprocal_size);
> -}
> -
> -static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> -					const struct slab *slab, void *obj)
> -{
> -	if (is_kfence_address(obj))
> -		return 0;
> -	return __obj_to_index(cache, slab_address(slab), obj);
> -}
> -
> -static inline int objs_per_slab(const struct kmem_cache *cache,
> -				const struct slab *slab)
> -{
> -	return slab->objects;
> -}
>  #endif /* _LINUX_SLUB_DEF_H */
> diff --git a/mm/slab.h b/mm/slab.h
> index 7cc432969945..38350a0efa91 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -227,10 +227,76 @@ struct kmem_cache {
>  
>  #ifdef CONFIG_SLAB
>  #include <linux/slab_def.h>
> +
> +static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
> +				void *x)
> +{
> +	void *object = x - (x - slab->s_mem) % cache->size;
> +	void *last_object = slab->s_mem + (cache->num - 1) * cache->size;
> +
> +	if (unlikely(object > last_object))
> +		return last_object;
> +	else
> +		return object;
> +}
> +
> +/*
> + * We want to avoid an expensive divide : (offset / cache->size)
> + *   Using the fact that size is a constant for a particular cache,
> + *   we can replace (offset / cache->size) by
> + *   reciprocal_divide(offset, cache->reciprocal_buffer_size)
> + */
> +static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> +					const struct slab *slab, void *obj)
> +{
> +	u32 offset = (obj - slab->s_mem);
> +	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
> +}
> +
> +static inline int objs_per_slab(const struct kmem_cache *cache,
> +				const struct slab *slab)
> +{
> +	if (is_kfence_address(slab_address(slab)))
> +		return 1;
> +	return cache->num;
> +}
>  #endif
>  
>  #ifdef CONFIG_SLUB
>  #include <linux/slub_def.h>
> +
> +static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
> +				void *x) {
> +	void *object = x - (x - slab_address(slab)) % cache->size;
> +	void *last_object = slab_address(slab) +
> +		(slab->objects - 1) * cache->size;
> +	void *result = (unlikely(object > last_object)) ? last_object : object;
> +
> +	result = fixup_red_left(cache, result);
> +	return result;
> +}
> +
> +/* Determine object index from a given position */
> +static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
> +					  void *addr, void *obj)
> +{
> +	return reciprocal_divide(kasan_reset_tag(obj) - addr,
> +				 cache->reciprocal_size);
> +}
> +
> +static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> +					const struct slab *slab, void *obj)
> +{
> +	if (is_kfence_address(obj))
> +		return 0;
> +	return __obj_to_index(cache, slab_address(slab), obj);
> +}
> +
> +static inline int objs_per_slab(const struct kmem_cache *cache,
> +				const struct slab *slab)
> +{
> +	return slab->objects;
> +}
>  #endif
>  
>  #include 
> -- 
> 2.39.0
> 