From: Feng Tang <feng.tang@intel.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: David Rientjes <rientjes@google.com>,
Christoph Lameter <cl@linux.com>,
Hyeonggon Yoo <42.hyeyoo@gmail.com>,
Jay Patel <jaypatel@linux.ibm.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
Pekka Enberg <penberg@kernel.org>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"patches@lists.linux.dev" <patches@lists.linux.dev>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 3/4] mm/slub: attempt to find layouts up to 1/2 waste in calculate_order()
Date: Wed, 20 Sep 2023 21:11:40 +0800 [thread overview]
Message-ID: <ZQrvjFuSAO6DHhQ0@feng-clx> (raw)
In-Reply-To: <20230908145302.30320-9-vbabka@suse.cz>

On Fri, Sep 08, 2023 at 10:53:06PM +0800, Vlastimil Babka wrote:
> The main loop in calculate_order() currently tries to find an order with
> at most 1/4 waste. If that's impossible (for particular large object
> sizes), there's a fallback that will try to place one object within
> slab_max_order.
>
> If we expand the loop boundary to also allow up to 1/2 waste as the last
> resort, we can remove the fallback and simplify the code, as the loop
> will find an order for such sizes as well. Note we don't need to allow
> more than 1/2 waste as that will never happen - calc_slab_order() would
> calculate more objects to fit, reducing waste below 1/2.
>
> Successfully finding an order in the loop (compared to the fallback) will
> also have the benefit of trying to satisfy min_objects, because the
> fallback was passing 1. Thus the resulting slab orders might be larger
> (not because it would improve waste, but to reduce pressure on shared
> locks), which is one of the goals of calculate_order().
>
> For example, with nr_cpus=1 and 4kB PAGE_SIZE, slub_max_order=3, before
> the patch we would get the following orders for these object sizes:
>
> 2056 to 10920 - order-3 as selected by the loop
> 10928 to 12280 - order-2 due to fallback, as <1/4 waste is not possible
> 12288 to 32768 - order-3 as <1/4 waste is again possible
>
> After the patch:
>
> 2056 to 32768 - order-3, because even in the range of 10928 to 12280 we
> try to satisfy the calculated min_objects.
>
> As a result the code is simpler and gives more consistent results.

The current code already tries the fraction "1" in the following two fallback
calls to calc_slab_order(), so trying fraction "2" makes sense to me.
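
Just to double check the new behavior, I wrote a small userspace sketch of
the fraction loop (a simplified model with hypothetical helpers, not the real
calculate_order()/calc_slab_order(); it assumes 4kB PAGE_SIZE, slub_max_order=3
and min_objects=8 as in your example):

#include <stdio.h>

#define PAGE_SIZE       4096u
#define MAX_ORDER       3u      /* stands in for slub_max_order */

/* first order (capped at MAX_ORDER) whose leftover is within 1/fraction */
static unsigned int calc_order(unsigned int size, unsigned int min_objects,
                               unsigned int fraction)
{
        unsigned int order = 0;

        /* start from the smallest order that could hold min_objects */
        while (order < MAX_ORDER && (PAGE_SIZE << order) < min_objects * size)
                order++;

        for (; order <= MAX_ORDER; order++) {
                unsigned int slab_size = PAGE_SIZE << order;

                if (slab_size % size <= slab_size / fraction)
                        return order;
        }
        return MAX_ORDER + 1;   /* no acceptable order at this fraction */
}

int main(void)
{
        unsigned int size = 10928;      /* from the 10928..12280 range above */
        unsigned int fraction;

        for (fraction = 16; fraction > 1; fraction /= 2) {
                unsigned int order = calc_order(size, 8, fraction);

                printf("fraction 1/%-2u -> order %u%s\n", fraction, order,
                       order > MAX_ORDER ? " (too big, try next fraction)" : "");
        }
        return 0;
}

For a 10928-byte object only the last iteration (fraction 2) finds an
acceptable order: order-3 holds 2 objects with 10912 bytes of waste, just
under the 1/2 limit, which matches the order-3 result you list after the
patch.
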
Reviewed-by: Feng Tang <feng.tang@intel.com>

Thanks,
Feng

> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/slub.c | 14 ++++----------
> 1 file changed, 4 insertions(+), 10 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 5c287d96b212..f04eb029d85a 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4171,23 +4171,17 @@ static inline int calculate_order(unsigned int size)
> * the order can only result in same or less fractional waste, not more.
> *
> * If that fails, we increase the acceptable fraction of waste and try
> - * again.
> + * again. The last iteration with fraction of 1/2 would effectively
> + * accept any waste and give us the order determined by min_objects, as
> + * long as at least single object fits within slub_max_order.
> */
> - for (unsigned int fraction = 16; fraction >= 4; fraction /= 2) {
> + for (unsigned int fraction = 16; fraction > 1; fraction /= 2) {
> order = calc_slab_order(size, min_objects, slub_max_order,
> fraction);
> if (order <= slub_max_order)
> return order;
> }
>
> - /*
> - * We were unable to place multiple objects in a slab. Now
> - * lets see if we can place a single object there.
> - */
> - order = calc_slab_order(size, 1, slub_max_order, 1);
> - if (order <= slub_max_order)
> - return order;
> -
> /*
> * Doh this slab cannot be placed using slub_max_order.
> */
> --
> 2.42.0
>
>