From: Ming Lei <ming.lei@redhat.com>
To: Yury Norov <yury.norov@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Thomas Gleixner <tglx@linutronix.de>,
linux-kernel@vger.kernel.org,
Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
Rasmus Villemoes <linux@rasmusvillemoes.dk>
Subject: Re: [PATCH v3 5/7] lib/group_cpus: don't zero cpumasks in group_cpus_evenly() on allocation
Date: Wed, 13 Dec 2023 08:56:18 +0800 [thread overview]
Message-ID: <ZXkBMnWQK3az30iF@fedora> (raw)
In-Reply-To: <20231212042108.682072-6-yury.norov@gmail.com>
On Mon, Dec 11, 2023 at 08:21:05PM -0800, Yury Norov wrote:
> nmsk and npresmsk are both allocated with zalloc_cpumask_var(), but they
> are initialized by copying later in the code, and so can be allocated
> uninitialized.
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> ---
> lib/group_cpus.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/lib/group_cpus.c b/lib/group_cpus.c
> index cded3c8ea63b..c7fcd04c87bf 100644
> --- a/lib/group_cpus.c
> +++ b/lib/group_cpus.c
> @@ -347,10 +347,10 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
> int ret = -ENOMEM;
> struct cpumask *masks = NULL;
>
> - if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
> + if (!alloc_cpumask_var(&nmsk, GFP_KERNEL))
> return NULL;
`nmsk` is actually used only by __group_cpus_evenly(), so it should be
a local variable there. Can you move its allocation into
__group_cpus_evenly()? A rough sketch of what I mean is below.
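Something along these lines, purely as an untested sketch -- I'm
assuming the current prototype with the nmsk parameter dropped, and
eliding the existing spreading logic:

static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
			       cpumask_var_t *node_to_cpumask,
			       const struct cpumask *cpu_mask,
			       struct cpumask *masks)
{
	cpumask_var_t nmsk;
	int ret = 0;

	/*
	 * nmsk is pure scratch space for this function, so allocate it
	 * here instead of in group_cpus_evenly(); it is fully written
	 * before its first read, hence no zeroing is needed either.
	 */
	if (!alloc_cpumask_var(&nmsk, GFP_KERNEL))
		return -ENOMEM;

	/* ... existing per-node spreading logic, using nmsk as before ... */

	free_cpumask_var(nmsk);
	return ret;
}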
>
> - if (!zalloc_cpumask_var(&npresmsk, GFP_KERNEL))
> + if (!alloc_cpumask_var(&npresmsk, GFP_KERNEL))
> goto fail_nmsk;
The above one looks fine, especially since `npresmsk` is initialized
explicitly in group_cpus_evenly() before it is read.
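For reference, my reading of the relevant flow in group_cpus_evenly(),
paraphrased with error handling elided, so take the exact lines with a
grain of salt:

	/*
	 * Snapshot cpu_present_mask so both spread stages observe a
	 * consistent view without holding the CPU hotplug lock.
	 */
	cpumask_copy(npresmsk, data_race(cpu_present_mask));

	/* stage 1: spread groups across the present CPUs */
	ret = __group_cpus_evenly(curgrp, numgrps, node_to_cpumask,
				  npresmsk, nmsk, masks);

	/* ... accounting for the first stage elided ... */

	/* stage 2: cover the possible-but-not-present CPUs */
	cpumask_andnot(npresmsk, cpu_possible_mask, npresmsk);

Every bit of npresmsk is written by cpumask_copy() before it is read,
so the zeroing done by zalloc_cpumask_var() is indeed redundant here.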
Thanks,
Ming
Thread overview: 21+ messages
2023-12-12 4:21 [PATCH v3 0/7] lib/group_cpus: rework grp_spread_init_one() and make it O(1) Yury Norov
2023-12-12 4:21 ` [PATCH v3 1/7] cpumask: introduce for_each_cpu_and_from() Yury Norov
2023-12-12 4:21 ` [PATCH v3 2/7] lib/group_cpus: optimize inner loop in grp_spread_init_one() Yury Norov
2023-12-12 9:46 ` Ming Lei
2023-12-12 17:04 ` Yury Norov
2023-12-13 0:06 ` Ming Lei
2023-12-25 17:38 ` Yury Norov
2023-12-12 4:21 ` [PATCH v3 3/7] lib/group_cpus: relax atomicity requirement " Yury Norov
2023-12-12 9:50 ` Ming Lei
2023-12-12 16:52 ` Yury Norov
2023-12-13 0:14 ` Ming Lei
2023-12-13 17:03 ` Yury Norov
2023-12-14 0:43 ` Ming Lei
2023-12-12 4:21 ` [PATCH v3 4/7] lib/group_cpus: optimize outer loop " Yury Norov
2023-12-12 4:21 ` [PATCH v3 5/7] lib/group_cpus: don't zero cpumasks in group_cpus_evenly() on allocation Yury Norov
2023-12-13 0:56 ` Ming Lei [this message]
2023-12-12 4:21 ` [PATCH v3 6/7] lib/group_cpus: drop unneeded cpumask_empty() call in __group_cpus_evenly() Yury Norov
2023-12-13 0:59 ` Ming Lei
2023-12-12 4:21 ` [PATCH v3 7/7] lib/group_cpus: simplify grp_spread_init_one() for more Yury Norov
2023-12-13 1:06 ` Ming Lei
2023-12-25 18:03 ` Yury Norov