From: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
To: Ian Rogers <irogers@google.com>
Cc: maddy@linux.vnet.ibm.com,
Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
Nageswara Sastry <rnsastry@linux.ibm.com>,
kjain@linux.ibm.com,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
linux-perf-users@vger.kernel.org, jolsa@kernel.org,
disgoel@linux.vnet.ibm.com, linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH v2 0/4] Fix perf bench numa, futex and epoll to work with machines having #CPUs > 1K
Date: Thu, 7 Apr 2022 10:01:07 +0530 [thread overview]
Message-ID: <CF7DB310-9E7C-4084-9A7C-317D4D4004EF@linux.vnet.ibm.com> (raw)
In-Reply-To: <CAP-5=fXRphB0gU6CxAuj9Fy40sbwub23RbLLo=5LEY=-_D=3+g@mail.gmail.com>
> On 07-Apr-2022, at 6:05 AM, Ian Rogers <irogers@google.com> wrote:
>
> On Wed, Apr 6, 2022 at 10:51 AM Athira Rajeev
> <atrajeev@linux.vnet.ibm.com> wrote:
>>
>> The perf benchmark collections numa, futex and epoll
>> fail on system configurations with more than 1024 CPUs.
>> These benchmarks use "sched_getaffinity" and "sched_setaffinity"
>> to work with CPU affinity.
>>
>> Example snippet from numa benchmark:
>> <<>>
>> perf: bench/numa.c:302: bind_to_node: Assertion `!(ret)' failed.
>> Aborted (core dumped)
>> <<>>
>>
>> The bind_to_node function uses "sched_getaffinity" to save the cpumask.
>> This fails with EINVAL because the default mask size in glibc is 1024 bits.
>>
>> Similarly, the futex and epoll benchmarks use sched_setaffinity during
>> pthread_create to set thread affinity. Since this returns EINVAL on such
>> system configurations, the benchmarks do not run.
>>
>> To overcome this 1024-CPU mask size limitation of cpu_set_t,
>> change the mask size using the CPU_*_S macros: use CPU_ALLOC to
>> allocate the cpumask, CPU_ALLOC_SIZE to get its size, and CPU_SET_S
>> to set mask bits.
>>
>> Fix all the relevant places in the code to use a mask size large
>> enough to represent the number of possible CPUs in the system.
>>
>> Fix the parse_setup_cpu_list function in the numa benchmark to check
>> whether the input CPU is online before binding a task to it. This fixes
>> failures where a CPU number is within the maximum CPU count but the CPU
>> itself is offline: sched_setaffinity fails when the cpumask has that
>> CPU's bit set.
>>
>> Patches 1 and 2 fix the perf bench futex and perf bench epoll
>> benchmarks. Patches 3 and 4 fix the perf bench numa benchmark.
>>
>> Athira Rajeev (4):
>> tools/perf: Fix perf bench futex to correct usage of affinity for
>> machines with #CPUs > 1K
>> tools/perf: Fix perf bench epoll to correct usage of affinity for
>> machines with #CPUs > 1K
>> tools/perf: Fix perf numa bench to fix usage of affinity for machines
>> with #CPUs > 1K
>> tools/perf: Fix perf bench numa testcase to check if CPU used to bind
>> task is online
>>
>> Changelog:
>> From v1 -> v2:
>> Addressed review comment from Ian Rogers to do
>> CPU_FREE in a cleaner way.
>> Added Tested-by from Disha Goel
>
>
> The whole set:
> Acked-by: Ian Rogers <irogers@google.com>
Thanks for checking, Ian.
Athira.
>
> Thanks,
> Ian
>
>> tools/perf/bench/epoll-ctl.c | 25 ++++--
>> tools/perf/bench/epoll-wait.c | 25 ++++--
>> tools/perf/bench/futex-hash.c | 26 ++++--
>> tools/perf/bench/futex-lock-pi.c | 21 +++--
>> tools/perf/bench/futex-requeue.c | 21 +++--
>> tools/perf/bench/futex-wake-parallel.c | 21 +++--
>> tools/perf/bench/futex-wake.c | 22 ++++--
>> tools/perf/bench/numa.c | 105 ++++++++++++++++++-------
>> tools/perf/util/header.c | 43 ++++++++++
>> tools/perf/util/header.h | 1 +
>> 10 files changed, 242 insertions(+), 68 deletions(-)
>>
>> --
>> 2.35.1
Thread overview: 16+ messages
2022-04-06 17:51 [PATCH v2 0/4] Fix perf bench numa, futex and epoll to work with machines having #CPUs > 1K Athira Rajeev
2022-04-06 17:51 ` [PATCH v2 1/4] tools/perf: Fix perf bench futex to correct usage of affinity for machines with " Athira Rajeev
2022-04-08 12:27 ` Srikar Dronamraju
2022-04-06 17:51 ` [PATCH v2 2/4] tools/perf: Fix perf bench epoll " Athira Rajeev
2022-04-06 17:51 ` [PATCH v2 3/4] tools/perf: Fix perf numa bench to fix " Athira Rajeev
2022-04-08 12:28 ` Srikar Dronamraju
2022-04-06 17:51 ` [PATCH v2 4/4] tools/perf: Fix perf bench numa testcase to check if CPU used to bind task is online Athira Rajeev
2022-04-08 12:26 ` Srikar Dronamraju
2022-04-09 6:29 ` Athira Rajeev
2022-04-09 15:20 ` Arnaldo Carvalho de Melo
2022-04-11 13:54 ` Athira Rajeev
2022-04-07 0:35 ` [PATCH v2 0/4] Fix perf bench numa, futex and epoll to work with machines having #CPUs > 1K Ian Rogers
2022-04-07 4:31 ` Athira Rajeev [this message]
2022-04-09 15:28 ` Arnaldo Carvalho de Melo
2022-04-09 17:18 ` Arnaldo Carvalho de Melo
2022-04-12 16:33 ` Athira Rajeev