From: "Eric W. Biederman" <ebiederm@xmission.com>
To: Mateusz Guzik <mjguzik@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
Kees Cook <kees@kernel.org>,
Josh Triplett <josh@joshtriplett.org>,
Alexander Viro <viro@zeniv.linux.org.uk>,
linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] fs/exec.c: Add fast path for ENOENT on PATH search before allocating mm
Date: Thu, 09 Nov 2023 23:26:23 -0600 [thread overview]
Message-ID: <87a5rmw54w.fsf@email.froward.int.ebiederm.org> (raw)
In-Reply-To: <CAGudoHHb8Fh5UxgMa-hw3Kj=wjMqpdZq2J6869fBgsKXcZOeHA@mail.gmail.com> (Mateusz Guzik's message of "Thu, 9 Nov 2023 13:21:04 +0100")
Mateusz Guzik <mjguzik@gmail.com> writes:
> On 11/9/23, Eric W. Biederman <ebiederm@xmission.com> wrote:
>> Mateusz Guzik <mjguzik@gmail.com> writes:
>>> sched_exec causes migration for only a few % of execs in the bench,
>>> but when it does happen there is a ton of overhead elsewhere.
>>>
>>> I expect real programs which get past execve will be prone to
>>> migrating anyway, regardless of what sched_exec is doing.
>>>
>>> That is to say, while sched_exec buggering off here would be nice, I
>>> think for real-world wins the thing to investigate is the overhead
>>> which comes from migration to begin with.
>>
>> I have a vague memory that the idea is that there is a point during exec
>> when it should be much less expensive than normal to allow migration
>> between cpus because all of the old state has gone away.
>>
>> Assuming that is the rationale, if we are getting lock contention
>> then either there is a global lock in there, or there is the potential
>> to pick a less expensive location within exec.
>>
>
> Given the commit below I think the term "migration cost" is overloaded here.
>
> By migration cost in my previous mail I meant the immediate cost
> (stop_one_cpu and so on), but also the aftermath -- for example tlb
> flushes on another CPU when tearing down your now-defunct mm after you
> switched.
>
> For testing purposes I verified that commenting out sched_exec and not
> using taskset still gives me about 9.5k ops/s.
>
> I 100% agree that, should the task be moved between NUMA domains, it
> makes sense to do it when its footprint is smallest. I don't know what
> the original patch did; the current code just picks a CPU and migrates
> to it, regardless of NUMA considerations. I will note that the goal
> would still be achieved by comparing domains and doing nothing if they
> match.
>
> I think this would be nice to fix, but it is definitely not a big
> deal. I guess the question is to Peter Zijlstra if this sounds
> reasonable.
Perhaps I misread the trace. My point was simply that sched_exec seemed
to be causing lock contention because what was on one cpu is now on
another cpu, and we are now getting cross-cpu lock ping-pongs.

If sched_exec is causing exec to create cross-cpu lock ping-pongs, then
we can move sched_exec to a better place within exec. That has already
happened once, shortly after it was introduced.

Ultimately we want sched_exec in the cheapest place within exec that we
can find.
Eric
Thread overview: 23+ messages
2022-09-16 13:41 [PATCH] fs/exec.c: Add fast path for ENOENT on PATH search before allocating mm Josh Triplett
2022-09-16 14:38 ` Kees Cook
2022-09-16 20:13 ` Josh Triplett
2022-09-17 0:11 ` Kees Cook
2022-09-17 0:50 ` Josh Triplett
2022-09-19 20:02 ` Kees Cook
2022-10-01 16:01 ` Josh Triplett
2022-09-19 14:34 ` Peter Zijlstra
2022-09-22 7:27 ` [fs/exec.c] 0a276ae2d2: BUG:workqueue_lockup-pool kernel test robot
2023-11-07 20:30 ` [PATCH] fs/exec.c: Add fast path for ENOENT on PATH search before allocating mm Kees Cook
2023-11-07 20:51 ` Mateusz Guzik
2023-11-07 21:23 ` Mateusz Guzik
2023-11-07 22:50 ` Kees Cook
2023-11-07 23:08 ` Mateusz Guzik
2023-11-07 23:39 ` Kees Cook
2023-11-08 0:03 ` Mateusz Guzik
2023-11-08 19:25 ` Kees Cook
2023-11-08 19:31 ` Kees Cook
2023-11-08 19:35 ` Mateusz Guzik
2023-11-09 0:17 ` Eric W. Biederman
2023-11-09 12:21 ` Mateusz Guzik
2023-11-10 5:26 ` Eric W. Biederman [this message]
2023-11-07 20:37 ` Kees Cook