From: Junxiao Bi <junxiao.bi@oracle.com>
To: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Matthew Wilcox <willy@infradead.org>,
linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
Matthew Wilcox <matthew.wilcox@oracle.com>,
Srinivas Eeda <SRINIVAS.EEDA@oracle.com>,
"joe.jin@oracle.com" <joe.jin@oracle.com>,
Wengang Wang <wen.gang.wang@oracle.com>
Subject: Re: [PATCH] proc: Avoid a thundering herd of threads freeing proc dentries
Date: Fri, 19 Jun 2020 14:56:58 -0700
Message-ID: <68a1f51b-50bf-0770-2367-c3e1b38be535@oracle.com>
In-Reply-To: <87k1036k9y.fsf@x220.int.ebiederm.org>

On 6/19/20 10:24 AM, ebiederm@xmission.com wrote:
> Junxiao Bi <junxiao.bi@oracle.com> writes:
>
>> Hi Eric,
>>
>> The patch didn't improve lock contention.
> Which raises the question: where is the lock contention coming from?
>
> Especially with my first variant. Only the last thread to be reaped
> would free up anything in the cache.
>
> Can you comment out the call to proc_flush_pid entirely?
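(For reference, the change being asked for amounts to roughly the following,
against a v5.7-era kernel/exit.c. This is a sketch; the surrounding context
lines may differ in other trees:

--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ void release_task(struct task_struct *p)
 	__exit_signal(p);
 	write_unlock_irq(&tasklist_lock);
-	proc_flush_pid(thread_pid);
+	/* proc_flush_pid(thread_pid); */	/* commented out for this test */
 	put_pid(thread_pid);
 	release_thread(p);
)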
Still high lock contention. Collected the following hot path:
  74.90%  0.01%  proc_race  [kernel.kallsyms]  [k] entry_SYSCALL_64_after_hwframe
          |
           --74.89%--entry_SYSCALL_64_after_hwframe
                     |
                      --74.88%--do_syscall_64
                                |
                                |--69.70%--exit_to_usermode_loop
                                |          |
                                |           --69.70%--do_signal
                                |                     |
                                |                      --69.69%--get_signal
                                |                                |
                                |                                |--56.30%--do_group_exit
                                |                                |          |
                                |                                |           --56.30%--do_exit
                                |                                |                     |
                                |                                |                     |--27.50%--_raw_write_lock_irq
                                |                                |                     |          |
                                |                                |                     |           --27.47%--queued_write_lock_slowpath
                                |                                |                     |                     |
                                |                                |                     |                      --27.18%--native_queued_spin_lock_slowpath
                                |                                |                     |
                                |                                |                     |--26.10%--release_task.part.20
                                |                                |                     |          |
                                |                                |                     |           --25.60%--_raw_write_lock_irq
                                |                                |                     |                     |
                                |                                |                     |                      --25.56%--queued_write_lock_slowpath
                                |                                |                     |                                |
                                |                                |                     |                                 --25.23%--native_queued_spin_lock_slowpath
                                |                                |                     |
                                |                                |                      --0.56%--mmput
                                |                                |                                |
                                |                                |                                 --0.55%--exit_mmap
                                |                                |
                                |                                 --13.31%--_raw_spin_lock_irq
                                |                                           |
                                |                                            --13.28%--native_queued_spin_lock_slowpath
                                |
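(For orientation: the write lock contended under do_exit()/release_task()
here is tasklist_lock, and the spin lock under get_signal() is
sighand->siglock. Abbreviated v5.7-era excerpts, illustrative only:

	/* kernel/exit.c, exit_notify(), called from do_exit() --
	   the 27.50% _raw_write_lock_irq above */
	write_lock_irq(&tasklist_lock);

	/* kernel/exit.c, release_task() --
	   the 25.60% _raw_write_lock_irq above */
	write_lock_irq(&tasklist_lock);
	ptrace_release_task(p);
	thread_pid = get_pid(p->thread_pid);
	__exit_signal(p);
	...
	write_unlock_irq(&tasklist_lock);

	/* kernel/signal.c, get_signal() --
	   the 13.31% _raw_spin_lock_irq above */
	spin_lock_irq(&sighand->siglock);
)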
Thanks,
Junxiao.
>
> That will rule out the d_invalidate in proc_flush_pid entirely.
>
> The only candidate I can think of is d_invalidate (aka proc_flush_pid) vs ps.
>
> Eric