From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: Minchan Kim <minchan@kernel.org>
Cc: Tejun Heo <tj@kernel.org>, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH] kernfs: release kernfs_mutex before the inode allocation
Date: Wed, 17 Nov 2021 07:44:44 +0100
Message-ID: <YZSk3DECnnknOu5T@kroah.com>
In-Reply-To: <YZQkQcrldGFwqV/r@google.com>
On Tue, Nov 16, 2021 at 01:36:01PM -0800, Minchan Kim wrote:
> On Tue, Nov 16, 2021 at 08:49:46PM +0100, Greg Kroah-Hartman wrote:
> > On Tue, Nov 16, 2021 at 11:43:17AM -0800, Minchan Kim wrote:
> > > The kernfs implementation has coarse lock granularity (kernfs_rwsem),
> > > so every kernfs-based filesystem (e.g., sysfs, cgroup, dmabuf)
> > > competes for the same lock. Thus, if one userspace task sleeps for
> > > a long time while holding the lock, all the others have to wait for
> > > it. One example is a holder entering direct reclaim while holding
> > > the lock, because it needs a memory allocation. Fix this with the
> > > common technique of releasing the lock and then allocating the
> > > memory. Fortunately, kernfs nodes appear to be refcounted, so I
> > > hope this is fine.
> > >
> > > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > > ---
> > > fs/kernfs/dir.c | 14 +++++++++++---
> > > fs/kernfs/inode.c | 2 +-
> > > fs/kernfs/kernfs-internal.h | 1 +
> > > 3 files changed, 13 insertions(+), 4 deletions(-)
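
For context, the fix follows the common kernel pattern of moving the
allocation outside the lock; a minimal sketch with hypothetical names
(example_lock, example_node), not the actual kernfs diff:

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>

static DEFINE_MUTEX(example_lock);
static LIST_HEAD(example_list);

struct example_node {
	struct list_head list;
};

static int example_add(void)
{
	struct example_node *node;

	/*
	 * Allocate before taking the lock: a GFP_KERNEL allocation may
	 * enter direct reclaim and sleep for a long time, and that must
	 * not happen while every other example_lock user is blocked.
	 */
	node = kzalloc(sizeof(*node), GFP_KERNEL);
	if (!node)
		return -ENOMEM;

	mutex_lock(&example_lock);
	/* Re-validate any state sampled before the lock was taken. */
	list_add_tail(&node->list, &example_list);
	mutex_unlock(&example_lock);
	return 0;
}

The refcounting noted in the changelog is what makes this reordering
safe in kernfs itself: once the lock is dropped mid-operation, the node
being worked on must be pinned by a reference so it cannot go away
while the allocation runs.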
> >
> > What workload hits this lock to cause it to be noticeable?
>
> An app launch: frames were being dropped because the latency was
> too long.
How does running a program interact with kernfs filesystems? Which
one(s)?
> > There was a bunch of recent work in this area to make this much more
> > fine-grained, and the theoretical benchmarks that people created
> > (adding 10s of thousands of scsi disks at boot time) have gotten
> > better.
> >
> > But in that work, no one could find a real benchmark or use case
> > where anyone could even notice this type of thing. What do you have
> > that shows this?
>
> https://developer.android.com/studio/command-line/perfetto
> https://perfetto.dev/docs/data-sources/cpu-scheduling
Those are links to a tool, not a test we can run ourselves.
Or how about the output of that tool?
> Android has the perfetto tracing system, which can show where
> processes were stuck. In this case it was this lock, since the
> holder was in the direct reclaim path.
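
For reference, a CPU-scheduling trace of the kind described can be
captured with a perfetto config along these lines (a sketch based on
the linked docs, not the exact config used in this report):

# trace_config.pbtx: record context switches plus the kernel
# function each blocked task was sleeping in.
buffers: { size_kb: 65536 }
duration_ms: 10000
data_sources: {
  config {
    name: "linux.ftrace"
    ftrace_config {
      ftrace_events: "sched/sched_switch"
      ftrace_events: "sched/sched_blocked_reason"
    }
  }
}

Run on the device with:
  perfetto --txt -c trace_config.pbtx -o /data/misc/perfetto-traces/trace.pftrace
The sched_blocked_reason events are what reveal a task sitting in the
direct reclaim path while other tasks wait on the lock it holds.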
Reclaim of what? What is the interaction here with kernfs? Normally
this filesystem is not on any "fast paths" that I know of.
More specifics would be nice :)
thanks,
greg k-h
Thread overview: 12+ messages
2021-11-16 19:43 [RFC PATCH] kernfs: release kernfs_mutex before the inode allocation Minchan Kim
2021-11-16 19:49 ` Greg Kroah-Hartman
2021-11-16 21:36 ` Minchan Kim
2021-11-17 6:44 ` Greg Kroah-Hartman [this message]
2021-11-17 7:27 ` Minchan Kim
2021-11-17 7:39 ` Greg Kroah-Hartman
2021-11-17 21:43 ` Minchan Kim
2021-11-17 21:45 ` Tejun Heo
2021-11-17 22:13 ` Minchan Kim
2021-11-17 22:23 ` Tejun Heo
2021-11-18 1:55 ` Minchan Kim
2021-11-18 16:35 ` Tejun Heo