From: Jeff Layton <jlayton@redhat.com>
To: "J. Bruce Fields" <bfields@fieldses.org>
Cc: linux-nfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: listing knfsd-held locks and opens
Date: Mon, 10 Dec 2018 18:35:02 -0500
Message-ID: <ff22a1181606615ae9eb686f39ee5eded341c35b.camel@redhat.com>
In-Reply-To: <20181210195347.GA4392@fieldses.org>

On Mon, 2018-12-10 at 14:53 -0500, J. Bruce Fields wrote:
> On Mon, Dec 10, 2018 at 02:23:10PM -0500, J. Bruce Fields wrote:
> > On Mon, Dec 10, 2018 at 01:12:31PM -0500, Jeff Layton wrote:
> > > On Mon, 2018-12-10 at 12:47 -0500, J. Bruce Fields wrote:
> > > > We've got a long-standing complaint that tools like lsof, when run on an
> > > > NFS server, overlook opens and locks held by NFS clients.
> > > > 
> > > > The information's all there; it's just a question of how to expose it.
> > > > 
> > > > Easiest might be a single flat file like /proc/locks, but I've always
> > > > hoped we could do something slightly more structured, using a
> > > > subdirectory per NFS client.
> > > > 
> > > > Jeff Layton looked into this several years ago.  I don't remember if
> > > > there was some particular issue or if he just got bogged down in VFS
> > > > details.
> > > > 
> > > 
> > > I think I had a patch that generated a single flat file for locks, but
> > > you wanted to present a directory or file per-client, and I just never
> > > got around to reworking the earlier patch.
> > 
> > Oh, OK, makes sense.
> 
> (But, um, if anyone has a good starting point to recommend to me here,
> I'm interested.  E.g. another pseudofs that's a good example to follow.)
> 

I looked for the branch, but I can't find it now. It may be possible to
find my original posting of it on the mailing list, but it has been
years. I'm pretty sure it'd be badly bitrotted by now anyway.

Where do you intend for this to live? Do you plan to build a new
hierarchy under /proc/fs/nfsd, or use something like sysfs or debugfs?
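
For the flat-file variant, something in the spirit of /proc/locks built
on seq_file seems like the obvious shape. Here's a rough sketch; to be
clear, nfsd_foreach_client_lock(), the "fs/nfsd-locks" name, and the
column format are all made up for illustration, not anything that
exists today:

/*
 * Sketch only: a flat, read-only file listing nfsd-held locks,
 * modeled on /proc/locks.  The walk over nfsd's client and lock
 * state is elided; nfsd_foreach_client_lock() is a stand-in name.
 */
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static int nfsd_locks_show(struct seq_file *m, void *v)
{
	seq_puts(m, "# clientid  type  dev:ino  start  end\n");
	/* nfsd_foreach_client_lock(m);  -- hypothetical state walk */
	return 0;
}

static struct proc_dir_entry *nfsd_locks_pde;

static int __init nfsd_locks_init(void)
{
	/* "fs/nfsd-locks" lands under the existing /proc/fs directory */
	nfsd_locks_pde = proc_create_single("fs/nfsd-locks", 0444, NULL,
					    nfsd_locks_show);
	return nfsd_locks_pde ? 0 : -ENOMEM;
}

static void __exit nfsd_locks_exit(void)
{
	proc_remove(nfsd_locks_pde);
}

module_init(nfsd_locks_init);
module_exit(nfsd_locks_exit);
MODULE_LICENSE("GPL");

A directory-per-client layout is where it gets hairier, since entries
have to be created and torn down as clients come and go, more like what
rpc_pipefs does with its per-client directories. That's probably where
the VFS details start to bite.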

> I also had some idea that we might eventually benefit from some
> two-way communication.  But the only idea I had there was some sort of
> "destroy this client now" operation, which is probably less important
> for NFSv4 state, since it gets cleaned up automatically on lease expiry.
> 

Per-client cancellation sounds like a nice feature. The fault-injection
code had some (less granular) machinery for killing off live clients; it
may be worth going over that.
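
For what it's worth, the write side of a "destroy this client"
operation could be pretty small. A hedged sketch, where
nfsd_expire_client_by_id() is a made-up stand-in for whatever actually
tears the client's state down:

#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/uaccess.h>

/* Handle "echo <clientid> > ctl": parse the id, expire that client. */
static ssize_t client_destroy_write(struct file *file,
				    const char __user *buf,
				    size_t len, loff_t *ppos)
{
	char kbuf[32];
	u64 clid;

	if (len == 0 || len >= sizeof(kbuf))
		return -EINVAL;
	if (copy_from_user(kbuf, buf, len))
		return -EFAULT;
	kbuf[len] = '\0';
	if (kstrtou64(strim(kbuf), 0, &clid))
		return -EINVAL;

	/* nfsd_expire_client_by_id(clid);  -- hypothetical teardown */
	return len;
}

Whether that parses a clientid out of the buffer or lives as a
per-client file under a clients/ directory mostly falls out of
whichever layout the read side ends up with.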

-- 
Jeff Layton <jlayton@redhat.com>
