public inbox for linux-nfs@vger.kernel.org
From: Peter Staubach <staubach@redhat.com>
To: Trond Myklebust <trond.myklebust@fys.uio.no>
Cc: chucklever@gmail.com, Andrew Bell <andrew.bell.ia@gmail.com>,
	linux-nfs@vger.kernel.org
Subject: Re: Performance Diagnosis
Date: Tue, 15 Jul 2008 15:55:53 -0400	[thread overview]
Message-ID: <487D00C9.1010305@redhat.com> (raw)
In-Reply-To: <1216150552.7981.48.camel@localhost>

Trond Myklebust wrote:
> On Tue, 2008-07-15 at 15:21 -0400, Peter Staubach wrote:
>   
>> The connection manager would seem to be a RPC level thing, although
>> I haven't thought through the ramifications of the NFSv4.1 stuff
>> and how it might impact a connection manager sufficiently.
>>     
>
> We already have the scheme that shuts down connections on inactive RPC
> clients after a suitable timeout period, so the only gains I can see
> would have to involve shutting down connections on active clients.
>
> At that point, the danger isn't with NFSv4.1, it is rather with
> NFSv2/3/4.0... Specifically, their lack of good replay cache semantics
> mean that you have to be very careful about schemes that involve
> shutting down connections on active RPC clients.

It seems to me that as long as we don't shut down a connection
which is actively being used for an outstanding request, then
we shouldn't have any larger problems with the duplicate caches
on servers than we do now.

We can do this easily enough by reference counting the connection
state and then only closing connections which are not being
referenced.

I definitely agree: shutting down a connection which is being
used is just inviting trouble.

A gain would be that we could reduce the numbers of connections on
active clients if we could disassociate a connection from a
particular mounted file system.  As long as we can achieve maximum
network bandwidth through a single connection, then we don't need
more than one connection per server.

We could handle the case where the client was talking to more
servers than it had connection space for by forcibly, but safely,
closing connections to servers and then using the space for a
new connection to another server.  We could do this in the
connection manager by checking to see whether there was an
available connection which was not marked as in the process of
being closed.  If so, then it just enters the fray as needing a
connection and works like all of the others.

The algorithm could look something like:

top:
    Look for a connection to the right server which is not marked
        as being closed.
    If one was found, then increment its reference count and
       return it.
    Attempt to create a new connection.
    If this works, then increment its reference count and
       return it.
    Find a connection to be closed, either one not being currently
       used or via some heuristic like round-robin.
    If this connection is not actively being used, then close it
       and go to top.
    Mark the connection as being closed, wait until it is closed,
       and then go to top.

I know that this is rough and there are several races that I
glossed over, but hopefully, this will outline the general bones
of a solution.

When the system has to recycle connections, it may slow down,
but at least it will work rather than having things just fail.

    Thanx...

       ps


Thread overview: 16+ messages
2008-07-15 15:34 Performance Diagnosis Andrew Bell
2008-07-15 15:49   ` Chuck Lever
2008-07-15 15:58   ` Peter Staubach
2008-07-15 16:23     ` Chuck Lever
2008-07-15 16:34         ` Andrew Bell
2008-07-15 17:20             ` Chuck Lever
2008-07-15 17:44         ` Peter Staubach
2008-07-15 18:17           ` Chuck Lever
2008-07-15 18:51               ` Trond Myklebust
2008-07-15 19:21                 ` Peter Staubach
2008-07-15 19:35                   ` Trond Myklebust
2008-07-15 19:55                     ` Peter Staubach [this message]
2008-07-15 20:27                       ` Trond Myklebust
2008-07-15 20:48                         ` Peter Staubach
2008-07-15 21:15                       ` Talpey, Thomas
2008-07-16  7:35                     ` Benny Halevy
