public inbox for linux-nfs@vger.kernel.org
From: Peter Staubach <staubach@redhat.com>
To: Trond Myklebust <trond.myklebust@fys.uio.no>
Cc: chucklever@gmail.com, Andrew Bell <andrew.bell.ia@gmail.com>,
	linux-nfs@vger.kernel.org
Subject: Re: Performance Diagnosis
Date: Tue, 15 Jul 2008 16:48:48 -0400	[thread overview]
Message-ID: <487D0D30.6030200@redhat.com> (raw)
In-Reply-To: <1216153651.7981.57.camel@localhost>

Trond Myklebust wrote:
> On Tue, 2008-07-15 at 15:55 -0400, Peter Staubach wrote:
>   
>> It seems to me that as long as we don't shut down a connection
>> which is actively being used for an outstanding request, then
>> we shouldn't have any larger problems with the duplicate caches
>> on servers than we do now.
>>
>> We can do this easily enough by reference counting the connection
>> state and then only closing connections which are not being
>> referenced.
>>     
>
> Agreed.
>
>   
>> A gain would be that we could reduce the numbers of connections on
>> active clients if we could disassociate a connection with a
>> particular mounted file system.  As long as we can achieve maximum
>> network bandwidth through a single connection, then we don't need
>> more than one connection per server.
>>     
>
> Isn't that pretty much the norm today anyway? The only call to
> rpc_create() that I can find is made when creating the nfs_client
> structure. All other NFS-related rpc connections are created as clones
> of the above shared structure, and thus share the same rpc_xprt.
>
>   

Well, it is the norm for the shared superblock situation, yes.


> I'm not sure that we want to share connections in the cases where we
> can't share the same nfs_client, since that usually means that RPC level
> parameters such as timeout values, NFS protocol versions differ.
>
>   

I think the TCP connection can be managed independently of
these things.

>> We could handle the case where the client was talking to more
>> servers than it had connection space for by forcibly, but safely
>> closing connections to servers and then using the space for a
>> new connection to a server.  We could do this in the connection
>> manager by checking to see if there was an available connection
>> which was not marked as in the process of being closed.  If so,
>> then it just enters the fray as needing a connection and
>> works like all of the others.
>>
>> The algorithm could look something like:
>>
>> top:
>>     Look for a connection to the right server which is not marked
>>         as being closed.
>>     If one was found, then increment its reference count and
>>        return it.
>>     Attempt to create a new connection.
>>     If this works, then increment its reference count and
>>        return it.
>>     Find a connection to be closed, either one not being currently
>>        used or via some heuristic like round-robin.
>>     If this connection is not actively being used, then close it
>>        and go to top.
>>     Mark the connection as being closed, wait until it is closed,
>>        and then go to top.
>>     
>
> Actually, what you really want to do is look at whether or not any of
> the rpc slots are in use or not. If they aren't, then you are free to
> close the connection, if not, go to the next.
>
>   

I think that we would still need to be able to handle the
situation where we needed a connection and all connections
appeared to be in use.  I think that the ability to force
a connection to become unused would be required.


> Unfortunately, you still can't get rid of the 2 minute TIME_WAIT state
> in the case of a TCP connection, so I'm not sure how useful this will
> turn out to be...

Well, there is that...  :-)

It sure seems like there has to be some way of dealing with that
thing.

    Thanx...

       ps


Thread overview: 16+ messages
2008-07-15 15:34 Performance Diagnosis Andrew Bell
     [not found] ` <e80abd30807150834m47a1b86cle39885150f1d5bfd-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2008-07-15 15:49   ` Chuck Lever
2008-07-15 15:58   ` Peter Staubach
2008-07-15 16:23     ` Chuck Lever
     [not found]       ` <76bd70e30807150923r31027edxb0394a220bbe879b-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2008-07-15 16:34         ` Andrew Bell
     [not found]           ` <e80abd30807150934tc14e793ydd7aae44b4c3111b-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2008-07-15 17:20             ` Chuck Lever
2008-07-15 17:44         ` Peter Staubach
2008-07-15 18:17           ` Chuck Lever
     [not found]             ` <76bd70e30807151117g520f22cj1dfe26b971987d38-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2008-07-15 18:51               ` Trond Myklebust
2008-07-15 19:21                 ` Peter Staubach
2008-07-15 19:35                   ` Trond Myklebust
2008-07-15 19:55                     ` Peter Staubach
2008-07-15 20:27                       ` Trond Myklebust
2008-07-15 20:48                         ` Peter Staubach [this message]
2008-07-15 21:15                       ` Talpey, Thomas
2008-07-16  7:35                     ` Benny Halevy
