linux-nfs.vger.kernel.org archive mirror
* Should we be aggressively invalidating cache when using -onolock?
       [not found] <1103741.22.1284726314119.JavaMail.sprabhu@dhcp-1-233.fab.redhat.com>
@ 2010-09-17 12:26 ` Sachin Prabhu
  2010-09-17 17:46   ` J. Bruce Fields
  0 siblings, 1 reply; 12+ messages in thread
From: Sachin Prabhu @ 2010-09-17 12:26 UTC (permalink / raw)
  To: linux-nfs

[-- Attachment #1: Type: text/plain, Size: 1189 bytes --]

We came across an issue where the performance of an application using flocks on RHEL 4 (2.6.9 kernel) was far better than the performance of the same application on RHEL 5 (2.6.18 kernel). The NFS client behavior when performing flocks on RHEL 4 and RHEL 5 differs. To ensure we had a level playing field, we repeated the tests using the mount option -o nolock.

The performance on RHEL 5 improved slightly but was still poor compared to RHEL 4. On closer observation, we saw a large number of READ requests on RHEL 5, while on RHEL 4 there were hardly any. This difference in behavior is caused by the code that invalidates the cache in do_setlk(), which results in the RHEL 5 client issuing a large number of READ requests.
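
For reference, the sequence in do_setlk() looks roughly like the sketch below. This is a paraphrase, not the literal code; the helper used to drop the cached data (shown here as nfs_zap_caches()) and the function name do_setlk_sketch() are assumptions and may differ between kernel versions, but the shape matches the context visible in the patch further down:

static int do_setlk_sketch(struct file *filp, int cmd, struct file_lock *fl)
{
	struct inode *inode = filp->f_mapping->host;
	int status;

	/* Write back any dirty pages before taking the lock. */
	status = nfs_sync_mapping(filp->f_mapping);
	if (status != 0)
		return status;

	/* With -o nolock the lock is handled locally, otherwise it goes to
	 * the server -- but note that both paths fall through to the cache
	 * invalidation below. */
	if (NFS_SERVER(inode)->flags & NFS_MOUNT_NONLM)
		status = do_vfs_lock(filp, fl);
	else
		status = NFS_PROTO(inode)->lock(filp, cmd, fl);
	if (status < 0)
		return status;

	/* Locking acts as a cache coherency point: the cached pages are
	 * dropped, which is what produces the extra READ requests seen on
	 * the RHEL 5 client. */
	nfs_zap_caches(inode);
	return 0;
}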

In this case the files were only being accessed by a single client, which is why the nolock mount option was used. For such workloads, the aggressive invalidation of the cache is unnecessary. This patch improves performance in that scenario.

Is this a good idea?

The patch will need to be respun to accommodate Suresh Jayaraman's patch introducing '-o local_lock'.

Sachin Prabhu

[-- Attachment #2: bz633834.patch --]
[-- Type: text/x-patch, Size: 2293 bytes --]

nfs: Skip zapping caches when using -o nolock

When using -o nolock, it is assumed that the file will not be accessed or modified by multiple clients. In such cases, the aggressive invalidation of the cache is not required.

Signed-off-by: Sachin S. Prabhu <sprabhu@redhat.com>

diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index eb51bd6..bfd9c1a 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -733,24 +733,22 @@ static int do_vfs_lock(struct file *file, struct file_lock *fl)
 static int do_unlk(struct file *filp, int cmd, struct file_lock *fl)
 {
 	struct inode *inode = filp->f_mapping->host;
-	int status;
+
+	/* NOTE: special case
+	 * If we're signalled while cleaning up locks on process exit, we
+	 * still need to complete the unlock.
+	 */
+
+	/* Use local locking if mounted with "-onolock" */
+	if (NFS_SERVER(inode)->flags & NFS_MOUNT_NONLM)
+		return do_vfs_lock(filp, fl);
 
 	/*
 	 * Flush all pending writes before doing anything
 	 * with locks..
 	 */
 	nfs_sync_mapping(filp->f_mapping);
-
-	/* NOTE: special case
-	 * 	If we're signalled while cleaning up locks on process exit, we
-	 * 	still need to complete the unlock.
-	 */
-	/* Use local locking if mounted with "-onolock" */
-	if (!(NFS_SERVER(inode)->flags & NFS_MOUNT_NONLM))
-		status = NFS_PROTO(inode)->lock(filp, cmd, fl);
-	else
-		status = do_vfs_lock(filp, fl);
-	return status;
+	return NFS_PROTO(inode)->lock(filp, cmd, fl);
 }
 
 static int do_setlk(struct file *filp, int cmd, struct file_lock *fl)
@@ -759,6 +757,15 @@ static int do_setlk(struct file *filp, int cmd, struct file_lock *fl)
 	int status;
 
 	/*
+	 * Use local locking and skip cache writeback or invalidation
+	 * if mounted with "-onolock"
+	 */
+	if (NFS_SERVER(inode)->flags & NFS_MOUNT_NONLM) {
+		status = do_vfs_lock(filp, fl);
+		goto out;
+	}
+
+	/*
 	 * Flush all pending writes before doing anything
 	 * with locks..
 	 */
@@ -766,11 +773,7 @@ static int do_setlk(struct file *filp, int cmd, struct file_lock *fl)
 	if (status != 0)
 		goto out;
 
-	/* Use local locking if mounted with "-onolock" */
-	if (!(NFS_SERVER(inode)->flags & NFS_MOUNT_NONLM))
-		status = NFS_PROTO(inode)->lock(filp, cmd, fl);
-	else
-		status = do_vfs_lock(filp, fl);
+	status = NFS_PROTO(inode)->lock(filp, cmd, fl);
 	if (status < 0)
 		goto out;
 	/*

* Re: Should we be aggressively invalidating cache when using -onolock?
  2010-09-17 12:26 ` Sachin Prabhu
@ 2010-09-17 17:46   ` J. Bruce Fields
  2010-09-18 11:09     ` Jeff Layton
  0 siblings, 1 reply; 12+ messages in thread
From: J. Bruce Fields @ 2010-09-17 17:46 UTC (permalink / raw)
  To: Sachin Prabhu; +Cc: linux-nfs

On Fri, Sep 17, 2010 at 08:26:39AM -0400, Sachin Prabhu wrote:
> We came across an issue where the performance of an application using flocks on RHEL 4 (2.6.9 kernel) was far better than the performance of the same application on RHEL 5 (2.6.18 kernel). The NFS client behavior when performing flocks on RHEL 4 and RHEL 5 differs. To ensure we had a level playing field, we repeated the tests using the mount option -o nolock.
> 
> The performance on RHEL 5 improved slightly but was still poor compared to RHEL 4. On closer observation, we saw a large number of READ requests on RHEL 5, while on RHEL 4 there were hardly any. This difference in behavior is caused by the code that invalidates the cache in do_setlk(), which results in the RHEL 5 client issuing a large number of READ requests.
> 
> In this case the files were only being accessed by a single client, which is why the nolock mount option was used. For such workloads, the aggressive invalidation of the cache is unnecessary. This patch improves performance in that scenario.

Makes sense to me.

(Is it possible that somebody might depend on lock/unlock to keep their
meaning of "invalidate cache/flush changes" even when they don't care
about checking for inter-client lock conflicts?  That sounds like an odd
use case to me.)

--b.

> 
> Is this a good idea?
> 
> The patch will need to be respun to accommodate Suresh Jayaraman's patch introducing '-o local_lock'.
> 
> Sachin Prabhu

> nfs: Skip zapping caches when using -o nolock
> 
> When using -o nolock, it is assumed that the file will not be accessed or modified by multiple clients. In such cases, the aggressive invalidation of the cache is not required.
> 
> Signed-off-by: Sachin S. Prabhu <sprabhu@redhat.com>
> 
> diff --git a/fs/nfs/file.c b/fs/nfs/file.c
> index eb51bd6..bfd9c1a 100644
> --- a/fs/nfs/file.c
> +++ b/fs/nfs/file.c
> @@ -733,24 +733,22 @@ static int do_vfs_lock(struct file *file, struct file_lock *fl)
>  static int do_unlk(struct file *filp, int cmd, struct file_lock *fl)
>  {
>  	struct inode *inode = filp->f_mapping->host;
> -	int status;
> +
> +	/* NOTE: special case
> +	 * If we're signalled while cleaning up locks on process exit, we
> +	 * still need to complete the unlock.
> +	 */
> +
> +	/* Use local locking if mounted with "-onolock" */
> +	if (NFS_SERVER(inode)->flags & NFS_MOUNT_NONLM)
> +		return do_vfs_lock(filp, fl);
>  
>  	/*
>  	 * Flush all pending writes before doing anything
>  	 * with locks..
>  	 */
>  	nfs_sync_mapping(filp->f_mapping);
> -
> -	/* NOTE: special case
> -	 * 	If we're signalled while cleaning up locks on process exit, we
> -	 * 	still need to complete the unlock.
> -	 */
> -	/* Use local locking if mounted with "-onolock" */
> -	if (!(NFS_SERVER(inode)->flags & NFS_MOUNT_NONLM))
> -		status = NFS_PROTO(inode)->lock(filp, cmd, fl);
> -	else
> -		status = do_vfs_lock(filp, fl);
> -	return status;
> +	return NFS_PROTO(inode)->lock(filp, cmd, fl);
>  }
>  
>  static int do_setlk(struct file *filp, int cmd, struct file_lock *fl)
> @@ -759,6 +757,15 @@ static int do_setlk(struct file *filp, int cmd, struct file_lock *fl)
>  	int status;
>  
>  	/*
> +	 * Use local locking and skip cache writeback or invalidation
> +	 * if mounted with "-onolock"
> +	 */
> +	if (NFS_SERVER(inode)->flags & NFS_MOUNT_NONLM) {
> +		status = do_vfs_lock(filp, fl);
> +		goto out;
> +	}
> +
> +	/*
>  	 * Flush all pending writes before doing anything
>  	 * with locks..
>  	 */
> @@ -766,11 +773,7 @@ static int do_setlk(struct file *filp, int cmd, struct file_lock *fl)
>  	if (status != 0)
>  		goto out;
>  
> -	/* Use local locking if mounted with "-onolock" */
> -	if (!(NFS_SERVER(inode)->flags & NFS_MOUNT_NONLM))
> -		status = NFS_PROTO(inode)->lock(filp, cmd, fl);
> -	else
> -		status = do_vfs_lock(filp, fl);
> +	status = NFS_PROTO(inode)->lock(filp, cmd, fl);
>  	if (status < 0)
>  		goto out;
>  	/*


* Re: Should we be aggressively invalidating cache when using -onolock?
  2010-09-17 17:46   ` J. Bruce Fields
@ 2010-09-18 11:09     ` Jeff Layton
  2010-09-19 18:53       ` J. Bruce Fields
  0 siblings, 1 reply; 12+ messages in thread
From: Jeff Layton @ 2010-09-18 11:09 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: Sachin Prabhu, linux-nfs

On Fri, 17 Sep 2010 13:46:44 -0400
"J. Bruce Fields" <bfields@fieldses.org> wrote:

> On Fri, Sep 17, 2010 at 08:26:39AM -0400, Sachin Prabhu wrote:
> > We came across an issue where the performance of an application using flocks on RHEL 4 (2.6.9 kernel) was far better than the performance of the same application on RHEL 5 (2.6.18 kernel). The NFS client behavior when performing flocks on RHEL 4 and RHEL 5 differs. To ensure we had a level playing field, we repeated the tests using the mount option -o nolock.
> > 
> > The performance on RHEL 5 improved slightly but was still poor compared to RHEL 4. On closer observation, we saw a large number of READ requests on RHEL 5, while on RHEL 4 there were hardly any. This difference in behavior is caused by the code that invalidates the cache in do_setlk(), which results in the RHEL 5 client issuing a large number of READ requests.
> > 
> > In this case the files were only being accessed by a single client, which is why the nolock mount option was used. For such workloads, the aggressive invalidation of the cache is unnecessary. This patch improves performance in that scenario.
> 
> Makes sense to me.
> 

Agreed. This is potentially a huge performance win for some workloads.

> (Is it possible that somebody might depend on lock/unlock to keep their
> meaning of "invalidate cache/flush changes" even when they don't care
> about checking for inter-client lock conflicts?  That sounds like an odd
> use case to me.)
> 

Aye, there's the rub. It might cause anyone doing this to regress. My
gut feeling is that those people are a minority if any exist at all.
The consequences of such a change for them could be pretty ugly but I'm
not sure we owe them any consistency guarantees in such a case.

-- 
Jeff Layton <jlayton@redhat.com>

* Re: Should we be aggressively invalidating cache when using -onolock?
  2010-09-18 11:09     ` Jeff Layton
@ 2010-09-19 18:53       ` J. Bruce Fields
  2010-09-20 14:41         ` Chuck Lever
  0 siblings, 1 reply; 12+ messages in thread
From: J. Bruce Fields @ 2010-09-19 18:53 UTC (permalink / raw)
  To: Jeff Layton; +Cc: Sachin Prabhu, linux-nfs

On Sat, Sep 18, 2010 at 07:09:32AM -0400, Jeff Layton wrote:
> On Fri, 17 Sep 2010 13:46:44 -0400
> "J. Bruce Fields" <bfields@fieldses.org> wrote:
> 
> > On Fri, Sep 17, 2010 at 08:26:39AM -0400, Sachin Prabhu wrote:
> > > We came across an issue where the performance of an application using flocks on RHEL 4 (2.6.9 kernel) was far better than the performance of the same application on RHEL 5 (2.6.18 kernel). The NFS client behavior when performing flocks on RHEL 4 and RHEL 5 differs. To ensure we had a level playing field, we repeated the tests using the mount option -o nolock.
> > > 
> > > The performance on RHEL 5 improved slightly but was still poor compared to RHEL 4. On closer observation, we saw a large number of READ requests on RHEL 5, while on RHEL 4 there were hardly any. This difference in behavior is caused by the code that invalidates the cache in do_setlk(), which results in the RHEL 5 client issuing a large number of READ requests.
> > > 
> > > In this case the files were only being accessed by a single client, which is why the nolock mount option was used. For such workloads, the aggressive invalidation of the cache is unnecessary. This patch improves performance in that scenario.
> > 
> > Makes sense to me.
> > 
> 
> Agreed. This is potentially a huge performance win for some workloads.
> 
> > (Is it possible that somebody might depend on lock/unlock to keep their
> > meaning of "invalidate cache/flush changes" even when they don't care
> > about checking for inter-client lock conflicts?  That sounds like an odd
> > use case to me.)
> > 
> 
> Aye, there's the rub. It might cause anyone doing this to regress. My
> gut feeling is that those people are a minority if any exist at all.
> The consequences of such a change for them could be pretty ugly but I'm
> not sure we owe them any consistency guarantees in such a case.

Yeah, I haven't seen any documentation of the
revalidate-caches-but-don't-lock mode that has been accidentally
implemented here, and I don't think anyone could sensibly depend on it.

--b.

* Re: Should we be aggressively invalidating cache when using -onolock?
  2010-09-19 18:53       ` J. Bruce Fields
@ 2010-09-20 14:41         ` Chuck Lever
  2010-09-20 18:25           ` J. Bruce Fields
  0 siblings, 1 reply; 12+ messages in thread
From: Chuck Lever @ 2010-09-20 14:41 UTC (permalink / raw)
  To: J. Bruce Fields, Trond Myklebust; +Cc: Jeff Layton, Sachin Prabhu, linux-nfs


On Sep 19, 2010, at 2:53 PM, J. Bruce Fields wrote:

> On Sat, Sep 18, 2010 at 07:09:32AM -0400, Jeff Layton wrote:
>> On Fri, 17 Sep 2010 13:46:44 -0400
>> "J. Bruce Fields" <bfields@fieldses.org> wrote:
>> 
>>> On Fri, Sep 17, 2010 at 08:26:39AM -0400, Sachin Prabhu wrote:
>>>> We came across an issue where the performance of an application using flocks on RHEL 4 (2.6.9 kernel) was far better than the performance of the same application on RHEL 5 (2.6.18 kernel). The NFS client behavior when performing flocks on RHEL 4 and RHEL 5 differs. To ensure we had a level playing field, we repeated the tests using the mount option -o nolock.
>>>> 
>>>> The performance on RHEL 5 improved slightly but was still poor compared to RHEL 4. On closer observation, we saw a large number of READ requests on RHEL 5, while on RHEL 4 there were hardly any. This difference in behavior is caused by the code that invalidates the cache in do_setlk(), which results in the RHEL 5 client issuing a large number of READ requests.
>>>> 
>>>> In this case the files were only being accessed by a single client, which is why the nolock mount option was used. For such workloads, the aggressive invalidation of the cache is unnecessary. This patch improves performance in that scenario.
>>> 
>>> Makes sense to me.
>>> 
>> 
>> Agreed. This is potentially a huge performance win for some workloads.
>> 
>>> (Is it possible that somebody might depend on lock/unlock to keep their
>>> meaning of "invalidate cache/flush changes" even when they don't care
>>> about checking for inter-client lock conflicts?  That sounds like an odd
>>> use case to me.)
>>> 
>> 
>> Aye, there's the rub. It might cause anyone doing this to regress. My
>> gut feeling is that those people are a minority if any exist at all.
>> The consequences of such a change for them could be pretty ugly but I'm
>> not sure we owe them any consistency guarantees in such a case.
> 
> Yeah, I haven't seen any documentation of the
> revalidate-caches-but-don't-lock mode that has been accidentally
> implemented here, and I don't think anyone could sensibly depend on it.

At one point long ago, I had asked Trond if we could get rid of the cache-invalidation-on-lock behavior if "-onolock" was in effect.  He said at the time that this would eliminate the only recourse applications have for invalidating the data cache in case it was stale, and NACK'd the request.
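
To make the usage pattern Trond was protecting concrete: such an application takes a (possibly otherwise unneeded) lock purely so that the NFS client revalidates its cached data before reading, along the lines of the userspace sketch below. The path and the lack of error handling are illustrative only:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	struct flock fl = { .l_type = F_RDLCK, .l_whence = SEEK_SET };
	char buf[4096];
	ssize_t n;
	int fd = open("/mnt/nfs/shared.dat", O_RDONLY);	/* illustrative path */

	if (fd < 0)
		return 1;

	/* Taking the lock is the application's way of telling the NFS
	 * client "revalidate your cache now"; the subsequent read then
	 * reflects the server's current data. */
	if (fcntl(fd, F_SETLKW, &fl) == 0) {
		n = read(fd, buf, sizeof(buf));
		(void)n;
		fl.l_type = F_UNLCK;
		fcntl(fd, F_SETLK, &fl);
	}
	close(fd);
	return 0;
}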

I suggested introducing a new mount option called "llock" that would be semantically the same as "llock" on other operating systems, to do this.  It never went anywhere.

We now seem to have a fresh opportunity to address this issue with the recent addition of "local_lock".  Can we augment this option or add another which allows better control of caching behavior during a file lock?

It also seems to me that if RHEL 4 is _not_ invalidating on lock, then it is not working as designed.  AFAIK the Linux NFS client has always invalidated a file's data cache on lock.  Did I misread something?

-- 
chuck[dot]lever[at]oracle[dot]com





* Re: Should we be aggressively invalidating cache when using -onolock?
       [not found] <14128115.54.1284995685991.JavaMail.sprabhu@dhcp-1-233.fab.redhat.com>
@ 2010-09-20 15:15 ` Sachin Prabhu
  2010-09-20 15:19   ` Chuck Lever
  0 siblings, 1 reply; 12+ messages in thread
From: Sachin Prabhu @ 2010-09-20 15:15 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Jeff Layton, linux-nfs, J. Bruce Fields, Trond Myklebust

----- "Chuck Lever" <chuck.lever@oracle.com> wrote:
> It also seems to me that if RHEL 4 is _not_ invalidating on lock, then
> it is not working as designed.  AFAIK the Linux NFS client has always
> invalidated a file's data cache on lock.  Did I misread something?
> 

The flock support for NFS was only implemented in the 2.6.12 kernel. Hence, on the RHEL 4 kernel, i.e. 2.6.9, nfs_file_operations->flock is NULL and any flock operations performed by the application were only applicable on that node. No part of the NFS client code was executed for the flock() operation.
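
For reference, the generic flock(2) dispatch works roughly like the sketch below (a paraphrase of the fs/locks.c logic, not the literal code; the wrapper name flock_dispatch_sketch() is made up for illustration), which is why a NULL ->flock method means the request never leaves the local node:

static int flock_dispatch_sketch(struct file *filp, unsigned int cmd,
				 struct file_lock *lock)
{
	/* The NFS client only gained an ->flock method in 2.6.12.  On 2.6.9
	 * it is NULL, so the request is handled entirely locally and causes
	 * no NFS traffic and no cache invalidation. */
	if (filp->f_op && filp->f_op->flock)
		return filp->f_op->flock(filp, cmd, lock);
	return flock_lock_file_wait(filp, lock);
}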

Sachin Prabhu

* Re: Should we be aggressively invalidating cache when using -onolock?
  2010-09-20 15:15 ` Should we be aggressively invalidating cache when using -onolock? Sachin Prabhu
@ 2010-09-20 15:19   ` Chuck Lever
  2010-09-20 15:34     ` Sachin Prabhu
  0 siblings, 1 reply; 12+ messages in thread
From: Chuck Lever @ 2010-09-20 15:19 UTC (permalink / raw)
  To: Sachin Prabhu; +Cc: Jeff Layton, linux-nfs, J. Bruce Fields, Trond Myklebust


On Sep 20, 2010, at 11:15 AM, Sachin Prabhu wrote:

> ----- "Chuck Lever" <chuck.lever@oracle.com> wrote:
>> It also seems to me that if RHEL 4 is _not_ invalidating on lock, then
>> it is not working as designed.  AFAIK the Linux NFS client has always
>> invalidated a file's data cache on lock.  Did I misread something?
>> 
> 
> The flock support for NFS was only implemented in the 2.6.12 kernel. Hence, on the RHEL 4 kernel, i.e. 2.6.9, nfs_file_operations->flock is NULL and any flock operations performed by the application were only applicable on that node. No part of the NFS client code was executed for the flock() operation.

I see, so in RHEL 4, the original fcntl(2) invalidate-on-lock behavior is working correctly, but flock(2) is not, since flock(2) wasn't supported in 2.6.9's NFS client.  Correct?

-- 
chuck[dot]lever[at]oracle[dot]com





* Re: Should we be aggressively invalidating cache when using -onolock?
  2010-09-20 15:19   ` Chuck Lever
@ 2010-09-20 15:34     ` Sachin Prabhu
  0 siblings, 0 replies; 12+ messages in thread
From: Sachin Prabhu @ 2010-09-20 15:34 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Jeff Layton, linux-nfs, J. Bruce Fields, Trond Myklebust


----- "Chuck Lever" <chuck.lever@oracle.com> wrote:

> On Sep 20, 2010, at 11:15 AM, Sachin Prabhu wrote:
> 
> > ----- "Chuck Lever" <chuck.lever@oracle.com> wrote:
> >> It also seems to me that if RHEL 4 is _not_ invalidating on lock, then
> >> it is not working as designed.  AFAIK the Linux NFS client has always
> >> invalidated a file's data cache on lock.  Did I misread something?
> >> 
> > 
> > The flock support for NFS was only implemented in the 2.6.12 kernel.
> > Hence, on the RHEL 4 kernel, i.e. 2.6.9, nfs_file_operations->flock is
> > NULL and any flock operations performed by the application were only
> > applicable on that node. No part of the NFS client code was executed
> > for the flock() operation.
> 
> I see, so in RHEL 4, the original fcntl(2) invalidate-on-lock behavior
> is working correctly, but flock(2) is not, since flock(2) wasn't
> supported in 2.6.9's NFS client.  Correct?
> 
Yes.

* Re: Should we be aggressively invalidating cache when using -onolock?
  2010-09-20 14:41         ` Chuck Lever
@ 2010-09-20 18:25           ` J. Bruce Fields
  2010-10-05 14:27             ` Jeff Layton
  0 siblings, 1 reply; 12+ messages in thread
From: J. Bruce Fields @ 2010-09-20 18:25 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Trond Myklebust, Jeff Layton, Sachin Prabhu, linux-nfs

On Mon, Sep 20, 2010 at 10:41:59AM -0400, Chuck Lever wrote:
> At one point long ago, I had asked Trond if we could get rid of the
> cache-invalidation-on-lock behavior if "-onolock" was in effect.  He
> said at the time that this would eliminate the only recourse
> applications have for invalidating the data cache in case it was
> stale, and NACK'd the request.

Argh.  I guess I can see the argument, though.

> I suggested introducing a new mount option called "llock" that would
> be semantically the same as "llock" on other operating systems, to do
> this.  It never went anywhere.
> 
> We now seem to have a fresh opportunity to address this issue with the
> recent addition of "local_lock".  Can we augment this option or add
> another which allows better control of caching behavior during a file
> lock?

I wouldn't stand in the way, but it does start to sound like a rather
confusing array of choices.

--b.

> 
> It also seems to me that if RHEL 4 is _not_ invalidating on lock, then
> it is not working as designed.  AFAIK the Linux NFS client has always
> invalidated a file's data cache on lock.  Did I misread something?

* Re: Should we be aggressively invalidating cache when using -onolock?
  2010-09-20 18:25           ` J. Bruce Fields
@ 2010-10-05 14:27             ` Jeff Layton
  2010-10-05 15:19               ` Suresh Jayaraman
       [not found]               ` <20101005102752.67b75416-9yPaYZwiELC+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
  0 siblings, 2 replies; 12+ messages in thread
From: Jeff Layton @ 2010-10-05 14:27 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: Chuck Lever, Trond Myklebust, Sachin Prabhu, linux-nfs

On Mon, 20 Sep 2010 14:25:36 -0400
"J. Bruce Fields" <bfields@fieldses.org> wrote:

> On Mon, Sep 20, 2010 at 10:41:59AM -0400, Chuck Lever wrote:
> > At one point long ago, I had asked Trond if we could get rid of the
> > cache-invalidation-on-lock behavior if "-onolock" was in effect.  He
> > said at the time that this would eliminate the only recourse
> > applications have for invalidating the data cache in case it was
> > stale, and NACK'd the request.
> 
> Argh.  I guess I can see the argument, though.
> 
> > I suggested introducing a new mount option called "llock" that would
> > be semantically the same as "llock" on other operating systems, to do
> > this.  It never went anywhere.
> > 
> > We now seem to have a fresh opportunity to address this issue with the
> > recent addition of "local_lock".  Can we augment this option or add
> > another which allows better control of caching behavior during a file
> > lock?
> 
> I wouldn't stand in the way, but it does start to sound like a rather
> confusing array of choices.
> 

I can sort of see the argument too, but on the other hand...does anyone
*really* use locks in this way? If we want a mechanism to allow the
client to force cache invalidation on an inode it seems like we'd be
better off with an interface for that purpose only (dare I say
ioctl? :).
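
Purely to illustrate that idea (no such interface exists; the ioctl number, name and handler below are all hypothetical), it might look something like:

/* Hypothetical, illustration only: let an application ask for cache
 * invalidation explicitly instead of using lock/unlock as a side channel. */
#define NFS_IOC_INVALIDATE_CACHE	_IO('N', 0x42)	/* hypothetical number */

static long nfs_file_ioctl_sketch(struct file *filp, unsigned int cmd,
				  unsigned long arg)
{
	struct inode *inode = filp->f_mapping->host;

	switch (cmd) {
	case NFS_IOC_INVALIDATE_CACHE:
		/* Flush dirty pages, then drop the cached data. */
		nfs_sync_mapping(filp->f_mapping);
		nfs_zap_caches(inode);
		return 0;
	default:
		return -ENOTTY;
	}
}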

Piggybacking this behavior into the locking interfaces seems like it
punishes -o nolock performance for the benefit of some questionable
usage patterns.

Mixing this in with -o local_lock also seems confusing, but if we want
to do that it's probably best to make that call before any kernels ship
with -o local_lock.

Trond, care to weigh in on this?
-- 
Jeff Layton <jlayton@redhat.com>

* Re: Should we be aggressively invalidating cache when using -onolock?
  2010-10-05 14:27             ` Jeff Layton
@ 2010-10-05 15:19               ` Suresh Jayaraman
       [not found]               ` <20101005102752.67b75416-9yPaYZwiELC+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
  1 sibling, 0 replies; 12+ messages in thread
From: Suresh Jayaraman @ 2010-10-05 15:19 UTC (permalink / raw)
  To: Jeff Layton
  Cc: J. Bruce Fields, Chuck Lever, Trond Myklebust, Sachin Prabhu,
	linux-nfs

On 10/05/2010 07:57 PM, Jeff Layton wrote:
> On Mon, 20 Sep 2010 14:25:36 -0400
> "J. Bruce Fields" <bfields@fieldses.org> wrote:
> 
>> On Mon, Sep 20, 2010 at 10:41:59AM -0400, Chuck Lever wrote:
>>> At one point long ago, I had asked Trond if we could get rid of the
>>> cache-invalidation-on-lock behavior if "-onolock" was in effect.  He
>>> said at the time that this would eliminate the only recourse
>>> applications have for invalidating the data cache in case it was
>>> stale, and NACK'd the request.
>>
>> Argh.  I guess I can see the argument, though.
>>
>>> I suggested introducing a new mount option called "llock" that would
>>> be semantically the same as "llock" on other operating systems, to do
>>> this.  It never went anywhere.
>>>
>>> We now seem to have a fresh opportunity to address this issue with the
>>> recent addition of "local_lock".  Can we augment this option or add
>>> another which allows better control of caching behavior during a file
>>> lock?
>>
>> I wouldn't stand in the way, but it does start to sound like a rather
>> confusing array of choices.
>>
> 
> I can sort of see the argument too, but on the other hand...does anyone
> *really* use locks in this way? If we want a mechanism to allow the
> client to force cache invalidation on an inode it seems like we'd be
> better off with an interface for that purpose only (dare I say
> ioctl? :).
> 
> Piggybacking this behavior into the locking interfaces seems like it
> punishes -o nolock performance for the benefit of some questionable
> usage patterns.
> 

+ 1

> Mixing this in with -o local_lock also seems confusing, but if we want

I too think it would be confusing and unwarranted. A separate interface
would be a better choice, IMHO.



-- 
Suresh Jayaraman

* Re: Should we be aggressively invalidating cache when using -onolock?
       [not found]               ` <20101005102752.67b75416-9yPaYZwiELC+kQycOl6kW4xkIHaj4LzF@public.gmane.org>
@ 2010-10-20 10:42                 ` Sachin Prabhu
  0 siblings, 0 replies; 12+ messages in thread
From: Sachin Prabhu @ 2010-10-20 10:42 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: Chuck Lever, linux-nfs, J. Bruce Fields, Jeff Layton


----- "Jeff Layton" <jlayton@redhat.com> wrote:

> On Mon, 20 Sep 2010 14:25:36 -0400
> "J. Bruce Fields" <bfields@fieldses.org> wrote:
> 
> > On Mon, Sep 20, 2010 at 10:41:59AM -0400, Chuck Lever wrote:
> > > At one point long ago, I had asked Trond if we could get rid of the
> > > cache-invalidation-on-lock behavior if "-onolock" was in effect.  He
> > > said at the time that this would eliminate the only recourse
> > > applications have for invalidating the data cache in case it was
> > > stale, and NACK'd the request.
> > 
> > Argh.  I guess I can see the argument, though.
> > 
> > > I suggested introducing a new mount option called "llock" that would
> > > be semantically the same as "llock" on other operating systems, to do
> > > this.  It never went anywhere.
> > > 
> > > We now seem to have a fresh opportunity to address this issue with the
> > > recent addition of "local_lock".  Can we augment this option or add
> > > another which allows better control of caching behavior during a file
> > > lock?
> > 
> > I wouldn't stand in the way, but it does start to sound like a rather
> > confusing array of choices.
> > 
> 
> I can sort of see the argument too, but on the other hand...does anyone
> *really* use locks in this way? If we want a mechanism to allow the
> client to force cache invalidation on an inode it seems like we'd be
> better off with an interface for that purpose only (dare I say
> ioctl? :).
> 
> Piggybacking this behavior into the locking interfaces seems like it
> punishes -o nolock performance for the benefit of some questionable
> usage patterns.
> 
> Mixing this in with -o local_lock also seems confusing, but if we want
> to do that it's probably best to make that call before any kernels ship
> with -o local_lock.
> 
> Trond, care to weigh in on this?


Trond, 

What do you think about this issue?

Sachin Prabhu
