* [Ocfs2-devel] [patch 2/8] ocfs2: flock: drop cross-node lock when failed locally
@ 2014-03-19 21:10 akpm at linux-foundation.org
2014-03-31 1:30 ` Mark Fasheh
0 siblings, 1 reply; 3+ messages in thread
From: akpm at linux-foundation.org @ 2014-03-19 21:10 UTC (permalink / raw)
To: ocfs2-devel
From: Wengang Wang <wen.gang.wang@oracle.com>
Subject: ocfs2: flock: drop cross-node lock when failed locally
ocfs2_do_flock() calls ocfs2_file_lock() to get the cross-node lock and
then calls flock_lock_file_wait() to compete with local processes. If
flock_lock_file_wait() fails, say with -ENOMEM, the cleanup work is not done.
This patch adds the cleanup: drop the cross-node lock that was just
granted.
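
For reference, the control flow being fixed looks roughly like this (a
simplified sketch of ocfs2_do_flock(), not a verbatim copy of
fs/ocfs2/locks.c; the path that handles an already-attached lock is left
out):

static int ocfs2_do_flock(struct file *file, struct inode *inode,
			  int cmd, struct file_lock *fl)
{
	struct ocfs2_file_private *fp = file->private_data;
	int ret, level = 0, trylock = 0;

	if (fl->fl_type == F_WRLCK)
		level = 1;
	if (!IS_SETLKW(cmd))
		trylock = 1;

	mutex_lock(&fp->fp_mutex);

	/* Step 1: take the cluster-wide (cross-node) flock lock. */
	ret = ocfs2_file_lock(file, level, trylock);
	if (ret)
		goto out;

	/* Step 2: compete with local processes for the VFS-level flock. */
	ret = flock_lock_file_wait(file, fl);
	if (ret)
		/* Added by this patch: undo step 1 when step 2 fails. */
		ocfs2_file_unlock(file);

out:
	mutex_unlock(&fp->fp_mutex);
	return ret;
}

Without the two added lines, a failure in flock_lock_file_wait() returns
an error to the caller while the cross-node lock stays held.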
[akpm at linux-foundation.org: coding-style fixes]
Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Mark Fasheh <mfasheh@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
fs/ocfs2/locks.c | 2 ++
1 file changed, 2 insertions(+)
diff -puN fs/ocfs2/locks.c~ocfs2-flock-drop-cross-node-lock-when-failed-locally fs/ocfs2/locks.c
--- a/fs/ocfs2/locks.c~ocfs2-flock-drop-cross-node-lock-when-failed-locally
+++ a/fs/ocfs2/locks.c
@@ -82,6 +82,8 @@ static int ocfs2_do_flock(struct file *f
 	}
 
 	ret = flock_lock_file_wait(file, fl);
+	if (ret)
+		ocfs2_file_unlock(file);
 
 out:
 	mutex_unlock(&fp->fp_mutex);
_
* [Ocfs2-devel] [patch 2/8] ocfs2: flock: drop cross-node lock when failed locally
2014-03-19 21:10 [Ocfs2-devel] [patch 2/8] ocfs2: flock: drop cross-node lock when failed locally akpm at linux-foundation.org
@ 2014-03-31 1:30 ` Mark Fasheh
2014-03-31 1:39 ` Wengang
0 siblings, 1 reply; 3+ messages in thread
From: Mark Fasheh @ 2014-03-31 1:30 UTC (permalink / raw)
To: ocfs2-devel
On Wed, Mar 19, 2014 at 02:10:00PM -0700, Andrew Morton wrote:
> From: Wengang Wang <wen.gang.wang@oracle.com>
> Subject: ocfs2: flock: drop cross-node lock when failed locally
>
> ocfs2_do_flock() calls ocfs2_file_lock() to get the cross-node lock and
> then calls flock_lock_file_wait() to compete with local processes. If
> flock_lock_file_wait() fails, say with -ENOMEM, the cleanup work is not done.
> This patch adds the cleanup: drop the cross-node lock that was just
> granted.
Out of curiosity, was this a bug someone hit, or did you catch this
via code review?
Reviewed-by: Mark Fasheh <mfasheh@suse.de>
--Mark
--
Mark Fasheh
* [Ocfs2-devel] [patch 2/8] ocfs2: flock: drop cross-node lock when failed locally
2014-03-31 1:30 ` Mark Fasheh
@ 2014-03-31 1:39 ` Wengang
0 siblings, 0 replies; 3+ messages in thread
From: Wengang @ 2014-03-31 1:39 UTC (permalink / raw)
To: ocfs2-devel
Hi Mark,
I found it by code review.
thanks,
wengang
On 2014/03/31 09:30, Mark Fasheh wrote:
> On Wed, Mar 19, 2014 at 02:10:00PM -0700, Andrew Morton wrote:
>> From: Wengang Wang <wen.gang.wang@oracle.com>
>> Subject: ocfs2: flock: drop cross-node lock when failed locally
>>
>> ocfs2_do_flock() calls ocfs2_file_lock() to get the cross-node lock and
>> then calls flock_lock_file_wait() to compete with local processes. If
>> flock_lock_file_wait() fails, say with -ENOMEM, the cleanup work is not done.
>> This patch adds the cleanup: drop the cross-node lock that was just
>> granted.
> Out of curiosity, was this a bug someone hit, or did you catch this
> via code review?
>
> Reviewed-by: Mark Fasheh <mfasheh@suse.de>
> --Mark
>
> --
> Mark Fasheh
>
> _______________________________________________
> Ocfs2-devel mailing list
> Ocfs2-devel at oss.oracle.com
> https://oss.oracle.com/mailman/listinfo/ocfs2-devel