public inbox for linux-kernel@vger.kernel.org
* [PATCH] fs/afs/flock and fs/locks: Fix possible sleep-in-atomic bugs in posix_lock_file
@ 2017-10-07  9:55 Jia-Ju Bai
  2017-10-07 10:36 ` Jeff Layton
  0 siblings, 1 reply; 5+ messages in thread
From: Jia-Ju Bai @ 2017-10-07  9:55 UTC (permalink / raw)
  To: dhowells, viro, jlayton, bfields
  Cc: linux-fsdevel, linux-afs, linux-kernel, Jia-Ju Bai

The kernel may sleep while holding a spinlock; the function call paths are:
afs_do_unlk (acquire the spinlock)
  posix_lock_file
    posix_lock_inode (fs/locks.c)
      locks_get_lock_context
        kmem_cache_alloc(GFP_KERNEL) --> may sleep

afs_do_setlk (acquire the spinlock)
  posix_lock_file
    posix_lock_inode (fs/locks.c)
      locks_get_lock_context
        kmem_cache_alloc(GFP_KERNEL) --> may sleep

To fix them, GFP_KERNEL is replaced with GFP_ATOMIC.
These bugs were found by my static analysis tool and by manual code review.

Signed-off-by: Jia-Ju Bai <baijiaju1990@163.com>
---
 fs/locks.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/locks.c b/fs/locks.c
index 1bd71c4..975cc62 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -222,7 +222,7 @@ struct file_lock_list_struct {
 	if (likely(ctx) || type == F_UNLCK)
 		goto out;
 
-	ctx = kmem_cache_alloc(flctx_cache, GFP_KERNEL);
+	ctx = kmem_cache_alloc(flctx_cache, GFP_ATOMIC);
 	if (!ctx)
 		goto out;
 
-- 
1.7.9.5



Thread overview: 5+ messages
2017-10-07  9:55 [PATCH] fs/afs/flock and fs/locks: Fix possible sleep-in-atomic bugs in posix_lock_file Jia-Ju Bai
2017-10-07 10:36 ` Jeff Layton
2017-10-08  1:07   ` J. Bruce Fields
2017-10-11  9:47     ` David Howells
2017-10-11 13:45       ` J. Bruce Fields
