From: Harshula <harshula@redhat.com>
To: linux-nfs@vger.kernel.org
Cc: jlayton@redhat.com
Subject: rpciod process is blocked in nfs_release_page waiting for nfs_commit_inode to complete
Date: Fri, 15 Jun 2012 22:54:10 +1000
Message-ID: <1339764850.30233.11.camel@serendib>
Hi All,
Can I please get your feedback on the following?
rpciod is blocked:
------------------------------------------------------------
crash> bt 2507
PID: 2507 TASK: ffff88103691ab40 CPU: 14 COMMAND: "rpciod/14"
#0 [ffff8810343bf2f0] schedule at ffffffff814dabd9
#1 [ffff8810343bf3b8] nfs_wait_bit_killable at ffffffffa038fc04 [nfs]
#2 [ffff8810343bf3c8] __wait_on_bit at ffffffff814dbc2f
#3 [ffff8810343bf418] out_of_line_wait_on_bit at ffffffff814dbcd8
#4 [ffff8810343bf488] nfs_commit_inode at ffffffffa039e0c1 [nfs]
#5 [ffff8810343bf4f8] nfs_release_page at ffffffffa038bef6 [nfs]
#6 [ffff8810343bf528] try_to_release_page at ffffffff8110c670
#7 [ffff8810343bf538] shrink_page_list.clone.0 at ffffffff81126271
#8 [ffff8810343bf668] shrink_inactive_list at ffffffff81126638
#9 [ffff8810343bf818] shrink_zone at ffffffff8112788f
#10 [ffff8810343bf8c8] do_try_to_free_pages at ffffffff81127b1e
#11 [ffff8810343bf958] try_to_free_pages at ffffffff8112812f
#12 [ffff8810343bfa08] __alloc_pages_nodemask at ffffffff8111fdad
#13 [ffff8810343bfb28] kmem_getpages at ffffffff81159942
#14 [ffff8810343bfb58] fallback_alloc at ffffffff8115a55a
#15 [ffff8810343bfbd8] ____cache_alloc_node at ffffffff8115a2d9
#16 [ffff8810343bfc38] kmem_cache_alloc at ffffffff8115b09b
#17 [ffff8810343bfc78] sk_prot_alloc at ffffffff81411808
#18 [ffff8810343bfcb8] sk_alloc at ffffffff8141197c
#19 [ffff8810343bfce8] inet_create at ffffffff81483ba6
#20 [ffff8810343bfd38] __sock_create at ffffffff8140b4a7
#21 [ffff8810343bfd98] xs_create_sock at ffffffffa01f649b [sunrpc]
#22 [ffff8810343bfdd8] xs_tcp_setup_socket at ffffffffa01f6965 [sunrpc]
#23 [ffff8810343bfe38] worker_thread at ffffffff810887d0
#24 [ffff8810343bfee8] kthread at ffffffff8108dd96
#25 [ffff8810343bff48] kernel_thread at ffffffff8100c1ca
------------------------------------------------------------
nfs_release_page() gives the kswapd process an exemption from being blocked.
Should we do the same for rpciod? Otherwise rpciod could end up in a
deadlock where it cannot continue without freeing memory that will only
become available when rpciod does its work:
------------------------------------------------------------
/*
 * Attempt to release the private state associated with a page
 * - Called if either PG_private or PG_fscache is set on the page
 * - Caller holds page lock
 * - Return true (may release page) or false (may not)
 */
static int nfs_release_page(struct page *page, gfp_t gfp)
{
	struct address_space *mapping = page->mapping;

	dfprintk(PAGECACHE, "NFS: release_page(%p)\n", page);

	/* Only do I/O if gfp is a superset of GFP_KERNEL */
	if (mapping && (gfp & GFP_KERNEL) == GFP_KERNEL) {
		int how = FLUSH_SYNC;

		/* Don't let kswapd deadlock waiting for OOM RPC calls */
		if (current_is_kswapd())
			how = 0;
		nfs_commit_inode(mapping->host, how);
	}
	/* If PagePrivate() is set, then the page is not freeable */
	if (PagePrivate(page))
		return 0;
	return nfs_fscache_release_page(page, gfp);
}
------------------------------------------------------------
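One possible shape for such an rpciod exemption, sketched below purely as an
illustration (this is not a tested patch, and the PF_FSTRANS flag is not
currently set anywhere in sunrpc on this kernel): have the transport
socket-setup path mark itself with a per-task flag, and treat that flag like
the existing kswapd check so the commit becomes non-blocking:
------------------------------------------------------------
		/* Don't let kswapd, or reclaim driven from rpciod's
		 * socket setup, deadlock waiting for OOM RPC calls */
		if (current_is_kswapd() || (current->flags & PF_FSTRANS))
			how = 0;
------------------------------------------------------------
xs_tcp_setup_socket() (or xs_create_sock()) would then set and clear
PF_FSTRANS in current->flags around the connect attempt.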
Another option is to change the priority of the memory allocation:
net/ipv4/af_inet.c
------------------------------------------------------------
static int inet_create(struct net *net, struct socket *sock, int protocol,
		       int kern)
{
...
	sk = sk_alloc(net, PF_INET, GFP_KERNEL, answer_prot);
------------------------------------------------------------
Considering this is generic net code, I assume the GFP_KERNEL will not
be replaced with GFP_ATOMIC.
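On newer mainline kernels the same effect could be had without touching the
generic socket code at all, by restricting the allocation context from the
sunrpc side instead. The sketch below assumes the mainline memalloc scope API
(memalloc_nofs_save()/memalloc_nofs_restore()), which this 2.6.32-based
kernel does not have:
------------------------------------------------------------
	/* Sketch only: any reclaim triggered while creating the socket
	 * is forbidden from recursing into filesystem writeback, so it
	 * cannot end up back in nfs_release_page()/nfs_commit_inode(). */
	unsigned int nofs_flags = memalloc_nofs_save();

	err = __sock_create(xprt->xprt_net, family, type, protocol, &sock, 1);

	memalloc_nofs_restore(nofs_flags);
------------------------------------------------------------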
NOTE: this is on the RHEL 6.1 kernel (2.6.32-131.6.1), which apparently uses
the 'legacy' workqueue code.
cya,
#