From: Al Viro
Subject: [RFC][PATCH] nfsd regression since delayed fput()
Date: Wed, 16 Oct 2013 19:52:09 +0100
Message-ID: <20131016185209.GO13318@ZenIV.linux.org.uk>
To: Linus Torvalds
Cc: linux-fsdevel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

	Background: nfsd v[23] has had a throughput regression since
delayed fput went in; every read or write ends up doing fput(), and we
get a pair of extra context switches out of that (plus quite a bit of
work in queue_work itself, apparently).  Using schedule_delayed_work()
gives the list a chance to accumulate a bit before we do __fput() on
all of them.

	I'm not too happy about that solution, but... on at least one
real-world setup it recovers the roughly 10% throughput loss we got
from the switch to delayed fput.  If anybody has better ideas, I'll
gladly take those instead - this approach feels kludgy ;-/

Signed-off-by: Al Viro
---
diff --git a/fs/file_table.c b/fs/file_table.c
index abdd15a..0ca5fa4 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -317,7 +317,7 @@ void fput(struct file *file)
 		}
 
 		if (llist_add(&file->f_u.fu_llist, &delayed_fput_list))
-			schedule_work(&delayed_fput_work);
+			schedule_delayed_work(&delayed_fput_work, 1);
 	}
 }
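
For reference, the batching machinery that hunk plugs into looks roughly
like this - a sketch from memory, not part of the patch, and the exact
code in fs/file_table.c may differ.  Note in particular that using
schedule_delayed_work() presumably also means delayed_fput_work has to
be declared with DECLARE_DELAYED_WORK rather than plain DECLARE_WORK:

static LLIST_HEAD(delayed_fput_list);

static void delayed_fput(struct work_struct *unused)
{
	/* grab everything queued so far in one atomic swap... */
	struct llist_node *node = llist_del_all(&delayed_fput_list);
	struct llist_node *next;

	/* ...and do the real __fput() on each accumulated file */
	for (; node; node = next) {
		next = llist_next(node);
		__fput(llist_entry(node, struct file, f_u.fu_llist));
	}
}

static DECLARE_DELAYED_WORK(delayed_fput_work, delayed_fput);

llist_add() returns true only when the list was empty beforehand, so
only the first fput() after a drain (re)schedules the work; the 1-jiffy
delay then lets further files pile up on delayed_fput_list before
delayed_fput() runs and flushes them all in one go.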