From: Jeff Layton <jlayton@redhat.com>
To: Jeff Moyer <jmoyer@redhat.com>
Cc: Jens Axboe <jens.axboe@oracle.com>,
	"Vitaly V. Bursov" <vitalyb@telenet.dn.ua>,
	linux-kernel@vger.kernel.org, bfields@fieldses.org
Subject: Re: Slow file transfer speeds with CFQ IO scheduler in some cases
Date: Tue, 11 Nov 2008 16:59:31 -0500
Message-ID: <20081111165931.6f98401b@tleilax.poochiereds.net>
In-Reply-To: <20081111164104.48f4dbd8@tleilax.poochiereds.net>

On Tue, 11 Nov 2008 16:41:04 -0500
Jeff Layton <jlayton@redhat.com> wrote:

> On Tue, 11 Nov 2008 14:36:07 -0500
> Jeff Moyer <jmoyer@redhat.com> wrote:
> 
> > Jens Axboe <jens.axboe@oracle.com> writes:
> > 
> > > OK, that looks better. Can I talk you into just trying this little
> > > patch, just to see what kind of performance that yields? Remove the cfq
> > > patch first. I would have patched nfsd only, but this is just a quick'n
> > > dirty.
> > 
> > I went ahead and gave it a shot.  The updated CFQ patch with no I/O
> > context sharing does about 40MB/s reading a 1GB file.  Backing that
> > patch out, and then adding the patch to share io_context's between
> > kthreads yields 45MB/s.
> > 
> 
> Here's a quick and dirty patch to make all of the nfsd's have the same
> io_context. Comments appreciated -- I'm not that familiar with the IO
> scheduling code. If this looks good, I'll clean it up, add some
> comments and formally send it to Bruce.
> 

No sooner do I send it out than I find a bug. We need to eventually
put the io_context reference we get. This should be more correct:

----------------[snip]-------------------

From d0ee67045a12c677883f77791c6f260588c7b41f Mon Sep 17 00:00:00 2001
From: Jeff Layton <jlayton@redhat.com>
Date: Tue, 11 Nov 2008 16:54:16 -0500
Subject: [PATCH] knfsd: make all nfsd threads share an io_context

This apparently makes the I/O scheduler treat the threads as a group
which helps throughput when sequential I/O is multiplexed over several
nfsd's.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
---
 fs/nfsd/nfssvc.c |   30 ++++++++++++++++++++++++++++++
 1 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 07e4f5d..5cd99f9 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -22,6 +22,7 @@
 #include <linux/freezer.h>
 #include <linux/fs_struct.h>
 #include <linux/kthread.h>
+#include <linux/iocontext.h>
 
 #include <linux/sunrpc/types.h>
 #include <linux/sunrpc/stats.h>
@@ -42,6 +43,7 @@ static int			nfsd(void *vrqstp);
 struct timeval			nfssvc_boot;
 static atomic_t			nfsd_busy;
 static unsigned long		nfsd_last_call;
+static struct io_context	*nfsd_io_context;
 static DEFINE_SPINLOCK(nfsd_call_lock);
 
 /*
@@ -173,6 +175,10 @@ static void nfsd_last_thread(struct svc_serv *serv)
 	nfsd_serv = NULL;
 	nfsd_racache_shutdown();
 	nfs4_state_shutdown();
+	if (nfsd_io_context) {
+		put_io_context(nfsd_io_context);
+		nfsd_io_context = NULL;
+	}
 
 	printk(KERN_WARNING "nfsd: last server has exited, flushing export "
 			    "cache\n");
@@ -398,6 +404,28 @@ update_thread_usage(int busy_threads)
 }
 
 /*
+ * should be called while holding nfsd_mutex
+ */
+static void
+nfsd_set_io_context(void)
+{
+	int cpu, node;
+
+	if (!nfsd_io_context) {
+		cpu = get_cpu();
+		node = cpu_to_node(cpu);
+		put_cpu();
+
+		/*
+		 * get_io_context can return NULL if the alloc_context fails.
+		 * That's not technically fatal here, so we don't bother to
+		 * check for it.
+		 */
+		nfsd_io_context = get_io_context(GFP_KERNEL, node);
+	} else
+		copy_io_context(&current->io_context, &nfsd_io_context);
+}
+/*
  * This is the NFS server kernel thread
  */
 static int
@@ -410,6 +438,8 @@ nfsd(void *vrqstp)
 	/* Lock module and set up kernel thread */
 	mutex_lock(&nfsd_mutex);
 
+	nfsd_set_io_context();
+
 	/* At this point, the thread shares current->fs
 	 * with the init process. We need to create files with a
 	 * umask of 0 instead of init's umask. */
-- 
1.5.5.1
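For anyone skimming the patch, the lifetime rule it implements can be sketched in plain userspace C. This is only an illustrative analogue of the first-thread-allocates / later-threads-take-a-reference / last-put-frees pattern; `acquire_ctx`, `release_ctx`, and `struct io_ctx` are made-up stand-ins, not the kernel's io_context API.

```c
#include <stdlib.h>

/* Hypothetical stand-in for the kernel's refcounted io_context. */
struct io_ctx {
	int refcount;
};

/* Plays the role of the patch's nfsd_io_context static. */
static struct io_ctx *shared_ctx;

/*
 * First caller allocates and holds the initial reference (like
 * get_io_context() for the first nfsd thread); later callers just
 * bump the count (like copy_io_context() for the rest).
 */
static struct io_ctx *acquire_ctx(void)
{
	if (!shared_ctx) {
		shared_ctx = calloc(1, sizeof(*shared_ctx));
		if (!shared_ctx)
			return NULL;	/* allocation failure is non-fatal, as in the patch */
		shared_ctx->refcount = 1;
	} else {
		shared_ctx->refcount++;
	}
	return shared_ctx;
}

/*
 * Mirrors the put_io_context() added to nfsd_last_thread(): drop the
 * reference and free the context on the final put.
 */
static void release_ctx(void)
{
	if (shared_ctx && --shared_ctx->refcount == 0) {
		free(shared_ctx);
		shared_ctx = NULL;
	}
}
```

The point of the NULL-reset on the last put is the same as in the patch: a later restart of the thread pool must see no stale context and allocate a fresh one.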

