From: "Vitaly V. Bursov" <vitalyb@telenet.dn.ua>
To: Jeff Moyer <jmoyer@redhat.com>
Cc: Jens Axboe <jens.axboe@oracle.com>, linux-kernel@vger.kernel.org
Subject: Re: Slow file transfer speeds with CFQ IO scheduler in some cases
Date: Thu, 13 Nov 2008 20:33:28 +0200
Message-ID: <491C72F8.9030605@telenet.dn.ua>
In-Reply-To: <x49skpvg0hw.fsf@segfault.boston.devel.redhat.com>

Jeff Moyer wrote:

>> The 2.6.18-openvz-rhel5 kernel gives me 9 MB/s, and with 2.6.27 I get ~40-50 MB/s
>> instead of 80-90 MB/s, as there should be no bottlenecks except the network.
> 
> Reading back through your original problem report, I'm not seeing what
> your numbers were with deadline; you simply mentioned that it "fixed"
> the problem.  Are you sure you'll get 80-90MB/s for this?  The local
> disks in my configuration, when performing a dd on the server system,
> can produce numbers around 85 MB/s, yet the NFS performance is around 65
> MB/s (and this is a gigabit network).

I have a pair of 1TB HDDs, each able to deliver around 100 MB/s for
sequential reads, and a PCI-E Ethernet adapter that does not share
a bus with the SATA controller.

2.6.18-openvz-rhel5, loopback-mounted NFS with deadline gives:
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 7.12014 s, 147 MB/s

and the same system over the network:
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 14.3197 s, 73.2 MB/s

and over the network with cfq:
dd if=samefile of=/dev/null bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 59.3116 s, 17.7 MB/s

and over the network with the file cached on the server side:
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 12.4204 s, 84.4 MB/s
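
For reference, the cold-cache runs above boil down to roughly the
following (the mount point and file name are placeholders, not the
exact paths I used):

# on the server: flush the page cache so the read is really cold
sync; echo 3 > /proc/sys/vm/drop_caches
# then read the file through the NFS mount (loopback or remote)
dd if=/mnt/nfs/testfile of=/dev/null bs=1M count=1000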

Well, 73 is still not 80, but much better than 17 (or, even
worse, 9).

I have 8 NFS threads by default here.

I got 17 MB/s here because of HZ=1000; also, 2.6.27 has performed
better in every heavy NFS transfer test so far.
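
HZ is fixed at kernel build time; a quick way to check it (the config
path depends on the distro) is something like:

grep CONFIG_HZ /boot/config-$(uname -r)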

Changed network parameters:

net.core.wmem_default = 1048576
net.core.wmem_max = 1048576
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576

net.ipv4.tcp_mem = 1048576 1048576 4194304
net.ipv4.tcp_rmem = 1048576 1048576 4194304
net.ipv4.tcp_wmem = 1048576 1048576 4194304

MTU: 6000

Sorry, I didn't mention these in the original post.
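
These are just the standard knobs; they can be applied with sysctl and
ip, roughly like this (eth0 is a placeholder for the actual interface):

sysctl -w net.core.rmem_max=1048576
sysctl -w net.core.wmem_max=1048576
sysctl -w net.ipv4.tcp_rmem="1048576 1048576 4194304"
sysctl -w net.ipv4.tcp_wmem="1048576 1048576 4194304"
ip link set eth0 mtu 6000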

>>> Single dd performing a cold cache read of a 1GB file from an
>>> nfs server.  read_ahead_kb is 128 (the default) for all tests.
>>> cfq-cc denotes that the cfq scheduler was patched with the close
>>> cooperator patch.  All numbers are in MB/s.
>>>
>>> nfsd threads|   1  |  2   |  4   |  8  
>>> ----------------------------------------
>>> deadline    | 65.3 | 52.2 | 46.7 | 46.1
>>> cfq         | 64.1 | 57.8 | 53.3 | 46.9
>>> cfq-cc      | 65.7 | 55.8 | 52.1 | 40.3
>>>
>>> So, in my configuration, cfq and deadline both degrade in performance as
>>> the number of nfsd threads is increased.  The close cooperator patch
>>> seems to hurt a bit more at 8 threads, instead of helping;  I'm not sure
>>> why that is.
>> Interesting, I'll try changing the number of nfsd threads and see how it
>> performs on my setup. Setting it to 1 seems like a good idea for cfq on
>> non-high-end hardware.
> 
> I think you're looking at this backwards.  I'm no nfsd tuning expert,
> but I'm pretty sure that you would tune the number of threads based on
> the number of active clients and the amount of memory on the server
> (since each thread has to reserve memory for incoming requests).

I understand this. It's just one of the parameters that completely
slipped my mind :)

>> I'll look into it this evening.
> 
> The real reason I tried varying the number of nfsd threads was to show,
> at least for CFQ, that spreading a sequential I/O load across multiple
> threads would result in suboptimal performance.  What I found, instead,
> was that it hurt performance for cfq *and* deadline (and that the close
> cooperator patches did not help in this regard).  This tells me that
> there is something else which is affecting the performance.  What that
> something is I don't know, I think we'd have to take a closer look at
> what's going on on the server to figure it out.
> 

I've tested this as well...

loopback (MB/s):
nfsd threads |  1 |  2 |  4 |  8 |  16
----------------------------------------
deadline-vz  |  97|  92| 128| 145| 148
deadline     | 145| 160| 173| 170| 150
cfq-cc       | 137| 150| 167| 157| 133
cfq          |  26|  28|  34|  38|  38

network (MB/s):
nfsd threads |  1 |  2 |   4|   8|  16
----------------------------------------
deadline-vz  |  68|  69|  75|  73|  72
deadline     |  91|  89|  87|  88|  84
cfq-cc       |  91|  89|  88|  82|  74
cfq          |  25|  28|  32|  36|  34


deadline-vz - deadline with 2.6.18-openvz-rhel5 kernel
deadline, cfq, cfq-cc - linux-2.6.27.5
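
The usual knobs for switching between these configurations are roughly
the following (sda stands in for whichever disk holds the exported
filesystem):

echo deadline > /sys/block/sda/queue/scheduler
rpc.nfsd 8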

Yep, it's not as simple as I thought...

-- 
Regards,
Vitaly
