public inbox for linux-kernel@vger.kernel.org
From: Vladislav Bolkhovitin <vst@vlnb.net>
To: Wu Fengguang <wfg@linux.intel.com>
Cc: Jens Axboe <jens.axboe@oracle.com>,
	Jeff Moyer <jmoyer@redhat.com>,
	"Vitaly V. Bursov" <vitalyb@telenet.dn.ua>,
	linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org
Subject: Re: Slow file transfer speeds with CFQ IO scheduler in some cases
Date: Wed, 18 Feb 2009 21:14:12 +0300	[thread overview]
Message-ID: <499C4FF4.6000006@vlnb.net> (raw)
In-Reply-To: <499B09FB.1070909@vlnb.net>

[-- Attachment #1: Type: text/plain, Size: 4161 bytes --]

Vladislav Bolkhovitin, on 02/17/2009 10:03 PM wrote:
> Wu Fengguang, on 02/16/2009 05:34 AM wrote:
>> On Fri, Feb 13, 2009 at 11:08:25PM +0300, Vladislav Bolkhovitin wrote:
>>> Wu Fengguang, on 02/13/2009 04:57 AM wrote:
>>>> On Thu, Feb 12, 2009 at 09:35:18PM +0300, Vladislav Bolkhovitin wrote:
>>>>> Sorry for such a long delay. There were many other activities I had
>>>>> to do first, and I had to be sure I didn't miss anything.
>>>>>
>>>>> We didn't use NFS; we used SCST (http://scst.sourceforge.net) with the
>>>>> iSCSI-SCST target driver. Its architecture is similar to NFS: N threads
>>>>> (N=5 in this case) handle IO from remote initiators (clients) coming
>>>>> over the wire via the iSCSI protocol. In addition, SCST has a patch
>>>>> called export_alloc_io_context (see http://lkml.org/lkml/2008/12/10/282),
>>>>> which lets the IO threads queue IO using a single IO context, so we can
>>>>> see whether context RA can replace grouping the IO threads in a single
>>>>> IO context.
>>>>>
>>>>> Unfortunately, the results are negative. We found neither any advantage
>>>>> of context RA over the current RA implementation, nor any possibility
>>>>> for context RA to replace grouping the IO threads in a single IO
>>>>> context.
>>>>>
>>>>> The setup on the target (server) was the following: 2 SATA drives
>>>>> grouped in an md RAID-0 with an average local read throughput of
>>>>> ~120MB/s ("dd if=/dev/zero of=/dev/md0 bs=1M count=20000" outputs
>>>>> "20971520000 bytes (21 GB) copied, 177,742 s, 118 MB/s"). The md device
>>>>> was split into 3 partitions. The first partition was 10% of the space
>>>>> at the beginning of the device, the last partition was 10% of the space
>>>>> at the end of the device, and the middle one was the rest of the space
>>>>> between them. The first and the last partitions were then exported to
>>>>> the initiator (client), where they appeared as /dev/sdb and /dev/sdc
>>>>> respectively.
>>>> Vladislav, Thank you for the benchmarks! I'm very interested in
>>>> optimizing your workload and figuring out what happens underneath.
>>>>
>>>> Are the client and server two standalone boxes connected by GBE?
>>> Yes, they are directly connected using GbE.
>>>
>>>> When you set readahead sizes in the benchmarks, you are setting them
>>>> on the server side? I.e. "linux-4dtq" is the SCST server?
>>> Yes, it's the server. On the client all parameters were left at their defaults.
>>>
>>>> What's the
>>>> client side readahead size?
>>> Default, i.e. 128K
>>>
>>>> It would help a lot to debug readahead if you can provide the
>>>> server side readahead stats and trace log for the worst case.
>>>> This will automatically answer the above questions as well as disclose
>>>> the micro-behavior of readahead:
>>>>
>>>>         mount -t debugfs none /sys/kernel/debug
>>>>
>>>>         echo > /sys/kernel/debug/readahead/stats # reset counters
>>>>         # do benchmark
>>>>         cat /sys/kernel/debug/readahead/stats
>>>>
>>>>         echo 1 > /sys/kernel/debug/readahead/trace_enable
>>>>         # do micro-benchmark, i.e. run the same benchmark for a short time
>>>>         echo 0 > /sys/kernel/debug/readahead/trace_enable
>>>>         dmesg
>>>>
>>>> The above readahead trace should help find out how the client side
>>>> sequential reads convert into server side random reads, and how we can
>>>> prevent that.
>>> We will do it as soon as we have a free window on that system.
>> Thank you. For NFS, the client side read/readahead requests will be
>> split into units of rsize, which will be served by a pool of nfsd
>> threads concurrently and possibly out of order. Does SCST have the same
>> process? If so, what's the rsize value for your SCST benchmarks?
> 
> No, there is no such splitting in SCST. The client sees raw SCSI disks
> from the server, and whatever the client sends is forwarded by the
> server, directly and in full size, to its backstorage using a regular
> buffered read() (fd->f_op->aio_read() followed by
> wait_on_retry_sync_kiocb()/wait_on_sync_kiocb(), to be precise).
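
For reference, that read path follows the standard sync-over-aio pattern;
below is only a sketch modeled on do_sync_read() in 2.6.x kernels, not the
actual SCST code (the helper name is made up, and it assumes
wait_on_retry_sync_kiocb() is available to the caller):

        #include <linux/fs.h>
        #include <linux/aio.h>
        #include <linux/uio.h>

        /* Sketch only: issue a buffered read through ->aio_read() and wait
         * for it synchronously, as do_sync_read() does in 2.6.x kernels. */
        static ssize_t backstorage_read(struct file *filp, char __user *buf,
                                        size_t len, loff_t *ppos)
        {
                struct iovec iov = { .iov_base = buf, .iov_len = len };
                struct kiocb kiocb;
                ssize_t ret;

                init_sync_kiocb(&kiocb, filp);
                kiocb.ki_pos = *ppos;
                kiocb.ki_left = len;

                for (;;) {
                        ret = filp->f_op->aio_read(&kiocb, &iov, 1, kiocb.ki_pos);
                        if (ret != -EIOCBRETRY)
                                break;
                        /* pages not ready yet: wait and retry */
                        wait_on_retry_sync_kiocb(&kiocb);
                }

                if (ret == -EIOCBQUEUED)
                        ret = wait_on_sync_kiocb(&kiocb);
                *ppos = kiocb.ki_pos;
                return ret;
        }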

Update: we ran the same tests with the deadline I/O scheduler and got
roughly the same results as with CFQ; see the attachment.
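
For reference, the io_context sharing that the export_alloc_io_context
patch enables (mentioned in the quoted message above) follows roughly the
pattern below. This is only a sketch based on the 2.6.27 iocontext API,
with made-up helper names; the actual SCST code may differ:

        #include <linux/blkdev.h>
        #include <linux/iocontext.h>
        #include <linux/errno.h>

        /* One io_context shared by all N I/O threads, so that CFQ sees
         * their requests as coming from a single "process". */
        static struct io_context *shared_ioc;

        /* Called once before the I/O threads are started.  Assumes
         * alloc_io_context() is exported, which is what the
         * export_alloc_io_context patch does. */
        static int init_shared_ioc(void)
        {
                shared_ioc = alloc_io_context(GFP_KERNEL, -1);
                return shared_ioc ? 0 : -ENOMEM;
        }

        /* Called by each I/O thread before it submits any I/O. */
        static void attach_shared_ioc(void)
        {
                struct io_context *ioc = ioc_task_link(shared_ioc);

                if (ioc) {
                        put_io_context(current->io_context);
                        current->io_context = ioc;
                }
        }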

> Thanks,
> Vlad
> 
> 


[-- Attachment #2: 2.6.27.12-except_export+readahead-4M-deadline.txt --]
[-- Type: text/plain, Size: 2395 bytes --]

linux-4dtq:~ # uname -r
2.6.27.12-except_export+readahead
- scheduler = deadline
- RA = 4M


linux-4dtq:~ # free
             total       used       free     shared    buffers     cached
Mem:        508168     111288     396880          0       4476      62648
-/+ buffers/cache:      44164     464004
Swap:            0          0          0


linux-4dtq:~ # echo deadline > /sys/block/sdb/queue/scheduler
linux-4dtq:~ # echo deadline > /sys/block/sda/queue/scheduler
linux-4dtq:~ # cat /sys/block/sdb/queue/scheduler
noop anticipatory [deadline] cfq
linux-4dtq:~ # cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq

linux-4dtq:~ # echo 1 > /sys/block/sda/queue/context_readahead
linux-4dtq:~ # echo 1 > /sys/block/sdb/queue/context_readahead
linux-4dtq:~ # cat /sys/block/sdb/queue/context_readahead
1
linux-4dtq:~ # cat /sys/block/sda/queue/context_readahead
1

linux-4dtq:~ # blockdev --setra 4096 /dev/sda
linux-4dtq:~ # blockdev --setra 4096 /dev/sdb
linux-4dtq:~ # blockdev --getra /dev/sdb
4096
linux-4dtq:~ # blockdev --getra /dev/sda
4096

linux-4dtq:~ # mdadm --assemble /dev/md0 /dev/sd[ab]
mdadm: /dev/md/0 has been started with 2 drives.
linux-4dtq:~ # vgchange -a y
  3 logical volume(s) in volume group "raid" now active
linux-4dtq:~ # lvs
  LV   VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  1st  raid -wi-a-  46.00G
  2nd  raid -wi-a- 374.00G
  3rd  raid -wi-a-  46.00G

scst: Using security group "Default" for initiator "iqn.1996-04.de.suse:01:aadab8bc4be5"
iscsi-scst:		Negotiated parameters: InitialR2T No, ImmediateData Yes, MaxConnections 1, MaxRecvDataSegmentLength 262144, MaxXmitDataSegmentLength 131072,
iscsi-scst:     MaxBurstLength 1048576, FirstBurstLength 262144, DefaultTime2Wait 2, DefaultTime2Retain 0,
iscsi-scst:     MaxOutstandingR2T 1, DataPDUInOrder Yes, DataSequenceInOrder Yes, ErrorRecoveryLevel 0,
iscsi-scst:     HeaderDigest None, DataDigest None, OFMarker No, IFMarker No, OFMarkInt 2048, IFMarkInt 2048

1) dd if=/dev/sdb of=/dev/null bs=64K count=80000
a) 54.1 MB/s
b) 55.6 MB/s
c) 54.3 MB/s

2) dd if=/dev/sdc of=/dev/null bs=64K count=80000
a) 71.3 MB/s
b) 73.8 MB/s
c) 72.7 MB/s

3) Run at the same time:
while true; do dd if=/dev/sdc of=/dev/null bs=64K; done
dd if=/dev/sdb of=/dev/null bs=64K count=80000
a) 4.3 MB/s
b) 5.0 MB/s
c) 5.2 MB/s
