* CSCAN vs CFQ I/O scheduler benchmark results
@ 2006-06-09 19:30 Vishal Patil
2006-06-10 2:15 ` Vishal Patil
0 siblings, 1 reply; 9+ messages in thread
From: Vishal Patil @ 2006-06-09 19:30 UTC (permalink / raw)
To: linux-kernel
Hello,
I ran the sysbench benchmark to compare the CSCAN I/O scheduler
against the CFQ scheduler; the results are below. They are
particularly interesting in the case of sequential writes and the
random workloads.
Latency (seconds)

        seq      seq      seq      rnd      rnd      rnd
        reads    writes   r + w    reads    writes   r + w
--------------------------------------------------------------
CFQ     0.0116   0.0164   0.0107   0.1178   0.0423   0.0605
CSCAN   0.0148   0.0092   0.0169   0.1043   0.0473   0.0732
Throughput (MB/s)

        seq      seq      seq      rnd      rnd      rnd
        reads    writes   r + w    reads    writes   r + w
--------------------------------------------------------------
CFQ     19.062   15.251   22.127   2.1197   1.0032   1.376
CSCAN   14.553   22.108   14.72    2.394    0.9304   1.399
The machine configuration is as follows:
CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz
Memory: 1027500 KB (1 GB)
Filesystem: ext3
Kernel: 2.6.16.2
If interested, you can have a look at the raw data at
http://www.google.com/notebook/public/14554179817860061151/BDQtXSwoQ2_mdxLgh
- Vishal
^ permalink raw reply [flat|nested] 9+ messages in thread
* CSCAN vs CFQ I/O scheduler benchmark results
2006-06-09 19:30 CSCAN vs CFQ I/O scheduler benchmark results Vishal Patil
@ 2006-06-10 2:15 ` Vishal Patil
2006-06-11 10:09 ` Jan Engelhardt
2006-06-11 18:58 ` Jens Axboe
0 siblings, 2 replies; 9+ messages in thread
From: Vishal Patil @ 2006-06-10 2:15 UTC (permalink / raw)
To: Linux Kernel Mailing List; +Cc: Jan Engelhardt, Jens Axboe
The previous mail got scrambled, so I am resending it.
Hello,
I ran the sysbench benchmark to compare the CSCAN I/O scheduler
against the CFQ scheduler; the results are below. They are
particularly interesting in the case of sequential writes and the
random workloads.
Latency (seconds)

        seq      seq      seq      rnd      rnd      rnd
        reads    writes   r + w    reads    writes   r + w
--------------------------------------------------------------
CFQ     0.0116   0.0164   0.0107   0.1178   0.0423   0.0605
CSCAN   0.0148   0.0092   0.0169   0.1043   0.0473   0.0732
Throughput (MB/s)

        seq      seq      seq      rnd      rnd      rnd
        reads    writes   r + w    reads    writes   r + w
--------------------------------------------------------------
CFQ     19.062   15.251   22.127   2.1197   1.0032   1.376
CSCAN   14.553   22.108   14.72    2.394    0.9304   1.399
The machine configuration is as follows:
CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz
Memory: 1027500 KB (1 GB)
Filesystem: ext3
Kernel: 2.6.16.2
If interested, you can have a look at the raw data at
http://www.google.com/notebook/public/14554179817860061151/BDQtXSwoQ2_mdxLgh
- Vishal
--
Success is mainly about failing a lot.
* Re: CSCAN vs CFQ I/O scheduler benchmark results
2006-06-10 2:15 ` Vishal Patil
@ 2006-06-11 10:09 ` Jan Engelhardt
2006-06-11 18:58 ` Jens Axboe
1 sibling, 0 replies; 9+ messages in thread
From: Jan Engelhardt @ 2006-06-11 10:09 UTC (permalink / raw)
To: Vishal Patil; +Cc: Linux Kernel Mailing List, Jens Axboe
> The previous mail got scrambled, so I am resending it.
>
Still. Here are the values, fit for 80 cols.
Latency (seconds)
CFQ CSCAN
seq reads 0.0116 0.0148
seq writes 0.0164 0.0092
seq r+w 0.0107 0.0169
rnd reads 0.1178 0.1043
rnd writes 0.0423 0.0473
rnd r+w 0.0605 0.0732
Throughput MB/sec
CFQ CSCAN
seq reads 19.062 14.553
seq writes 15.251 22.108
seq r+w 22.127 14.72
rnd reads 2.1197 2.394
rnd writes 1.0032 0.9304
rnd r+w 1.376 1.399
CSCAN excels at sequential writes, so it seems like a good idea to use
it on the target drive when doing `rsync disk1/ disk2/`, especially
for large files.
Jan Engelhardt
--
* Re: CSCAN vs CFQ I/O scheduler benchmark results
2006-06-10 2:15 ` Vishal Patil
2006-06-11 10:09 ` Jan Engelhardt
@ 2006-06-11 18:58 ` Jens Axboe
2006-06-11 23:47 ` Vishal Patil
1 sibling, 1 reply; 9+ messages in thread
From: Jens Axboe @ 2006-06-11 18:58 UTC (permalink / raw)
To: Vishal Patil; +Cc: Linux Kernel Mailing List, Jan Engelhardt
On Fri, Jun 09 2006, Vishal Patil wrote:
> The machine configuration is as follows:
> CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz
> Memory: 1027500 KB (1 GB)
> Filesystem: ext3
> Kernel: 2.6.16.2
You don't mention the storage used, which is quite relevant.
If you have the time, please rerun with the latest 2.6.17-rc6-gitX. Also,
I'm not sure why you think CSCAN is a good scheduling algorithm; in
general it may be fine, but it allows trivial non-root 'DoS' attacks. Any
of the non-noop Linux I/O schedulers is a better choice, imo.
--
Jens Axboe
* Re: CSCAN vs CFQ I/O scheduler benchmark results
2006-06-11 18:58 ` Jens Axboe
@ 2006-06-11 23:47 ` Vishal Patil
2006-06-12 6:41 ` Jens Axboe
0 siblings, 1 reply; 9+ messages in thread
From: Vishal Patil @ 2006-06-11 23:47 UTC (permalink / raw)
To: Jens Axboe; +Cc: Linux Kernel Mailing List, Jan Engelhardt
Jan
I ran the performance benchmark on an IBM machine with the following
hard drive attached:
cat /proc/ide/hda/model
ST340014A
Also note that the CSCAN implementation uses rbtrees, so the time
complexity of the different operations is O(log(n)) rather than O(n);
that might be why we are getting good values, especially in the case
of sequential writes and the random workloads.
I will try to make measurements using 2.6.17-rc6-gitX by next weekend.
Thanks for the help and input, folks.
- Vishal
On 6/11/06, Jens Axboe <axboe@suse.de> wrote:
> On Fri, Jun 09 2006, Vishal Patil wrote:
> > The machine configuration is as follows:
> > CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz
> > Memory: 1027500 KB (1 GB)
> > Filesystem: ext3
> > Kernel: 2.6.16.2
>
> You don't mention the storage used, which is quite relevant.
>
> If you have the time, please rerun with the latest 2.6.17-rc6-gitX. Also,
> I'm not sure why you think CSCAN is a good scheduling algorithm; in
> general it may be fine, but it allows trivial non-root 'DoS' attacks. Any
> of the non-noop Linux I/O schedulers is a better choice, imo.
>
> --
> Jens Axboe
>
>
--
Success is mainly about failing a lot.
* Re: CSCAN vs CFQ I/O scheduler benchmark results
2006-06-11 23:47 ` Vishal Patil
@ 2006-06-12 6:41 ` Jens Axboe
2006-06-12 17:38 ` Vishal Patil
0 siblings, 1 reply; 9+ messages in thread
From: Jens Axboe @ 2006-06-12 6:41 UTC (permalink / raw)
To: Vishal Patil; +Cc: Linux Kernel Mailing List, Jan Engelhardt
(please don't top post)
On Sun, Jun 11 2006, Vishal Patil wrote:
> Jan
>
> I ran the performance benchmark on an IBM machine with the following
> hard drive attached:
>
> cat /proc/ide/hda/model
> ST340014A
Ok, so plain IDE.
> Also note that the CSCAN implementation uses rbtrees, so the time
> complexity of the different operations is O(log(n)) rather than O(n);
> that might be why we are getting good values, especially in the case
> of sequential writes and the random workloads.
Extremely unlikely. The sort overhead is complete noise in a test such
as yours; an O(n^2) implementation would likely run just as fast.
--
Jens Axboe
* Re: CSCAN vs CFQ I/O scheduler benchmark results
2006-06-12 6:41 ` Jens Axboe
@ 2006-06-12 17:38 ` Vishal Patil
2006-06-12 22:21 ` Nate Diller
2006-06-13 5:46 ` Jens Axboe
0 siblings, 2 replies; 9+ messages in thread
From: Vishal Patil @ 2006-06-12 17:38 UTC (permalink / raw)
To: Jens Axboe; +Cc: Linux Kernel Mailing List, Jan Engelhardt
Jens
Could you let me know what tests would be fair for comparing the I/O
schedulers? Thanks.
- Vishal
* Re: CSCAN vs CFQ I/O scheduler benchmark results
2006-06-12 17:38 ` Vishal Patil
@ 2006-06-12 22:21 ` Nate Diller
2006-06-13 5:46 ` Jens Axboe
1 sibling, 0 replies; 9+ messages in thread
From: Nate Diller @ 2006-06-12 22:21 UTC (permalink / raw)
To: Vishal Patil; +Cc: Jens Axboe, Linux Kernel Mailing List, Jan Engelhardt
On 6/12/06, Vishal Patil <vishpat@gmail.com> wrote:
> Jens
>
> Could you let me know what tests would be fair for comparing the I/O
> schedulers? Thanks.
Any filesystem benchmark should be informative; namesys tests with a
whole suite: bonnie++, iozone, mongo (custom), and one or two others.
You can ask on reiserfs-list for more information.
I suggest testing against more than one filesystem as well; it's quite
common for a FS to "prefer" one scheduler.
NATE
* Re: CSCAN vs CFQ I/O scheduler benchmark results
2006-06-12 17:38 ` Vishal Patil
2006-06-12 22:21 ` Nate Diller
@ 2006-06-13 5:46 ` Jens Axboe
1 sibling, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2006-06-13 5:46 UTC (permalink / raw)
To: Vishal Patil; +Cc: Linux Kernel Mailing List, Jan Engelhardt
On Mon, Jun 12 2006, Vishal Patil wrote:
> Jens
>
> Could you let me know what tests would be fair for comparing the I/O
> schedulers? Thanks.
Depends on what you want to test, of course! I can't give you an answer
on that.
The thing about I/O schedulers is that it's easy to provide good
throughput or good latency, but hard to do both. The tests you did so
far have x processes doing the same thing. Try something that has,
e.g., an async writer going full throttle, plus one or more sync
readers.
--
Jens Axboe