Date: Tue, 13 May 2008 21:30:16 +0200
From: Jens Axboe
To: Matthew
Cc: Kasper Sandberg, Daniel J Blueman, Linux Kernel
Subject: Re: performance "regression" in cfq compared to anticipatory, deadline and noop
Message-ID: <20080513193016.GW16217@kernel.dk>
References: <6278d2220805110614i7160a8a5o36d55acb732c1b59@mail.gmail.com>
 <1210514567.7827.62.camel@localhost>
 <20080513122021.GP16217@kernel.dk>
 <20080513130508.GQ16217@kernel.dk>
 <20080513180334.GS16217@kernel.dk>
 <20080513184057.GU16217@kernel.dk>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, May 13 2008, Matthew wrote:
> On Tue, May 13, 2008 at 8:40 PM, Jens Axboe wrote:
> >
> > On Tue, May 13 2008, Jens Axboe wrote:
> > > On Tue, May 13 2008, Matthew wrote:
> > > > On Tue, May 13, 2008 at 3:05 PM, Jens Axboe wrote:
> > > > >
> > > > > On Tue, May 13 2008, Matthew wrote:
> > > > > > On Tue, May 13, 2008 at 2:20 PM, Jens Axboe wrote:
> > > > > > >
> > > > > > > On Sun, May 11 2008, Kasper Sandberg wrote:
> > > > > > > > On Sun, 2008-05-11 at 14:14 +0100, Daniel J Blueman wrote:
> > > > > > > > > I've been experiencing this for a while also; an almost 50% regression
> > > > > > > > > is seen for single-process reads (ie sync) if slice_idle is 1ms or
> > > > > > > > > more (eg default of 8) [1], which seems phenomenal.
> > > > > > > > >
> > > > > > > > > Jens, is this the expected price to pay for optimal busy-spindle
> > > > > > > > > scheduling, a design issue, a bug, or am I missing something totally?
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > >   Daniel
> > > > > >
> > > > > > [snip]
> > > > > > ...
> > > > > > [snip]
> > > > > >
> > > > > > well - back to topic:
> > > > > >
> > > > > > for a blktrace one needs to enable CONFIG_BLK_DEV_IO_TRACE, right?
> > > > > > blktrace can be obtained from your git repo?
> > > > >
> > > > > Yes on both accounts, or just grab a blktrace snapshot from:
> > > > >
> > > > > http://brick.kernel.dk/snaps/blktrace-git-latest.tar.gz
> > > > >
> > > > > if you don't use git.
> > > > >
> > > > > --
> > > > > Jens Axboe
> > > >
> > > > [snip]
>
> [snip]
> ...
> [snip]
>
> > > They seem to start out the same, but then CFQ gets interrupted by a
> > > timer unplug (which is also odd) and after that the request size drops.
> > > On most devices you don't notice, but some are fairly picky about
> > > request sizes. The end result is that CFQ has an average dispatch
> > > request size of 142kb, where AS is more than double that at 306kb. I'll
> > > need to analyze the data and look at the code a bit more to see WHY this
> > > happens.
> >
> > Here's a test patch; I think we get into this situation due to CFQ being
> > a bit too eager to start queuing again. Not tested - I'll need to spend
> > some testing time on this. But I'd appreciate some feedback on whether
> > this changes the situation! The final patch will be a little more
> > involved.
>
> [snip]
> ...
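The dispatch-size gap described above can be checked with simple arithmetic (a standalone sketch using only the figures quoted in the thread, not part of the original mail):

```python
# Average dispatch request sizes from the blktrace analysis quoted above.
cfq_kb = 142  # CFQ average dispatch size (KB)
as_kb = 306   # anticipatory (AS) average dispatch size (KB)

ratio = as_kb / cfq_kb
# ~2.15x, i.e. "more than double", matching the observation above
print(f"AS dispatch size is {ratio:.2f}x CFQ's")
```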
> [snip]
> >
> > --
> > Jens Axboe
>
> unfortunately that patch didn't help:
>
> hdparm -t /dev/sde
>
> /dev/sde:
>  Timing buffered disk reads: 178 MB in 3.03 seconds = 58.67 MB/sec
>
> hdparm -t /dev/sdd
>
> /dev/sdd:
>  Timing buffered disk reads: 164 MB in 3.00 seconds = 54.61 MB/sec
>
> -> the first should be around 74 MB/sec, the second around 102 MB/sec

Can you capture blktrace for that run as well, please? Just to have
something to compare with.

--
Jens Axboe
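The hdparm figures above can be cross-checked the same way hdparm computes them (a standalone sketch using only the numbers quoted in the thread; the small differences from hdparm's printed MB/sec come from rounding in its displayed inputs):

```python
# Throughput and regression check for the hdparm runs quoted above.

def throughput(mb: float, secs: float) -> float:
    """Sequential read rate in MB/sec, as hdparm reports it."""
    return mb / secs

sde = throughput(178, 3.03)  # ~58.7 MB/sec (hdparm printed 58.67)
sdd = throughput(164, 3.00)  # ~54.7 MB/sec (hdparm printed 54.61)

# Shortfall versus the baselines Matthew expects for each drive:
loss_sde = 1 - 58.67 / 74    # ~21% below the expected 74 MB/sec
loss_sdd = 1 - 54.61 / 102   # ~46% below the expected 102 MB/sec,
                             # close to the "almost 50% regression"
                             # Daniel reported at the top of the thread
print(f"sde: {sde:.2f} MB/sec, down {loss_sde:.0%}")
print(f"sdd: {sdd:.2f} MB/sec, down {loss_sdd:.0%}")
```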