Date: Wed, 1 Feb 2012 15:10:00 +0800
From: Wu Fengguang
Subject: Re: [PATCH] fix readahead pipeline break caused by block plug
Message-ID: <20120201071000.GB29083@localhost>
References: <1327996780.21268.42.camel@sli10-conroe>
 <20120131220333.GD4378@redhat.com>
 <20120131141301.ba35ffe0.akpm@linux-foundation.org>
 <20120131222217.GE4378@redhat.com>
 <20120201033653.GA12092@redhat.com>
In-Reply-To: <20120201033653.GA12092@redhat.com>
To: Vivek Goyal
Cc: Andrew Morton, Shaohua Li, lkml, linux-mm, Jens Axboe, Herbert Poetzl, Eric Dumazet

On Tue, Jan 31, 2012 at 10:36:53PM -0500, Vivek Goyal wrote:
> On Tue, Jan 31, 2012 at 05:22:17PM -0500, Vivek Goyal wrote:
> [..]
> > > We've never really bothered making the /dev/sda[X] I/O very efficient
> > > for large I/O's under the (probably wrong) assumption that it isn't a
> > > very interesting case. Regular files will (or should) use the mpage
> > > functions, via address_space_operations.readpages(). fs/blockdev.c
> > > doesn't even implement it.
> > >
> > > > and by the time all the pages
> > > > are submitted and one big merged request is formed it wastes a lot
> > > > of time.
> > >
> > > But that was the case in earlier kernels too. Why did it change?
> >
> > Actually, I assumed that the case of reading /dev/sda[X] worked well in
> > earlier kernels. Sorry about that. Will build a 2.6.38 kernel tonight
> > and run the test case again to make sure we had the same overhead and
> > relatively poor performance while reading /dev/sda[X].
>
> Ok, I tried it with the 2.6.38 kernel and the results look more or less
> the same. Throughput varied between 105 MB/s and 145 MB/s. Many times
> it was close to 110 MB/s, and at other times it was 145 MB/s. Don't
> know what causes that spike sometimes.

The block device really has some aged performance bug, which
interestingly only shows up in some test environments...

> I still see that IO is being submitted one page at a time. The only
> real difference seems to be that queue unplug happens at random times,
> and many times we are submitting much smaller requests (40 sectors, 48
> sectors, etc.).

Would you share the blktrace data?

Thanks,
Fengguang
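
P.S. For comparing numbers across kernels, a minimal sequential-read
throughput test along the lines of the one being discussed could look
like the sketch below. The 1MB buffer and 1GB read length are
illustrative assumptions, not the parameters of the original test.

/*
 * seqread.c - rough sequential-read throughput test (sketch only;
 * buffer size and read length are illustrative, not the original
 * test's parameters).
 *
 * Build: gcc -O2 -o seqread seqread.c
 * Run:   ./seqread /dev/sdaX
 *
 * Drop the page cache between runs, or the numbers mostly measure
 * memory:  echo 3 > /proc/sys/vm/drop_caches
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define BUF_SIZE	(1024 * 1024)			/* 1MB per read() */
#define TOTAL_BYTES	(1024UL * 1024 * 1024)		/* stop after 1GB */

int main(int argc, char **argv)
{
	struct timeval start, end;
	unsigned long total = 0;
	double secs;
	ssize_t ret;
	char *buf;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <device-or-file>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	buf = malloc(BUF_SIZE);
	if (!buf) {
		perror("malloc");
		return 1;
	}

	/* read sequentially until TOTAL_BYTES or EOF, timing the loop */
	gettimeofday(&start, NULL);
	while (total < TOTAL_BYTES) {
		ret = read(fd, buf, BUF_SIZE);
		if (ret <= 0)
			break;
		total += ret;
	}
	gettimeofday(&end, NULL);

	secs = (end.tv_sec - start.tv_sec) +
	       (end.tv_usec - start.tv_usec) / 1e6;
	printf("read %lu MB in %.2f s: %.1f MB/s\n",
	       total >> 20, secs, (total >> 20) / secs);

	free(buf);
	close(fd);
	return 0;
}

Running "blktrace -d /dev/sdaX" alongside this (and feeding the output
through blkparse) is what would show the per-request sizes and the
unplug events mentioned above.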