From: Wu Fengguang
Subject: Re: [RESEND] [PATCH] readahead: add blk_run_backing_dev
Date: Wed, 15 Jul 2009 15:06:17 +0800
Message-ID: <20090715070617.GB6145@localhost>
References: <4A5395FD.2040507@vlnb.net> <4A5493A8.2000806@vlnb.net>
 <4A56FF32.2060303@vlnb.net> <4A570981.5080803@vlnb.net>
 <20090713123621.GA31051@localhost> <4A5CD3EB.50402@vlnb.net>
Cc: Ronald Moesbergen, linux-kernel@vger.kernel.org,
 akpm@linux-foundation.org, kosaki.motohiro@jp.fujitsu.com,
 Alan.Brunelle@hp.com, hifumi.hisashi@oss.ntt.co.jp,
 linux-fsdevel@vger.kernel.org, jens.axboe@oracle.com,
 randy.dunlap@oracle.com, Bart Van Assche
To: Vladislav Bolkhovitin
In-Reply-To: <4A5CD3EB.50402@vlnb.net>

On Wed, Jul 15, 2009 at 02:52:27AM +0800, Vladislav Bolkhovitin wrote:
>
> Wu Fengguang, on 07/13/2009 04:36 PM wrote:
> >> Test done with XFS on both the target and the initiator. This
> >> confirms your findings: using files instead of block devices is
> >> faster, but only when using the io_context patch.
> >
> > It shows that the one that really matters is the io_context patch,
> > even when context readahead is running. I guess what happened in
> > the tests is:
> > - without readahead (or when the readahead algorithm fails to do
> >   proper sequential readahead), the SCST processes will be
> >   submitting small IOs that are close to each other. CFQ relies on
> >   the io_context patch to prevent unnecessary idling.
> > - with proper readahead, the SCST processes will also be submitting
> >   close readahead IOs. For example, one file's 100-102MB pages are
> >   read ahead by process A, while its 102-104MB pages may be read
> >   ahead by process B. In this case CFQ will also idle waiting for
> >   process A to submit the next IO, but in fact that IO is being
> >   submitted by process B. So the io_context patch is still
> >   necessary even when context readahead is working fine. I guess
> >   context readahead does have the added value of possibly
> >   enlarging the IO size (however, this benchmark seems not very
> >   sensitive to IO size).
>
> Looks like the truth. Although with 2MB RA I expect CFQ to idle >10
> times less often, which should bring a bigger improvement than a few
> percent.
>
> For how long does CFQ idle? For HZ/125, i.e. 8 ms with HZ 250?

Yes, 8ms by default. Note that the 8ms idle time is armed when the
last IO from the current process completes. So it would definitely be
a waste if the cooperating process submitted the next read/readahead
IO within this 8ms idle window (without cfq_coop.patch).

Thanks,
Fengguang
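
To make the cooperative-reader pattern above concrete, here is a
minimal userspace sketch (illustrative only, not part of any patch in
this thread; the file name, chunk count, and exact alternation are
arbitrary choices, while the 2MB chunk size matches the RA size being
discussed). Two processes read the same file in interleaved 2MB
chunks, so the next sequential IO after each request is always issued
by the *other* process; per-process idling such as CFQ's 8ms window
cannot see the combined sequential stream unless the processes share
an io_context or are detected as cooperating (cfq_coop-style).

/*
 * Illustrative sketch: two processes read one file sequentially in
 * interleaved 2MB chunks, loosely mimicking the SCST threads
 * described above.  Each process's own pattern is strided, but the
 * combined pattern is sequential: A reads chunks 0,2,4,... while B
 * reads chunks 1,3,5,...  CHUNK and NCHUNKS are arbitrary.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define CHUNK   (2 * 1024 * 1024)       /* 2MB, the RA size discussed */
#define NCHUNKS 64                      /* covers the first 128MB */

static void reader(const char *path, int first)
{
        char *buf = malloc(CHUNK);
        int fd = open(path, O_RDONLY);

        if (fd < 0 || !buf) {
                perror("open/malloc");
                exit(1);
        }
        /* take every other 2MB chunk, starting at 'first' */
        for (int i = first; i < NCHUNKS; i += 2)
                if (pread(fd, buf, CHUNK, (off_t)i * CHUNK) < 0)
                        perror("pread");
        close(fd);
        free(buf);
}

int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }
        if (fork() == 0) {              /* child: odd chunks */
                reader(argv[1], 1);
                return 0;
        }
        reader(argv[1], 0);             /* parent: even chunks */
        wait(NULL);
        return 0;
}

Compile with, say, cc -O2 interleave.c -o interleave and run it
against a large uncached file on a CFQ-scheduled disk while watching
blktrace or iostat; whether the idle window actually fires between
chunks depends on the kernel version and on whether the io_context or
cfq_coop patches are applied.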