Date: Wed, 6 Jun 2012 22:08:59 +0800
From: Fengguang Wu
Subject: Re: write-behind on streaming writes
Message-ID: <20120606140859.GA8234@localhost>
References: <20120528114124.GA6813@localhost> <20120529155759.GA11326@localhost> <20120530032129.GA7479@localhost> <20120605172302.GB28556@redhat.com> <20120605174157.GC28556@redhat.com> <20120605184853.GD28556@redhat.com> <20120605201045.GE28556@redhat.com> <20120606025729.GA1197@redhat.com>
In-Reply-To: <20120606025729.GA1197@redhat.com>
To: Vivek Goyal
Cc: Linus Torvalds, LKML, "Myklebust, Trond", linux-fsdevel@vger.kernel.org, Linux Memory Management List, Jens Axboe

On Tue, Jun 05, 2012 at 10:57:30PM -0400, Vivek Goyal wrote:
> On Tue, Jun 05, 2012 at 04:10:45PM -0400, Vivek Goyal wrote:
> > On Tue, Jun 05, 2012 at 02:48:53PM -0400, Vivek Goyal wrote:
> >
> > [..]
> > > So the sync_file_range() test keeps fewer requests in flight on
> > > average, hence better latencies. It might not produce a throughput
> > > drop on SATA disks but might have some effect on storage array LUNs.
> > > Will give it a try.
> >
> > Well, I ran the dd and sync_file_range tests on a storage array LUN.
> > Wrote a file of size 4G on ext4. Got about 300MB/s write speed. In
> > fact, when I measured time using "time", the sync_file_range test
> > finished a little faster.
> >
> > Then I started looking at the blktrace output. The sync_file_range()
> > test initially (for about 8 seconds) drives a shallow queue depth
> > (about 16), but after 8 seconds somehow the flusher gets involved and
> > starts submitting lots of requests, and we start driving a much higher
> > queue depth (up to more than 100).
> > Not sure why the flusher should get involved. Is everything working as
> > expected? I thought that, as we wait for the last 8MB of IO to finish
> > before we start a new one, we should have at most 16MB of IO in flight.
> > Fengguang?
>
> Ok, found it. I am using an "int" index, which in turn caused signed
> integer extension of (i*BUFSIZE). Once "i" crosses 255, integer overflow
> happens, the 64-bit offset is sign-extended, and the offsets are screwed.
> So after a 2G file size, sync_file_range() effectively stops working,
> leaving dirty pages which are cleaned up by the flusher. That explains
> why the flusher was kicking in during my tests. Change "int" to
> "unsigned int" and the problem is fixed.

Good catch! Besides that, I do see a small chance for the flusher thread
to kick in: at the time when the inode's dirty state expires after 30s.
Just a kind reminder, because I don't see how it could impact this
workload in any noticeable way.

Thanks,
Fengguang