From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from cuda.sgi.com (cuda3.sgi.com [192.48.176.15]) by oss.sgi.com
	(8.14.3/8.14.3/SuSE Linux 0.8) with ESMTP id q0K2N0iM068099;
	Thu, 19 Jan 2012 20:23:00 -0600
Received: from mail-tul01m020-f181.google.com (mail-tul01m020-f181.google.com
	[209.85.214.181]) by cuda.sgi.com with ESMTP id JSKl7JOChRkyhZBk;
	Thu, 19 Jan 2012 18:22:59 -0800 (PST)
Received: by obbup10 with SMTP id up10so165718obb.26;
	Thu, 19 Jan 2012 18:22:59 -0800 (PST)
Date: Fri, 20 Jan 2012 10:26:49 +0800
From: Zheng Liu
Subject: Re: working on extent locks for i_mutex
Message-ID: <20120120022649.GA12463@gmail.com>
References: <4F0F9E97.1090403@linux.vnet.ibm.com>
	<20120113043411.GH2806@dastard>
	<4F10992C.3070303@linux.vnet.ibm.com>
	<20120115235747.GA6922@dastard>
	<4F146275.8090304@linux.vnet.ibm.com>
	<20120118120223.GA4322@gmail.com>
	<1327007770.5899.66.camel@peace.lax.corp.google.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1327007770.5899.66.camel@peace.lax.corp.google.com>
List-Id: XFS Filesystem from SGI
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xfs-bounces@oss.sgi.com
Errors-To: xfs-bounces@oss.sgi.com
To: Frank Mayhar
Cc: xfs@oss.sgi.com, Allison Henderson, Lukas Czerner, Tao Ma,
	Ext4 Developers List

On Thu, Jan 19, 2012 at 01:16:10PM -0800, Frank Mayhar wrote:
> On Wed, 2012-01-18 at 20:02 +0800, Zheng Liu wrote:
> > Do you have a schedule for this project? Would you like to share it
> > with me? This lock contention heavily impacts direct IO performance
> > in our production environment, so we hope to improve it as soon as
> > possible.
> >
> > I have run some direct IO benchmarks with fio on an Intel SSD to
> > compare ext4 with xfs. The results show that, for direct IO, xfs
> > outperforms both ext4 and ext4 with dioread_nolock.
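(For reference, the fio jobs I ran look roughly like the sketch below. The
exact parameters here are illustrative, not the ones from my runs; the point
is direct=1 with libaio, run against the same file on an ext4 mount and an
xfs mount in turn:)

```ini
; illustrative fio job -- parameters are examples, not my exact settings
[global]
ioengine=libaio    ; asynchronous IO via the kernel AIO interface
direct=1           ; O_DIRECT, so the DIO_LOCKING/i_mutex path is exercised
rw=randwrite
bs=4k
size=1g
iodepth=32
runtime=60
time_based

[dio-test]
; point at a file on an ext4 (then xfs) mount to compare the two
filename=/mnt/test/fio.dat
```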
> >
> > To understand the effect of the lock contention, I defined a new
> > ext4_file_aio_write() that calls __generic_file_aio_write() without
> > taking i_mutex. Meanwhile, I removed the DIO_LOCKING flag from the
> > __blockdev_direct_IO() call and ran the same benchmarks. The results
> > show that ext4's performance is then almost the same as xfs's, which
> > demonstrates that i_mutex heavily impacts performance. Hopefully the
> > result is useful for you. :-)
>
> For the record, I have a patchset that, while not affecting i_mutex (or
> locking in general), does allow AIO append writes to actually be done
> asynchronously. (Currently they're forced to be done synchronously.)
> It makes a big difference in performance for that particular case, even
> for spinning media. Performance roughly doubled when testing with fio
> against a regular two-terabyte drive; the improvement on an SSD should
> be much greater.
>
> One day soon I'll accumulate enough spare time to port the patchset
> forward to the latest kernel and submit it here.

Interesting. I think it might help us with this issue, so could you
please post your test case and results in detail? Thank you. :-)

Regards,
Zheng

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs