public inbox for linux-xfs@vger.kernel.org
From: Zheng Liu <gnehzuil.liu@gmail.com>
To: Frank Mayhar <fmayhar@google.com>
Cc: xfs@oss.sgi.com, Allison Henderson <achender@linux.vnet.ibm.com>,
	Lukas Czerner <lczerner@redhat.com>, Tao Ma <tm@tao.ma>,
	Ext4 Developers List <linux-ext4@vger.kernel.org>
Subject: Re: working on extent locks for i_mutex
Date: Fri, 20 Jan 2012 10:26:49 +0800	[thread overview]
Message-ID: <20120120022649.GA12463@gmail.com> (raw)
In-Reply-To: <1327007770.5899.66.camel@peace.lax.corp.google.com>

On Thu, Jan 19, 2012 at 01:16:10PM -0800, Frank Mayhar wrote:
> On Wed, 2012-01-18 at 20:02 +0800, Zheng Liu wrote:
> > For this project, do you have a schedule you could share with me? This
> > lock contention heavily impacts direct I/O performance in our production
> > environment, so we hope to improve it as soon as possible.
> > 
> > I have run some direct I/O benchmarks comparing ext4 with xfs, using fio
> > on an Intel SSD. The results show that, for direct I/O, xfs outperforms
> > both plain ext4 and ext4 mounted with dioread_nolock.
> > 
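[The original message does not include the fio job file. A job along the
following lines would exercise concurrent direct I/O writes, where i_mutex
contention shows up; the device path, block size, and queue depths here are
placeholders, not Zheng's actual parameters.]

```ini
; Hypothetical fio job approximating the benchmark described above.
; filename, bs, iodepth and numjobs are illustrative guesses.
[global]
filename=/dev/sdb1
direct=1            ; O_DIRECT, the path affected by i_mutex
ioengine=libaio
iodepth=32
bs=4k
runtime=60
time_based

[randwrite]
rw=randwrite
numjobs=4           ; several writers to expose lock contention
```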
> > To isolate the effect of lock contention, I defined a new function,
> > ext4_file_aio_write(), that calls __generic_file_aio_write() without
> > acquiring i_mutex. I also removed the DIO_LOCKING flag from the
> > __blockdev_direct_IO() call and reran the same benchmarks. With these
> > changes, ext4 performs almost the same as xfs, which demonstrates that
> > i_mutex is the bottleneck. Hopefully the result is useful for you. :-)
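[For readers unfamiliar with this path: in kernels of this era (~3.2),
generic_file_aio_write() takes i_mutex around __generic_file_aio_write().
The experiment described above might look roughly like the sketch below.
This is illustrative pseudocode of the idea, not Zheng's actual patch;
error handling, the O_DIRECT-only check, and fsync/journalling concerns
are all omitted, and skipping the lock like this is unsafe in general.]

```c
/*
 * Sketch only: bypass i_mutex on the write path to measure its cost.
 * generic_file_aio_write() would normally wrap this call in
 * mutex_lock(&inode->i_mutex) / mutex_unlock(&inode->i_mutex);
 * here the lock is deliberately skipped for the benchmark.
 */
static ssize_t
ext4_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
		    unsigned long nr_segs, loff_t pos)
{
	return __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos);
}
```

[The corresponding change on the direct I/O side would be passing 0
instead of DIO_LOCKING in ext4's __blockdev_direct_IO() call, so the DIO
code no longer takes i_mutex around reads either.]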
> 
> For the record, I have a patchset that, while not affecting i_mutex (or
> locking in general), does allow AIO append writes to actually be done
> asynchronously.  (Currently they're forced to be done synchronously.)
> It makes a big difference in performance for that particular case, even
> for spinning media.  Performance roughly doubled when testing with fio
> against a regular two-terabyte drive; the performance improvement
> against SSD would have to be much greater.
> 
> One day soon I'll accumulate enough spare time to port the patchset
> forward to the latest kernel and submit it here.
Interesting. I think it might help us improve this issue. Could you
please post your test case and results in detail? Thank you. :-)

Regards,
Zheng

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 9+ messages
     [not found] <4F0F9E97.1090403@linux.vnet.ibm.com>
2012-01-13  4:34 ` working on extent locks for i_mutex Dave Chinner
2012-01-13  7:14   ` Tao Ma
2012-01-13 11:52     ` Dave Chinner
2012-01-13 11:57       ` Tao Ma
2012-01-13 20:50   ` Allison Henderson
2012-01-15 23:57     ` Dave Chinner
     [not found]       ` <4F146275.8090304@linux.vnet.ibm.com>
2012-01-18 12:02         ` Zheng Liu
2012-01-19 21:16           ` Frank Mayhar
2012-01-20  2:26             ` Zheng Liu [this message]
