From: Dave Chinner <david@fromorbit.com>
To: Nauman Rafique <nauman@google.com>
Cc: lsf-pc@lists.linuxfoundation.org, linux-fsdevel@vger.kernel.org
Subject: Re: [LSF/FS TOPIC] Scaling file systems on high performance flash devices
Date: Wed, 9 Feb 2011 09:38:53 +1100
Message-ID: <20110208223853.GE2559@dastard>
In-Reply-To: <AANLkTimkZmH7P3LhWatCJ9C9-XGU1+fZ7UgfQONoxx-1@mail.gmail.com>

On Thu, Feb 03, 2011 at 04:39:40PM -0800, Nauman Rafique wrote:
> Flash device vendors are coming up with faster and faster devices
> every year.  Given the high performance supported by these devices,
> there are thoughts about using them not only as high performance
> storage but also as a replacement for huge quantities of DRAM. That
> particular use case would put very stringent requirements on the
> performance of file systems on these devices --- an issue that should
> be discussed.
> 
> I will share our experience running some experiments on a high
> performance flash device (FusionIO IODrive duo) with ext4 and XFS. We
> have devised an extensive set of experiments focused on finding the
> scaling and overhead problems in the kernel. Our experiments use
> various IO sizes, and perform IO in both synchronous multi-threaded
> mode and AIO mode. We configure our setup to bypass the block layer
> (the FusionIO driver supports this), and do IO in O_DIRECT mode to
> minimize overhead in the kernel. In spite of these optimizations, we
> still see performance issues, especially when doing IO at the peak
> throughput capacity of these drives. The issues involve CPU
> scheduling behavior, filesystem metadata manipulation, and, more
> broadly, the entire kernel code path involved in doing IO to such
> devices, none of which would be exercised if the data were read
> directly from DRAM.
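
For concreteness, here is a minimal sketch of one way to drive the
O_DIRECT + AIO path described above, using the libaio wrappers around
the kernel AIO syscalls. The device path (/dev/fioa), IO size, and
queue depth are placeholders, and the FusionIO block-layer bypass is
not shown; build with "gcc -o dio_read dio_read.c -laio".

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define IO_SIZE     4096        /* must be a multiple of the device sector size */
#define QUEUE_DEPTH 32

int main(void)
{
    int fd = open("/dev/fioa", O_RDONLY | O_DIRECT);  /* placeholder device */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* O_DIRECT requires a suitably aligned buffer. */
    void *buf;
    if (posix_memalign(&buf, 4096, IO_SIZE)) {
        perror("posix_memalign");
        return 1;
    }

    io_context_t ctx = 0;
    int ret = io_setup(QUEUE_DEPTH, &ctx);  /* libaio returns -errno on failure */
    if (ret < 0) {
        fprintf(stderr, "io_setup: %s\n", strerror(-ret));
        return 1;
    }

    /* Queue one read of IO_SIZE bytes from offset 0 and wait for completion. */
    struct iocb cb;
    struct iocb *cbs[1] = { &cb };
    io_prep_pread(&cb, fd, buf, IO_SIZE, 0);

    ret = io_submit(ctx, 1, cbs);
    if (ret != 1) {
        fprintf(stderr, "io_submit: %s\n", strerror(-ret));
        return 1;
    }

    struct io_event ev;
    ret = io_getevents(ctx, 1, 1, &ev, NULL);
    if (ret != 1) {
        fprintf(stderr, "io_getevents: %s\n", strerror(-ret));
        return 1;
    }
    printf("read %ld bytes\n", (long)ev.res);

    io_destroy(ctx);
    free(buf);
    close(fd);
    return 0;
}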

Seeing as I'm not going to be around for LSF, can you describe some
of your testing and the limitations you came across on XFS? I'm
especially interested in the metadata manipulation issues you saw as
we've done a fair bit of metadata and journal IO optimisation in the
past year....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 4+ messages
2011-02-04  0:39 [LSF/FS TOPIC] Scaling file systems on high performance flash devices Nauman Rafique
2011-02-04 16:58 ` [Lsf-pc] " Ric Wheeler
2011-02-08 22:38 ` Dave Chinner [this message]
2011-02-10 18:41   ` Nauman Rafique
