From: Jens Axboe <jens.axboe@oracle.com>
To: Bill Davidsen <davidsen@tmr.com>
Cc: Theodore Tso <tytso@mit.edu>,
Jan Engelhardt <jengelh@computergmbh.de>,
Richard Ballantyne <richardballantyne@gmail.com>,
linux-kernel@vger.kernel.org
Subject: Re: file system for solid state disks
Date: Wed, 29 Aug 2007 19:57:52 +0200
Message-ID: <20070829175751.GB7932@kernel.dk>
In-Reply-To: <46D5AEAD.50909@tmr.com>
On Wed, Aug 29 2007, Bill Davidsen wrote:
> Jens Axboe wrote:
> >On Thu, Aug 23 2007, Theodore Tso wrote:
> >>On Thu, Aug 23, 2007 at 07:52:46AM +0200, Jan Engelhardt wrote:
> >>>On Aug 23 2007 01:01, Richard Ballantyne wrote:
> >>>>What file system that is already in the linux kernel do people recommend
> >>>>I use for my laptop that now contains a solid state disk?
> >>>If I had to choose, the list of options seems to be:
> >>>
> >>>- logfs
> >>> [unmerged]
> >>>
> >>>- UBI layer with any fs you like
> >>> [just a guess]
> >>The question is whether the solid state disk gives you access to the
> >>raw flash, or whether you have to go through the flash translation
> >>layer because it's trying to look (exclusively) like a PATA or SATA
> >>drive. There are some SSD's that have a form factor and interfaces
> >>that make them a drop-in replacement for a laptop hard drive, and a
> >>number of the newer laptops that are supporting SSD's seem to be these
> >>because (a) they don't have to radically change their design, (b) so
> >>they can be compatible with Windows, and (c) so that users can
> >>purchase the laptop either with a traditional hard drive or an SSD as
> >>an option, since at the moment SSDs are far more expensive than
> >>disks.
> >>
> >>So if you can't get access to the raw flash layer, then what you're
> >>probably going to be looking at is a traditional block-oriented
> >>filesystem, such as ext3, although there are clearly some things that
> >>could be done such as disabling the elevator.
> >
> >It's more complicated than that, I'd say. If the job of the elevator
> >were purely to sort requests based on sector criteria, then I'd agree
> >that noop was the best way to go. But the elevator also arbitrates
> >access to the disk for processes. Even if you don't pay a seek penalty,
> >you would still rather get your sync reads in without having to wait
> >for that huge writer that just queued hundreds of megabytes of io in
> >front of you (and will have done so behind your read, making you wait
> >again for a subsequent read).
>
> In most cases the time in the elevator is minimal compared to the
> benefits. Even without your next suggestion.
Runtime overhead, yes. Head optimizations like trying to avoid seeks,
definitely not. A seek can cost several milliseconds per request, and if
you waste that time often, then you are going noticeably slower than you
could be.
> >My plan in this area is to add a simple storage profile and attach it to
> >the queue. Just start simple, allow a device driver to inform the block
> >layer that this device has no seek penalty. Then the io scheduler can
> >make more informed decisions on what to do - eg for ssd, sector
> >proximity may not have much meaning, so we should not take that into
> >account.
> >
> Eventually the optimal solution may require both bandwidth and seek
> information. If "solid state disk" means flash on a peripheral bus,
> it's probably not all that fast at transfer rate. If it means NV
> memory, battery backed or core, probably nothing changes, again *if*
> it's on a peripheral bus; but if it's on a card plugged into the
> backplane, the transfer rate may be high enough to make ordering cost
> more than waiting. This could be extended to nbd and iSCSI devices as
> well, I think, to optimize performance.
I've yet to see any real runtime overhead problems for any workload, so
the ordering is not an issue imo. It's easy enough to bypass for any io
scheduler, should it become interesting.
--
Jens Axboe
Thread overview: 17+ messages
2007-08-23 5:01 file system for solid state disks Richard Ballantyne
2007-08-23 5:52 ` Jan Engelhardt
2007-08-23 10:26 ` Theodore Tso
2007-08-23 11:25 ` Jens Axboe
2007-08-29 17:36 ` Bill Davidsen
2007-08-29 17:57 ` Jens Axboe [this message]
-- strict thread matches above, loose matches on Subject: below --
2007-08-23 8:55 Daniel J Blueman
2007-08-23 12:45 ` James Courtier-Dutton
2007-08-23 12:56 ` Daniel J Blueman
[not found] ` <20070823134359.GB5576@mail.ustc.edu.cn>
2007-08-23 13:43 ` Fengguang Wu
2007-08-23 15:09 ` Daniel J Blueman
2007-09-05 12:34 ` Denys Vlasenko
2007-09-05 12:56 ` linux-os (Dick Johnson)
2007-09-05 13:04 ` Manu Abraham
2007-08-25 8:41 Just Marc
2007-08-30 18:25 ` Jan Engelhardt
2007-08-30 18:26 ` Just Marc