Linux RAID subsystem development
From: "Pasi Kärkkäinen" <pasik@iki.fi>
To: Doug Dumitru <doug@easyco.com>
Cc: Adam Goryachev <mailinglists@websitemanagers.com.au>,
	"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>
Subject: Re: Intel SSD or other brands
Date: Fri, 30 Dec 2016 18:32:11 +0200	[thread overview]
Message-ID: <20161230163210.GW28824@reaktio.net> (raw)
In-Reply-To: <CAFx4rwRUnfCXgM66oa+-GYbtthVZvtHQkb2-J3XqUU9YXWF6sA@mail.gmail.com>

Hello,

On Thu, Dec 29, 2016 at 05:24:16PM -0800, Doug Dumitru wrote:
> 
> My test is of a "managed" array with a "host side Flash Translation
> Layer".  This means that software is linearizing the writes before
> RAID-5 sees them.  This is how the major "storage appliance" vendors
> get really fast performance.  One vendor, running an earlier version
> of the software I am running here, was able to support 5000 ESXi VDI
> clients from a single 2U storage server (with a lot of FC cards).  The
> boot storm took about 3 minutes to settle.
>
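[Editor's note: a minimal sketch of the "host side Flash Translation Layer" idea described above. This is illustrative only, not Doug's actual (closed) engine; the block size, chunk size, and class names are all assumptions. The point is that random block writes are appended to a log segment sized to one full RAID-5 stripe, so the RAID layer only ever sees full-stripe sequential writes and never pays the parity read-modify-write penalty.]

```python
import random

# Sketch of a host-side write linearizer (assumed parameters throughout).
BLOCK = 4096                  # logical block size (assumed 4 KiB)
DATA_DISKS = 7                # 8-drive RAID-5 -> 7 data + 1 parity
CHUNK = 64 * 1024             # per-disk chunk size (assumed)
STRIPE = DATA_DISKS * CHUNK   # one full stripe of data

class Linearizer:
    def __init__(self):
        self.segment = bytearray()   # currently open log segment
        self.mapping = {}            # logical block -> (segment id, offset)
        self.flushed = []            # full stripes handed down to RAID-5
        self.seg_id = 0

    def write(self, lba, data):
        assert len(data) == BLOCK
        # Remember where this logical block now lives in the log.
        self.mapping[lba] = (self.seg_id, len(self.segment))
        self.segment += data
        if len(self.segment) == STRIPE:
            # Full stripe: flush sequentially, parity computed in one pass.
            self.flushed.append(bytes(self.segment))
            self.segment = bytearray()
            self.seg_id += 1

lin = Linearizer()
# 224 random-LBA writes = exactly two full stripes (2 * 7 * 64 KiB / 4 KiB).
for lba in random.sample(range(10**6), 2 * STRIPE // BLOCK):
    lin.write(lba, b"\0" * BLOCK)
print(len(lin.flushed))   # 2 full-stripe writes reach the RAID layer
```

A real implementation would also need garbage collection of stale log segments, which is where the SSD-side write amp trade-off reappears.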

Does this software happen to be open source / publicly available?


Thanks,

-- Pasi
 
> Single drives are around 500 MB/sec which is 125K IOPS through our
> engine.  Eight drives are (8-1)x500=3500 MB/sec or 900K IOPS.  This is
> actually faster than FIO can generate a test pattern from a single
> job.  It is also faster than stock RAID-5 can linearly write without
> patches.
> 
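[Editor's note: the arithmetic above checks out as follows, assuming 4 KB I/Os and decimal megabytes. One drive's worth of bandwidth is subtracted for RAID-5 parity; the 875K result rounds to the "900K IOPS" quoted.]

```python
# Back-of-the-envelope check of the throughput figures above
# (assumes 4 KB I/Os and decimal MB; RAID-5 loses one drive to parity).
block_kb = 4
per_drive_mb_s = 500
drives, parity = 8, 1

single_iops = per_drive_mb_s * 1000 // block_kb      # 500 MB/s -> 125K IOPS
array_mb_s = (drives - parity) * per_drive_mb_s      # (8-1) x 500 = 3500 MB/s
array_iops = array_mb_s * 1000 // block_kb           # 875K, i.e. ~900K IOPS
print(single_iops, array_mb_s, array_iops)
```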
> In terms of wear, lots of users are running very light write
> environments.  This is good as many configurations are > 50:1 write
> amp if you measure "end to end".  By end to end, I mean, how many
> flash writes happen when you create a small file inside of a file
> system.  This leads to "file system write amp" x "raid write amp" x
> "SSD write amp".  Some people don't like this approach as the file
> system is often "off limits" and a black box.  Then again, some file
> systems are better than others (for 10K sync creates, EXT4 and XFS are
> both about 4.4:1 whereas ZFS is a lot worse).  And with EXT4/XFS, you
> can mitigate some of this with an SSD or mapping layer that compresses
> blocks.
> 
> Doug Dumitru
> 
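[Editor's note: the "end to end" figure above is multiplicative across layers: filesystem write amp x RAID write amp x SSD write amp. The 4.4:1 EXT4/XFS figure is from the message; the RAID and SSD factors below are assumptions chosen purely to illustrate how quickly the product exceeds 50:1.]

```python
# End-to-end write amplification composes multiplicatively:
#     total = fs_wa * raid_wa * ssd_wa
fs_wa = 4.4    # from the message: 10K sync creates on EXT4/XFS
raid_wa = 4.0  # assumed: RAID-5 read-modify-write on small writes
ssd_wa = 3.0   # assumed: SSD internal garbage-collection overhead
total = fs_wa * raid_wa * ssd_wa
print(round(total, 1))   # 52.8 -- already past the 50:1 cited above
```

This is why linearizing writes before RAID-5 (driving raid_wa toward 1) matters so much more than optimizing any single layer in isolation.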



Thread overview: 15+ messages
2016-12-29  2:14 Intel SSD or other brands Adam Goryachev
2016-12-29 11:39 ` Peter Grandi
2016-12-29 14:35   ` Adam Goryachev
2016-12-29 17:46     ` Peter Grandi
2016-12-29 16:56 ` Robert LeBlanc
2016-12-29 17:33   ` Peter Grandi
2016-12-29 18:37     ` Peter Grandi
2016-12-29 23:04   ` Adam Goryachev
2016-12-29 23:20     ` Robert LeBlanc
2016-12-29 18:50 ` Doug Dumitru
2016-12-29 22:51   ` Adam Goryachev
2016-12-30  1:24     ` Doug Dumitru
2016-12-30 16:32       ` Pasi Kärkkäinen [this message]
2016-12-30 18:23         ` Doug Dumitru
  -- strict thread matches above, loose matches on Subject: below --
2016-12-29  1:52 Adam Goryachev
