public inbox for linux-scsi@vger.kernel.org
From: Jan Kara <jack@suse.cz>
To: Sergey Meirovich <rathamahata@gmail.com>
Cc: Christoph Hellwig <hch@infradead.org>, Jan Kara <jack@suse.cz>,
	linux-scsi <linux-scsi@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Gluk <git.user@gmail.com>
Subject: Re: Terrible performance of sequential O_DIRECT 4k writes in SAN environment. ~3 times slower than Solaris 10 with the same HBA/Storage.
Date: Wed, 8 Jan 2014 21:55:24 +0100	[thread overview]
Message-ID: <20140108205524.GA15313@quack.suse.cz> (raw)
In-Reply-To: <CA+QCeVRXAXAk2Zv2gtdvT+c80hbpcvezz_dvk9aUjwPbVp7pnQ@mail.gmail.com>

On Wed 08-01-14 19:30:38, Sergey Meirovich wrote:
> On 8 January 2014 17:26, Christoph Hellwig <hch@infradead.org> wrote:
> >
> > On my laptop SSD I get the following results (sometimes up to 200MB/s,
> > sometimes down to 100MB/s, always in the 40k to 50k IOps range):
> >
> > time elapsed (sec.):    5
> > bandwidth (MiB/s):      160.00
> > IOps:                   40960.00
> 
> Indeed, any direct-attached storage I've tried has been faster for me as
> well. IIRC I have already posted:
> "06:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS
> 2208 [Thunderbolt] (rev 05)"   - 1Gb BBU RAM
> sysbench seqwr aio 4k:                     326.24Mb/sec 20879.56 Requests/sec
> 
> It is good that you mentioned SSDs. I've tried an fnic HBA zoned to an
> EMC XtremIO (SSD-only storage):
>      14.43Mb/sec 3693.65 Requests/sec for sequential 4k.
  You see a big degradation only in SAN environments because they generally
have higher latency to complete a single request. And since appending
direct I/O is completely synchronous, latency is the only thing that really
matters for performance. I've also seen my desktop-grade SATA drive
outperform some enterprise-grade SAN for this particular workload...
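Because each appending O_DIRECT write must complete before the next one can be issued (effectively queue depth 1), a measured IOPS figure directly implies the per-request completion latency. A minimal sketch of that arithmetic, applied to the fnic/XtremIO numbers quoted above (the helper names are mine, not from the thread):

```python
# With synchronous (queue depth 1) appending direct I/O, one request must
# complete before the next is issued, so:
#   IOPS = 1 / latency    and    bandwidth = request_size * IOPS

REQUEST_SIZE = 4 * 1024  # 4 KiB, as in the sequential-write runs above

def implied_latency_us(iops):
    """Per-request completion latency implied by a measured IOPS figure."""
    return 1e6 / iops

def bandwidth_mib_s(latency_us, size=REQUEST_SIZE):
    """Throughput of a fully synchronous writer at the given latency."""
    iops = 1e6 / latency_us
    return size * iops / (1024 * 1024)

# fnic/XtremIO figure quoted above: 3693.65 requests/sec at 4 KiB
lat = implied_latency_us(3693.65)
print(f"implied latency: {lat:.0f} us")                 # ~271 us per request
print(f"bandwidth: {bandwidth_mib_s(lat):.2f} MiB/s")   # ~14.43 MiB/s
```

So even an all-flash array cannot help here: the ~270 us round trip through the fabric, not the media, sets the ceiling.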

> So far I've seen such massive degradation only in SAN environments. I
> started my investigation with the RHEL 6.5 kernel, so the table below is
> from it, but the trend seems to be the same for mainline.
> 
> Chunk size   Bandwidth (MiB/s)
> ================================
> 64M          512
> 32M          510
> 16M          492
> 8M           451
> 4M           436
> 2M           350
> 1M           256
> 512K         191
> 256K         165
> 128K         142
> 64K          101
> 32K           65
> 16K           39
> 8K            20
> 4K            11
  Yes, that's expected. The latency to complete a request consists of a
fixed overhead plus the time to write the data. So for small request sizes
the latency is essentially constant (meaning bandwidth grows linearly with
the request size), while for larger request sizes the latency itself grows,
so bandwidth grows more and more slowly (as the time to write the data
forms a larger and larger part of the total latency)...
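The fixed-overhead-plus-transfer-time behaviour described above can be sketched as a simple model. The numbers below (~350 us overhead, ~520 MiB/s streaming rate) are illustrative assumptions chosen only to roughly bracket the endpoints of the table, not measurements from the thread:

```python
# Latency model for synchronous sequential writes:
#   latency(size)   = fixed_overhead + size / streaming_rate
#   bandwidth(size) = size / latency(size)
# For small sizes the overhead dominates, so bandwidth grows ~linearly with
# size; for large sizes bandwidth saturates at the streaming rate.

FIXED_OVERHEAD_S = 350e-6       # assumed per-request overhead, ~350 us
STREAMING_RATE = 520 * 2**20    # assumed streaming rate, ~520 MiB/s

def bandwidth_mib_s(size_bytes):
    latency = FIXED_OVERHEAD_S + size_bytes / STREAMING_RATE
    return size_bytes / latency / 2**20

for size_kib in (4, 64, 1024, 64 * 1024):
    print(f"{size_kib:>6} KiB -> {bandwidth_mib_s(size_kib * 1024):6.1f} MiB/s")
```

With these assumed constants the model reproduces the ends of the table (about 11 MiB/s at 4K, about 518 MiB/s at 64M) and the overall shape of the curve; the mid-range values depend on how the per-request latency itself grows with size, which the model deliberately keeps simple.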

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


Thread overview: 28+ messages
2014-01-06  9:38 Terrible performance of sequential O_DIRECT 4k writes in SAN environment. ~3 times slower than Solaris 10 with the same HBA/Storage Sergey Meirovich
2014-01-06 20:10 ` Jan Kara
2014-01-07  9:13   ` Sergey Meirovich
2014-01-07 15:58   ` Christoph Hellwig
2014-01-07 18:37     ` Sergey Meirovich
2014-01-08 14:03       ` Christoph Hellwig
2014-01-08 14:43         ` Sergey Meirovich
2014-01-08 15:26           ` Christoph Hellwig
2014-01-08 17:30             ` Sergey Meirovich
2014-01-08 20:55               ` Jan Kara [this message]
2014-01-09 10:11                 ` Sergey Meirovich
2014-01-10  9:36                   ` Jan Kara
2014-01-10 10:36                     ` Sergey Meirovich
2014-01-10 10:48                       ` Jan Kara
2014-01-10 14:32                         ` Sergey Meirovich
2014-01-10 18:14                           ` Sergey Meirovich
2014-01-14 13:30         ` Sergey Meirovich
2014-01-15 22:07           ` Dave Chinner
2014-01-20 13:58             ` Christoph Hellwig
2014-01-20 22:18               ` Dave Chinner
2014-01-08  1:17     ` Jan Kara
2014-01-08 14:03       ` Christoph Hellwig
2014-01-07 20:57   ` James Smart
2014-01-08 13:57     ` Sergey Meirovich
2014-01-09 19:54       ` Douglas Gilbert
2014-01-09 21:26         ` Sergey Meirovich
2014-01-09 21:43           ` Sergey Meirovich
  -- strict thread matches above, loose matches on Subject: below --
2014-01-06 13:16 Sergey Meirovich
