public inbox for linux-scsi@vger.kernel.org
From: Joe Landman <landman@scalableinformatics.com>
To: Cameron Harr <charr@fusionio.com>
Cc: linux-scsi <linux-scsi@vger.kernel.org>, general@lists.openfabrics.org
Subject: Re: [ofa-general] iSer and Direct IO
Date: Thu, 15 May 2008 11:58:58 -0400	[thread overview]
Message-ID: <482C5DC2.7000100@scalableinformatics.com> (raw)
In-Reply-To: <482C5BC4.6090301@fusionio.com>

Cameron Harr wrote:
> Joe Landman wrote:
>> This is only 8 GB of IO.  It is possible that (despite dio) you are 
>> caching.  Make the IO much larger than RAM.  Use a count of 128m or so.
> 
> This is going to sound dumb, but I thought I had 4 GB of RAM and thus 
> intentionally used a file size 2x my physical RAM. As it turns out, I 
have 32GB of RAM on the box (4G usually shows up as ~3.8G, and I just
> saw the 3). Anyway, with a 64GB file the numbers are looking more 
> accurate (and even low):
> 393.3 MB/s

This is about right.  We were seeing ~650 MB/s over iSER for a 1.3 TB 
file dd on our units, but the rate bounced all over the place; it was 
very hard to pin down a single performance number.  Locally the drives 
were >750 MB/s, so 650 isn't terrible.
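For reference, a minimal sketch of the kind of dd run we're talking 
about -- file comfortably larger than RAM, direct flags on both legs. 
The path and sizes below are placeholders, not our actual setup:

```shell
# Write test: oflag=direct opens the output with O_DIRECT, bypassing
# the page cache.  bs=1M count=65536 -> 64 GB, i.e. 2x a 32 GB box.
dd if=/dev/zero of=/mnt/target/testfile bs=1M count=65536 oflag=direct

# Read it back the same way (iflag=direct for the input side).
dd if=/mnt/target/testfile of=/dev/null bs=1M iflag=direct
```

dd reports elapsed time and MB/s on stderr when it finishes; run it a 
few times, the numbers move around.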

>> We have found dd to be quite trustworthy with [oi]flag=direct.
> I like it too. At any rate, I'm going to need to do some new testing to 
> avoid the ram size (might just set a mem limit on the boot line).
> 
> There's still a bit of a discrepancy between IOP performance with iSer 
> and srpt. Has anyone else done comparisons with the two? I think Erez 
> was hoping to get some numbers before too long.
> Cameron


I think it might be coalescing the IOPS somehow (what do your elevators 
look like, and how deep are your queues?).  Each drive can do 100-300 
IOPS best case, so 30000 IOPS implies 100-300 drives -- or 
caching/coalescing/elevators in action.
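To see what the elevator and queue depth look like on a given box, 
something along these lines (the sd* glob is a placeholder for your 
actual block devices):

```shell
# For each SCSI disk, show the elevator (active scheduler is the one
# in brackets, e.g. [cfq]) and the request queue depth.
for q in /sys/block/sd*/queue; do
    echo "$q:"
    cat "$q/scheduler"     # I/O elevator in use
    cat "$q/nr_requests"   # block-layer request queue depth
done
```

A deep queue plus a merging elevator can fold many small requests into 
few large ones, which would inflate an apparent IOPS number.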

Joe


-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman@scalableinformatics.com
web  : http://www.scalableinformatics.com
        http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 866 888 3112
cell : +1 734 612 4615


Thread overview: 6+ messages
     [not found] <482B7FE4.9070502@fusionio.com>
2008-05-15 11:23 ` [ofa-general] iSer and Direct IO Eli Dorfman
2008-05-15 15:11   ` Cameron Harr
2008-05-15 15:25     ` Joe Landman
2008-05-15 15:50       ` Cameron Harr
2008-05-15 15:58         ` Joe Landman [this message]
2008-05-15 15:25   ` Cameron Harr
