* Re: [ofa-general] iSer and Direct IO
[not found] <482B7FE4.9070502@fusionio.com>
@ 2008-05-15 11:23 ` Eli Dorfman
2008-05-15 15:11 ` Cameron Harr
2008-05-15 15:25 ` Cameron Harr
0 siblings, 2 replies; 6+ messages in thread
From: Eli Dorfman @ 2008-05-15 11:23 UTC (permalink / raw)
To: Cameron Harr; +Cc: linux-scsi, general
On Thu, May 15, 2008 at 3:12 AM, Cameron Harr <charr@fusionio.com> wrote:
> Hi, I've been trying to compare performance between iSer and srpt and
> am getting mixed results, where iSer wins on IOPs and srpt wins on some
> streaming b/w tests. I've tested with iozone, spew and FIO, and IOP
> numbers are always higher on iSer. My problem though is that I'm a
> little suspicious of some of the iSer numbers and whether they are
> really using Direct IO. For example, you'll see below in some of my FIO
> results that I'm getting a write B/W of 799.1 MB/s at one point. That's
> way above what I can get natively on the device (~650 MB/s DIO) and is
> more along the lines of buffered IO. If the IOP numbers are also using
> some kind of caching, that could possibly taint them also. Does anyone
> know if specifying DIO will really bypass all buffers or if something is
> getting cached in the agents (iscsi, tgtd)?
>
>
> FIO
>                 iSer 1     iSer 2     SRPT 1     SRPT 2
> RBW (MB/s)       565.3      836.5      622.0      581.7
> Read IOPs      63488.1    68053.8     5335.6     5446.1
> WBW (MB/s)       799.1      737.7      589.5      594.4
> Write IOPs     79086.6    80005.7    33884.6    34058.6
>
>
> Thanks much,
> Cameron
>
Your question would be better posted on linux-scsi.
See the following link, which explains DIO:
http://tldp.org/HOWTO/SCSI-Generic-HOWTO/dio.html
Please check with sgp_dd to rule out any caching.
Thanks,
Eli
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [ofa-general] iSer and Direct IO
2008-05-15 11:23 ` [ofa-general] iSer and Direct IO Eli Dorfman
@ 2008-05-15 15:11 ` Cameron Harr
2008-05-15 15:25 ` Joe Landman
2008-05-15 15:25 ` Cameron Harr
1 sibling, 1 reply; 6+ messages in thread
From: Cameron Harr @ 2008-05-15 15:11 UTC (permalink / raw)
To: general; +Cc: linux-scsi
[HTML-only message; plain-text part empty]
* Re: [ofa-general] iSer and Direct IO
2008-05-15 11:23 ` [ofa-general] iSer and Direct IO Eli Dorfman
2008-05-15 15:11 ` Cameron Harr
@ 2008-05-15 15:25 ` Cameron Harr
1 sibling, 0 replies; 6+ messages in thread
From: Cameron Harr @ 2008-05-15 15:25 UTC (permalink / raw)
To: linux-scsi
Eli Dorfman wrote:
> On Thu, May 15, 2008 at 3:12 AM, Cameron Harr <charr@fusionio.com> wrote:
>
>> My problem though is that I'm a
>> little suspicious of some of the iSer numbers and whether they are
>> really using Direct IO. For example, you'll see below in some of my FIO
>> results that I'm getting a write B/W of 799.1 MB/s at one point. That's
>> way above what I can get natively on the device (~650 MB/s DIO) and is
>> more along the lines of buffered IO. If the IOP numbers are also using
>> some kind of caching, that could possibly taint them also. Does anyone
>> know if specifying DIO will really bypass all buffers or if something is
>> getting cached in the agents (iscsi, tgtd)?
>>
>>
>
> Your question should be posted on linux-scsi.
> See the following link that explains about DIO
> http://tldp.org/HOWTO/SCSI-Generic-HOWTO/dio.html
>
> Please check with sgp_dd to avoid any caching.
>
Well, I posted here because I was looking more at iSer characteristics
than at DIO. Things seemed to behave differently on iSer than what I'd
expect and than what srpt shows. Also, I have trust issues with sg*_dd:
on the local box they give me impossible numbers, whereas dd is about
where I'd expect it:
----
[root@test05 ~]# sgp_dd dio=1 if=/dev/zero of=/dev/fioa bs=512 bpt=2048
count=16777216 time=1
time to transfer data was 5.556115 secs, 1546.03 MB/sec
[root@test05 ~]# sg_dd dio=1 if=/dev/zero of=/dev/fioa bs=512 bpt=2048
count=16777216 time=1
time to transfer data: 5.565360 secs at 1543.46 MB/sec
[root@test05 ~]# dd oflag=direct if=/dev/zero of=/dev/fioa bs=1M count=8192
8589934592 bytes (8.6 GB) copied, 12.7761 seconds, 672 MB/s
----
Using iSer with small transfer chunks, sgp_dd gives numbers that are
in line with what I'd expect for DIO, while sg_dd doesn't:
---------
sgp_dd: 200.64 MB/s
sg_dd: 735.42 MB/s
dd: 62.3 MB/s
--------
But for larger transfers (1M block transfers), both sgp_dd and
sg_dd show rates well above what I think I should be getting:
-------
sgp_dd: 882.43 MB/s
sg_dd: 819.89 MB/s
dd: 731 MB/s #Which is still high, and which makes me suspect iSer
-------
The page Eli linked states, "Direct IO support is designed in such a way
that if it is requested and cannot be performed then the command will
still be performed using indirect IO." So I'm wondering whether, for some
reason, DIO can't be used with iSer here. (BTW, /proc/scsi/sg/allow_dio is 1.)
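For reference, a minimal sketch for checking whether the sg driver will even
attempt direct IO (this assumes the standard /proc/scsi/sg entries; note that
the sg tools fall back to indirect IO silently when DIO can't be done, so a
suspiciously high rate is the main symptom):

```shell
#!/bin/sh
# Check the sg driver's direct-IO switch; dio=1 is ignored unless this is 1.
sg_dio=/proc/scsi/sg/allow_dio
if [ -r "$sg_dio" ]; then
    echo "allow_dio=$(cat "$sg_dio")"
    # Enable it (as root) if needed:
    # echo 1 > "$sg_dio"
else
    echo "sg proc entry not present (sg module not loaded?)"
fi
```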
Cameron
* Re: [ofa-general] iSer and Direct IO
2008-05-15 15:11 ` Cameron Harr
@ 2008-05-15 15:25 ` Joe Landman
2008-05-15 15:50 ` Cameron Harr
0 siblings, 1 reply; 6+ messages in thread
From: Joe Landman @ 2008-05-15 15:25 UTC (permalink / raw)
To: Cameron Harr; +Cc: linux-scsi, general
Cameron Harr wrote:
> ----
> [root@test05 ~]# sgp_dd dio=1 if=/dev/zero of=/dev/fioa bs=512 bpt=2048
> count=16777216 time=1
This is only 8 GB of IO. It is possible that (despite dio) you are
caching. Make the IO much larger than RAM; use a count of 128m or so.
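As a sketch, the count can be derived from the machine's actual RAM rather
than guessed (reads MemTotal from /proc/meminfo; bs and bpt here match the
sgp_dd invocation quoted above, and /dev/fioa is the device from this thread):

```shell
#!/bin/sh
# Size the transfer at 2x physical RAM so the page cache can't hold it all.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
bs=512
count=$(( ram_kb * 1024 * 2 / bs ))   # number of bs-sized blocks = 2x RAM
echo "MemTotal=${ram_kb} kB -> count=${count} (bs=${bs})"
# then: sgp_dd dio=1 if=/dev/zero of=/dev/fioa bs=$bs bpt=2048 count=$count time=1
```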
> time to transfer data was 5.556115 secs, 1546.03 MB/sec
> [root@test05 ~]# sg_dd dio=1 if=/dev/zero of=/dev/fioa bs=512 bpt=2048
> count=16777216 time=1
> time to transfer data: 5.565360 secs at 1543.46 MB/sec
> [root@test05 ~]# dd oflag=direct if=/dev/zero of=/dev/fioa bs=1M count=8192
> 8589934592 bytes (8.6 GB) copied, 12.7761 seconds, 672 MB/s
We have found dd to be quite trustworthy with [oi]flag=direct.
> ----
> Using iSer, with the small transfer chunks, sgp_dd has numbers that are in line
> with what I'd expect for DIO while sg_dd doesn't:
> ---------
> sgp_dd: 200.64 MB/s
> sg_dd: 735.42 MB/s
> dd: 62.3 MB/s
> --------
> But for larger transfers (with 1M block transfers), both sgp_dd and sg_dd show
> well above what I think I can be getting:
> -------
> sgp_dd: 882.43
> sg_dd: 819.89
> dd: 731 MB/s #Which is still high, and which makes me suspect iSer
We had iSER bouncing from the low 200s up to 1000 MB/s during testing.
It was very hard to pin down good, stable benchmark numbers. This was
a few months ago.
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman@scalableinformatics.com
web : http://www.scalableinformatics.com
http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423
fax : +1 866 888 3112
cell : +1 734 612 4615
* Re: [ofa-general] iSer and Direct IO
2008-05-15 15:25 ` Joe Landman
@ 2008-05-15 15:50 ` Cameron Harr
2008-05-15 15:58 ` Joe Landman
0 siblings, 1 reply; 6+ messages in thread
From: Cameron Harr @ 2008-05-15 15:50 UTC (permalink / raw)
To: landman; +Cc: general, linux-scsi
Joe Landman wrote:
> This is only 8 GB of IO. It is possible that (despite dio) you are
> caching. Make the IO much larger than RAM. Use a count of 128m or so.
This is going to sound dumb, but I thought I had 4 GB of RAM and thus
intentionally used a file size 2x my physical RAM. As it turns out, I
have 32 GB of RAM on the box (4G usually shows up as 3.8-something, and
I just saw the 3). Anyway, with a 64 GB file the numbers are looking
more accurate (and even low):
393.3 MB/s
> We have found dd to be quite trustworthy with [oi]flag=direct.
I like it too. At any rate, I'm going to need to do some new testing to
avoid the RAM-size effect (might just set a mem limit on the boot line).
There's still a bit of a discrepancy between IOP performance with iSer
and srpt. Has anyone else done comparisons with the two? I think Erez
was hoping to get some numbers before too long.
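For the record, the boot-line limit mentioned above would look something like
the following GRUB kernel line (a sketch; the kernel image path and root device
are machine-specific examples):

```
kernel /vmlinuz-2.6.18 ro root=/dev/sda1 mem=2G   # cap usable RAM for the test
```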
Cameron
* Re: [ofa-general] iSer and Direct IO
2008-05-15 15:50 ` Cameron Harr
@ 2008-05-15 15:58 ` Joe Landman
0 siblings, 0 replies; 6+ messages in thread
From: Joe Landman @ 2008-05-15 15:58 UTC (permalink / raw)
To: Cameron Harr; +Cc: linux-scsi, general
Cameron Harr wrote:
> Joe Landman wrote:
>> This is only 8 GB of IO. It is possible that (despite dio) you are
>> caching. Make the IO much larger than RAM. Use a count of 128m or so.
>
> This is going to sound dumb, but I thought I had 4 GB of RAM and thus
> intentionally used a file size 2x my physical RAM. As it turns out, I
> have 32GB of RAM on the box (4G usually shows up as 38.... and I just
> saw the 3). Anyway, with a 64GB file the numbers are looking more
> accurate (and even low):
> 393.3 MB/s
This is about right. We were seeing ~650 MB/s over iSER for a 1.3 TB file dd
on our units, but it bounced all over the place in terms of rates. It was
very hard to pin down a single performance number. Locally the drives were
over 750 MB/s, so 650 isn't terrible.
>> We have found dd to be quite trustworthy with [oi]flag=direct.
> I like it too. At any rate, I'm going to need to do some new testing to
> avoid the ram size (might just set a mem limit on the boot line).
>
> There's still a bit of a discrepancy between IOP performance with iSer
> and srpt. Has anyone else done comparisons with the two? I think Erez
> was hoping to get some numbers before too long.
> Cameron
I think it might be coalescing the IOPs somehow (what do your elevators
look like, and how deep are your queues?). Each drive can do 100-300 IOPs
best case, so 30000 IOPs is 100-300 drives' worth, or
caching/coalescing/elevators in action.
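A sketch of the checks being suggested here, via sysfs (the paths are the
standard Linux block-queue entries; device names will differ per machine):

```shell
#!/bin/sh
# Print the active elevator (shown bracketed by the kernel) and the queue
# depth available for merging, for every block device on the box.
for q in /sys/block/*/queue; do
    dev=${q%/queue}
    dev=${dev#/sys/block/}
    printf '%s: scheduler=%s nr_requests=%s\n' "$dev" \
        "$(cat "$q/scheduler" 2>/dev/null)" \
        "$(cat "$q/nr_requests" 2>/dev/null)"
done
# Lowering nr_requests (e.g. echo 4 > .../nr_requests, as root) limits how
# much merging the elevator can do, which helps separate raw IOPs from
# coalesced ones.
```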
Joe
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman@scalableinformatics.com
web : http://www.scalableinformatics.com
http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423
fax : +1 866 888 3112
cell : +1 734 612 4615