* Performance of SCST versus STGT
@ 2008-01-17 9:27 Bart Van Assche
2008-01-17 9:40 ` FUJITA Tomonori
0 siblings, 1 reply; 37+ messages in thread
From: Bart Van Assche @ 2008-01-17 9:27 UTC (permalink / raw)
To: stgt-devel-0fE9KPoRgkgATYTw5x5z8w
Cc: Fujita Tomonori, Vladislav Bolkhovitin,
linux-scsi-u79uwXL29TY76Z2rM5mHXA, James Bottomley,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Hello,
I have performed a test to compare the performance of SCST and STGT.
Apparently the SCST target implementation performed far better than
the STGT target implementation. This makes me wonder whether this is
due to the design of SCST or whether STGT's performance can be
improved to the level of SCST?
Test performed: read 2 GB of data in blocks of 1 MB from a target (hot
cache -- no disk reads were performed, all reads were from the cache).
Test command: time dd if=/dev/sde of=/dev/null bs=1M count=2000
                            STGT read           SCST read
                            performance (MB/s)  performance (MB/s)
Ethernet (1 Gb/s network)        77                  89
IPoIB (8 Gb/s network)           82                 229
SRP (8 Gb/s network)            N/A                 600
iSER (8 Gb/s network)            80                 N/A
These results show that SCST uses the InfiniBand network very well
(an efficiency of about 88% via SRP), but that the current STGT version
is unable to transfer data faster than 82 MB/s. Does this mean that
there is a severe bottleneck in the current STGT implementation?
Details about the test equipment:
- Ethernet controller: Intel 80003ES2LAN Gigabit Ethernet controller
(copper) in full duplex mode.
- InfiniBand controller: Mellanox MT25204 [InfiniHost III Lx HCA].
According to ib_rdma_bw and ib_rdma_lat, the InfiniBand peak bandwidth
on this system is 675 MB/s and its latency is 3 microseconds.
- CPU: one Intel Xeon 5130 @ 2.00 GHz.
- RAM: 2 GB in the initiator, 8 GB in the target. According to
lmbench, memory read bandwidth is 2960 MB/s and write bandwidth is
1080 MB/s.
- Software: 64-bit Ubuntu 7.10 server edition + OFED 1.2.5.4
userspace components + SCST revision 242 (January 4, 2008) + TGT
version 20071227.
Regards,
Bart Van Assche.
* Re: Performance of SCST versus STGT
2008-01-17 9:27 Performance of SCST versus STGT Bart Van Assche
@ 2008-01-17 9:40 ` FUJITA Tomonori
2008-01-17 9:48 ` Vladislav Bolkhovitin
0 siblings, 1 reply; 37+ messages in thread
From: FUJITA Tomonori @ 2008-01-17 9:40 UTC (permalink / raw)
To: bart.vanassche
Cc: stgt-devel, linux-scsi, scst-devel, James.Bottomley, erezz,
fujita.tomonori, vst
On Thu, 17 Jan 2008 10:27:08 +0100
"Bart Van Assche" <bart.vanassche@gmail.com> wrote:
> Hello,
>
> I have performed a test to compare the performance of SCST and STGT.
> Apparently the SCST target implementation performed far better than
> the STGT target implementation. This makes me wonder whether this is
> due to the design of SCST or whether STGT's performance can be
> improved to the level of SCST ?
>
> Test performed: read 2 GB of data in blocks of 1 MB from a target (hot
> cache -- no disk reads were performed, all reads were from the cache).
> Test command: time dd if=/dev/sde of=/dev/null bs=1M count=2000
>
> STGT read SCST read
> performance (MB/s) performance (MB/s)
> Ethernet (1 Gb/s network) 77 89
> IPoIB (8 Gb/s network) 82 229
> SRP (8 Gb/s network) N/A 600
> iSER (8 Gb/s network) 80 N/A
>
> These results show that SCST uses the InfiniBand network very well
> (effectivity of about 88% via SRP), but that the current STGT version
> is unable to transfer data faster than 82 MB/s. Does this mean that
> there is a severe bottleneck present in the current STGT
> implementation ?
I don't know about the details but Pete said that he can achieve more
than 900MB/s read performance with tgt iSER target using ramdisk.
http://www.mail-archive.com/stgt-devel@lists.berlios.de/msg00004.html
* Re: Performance of SCST versus STGT
2008-01-17 9:40 ` FUJITA Tomonori
@ 2008-01-17 9:48 ` Vladislav Bolkhovitin
2008-01-17 10:05 ` FUJITA Tomonori
0 siblings, 1 reply; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-17 9:48 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: bart.vanassche, stgt-devel, linux-scsi, scst-devel,
James.Bottomley, erezz
FUJITA Tomonori wrote:
> On Thu, 17 Jan 2008 10:27:08 +0100
> "Bart Van Assche" <bart.vanassche@gmail.com> wrote:
>
>
>>Hello,
>>
>>I have performed a test to compare the performance of SCST and STGT.
>>Apparently the SCST target implementation performed far better than
>>the STGT target implementation. This makes me wonder whether this is
>>due to the design of SCST or whether STGT's performance can be
>>improved to the level of SCST ?
>>
>>Test performed: read 2 GB of data in blocks of 1 MB from a target (hot
>>cache -- no disk reads were performed, all reads were from the cache).
>>Test command: time dd if=/dev/sde of=/dev/null bs=1M count=2000
>>
>> STGT read SCST read
>> performance (MB/s) performance (MB/s)
>>Ethernet (1 Gb/s network) 77 89
>>IPoIB (8 Gb/s network) 82 229
>>SRP (8 Gb/s network) N/A 600
>>iSER (8 Gb/s network) 80 N/A
>>
>>These results show that SCST uses the InfiniBand network very well
>>(effectivity of about 88% via SRP), but that the current STGT version
>>is unable to transfer data faster than 82 MB/s. Does this mean that
>>there is a severe bottleneck present in the current STGT
>>implementation ?
>
>
> I don't know about the details but Pete said that he can achieve more
> than 900MB/s read performance with tgt iSER target using ramdisk.
>
> http://www.mail-archive.com/stgt-devel@lists.berlios.de/msg00004.html
Please don't confuse a multithreaded, latency-insensitive workload with
a single-threaded, hence latency-sensitive, one.
* Re: Performance of SCST versus STGT
2008-01-17 9:48 ` Vladislav Bolkhovitin
@ 2008-01-17 10:05 ` FUJITA Tomonori
[not found] ` <20080117190558K.fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>
2008-01-17 14:22 ` Erez Zilber
0 siblings, 2 replies; 37+ messages in thread
From: FUJITA Tomonori @ 2008-01-17 10:05 UTC (permalink / raw)
To: vst
Cc: fujita.tomonori, bart.vanassche, stgt-devel, linux-scsi,
scst-devel, James.Bottomley, erezz
On Thu, 17 Jan 2008 12:48:28 +0300
Vladislav Bolkhovitin <vst@vlnb.net> wrote:
> FUJITA Tomonori wrote:
> > On Thu, 17 Jan 2008 10:27:08 +0100
> > "Bart Van Assche" <bart.vanassche@gmail.com> wrote:
> >
> >
> >>Hello,
> >>
> >>I have performed a test to compare the performance of SCST and STGT.
> >>Apparently the SCST target implementation performed far better than
> >>the STGT target implementation. This makes me wonder whether this is
> >>due to the design of SCST or whether STGT's performance can be
> >>improved to the level of SCST ?
> >>
> >>Test performed: read 2 GB of data in blocks of 1 MB from a target (hot
> >>cache -- no disk reads were performed, all reads were from the cache).
> >>Test command: time dd if=/dev/sde of=/dev/null bs=1M count=2000
> >>
> >> STGT read SCST read
> >> performance (MB/s) performance (MB/s)
> >>Ethernet (1 Gb/s network) 77 89
> >>IPoIB (8 Gb/s network) 82 229
> >>SRP (8 Gb/s network) N/A 600
> >>iSER (8 Gb/s network) 80 N/A
> >>
> >>These results show that SCST uses the InfiniBand network very well
> >>(effectivity of about 88% via SRP), but that the current STGT version
> >>is unable to transfer data faster than 82 MB/s. Does this mean that
> >>there is a severe bottleneck present in the current STGT
> >>implementation ?
> >
> >
> > I don't know about the details but Pete said that he can achieve more
> > than 900MB/s read performance with tgt iSER target using ramdisk.
> >
> > http://www.mail-archive.com/stgt-devel@lists.berlios.de/msg00004.html
>
> Please don't confuse multithreaded latency insensitive workload with
> single threaded, hence latency sensitive one.
Seems that he can get good performance with a single-threaded workload:
http://www.osc.edu/~pw/papers/wyckoff-iser-snapi07-talk.pdf
But I don't know about the details so let's wait for Pete to comment
on this.
Perhaps the Voltaire people could comment on tgt's iSER performance.
* Re: Performance of SCST versus STGT
[not found] ` <20080117190558K.fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>
@ 2008-01-17 10:34 ` Vladislav Bolkhovitin
[not found] ` <478F2F46.9040103-d+Crzxg7Rs0@public.gmane.org>
2008-01-17 17:45 ` Pete Wyckoff
1 sibling, 1 reply; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-17 10:34 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
FUJITA Tomonori wrote:
> On Thu, 17 Jan 2008 12:48:28 +0300
> Vladislav Bolkhovitin <vst-d+Crzxg7Rs0@public.gmane.org> wrote:
>
>
>>FUJITA Tomonori wrote:
>>
>>>On Thu, 17 Jan 2008 10:27:08 +0100
>>>"Bart Van Assche" <bart.vanassche-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
>>>
>>>
>>>
>>>>Hello,
>>>>
>>>>I have performed a test to compare the performance of SCST and STGT.
>>>>Apparently the SCST target implementation performed far better than
>>>>the STGT target implementation. This makes me wonder whether this is
>>>>due to the design of SCST or whether STGT's performance can be
>>>>improved to the level of SCST ?
>>>>
>>>>Test performed: read 2 GB of data in blocks of 1 MB from a target (hot
>>>>cache -- no disk reads were performed, all reads were from the cache).
>>>>Test command: time dd if=/dev/sde of=/dev/null bs=1M count=2000
>>>>
>>>> STGT read SCST read
>>>> performance (MB/s) performance (MB/s)
>>>>Ethernet (1 Gb/s network) 77 89
>>>>IPoIB (8 Gb/s network) 82 229
>>>>SRP (8 Gb/s network) N/A 600
>>>>iSER (8 Gb/s network) 80 N/A
>>>>
>>>>These results show that SCST uses the InfiniBand network very well
>>>>(effectivity of about 88% via SRP), but that the current STGT version
>>>>is unable to transfer data faster than 82 MB/s. Does this mean that
>>>>there is a severe bottleneck present in the current STGT
>>>>implementation ?
>>>
>>>
>>>I don't know about the details but Pete said that he can achieve more
>>>than 900MB/s read performance with tgt iSER target using ramdisk.
>>>
>>>http://www.mail-archive.com/stgt-devel-0fE9KPoRgkgATYTw5x5z8w@public.gmane.org/msg00004.html
>>
>>Please don't confuse multithreaded latency insensitive workload with
>>single threaded, hence latency sensitive one.
>
>
> Seems that he can get good performance with single threaded workload:
>
> http://www.osc.edu/~pw/papers/wyckoff-iser-snapi07-talk.pdf
Hmm, I can't find which IB hardware he used or its declared Gbps
speed. He only states "Mellanox 4X SDR, switch". What does that mean?
> But I don't know about the details so let's wait for Pete to comment
> on this.
I added him on CC.
> Perhaps Voltaire people could comment on the tgt iSER performances.
>
* Re: Performance of SCST versus STGT
[not found] ` <478F2F46.9040103-d+Crzxg7Rs0@public.gmane.org>
@ 2008-01-17 12:29 ` Robin Humble
2008-01-17 13:44 ` [Scst-devel] [Stgt-devel] " Vladislav Bolkhovitin
[not found] ` <20080117122956.GA3567-Td5ZOp7sT3Xw02mFwxTg32+DJq1SqhBbsOSz5zK2v9k@public.gmane.org>
0 siblings, 2 replies; 37+ messages in thread
From: Robin Humble @ 2008-01-17 12:29 UTC (permalink / raw)
To: Vladislav Bolkhovitin
Cc: FUJITA Tomonori, James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA
On Thu, Jan 17, 2008 at 01:34:46PM +0300, Vladislav Bolkhovitin wrote:
>Hmm, I can't find which IB hardware did he use and it's declared Gbps
>speed. He declared only "Mellanox 4X SDR, switch". What does it mean?
SDR is a 10 Gbit carrier, at most ~900 MB/s data rate.
DDR is a 20 Gbit carrier, at most ~1400 MB/s data rate.
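fwiw, the "8 Gb/s network" in Bart's table and the 10 Gbit SDR carrier
describe the same 4X link; assuming standard 8b/10b encoding:

\[
\underbrace{4 \times 2.5\,\mathrm{Gbit/s}}_{10\,\mathrm{Gbit/s\ signalling}}
\times \tfrac{8}{10}
= 8\,\mathrm{Gbit/s} = 1000\,\mathrm{MB/s},
\]

and packet/protocol overhead brings the practical ceiling down to the
~900 MB/s above.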
On Thu, 17 Jan 2008 10:27:08 +0100 "Bart Van Assche" <bart.vanassche-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> Test performed: read 2 GB of data in blocks of 1 MB from a target (hot
> cache -- no disk reads were performed, all reads were from the cache).
> Test command: time dd if=/dev/sde of=/dev/null bs=1M count=2000
>
> STGT read SCST read
> performance (MB/s) performance (MB/s)
> Ethernet (1 Gb/s network) 77 89
> IPoIB (8 Gb/s network) 82 229
> SRP (8 Gb/s network) N/A 600
> iSER (8 Gb/s network) 80 N/A
it kinda looks to me like the tgt iSER tests were waaay too slow to be
using RDMA :-/
I use tgt to get 500MB/s writes over iSER DDR IB to real files (not
ramdisk). Reads are a little slower, but that changes a bit with distro
vs. mainline kernels.
was iscsiadm pointed at the IP of the IPoIB interface on the
target? I think tgtd requires that.
how about setting the transport to iser with e.g.
iscsiadm --mode node --targetname <something> --portal <ipoib>:3260 --op update -n node.transport_name -v iser
iscsiadm --mode node --targetname <something> --portal <ipoib>:3260 --op update -n "node.conn[0].iscsi.HeaderDigest" -v None
does the initiator side kernel report that it's using iSER?
it should look roughly like the below.
Jan 14 14:37:21 x2 kernel: iscsi: registered transport (iser)
Jan 14 14:37:21 x2 iscsid: iSCSI logger with pid=5617 started!
Jan 14 14:37:22 x2 iscsid: transport class version 2.0-724. iscsid version 2.0-865
Jan 14 14:37:22 x2 iscsid: iSCSI daemon with pid=5618 started!
Jan 14 14:37:22 x2 kernel: iser: iser_connect:connecting to: 192.168.1.8, port 0xbc0c
Jan 14 14:37:23 x2 kernel: iser: iser_cma_handler:event 0 conn ffff8102523c4c80 id ffff81025df68e00
Jan 14 14:37:23 x2 kernel: iser: iser_cma_handler:event 2 conn ffff8102523c4c80 id ffff81025df68e00
Jan 14 14:37:24 x2 kernel: iser: iser_create_ib_conn_res:setting conn ffff8102523c4c80 cma_id ffff81025df68e00: fmr_pool ffff81025341b5c0 qp ffff810252109200
Jan 14 14:37:24 x2 kernel: iser: iser_cma_handler:event 9 conn ffff8102523c4c80 id ffff81025df68e00
Jan 14 14:37:24 x2 kernel: iser: iscsi_iser_ep_poll:ib conn ffff8102523c4c80 rc = 1
Jan 14 14:37:24 x2 kernel: scsi6 : iSCSI Initiator over iSER, v.0.1
Jan 14 14:37:24 x2 kernel: iser: iscsi_iser_conn_bind:binding iscsi conn ffff810251a94290 to iser_conn ffff8102523c4c80
Jan 14 14:37:24 x2 kernel: Vendor: IET Model: Controller Rev: 0001
Jan 14 14:37:24 x2 kernel: Type: RAID ANSI SCSI revision: 05
Jan 14 14:37:24 x2 kernel: scsi 6:0:0:0: Attached scsi generic sg2 type 12
Jan 14 14:37:25 x2 kernel: Vendor: IET Model: VIRTUAL-DISK Rev: 0001
Jan 14 14:37:25 x2 kernel: Type: Direct-Access ANSI SCSI revision: 05
Jan 14 14:37:25 x2 kernel: SCSI device sdc: 20971520 512-byte hdwr sectors (10737 MB)
Jan 14 14:37:25 x2 kernel: sdc: Write Protect is off
Jan 14 14:37:25 x2 kernel: SCSI device sdc: drive cache: write back
Jan 14 14:37:25 x2 kernel: SCSI device sdc: 20971520 512-byte hdwr sectors (10737 MB)
Jan 14 14:37:25 x2 kernel: sdc: Write Protect is off
Jan 14 14:37:25 x2 kernel: SCSI device sdc: drive cache: write back
Jan 14 14:37:25 x2 kernel: sdc: unknown partition table
Jan 14 14:37:25 x2 kernel: sd 6:0:0:1: Attached scsi disk sdc
Jan 14 14:37:26 x2 kernel: sd 6:0:0:1: Attached scsi generic sg3 type 0
Jan 14 14:37:26 x2 iscsid: connection1:0 is operational now
cheers,
robin
* Re: [Scst-devel] [Stgt-devel] Performance of SCST versus STGT
2008-01-17 12:29 ` Robin Humble
@ 2008-01-17 13:44 ` Vladislav Bolkhovitin
[not found] ` <20080117122956.GA3567-Td5ZOp7sT3Xw02mFwxTg32+DJq1SqhBbsOSz5zK2v9k@public.gmane.org>
1 sibling, 0 replies; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-17 13:44 UTC (permalink / raw)
To: Robin Humble
Cc: FUJITA Tomonori, James.Bottomley, scst-devel, stgt-devel,
linux-scsi
Robin Humble wrote:
> On Thu, Jan 17, 2008 at 01:34:46PM +0300, Vladislav Bolkhovitin wrote:
>
>>Hmm, I can't find which IB hardware did he use and it's declared Gbps
>>speed. He declared only "Mellanox 4X SDR, switch". What does it mean?
>
>
> SDR is 10Gbit carrier, at most about ~900MB/s data rate.
> DDR is 20Gbit carrier, at most about ~1400MB/s data rate.
Thanks. Then the single-threaded rate with one outstanding command for
SCST SRP on an 8 Gb/s link vs STGT iSER on a 10 Gb/s link (according to
that paper) is 600 MB/s vs ~480 MB/s (page 26). Per gigabit of link
speed, the SCST-based target is still about 60% faster.
> On Thu, 17 Jan 2008 10:27:08 +0100 "Bart Van Assche" <bart.vanassche@gmail.com> wrote:
>
>>Test performed: read 2 GB of data in blocks of 1 MB from a target (hot
>>cache -- no disk reads were performed, all reads were from the cache).
>>Test command: time dd if=/dev/sde of=/dev/null bs=1M count=2000
>>
>> STGT read SCST read
>> performance (MB/s) performance (MB/s)
>>Ethernet (1 Gb/s network) 77 89
>>IPoIB (8 Gb/s network) 82 229
>>SRP (8 Gb/s network) N/A 600
>>iSER (8 Gb/s network) 80 N/A
>
>
> it kinda looks to me like the tgt iSER tests were waaay too slow to be
> using RDMA :-/
> I use tgt to get 500MB/s writes over iSER DDR IB to real files (not
> ramdisk). Reads are a little slower, but that changes a bit with distro
> vs. mainline kernels.
* Re: Performance of SCST versus STGT
2008-01-17 10:05 ` FUJITA Tomonori
[not found] ` <20080117190558K.fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>
@ 2008-01-17 14:22 ` Erez Zilber
[not found] ` <478F64A0.6020201-hKgKHo2Ms0F+cjeuK/JdrQ@public.gmane.org>
1 sibling, 1 reply; 37+ messages in thread
From: Erez Zilber @ 2008-01-17 14:22 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: vst, bart.vanassche, stgt-devel, linux-scsi, scst-devel,
James.Bottomley, Pete Wyckoff
FUJITA Tomonori wrote:
> On Thu, 17 Jan 2008 12:48:28 +0300
> Vladislav Bolkhovitin <vst@vlnb.net> wrote:
>
>
>> FUJITA Tomonori wrote:
>>
>>> On Thu, 17 Jan 2008 10:27:08 +0100
>>> "Bart Van Assche" <bart.vanassche@gmail.com> wrote:
>>>
>>>
>>>
>>>> Hello,
>>>>
>>>> I have performed a test to compare the performance of SCST and STGT.
>>>> Apparently the SCST target implementation performed far better than
>>>> the STGT target implementation. This makes me wonder whether this is
>>>> due to the design of SCST or whether STGT's performance can be
>>>> improved to the level of SCST ?
>>>>
>>>> Test performed: read 2 GB of data in blocks of 1 MB from a target (hot
>>>> cache -- no disk reads were performed, all reads were from the cache).
>>>> Test command: time dd if=/dev/sde of=/dev/null bs=1M count=2000
>>>>
>>>> STGT read SCST read
>>>> performance (MB/s) performance (MB/s)
>>>> Ethernet (1 Gb/s network) 77 89
>>>> IPoIB (8 Gb/s network) 82 229
>>>> SRP (8 Gb/s network) N/A 600
>>>> iSER (8 Gb/s network) 80 N/A
>>>>
>>>> These results show that SCST uses the InfiniBand network very well
>>>> (effectivity of about 88% via SRP), but that the current STGT version
>>>> is unable to transfer data faster than 82 MB/s. Does this mean that
>>>> there is a severe bottleneck present in the current STGT
>>>> implementation ?
>>>>
>>> I don't know about the details but Pete said that he can achieve more
>>> than 900MB/s read performance with tgt iSER target using ramdisk.
>>>
>>> http://www.mail-archive.com/stgt-devel@lists.berlios.de/msg00004.html
>>>
>> Please don't confuse multithreaded latency insensitive workload with
>> single threaded, hence latency sensitive one.
>>
>
> Seems that he can get good performance with single threaded workload:
>
> http://www.osc.edu/~pw/papers/wyckoff-iser-snapi07-talk.pdf
>
>
> But I don't know about the details so let's wait for Pete to comment
> on this.
>
> Perhaps Voltaire people could comment on the tgt iSER performances.
>
We didn't run any real performance tests with tgt, so I don't have
numbers yet. I know that Pete got ~900 MB/sec by hacking sgp_dd, so all
data was read/written to the same block (so it was all done in the
cache). Pete - am I right?
As already mentioned, he got that with IB SDR cards that are 10 Gb/sec
cards in theory (actual speed is ~900 MB/sec). With DDR cards (20
Gb/sec), you can get even more. I plan to test that in the near future.
Erez
* Re: Performance of SCST versus STGT
[not found] ` <478F64A0.6020201-hKgKHo2Ms0F+cjeuK/JdrQ@public.gmane.org>
@ 2008-01-17 14:32 ` Vladislav Bolkhovitin
[not found] ` <478F6708.30604-d+Crzxg7Rs0@public.gmane.org>
0 siblings, 1 reply; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-17 14:32 UTC (permalink / raw)
To: Erez Zilber
Cc: stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk, FUJITA Tomonori,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Erez Zilber wrote:
> FUJITA Tomonori wrote:
>
>>On Thu, 17 Jan 2008 12:48:28 +0300
>>Vladislav Bolkhovitin <vst-d+Crzxg7Rs0@public.gmane.org> wrote:
>>
>>
>>
>>>FUJITA Tomonori wrote:
>>>
>>>
>>>>On Thu, 17 Jan 2008 10:27:08 +0100
>>>>"Bart Van Assche" <bart.vanassche-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
>>>>
>>>>
>>>>
>>>>
>>>>>Hello,
>>>>>
>>>>>I have performed a test to compare the performance of SCST and STGT.
>>>>>Apparently the SCST target implementation performed far better than
>>>>>the STGT target implementation. This makes me wonder whether this is
>>>>>due to the design of SCST or whether STGT's performance can be
>>>>>improved to the level of SCST ?
>>>>>
>>>>>Test performed: read 2 GB of data in blocks of 1 MB from a target (hot
>>>>>cache -- no disk reads were performed, all reads were from the cache).
>>>>>Test command: time dd if=/dev/sde of=/dev/null bs=1M count=2000
>>>>>
>>>>> STGT read SCST read
>>>>> performance (MB/s) performance (MB/s)
>>>>>Ethernet (1 Gb/s network) 77 89
>>>>>IPoIB (8 Gb/s network) 82 229
>>>>>SRP (8 Gb/s network) N/A 600
>>>>>iSER (8 Gb/s network) 80 N/A
>>>>>
>>>>>These results show that SCST uses the InfiniBand network very well
>>>>>(effectivity of about 88% via SRP), but that the current STGT version
>>>>>is unable to transfer data faster than 82 MB/s. Does this mean that
>>>>>there is a severe bottleneck present in the current STGT
>>>>>implementation ?
>>>>>
>>>>
>>>>I don't know about the details but Pete said that he can achieve more
>>>>than 900MB/s read performance with tgt iSER target using ramdisk.
>>>>
>>>>http://www.mail-archive.com/stgt-devel-0fE9KPoRgkgATYTw5x5z8w@public.gmane.org/msg00004.html
>>>>
>>>
>>>Please don't confuse multithreaded latency insensitive workload with
>>>single threaded, hence latency sensitive one.
>>>
>>
>>Seems that he can get good performance with single threaded workload:
>>
>>http://www.osc.edu/~pw/papers/wyckoff-iser-snapi07-talk.pdf
>>
>>
>>But I don't know about the details so let's wait for Pete to comment
>>on this.
>>
>>Perhaps Voltaire people could comment on the tgt iSER performances.
>
> We didn't run any real performance test with tgt, so I don't have
> numbers yet. I know that Pete got ~900 MB/sec by hacking sgp_dd, so all
> data was read/written to the same block (so it was all done in the
> cache). Pete - am I right?
>
> As already mentioned, he got that with IB SDR cards that are 10 Gb/sec
> cards in theory (actual speed is ~900 MB/sec). With DDR cards (20
> Gb/sec), you can get even more. I plan to test that in the near future.
Are you writing about the maximum possible speed he got, including
multithreaded tests with many outstanding commands, or about the speed
he got on single-threaded reads with one outstanding command? This
thread is about the latter.
> Erez
* Re: Performance of SCST versus STGT
[not found] ` <20080117122956.GA3567-Td5ZOp7sT3Xw02mFwxTg32+DJq1SqhBbsOSz5zK2v9k@public.gmane.org>
@ 2008-01-17 14:43 ` Bart Van Assche
0 siblings, 0 replies; 37+ messages in thread
From: Bart Van Assche @ 2008-01-17 14:43 UTC (permalink / raw)
To: Robin Humble
Cc: James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
Vladislav Bolkhovitin, linux-scsi-u79uwXL29TY76Z2rM5mHXA,
FUJITA Tomonori, scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
stgt-devel-0fE9KPoRgkgATYTw5x5z8w
On Jan 17, 2008 1:29 PM, Robin Humble <robin.humble+stgt-FCV4sgi5zeUQrrorzV6ljw@public.gmane.org> wrote:
> was iscsiadm was pointed at the IP of the IPoIB interface on the target? I think tgtd requires that.
>
> how about setting the transport to be iser with eg.
> iscsiadm --mode node --targetname <something> --portal <ipoib>:3260 --op update -n node.transport_name -v iser
> iscsiadm --mode node --targetname <something> --portal <ipoib>:3260 --op update -n "node.conn[0].iscsi.HeaderDigest" -v None
Ah, thanks. After issuing these commands I get better performance with
STGT and iSER. The updated table is as follows:
                            STGT read           SCST read
                            performance (MB/s)  performance (MB/s)
Ethernet (1 Gb/s network)        77                  89
IPoIB (8 Gb/s network)           82                 229
SRP (8 Gb/s network)            N/A                 600
iSER (8 Gb/s network)           324                 N/A
Can we conclude that, with the tested software versions, SCST performs
significantly better than STGT on an InfiniBand network?
Bart.
* Re: Performance of SCST versus STGT
[not found] ` <478F6708.30604-d+Crzxg7Rs0@public.gmane.org>
@ 2008-01-17 14:46 ` Erez Zilber
0 siblings, 0 replies; 37+ messages in thread
From: Erez Zilber @ 2008-01-17 14:46 UTC (permalink / raw)
To: Vladislav Bolkhovitin
Cc: stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk, FUJITA Tomonori,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
>> We didn't run any real performance test with tgt, so I don't have
>> numbers yet. I know that Pete got ~900 MB/sec by hacking sgp_dd, so all
>> data was read/written to the same block (so it was all done in the
>> cache). Pete - am I right?
>>
>> As already mentioned, he got that with IB SDR cards that are 10 Gb/sec
>> cards in theory (actual speed is ~900 MB/sec). With DDR cards (20
>> Gb/sec), you can get even more. I plan to test that in the near future.
>
> Are you writing about a maximum possible speed which he got, including
> multithreded tests with many outstanding commands or about speed he
> got on single threaded reads with one outstanding command? This
> thread is about the second one.
>
As I said, we didn't run any performance tests on stgt yet.
Erez
* Re: Performance of SCST versus STGT
[not found] ` <20080117190558K.fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>
2008-01-17 10:34 ` Vladislav Bolkhovitin
@ 2008-01-17 17:45 ` Pete Wyckoff
[not found] ` <20080117174542.GC29650-pxmRpbKlMIQ@public.gmane.org>
` (2 more replies)
1 sibling, 3 replies; 37+ messages in thread
From: Pete Wyckoff @ 2008-01-17 17:45 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, vst-d+Crzxg7Rs0
fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org wrote on Thu, 17 Jan 2008 19:05 +0900:
> On Thu, 17 Jan 2008 12:48:28 +0300
> Vladislav Bolkhovitin <vst-d+Crzxg7Rs0@public.gmane.org> wrote:
>
> > FUJITA Tomonori wrote:
> > > On Thu, 17 Jan 2008 10:27:08 +0100
> > > "Bart Van Assche" <bart.vanassche-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> > >
> > >
> > >>Hello,
> > >>
> > >>I have performed a test to compare the performance of SCST and STGT.
> > >>Apparently the SCST target implementation performed far better than
> > >>the STGT target implementation. This makes me wonder whether this is
> > >>due to the design of SCST or whether STGT's performance can be
> > >>improved to the level of SCST ?
> > >>
> > >>Test performed: read 2 GB of data in blocks of 1 MB from a target (hot
> > >>cache -- no disk reads were performed, all reads were from the cache).
> > >>Test command: time dd if=/dev/sde of=/dev/null bs=1M count=2000
> > >>
> > >> STGT read SCST read
> > >> performance (MB/s) performance (MB/s)
> > >>Ethernet (1 Gb/s network) 77 89
> > >>IPoIB (8 Gb/s network) 82 229
> > >>SRP (8 Gb/s network) N/A 600
> > >>iSER (8 Gb/s network) 80 N/A
> > >>
> > >>These results show that SCST uses the InfiniBand network very well
> > >>(effectivity of about 88% via SRP), but that the current STGT version
> > >>is unable to transfer data faster than 82 MB/s. Does this mean that
> > >>there is a severe bottleneck present in the current STGT
> > >>implementation ?
> > >
> > >
> > > I don't know about the details but Pete said that he can achieve more
> > > than 900MB/s read performance with tgt iSER target using ramdisk.
> > >
> > > http://www.mail-archive.com/stgt-devel-0fE9KPoRgkgATYTw5x5z8w@public.gmane.org/msg00004.html
> >
> > Please don't confuse multithreaded latency insensitive workload with
> > single threaded, hence latency sensitive one.
>
> Seems that he can get good performance with single threaded workload:
>
> http://www.osc.edu/~pw/papers/wyckoff-iser-snapi07-talk.pdf
>
> But I don't know about the details so let's wait for Pete to comment
> on this.
Page 16 is pretty straightforward. One command outstanding from
the client. It is an OSD read command. Data on tmpfs. 500 MB/s is
pretty easy to get on IB.
The other graph on page 23 is for block commands. 600 MB/s ish.
Still single command; so essentially a "latency" test. Dominated by
the memcpy time from tmpfs to pinned IB buffer, as per page 24.
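Roughly, if the memcpy and the wire transfer are serialized per
command, the effective single-command rate is

\[
B_{\mathrm{eff}} = \left( \frac{1}{B_{\mathrm{memcpy}}} +
\frac{1}{B_{\mathrm{IB}}} \right)^{-1},
\]

so with, say, 900 MB/s for IB and 1.5 GB/s for the memcpy (illustrative
numbers, not measurements), that works out to ~560 MB/s: the right
ballpark.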
Erez said:
> We didn't run any real performance test with tgt, so I don't have
> numbers yet. I know that Pete got ~900 MB/sec by hacking sgp_dd, so all
> data was read/written to the same block (so it was all done in the
> cache). Pete - am I right?
Yes (actually just 1 thread in sg_dd). This is obviously cheating.
Take the pread time to zero in the SCSI Read analysis on page 24 to
get the theoretical max: IB's theoretical rate minus some initiator
and stgt overheads.
The other way to get more read throughput is to throw multiple
simultaneous commands at the server.
There's nothing particularly stunning here. Suspect Bart has
configuration issues if not even IPoIB will do > 100 MB/s.
-- Pete
* Re: Performance of SCST versus STGT
[not found] ` <20080117174542.GC29650-pxmRpbKlMIQ@public.gmane.org>
@ 2008-01-18 10:30 ` Bart Van Assche
0 siblings, 0 replies; 37+ messages in thread
From: Bart Van Assche @ 2008-01-18 10:30 UTC (permalink / raw)
To: Pete Wyckoff
Cc: James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA, FUJITA Tomonori,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, vst-d+Crzxg7Rs0
On Jan 17, 2008 6:45 PM, Pete Wyckoff <pw-pxmRpbKlMIQ@public.gmane.org> wrote:
> There's nothing particularly stunning here. Suspect Bart has
> configuration issues if not even IPoIB will do > 100 MB/s.
Regarding configuration issues: the systems I ran the test on probably
communicate with the InfiniBand HCAs via PCI-e x4. On other systems
with identical software and with PCI-e x8 HCAs on the same InfiniBand
network I reach a throughput of 934 MB/s instead of 675 MB/s (PCI-e
x4). This is something I only found out today, otherwise I would have
run all tests on the systems with PCI-e x8 HCAs.
So the relative utilization of the InfiniBand network is as follows:
* STGT + iSER, PCI-e x4 HCA: 324/675 = 48% (measured myself)
* STGT + iSER, PCI-e x8 HCA: 550/934 = 59%
(http://www.osc.edu/~pw/papers/wyckoff-iser-snapi07-talk.pdf)
* SCST + SRP, PCI-e x4 HCA: 600/675 = 89% (measured myself)
Or: SCST uses the InfiniBand network much more effectively than STGT.
Bart.
* Re: Performance of SCST versus STGT
2008-01-17 17:45 ` Pete Wyckoff
[not found] ` <20080117174542.GC29650-pxmRpbKlMIQ@public.gmane.org>
@ 2008-01-18 12:08 ` Vladislav Bolkhovitin
2008-01-20 9:36 ` Bart Van Assche
2008-01-22 10:04 ` Bart Van Assche
2 siblings, 1 reply; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-18 12:08 UTC (permalink / raw)
To: Pete Wyckoff
Cc: FUJITA Tomonori, bart.vanassche, stgt-devel, linux-scsi,
scst-devel, James.Bottomley, erezz
Pete Wyckoff wrote:
>>>>>I have performed a test to compare the performance of SCST and STGT.
>>>>>Apparently the SCST target implementation performed far better than
>>>>>the STGT target implementation. This makes me wonder whether this is
>>>>>due to the design of SCST or whether STGT's performance can be
>>>>>improved to the level of SCST ?
>>>>>
>>>>>Test performed: read 2 GB of data in blocks of 1 MB from a target (hot
>>>>>cache -- no disk reads were performed, all reads were from the cache).
>>>>>Test command: time dd if=/dev/sde of=/dev/null bs=1M count=2000
>>>>>
>>>>> STGT read SCST read
>>>>> performance (MB/s) performance (MB/s)
>>>>>Ethernet (1 Gb/s network) 77 89
>>>>>IPoIB (8 Gb/s network) 82 229
>>>>>SRP (8 Gb/s network) N/A 600
>>>>>iSER (8 Gb/s network) 80 N/A
>>>>>
>>>>>These results show that SCST uses the InfiniBand network very well
>>>>>(effectivity of about 88% via SRP), but that the current STGT version
>>>>>is unable to transfer data faster than 82 MB/s. Does this mean that
>>>>>there is a severe bottleneck present in the current STGT
>>>>>implementation ?
>>>>
>>>>
>>>>I don't know about the details but Pete said that he can achieve more
>>>>than 900MB/s read performance with tgt iSER target using ramdisk.
>>>>
>>>>http://www.mail-archive.com/stgt-devel@lists.berlios.de/msg00004.html
>>>
>>>Please don't confuse multithreaded latency insensitive workload with
>>>single threaded, hence latency sensitive one.
>>
>>Seems that he can get good performance with single threaded workload:
>>
>>http://www.osc.edu/~pw/papers/wyckoff-iser-snapi07-talk.pdf
>>
>>But I don't know about the details so let's wait for Pete to comment
>>on this.
>
> Page 16 is pretty straight forward. One command outstanding from
> the client. It is an OSD read command. Data on tmpfs.
Hmm, I wouldn't say it's pretty straightforward. It has data for
"InfiniBand" and it's unclear whether that means iSER or some IB
performance test tool. I would rather interpret that data as IB, not
iSER.
> 500 MB/s is
> pretty easy to get on IB.
>
> The other graph on page 23 is for block commands. 600 MB/s ish.
> Still single command; so essentially a "latency" test. Dominated by
> the memcpy time from tmpfs to pinned IB buffer, as per page 24.
>
> Erez said:
>
>
>>We didn't run any real performance test with tgt, so I don't have
>>numbers yet. I know that Pete got ~900 MB/sec by hacking sgp_dd, so all
>>data was read/written to the same block (so it was all done in the
>>cache). Pete - am I right?
>
> Yes (actually just 1 thread in sg_dd). This is obviously cheating.
> Take the pread time to zero in SCSI Read analysis on page 24 to show
> max theoretical. It's IB theoretical minus some initiator and stgt
> overheads.
Yes, that's obviously cheating, and its result can't be compared with
what Bart had. The full data footprint on the target fit in the CPU
cache, so those were effectively NULLIO results (in SCST terms).
So it seems I understood your slides correctly: the more relevant data
for our SCST SRP vs STGT iSER comparison is on page 26 for a 1-command
read (~480 MB/s, i.e., per gigabit of link speed, about 60% of Bart's
result on equivalent hardware).
> The other way to get more read throughput is to throw multiple
> simultaneous commands at the server.
>
> There's nothing particularly stunning here. Suspect Bart has
> configuration issues if not even IPoIB will do > 100 MB/s.
>
> -- Pete
>
>
* Re: Performance of SCST versus STGT
2008-01-18 12:08 ` Vladislav Bolkhovitin
@ 2008-01-20 9:36 ` Bart Van Assche
[not found] ` <e2e108260801200136g7f17b8a0g89a54cc1d73bbc34-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
0 siblings, 1 reply; 37+ messages in thread
From: Bart Van Assche @ 2008-01-20 9:36 UTC (permalink / raw)
To: FUJITA Tomonori, Vladislav Bolkhovitin
Cc: stgt-devel, linux-scsi, scst-devel, Pete Wyckoff, James.Bottomley,
erezz
On Jan 18, 2008 1:08 PM, Vladislav Bolkhovitin <vst@vlnb.net> wrote:
>
> [ ... ]
> So, seems I understood your slides correctly: the more valuable data for
> our SCST SRP vs STGT iSER comparison should be on page 26 for 1 command
> read (~480MB/s, i.e. ~60% from Bart's result on the equivalent hardware).
At least in my tests SCST performed significantly better than STGT.
These tests were performed with the currently available
implementations of SCST and STGT. Which performance improvements are
possible for these projects (e.g. zero-copying), and by how much are
these improvements expected to increase throughput and decrease
latency?
Bart.
* Re: Performance of SCST versus STGT
[not found] ` <e2e108260801200136g7f17b8a0g89a54cc1d73bbc34-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2008-01-21 12:07 ` Vladislav Bolkhovitin
2008-01-22 3:26 ` FUJITA Tomonori
1 sibling, 0 replies; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-21 12:07 UTC (permalink / raw)
To: Bart Van Assche
Cc: James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA, FUJITA Tomonori,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Bart Van Assche wrote:
> On Jan 18, 2008 1:08 PM, Vladislav Bolkhovitin <vst-d+Crzxg7Rs0@public.gmane.org> wrote:
>
>>[ ... ]
>>So, seems I understood your slides correctly: the more valuable data for
>>our SCST SRP vs STGT iSER comparison should be on page 26 for 1 command
>>read (~480MB/s, i.e. ~60% from Bart's result on the equivalent hardware).
>
>
> At least in my tests SCST performed significantly better than STGT.
> These tests were performed with the currently available
> implementations of SCST and STGT. Which performance improvements are
> possible for these projects (e.g. zero-copying), and by how much is it
> expected that these performance improvements will increase throughput
> and will decrease latency ?
Sure, zero-copy cache support is quite possible for SCST and hopefully
will be available soon. The performance (throughput) improvement will
depend on the hardware used and the data access pattern, but an upper
bound can be estimated from the memory copy throughput on your system
(1.6 GB/s according to your measurements). For a 10 Gb/s link with
0.9 GB/s wire speed it should be up to 30%; for a 20 Gb/s link with a
1.5 GB/s wire speed (a PCI-E x8 limitation), up to 70-80%.
Vlad
* Re: Performance of SCST versus STGT
[not found] ` <e2e108260801200136g7f17b8a0g89a54cc1d73bbc34-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2008-01-21 12:07 ` Vladislav Bolkhovitin
@ 2008-01-22 3:26 ` FUJITA Tomonori
[not found] ` <20080122122657R.fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>
1 sibling, 1 reply; 37+ messages in thread
From: FUJITA Tomonori @ 2008-01-22 3:26 UTC (permalink / raw)
To: bart.vanassche-Re5JQEeQqe8AvxtiuMwx3w
Cc: James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, vst-d+Crzxg7Rs0
On Sun, 20 Jan 2008 10:36:18 +0100
"Bart Van Assche" <bart.vanassche-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> On Jan 18, 2008 1:08 PM, Vladislav Bolkhovitin <vst-d+Crzxg7Rs0@public.gmane.org> wrote:
> >
> > [ ... ]
> > So, seems I understood your slides correctly: the more valuable data for
> > our SCST SRP vs STGT iSER comparison should be on page 26 for 1 command
> > read (~480MB/s, i.e. ~60% from Bart's result on the equivalent hardware).
>
> At least in my tests SCST performed significantly better than STGT.
> These tests were performed with the currently available
> implementations of SCST and STGT. Which performance improvements are
First, I recommend examining the iSER stuff more, since unlike SRP it
has some parameters that affect performance, IIRC. At least you should
be able to get iSER performance similar to Pete's.
> possible for these projects (e.g. zero-copying), and by how much is it
> expected that these performance improvements will increase throughput
> and will decrease latency ?
The major bottleneck in RDMA transfer is registering the buffer
before the transfer. stgt's iSER driver has pre-registered buffers; it
moves data between the page cache and these buffers, and then does the
RDMA transfer.
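Roughly, the pre-registration idea looks like this (a minimal sketch
assuming libibverbs; the pool and buffer sizes and the helper name are
illustrative, and pd is assumed to be an already-allocated protection
domain):

#include <infiniband/verbs.h>
#include <stdlib.h>

#define POOL_BUFS 64
#define BUF_SIZE  (512 * 1024)

struct buf_pool {
	void          *mem[POOL_BUFS];
	struct ibv_mr *mr[POOL_BUFS];
};

/* Register every buffer once at startup: ibv_reg_mr() pins the pages
 * and programs the HCA translation tables, the expensive step that
 * would otherwise be paid on every transfer. */
static int buf_pool_init(struct buf_pool *p, struct ibv_pd *pd)
{
	int i;

	for (i = 0; i < POOL_BUFS; i++) {
		if (posix_memalign(&p->mem[i], 4096, BUF_SIZE))
			return -1;
		p->mr[i] = ibv_reg_mr(pd, p->mem[i], BUF_SIZE,
				      IBV_ACCESS_LOCAL_WRITE |
				      IBV_ACCESS_REMOTE_READ |
				      IBV_ACCESS_REMOTE_WRITE);
		if (!p->mr[i])
			return -1;
	}
	return 0;	/* data path now only memcpy()s into pool buffers */
}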
The big problem for stgt iSER is disk I/O (moving data between disk
and page cache). We need a proper asynchronous I/O mechanism; however,
Linux doesn't provide one, so we use a workaround, which incurs large
latency. I guess we cannot solve this until syslets are merged into
mainline.
The above approach still needs one memory copy (between the
pre-registered buffers and the page cache). If we need more
performance, we have to implement a new caching mechanism that uses
the pre-registered buffers instead of the page cache. AIO with
O_DIRECT enables us to implement such a caching mechanism (we can use
eventfd, so we don't need something like syslets; that is, we can
implement it now).
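A minimal sketch of that combination (assuming a kernel and libaio
recent enough to provide eventfd(2) and io_set_eventfd(); the file
name and sizes are illustrative):

#include <libaio.h>
#include <sys/eventfd.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	io_context_t ctx = 0;
	struct iocb cb, *cbs[1] = { &cb };
	struct io_event ev;
	uint64_t ncomplete;
	void *buf;
	int efd, fd;

	efd = eventfd(0, 0);		/* completion wakeup fd */
	fd = open("/tmp/backing-file", O_RDONLY | O_DIRECT);
	if (efd < 0 || fd < 0 || io_setup(8, &ctx) < 0)
		return 1;

	/* O_DIRECT requires sector-aligned buffer, length and offset */
	if (posix_memalign(&buf, 512, 1 << 20))
		return 1;

	io_prep_pread(&cb, fd, buf, 1 << 20, 0);
	io_set_eventfd(&cb, efd);	/* kernel signals efd on completion */
	if (io_submit(ctx, 1, cbs) != 1)
		return 1;

	/* a single event loop can poll() efd next to the RDMA channel */
	read(efd, &ncomplete, sizeof(ncomplete));
	io_getevents(ctx, 1, 1, &ev, NULL);
	printf("read completed, res=%ld\n", (long)ev.res);

	io_destroy(ctx);
	return 0;
}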
I'm not sure anyone will implement such an RDMA caching mechanism for
stgt. Pete and his colleagues implemented the stgt iSER driver
(thanks!) but they are not interested in block I/O (they are OSD
people).
* Re: Performance of SCST versus STGT
[not found] ` <20080122122657R.fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>
@ 2008-01-22 7:50 ` Bart Van Assche
2008-01-22 11:33 ` Vladislav Bolkhovitin
2008-01-22 15:14 ` Bart Van Assche
2 siblings, 0 replies; 37+ messages in thread
From: Bart Van Assche @ 2008-01-22 7:50 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA, James Bottomley,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
Vladislav Bolkhovitin
On Jan 22, 2008 4:26 AM, FUJITA Tomonori <fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org> wrote:
> First, I recommend you to examine iSER stuff more since it has some
> parameters unlike SRP, which effects the performance, IIRC. At least,
> you could get the iSER performances similar to Pete's.
Documentation about configuring iSER parameters at the initiator side
appears to be hard to find. A Google query for (iscsiadm "op update"
"v iser" -- http://www.google.com/search?q=iscsiadm+%22op+update%22+%22v+iser%22)
gave only one result:
http://www.mail-archive.com/stgt-devel-0fE9KPoRgkgATYTw5x5z8w@public.gmane.org/msg00033.html.
I also found an update of this document:
http://www.mail-archive.com/stgt-devel-0fE9KPoRgkgATYTw5x5z8w@public.gmane.org/msg00133.html.
Are you referring to parameters like MaxRecvDataSegmentLength,
TargetRecvDataSegmentLength, InitiatorRecvDataSegmentLength and
MaxOutstandingUnexpectedPDUs as explained in RFC 5046
(http://www.ietf.org/rfc/rfc5046.txt) ?
It would be interesting to know which values Pete configured in his
tests, so that I can configure the same values for these parameters.
Bart.
* Re: Performance of SCST versus STGT
2008-01-17 17:45 ` Pete Wyckoff
[not found] ` <20080117174542.GC29650-pxmRpbKlMIQ@public.gmane.org>
2008-01-18 12:08 ` Vladislav Bolkhovitin
@ 2008-01-22 10:04 ` Bart Van Assche
2008-01-22 11:33 ` Vladislav Bolkhovitin
2 siblings, 1 reply; 37+ messages in thread
From: Bart Van Assche @ 2008-01-22 10:04 UTC (permalink / raw)
To: Pete Wyckoff
Cc: FUJITA Tomonori, vst, stgt-devel, linux-scsi, scst-devel,
James.Bottomley, erezz
On Jan 17, 2008 6:45 PM, Pete Wyckoff <pw@osc.edu> wrote:
> There's nothing particularly stunning here. Suspect Bart has
> configuration issues if not even IPoIB will do > 100 MB/s.
By this time I found out that the BIOS of the test systems (Intel
Server Board S5000PAL) set the PCI-e parameter MaxReadReq to 128
bytes, which explains the low InfiniBand performance. After changing
this parameter to 4096 bytes the InfiniBand throughput was as
expected: ib_rdma_bw now reports a
bandwidth of 933 MB/s.
Bart.
* Re: Performance of SCST versus STGT
[not found] ` <20080122122657R.fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>
2008-01-22 7:50 ` Bart Van Assche
@ 2008-01-22 11:33 ` Vladislav Bolkhovitin
[not found] ` <4795D479.1080805-d+Crzxg7Rs0@public.gmane.org>
2008-01-22 15:14 ` Bart Van Assche
2 siblings, 1 reply; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-22 11:33 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
FUJITA Tomonori wrote:
> The big problem of stgt iSER is disk I/Os (move data between disk and
> page cache). We need a proper asynchronous I/O mechanism, however,
> Linux doesn't provide such and we use a workaround, which incurs large
> latency. I guess, we cannot solve this until syslets is merged into
> mainline.
Hmm, SCST also doesn't have the ability to use asynchronous I/O, but
that doesn't prevent it from showing good performance.
Vlad
* Re: Performance of SCST versus STGT
2008-01-22 10:04 ` Bart Van Assche
@ 2008-01-22 11:33 ` Vladislav Bolkhovitin
[not found] ` <4795D4A7.5000105-d+Crzxg7Rs0@public.gmane.org>
2008-01-22 12:33 ` Bart Van Assche
0 siblings, 2 replies; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-22 11:33 UTC (permalink / raw)
To: Bart Van Assche
Cc: Pete Wyckoff, FUJITA Tomonori, stgt-devel, linux-scsi, scst-devel,
James.Bottomley, erezz
Bart Van Assche wrote:
> On Jan 17, 2008 6:45 PM, Pete Wyckoff <pw@osc.edu> wrote:
>
>>There's nothing particularly stunning here. Suspect Bart has
>>configuration issues if not even IPoIB will do > 100 MB/s.
>
>
> By this time I found out that the BIOS of the test systems (Intel
> Server Board S5000PAL) set the PCI-e parameter MaxReadReq to 128
> bytes, which explains the low InfiniBand performance. After changing
> this parameter to 4096 bytes the InfiniBand throughput was as
> expected: ib_rdma_bw now reports a
> bandwidth of 933 MB/s.
What are the new SRPT/iSER numbers?
> Bart.
>
* Re: Performance of SCST versus STGT
[not found] ` <4795D479.1080805-d+Crzxg7Rs0@public.gmane.org>
@ 2008-01-22 11:48 ` FUJITA Tomonori
2008-01-22 12:20 ` Vladislav Bolkhovitin
0 siblings, 1 reply; 37+ messages in thread
From: FUJITA Tomonori @ 2008-01-22 11:48 UTC (permalink / raw)
To: vst-d+Crzxg7Rs0
Cc: stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Tue, 22 Jan 2008 14:33:13 +0300
Vladislav Bolkhovitin <vst-d+Crzxg7Rs0@public.gmane.org> wrote:
> FUJITA Tomonori wrote:
> > The big problem of stgt iSER is disk I/Os (move data between disk and
> > page cache). We need a proper asynchronous I/O mechanism, however,
> > Linux doesn't provide such and we use a workaround, which incurs large
> > latency. I guess, we cannot solve this until syslets is merged into
> > mainline.
>
> Hmm, SCST also doesn't have ability to use asynchronous I/O, but that
> doesn't prevent it from showing good performance.
I don't know how SCST performs I/Os, but surely, in kernel space, you
can perform I/Os asynchronously. Or you can use an event notification
mechanism with multiple kernel threads performing I/Os synchronously.
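Such a workaround might look roughly like this (a sketch, not stgt's
actual code; the request structure and the names are made up): worker
threads block in synchronous pread() while the main event loop keeps
running and polls an eventfd for completions.

#include <pthread.h>
#include <sys/eventfd.h>
#include <sys/types.h>
#include <stdint.h>
#include <unistd.h>

struct io_req {
	int     fd;
	void   *buf;
	size_t  len;
	off_t   off;
	ssize_t result;
};

static int done_efd;	/* created once with eventfd(0, 0) */

/* Each worker performs its I/O synchronously; only the worker blocks,
 * not the event loop, which is woken through the eventfd. */
static void *io_worker(void *arg)
{
	struct io_req *req = arg;
	uint64_t one = 1;

	req->result = pread(req->fd, req->buf, req->len, req->off);
	write(done_efd, &one, sizeof(one));
	return NULL;
}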
Xen blktap has the same problem as stgt. IIRC, Xen mainline uses a
kernel patch to add a proper event notification to AIO though redhat
uses the same workaround as stgt instead of applying the kernel patch.
* Re: Performance of SCST versus STGT
2008-01-22 11:48 ` FUJITA Tomonori
@ 2008-01-22 12:20 ` Vladislav Bolkhovitin
0 siblings, 0 replies; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-22 12:20 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: bart.vanassche, stgt-devel, linux-scsi, scst-devel, pw,
James.Bottomley, erezz
FUJITA Tomonori wrote:
> On Tue, 22 Jan 2008 14:33:13 +0300
> Vladislav Bolkhovitin <vst@vlnb.net> wrote:
>
>
>>FUJITA Tomonori wrote:
>>
>>>The big problem of stgt iSER is disk I/Os (move data between disk and
>>>page cache). We need a proper asynchronous I/O mechanism, however,
>>>Linux doesn't provide such and we use a workaround, which incurs large
>>>latency. I guess, we cannot solve this until syslets is merged into
>>>mainline.
>>
>>Hmm, SCST also doesn't have ability to use asynchronous I/O, but that
>>doesn't prevent it from showing good performance.
>
>
> I don't know how SCST performs I/Os, but surely, in kernel space, you
> can performs I/Os asynchronously.
Sure, but currently it's all synchronous.
> Or you use an event notification
> mechanism with multiple kernel threads performing I/Os synchronously.
>
> Xen blktap has the same problem as stgt. IIRC, Xen mainline uses a
> kernel patch to add a proper event notification to AIO though redhat
> uses the same workaround as stgt instead of applying the kernel patch.
* Re: Performance of SCST versus STGT
[not found] ` <4795D4A7.5000105-d+Crzxg7Rs0@public.gmane.org>
@ 2008-01-22 12:32 ` Bart Van Assche
[not found] ` <e2e108260801220432l353b1d76xd2707b5e6f336aef-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
0 siblings, 1 reply; 37+ messages in thread
From: Bart Van Assche @ 2008-01-22 12:32 UTC (permalink / raw)
To: Vladislav Bolkhovitin
Cc: James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA, FUJITA Tomonori,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Jan 22, 2008 12:33 PM, Vladislav Bolkhovitin <vst-d+Crzxg7Rs0@public.gmane.org> wrote:
>
> What are the new SRPT/iSER numbers?
You can find the new performance numbers below. These are all numbers
for reading from the remote buffer cache; no actual disk reads were
performed. The read tests have been performed with dd, both for a
block size of 512 bytes and for a block size of 1 MB. The tests with
the small block size say more about latency, while the tests with the
large block size say more about the maximum possible throughput.
.............................................................................
.                           . STGT read     SCST read    . STGT read    SCST read    .
.                           . performance   performance  . performance  performance  .
.                           . (0.5K, MB/s)  (0.5K, MB/s) . (1 MB, MB/s) (1 MB, MB/s) .
.............................................................................
. Ethernet (1 Gb/s network) .      77            78      .      77           89      .
. IPoIB (8 Gb/s network)    .     163           185      .     201          239      .
. iSER (8 Gb/s network)     .     250           N/A      .     360          N/A      .
. SRP (8 Gb/s network)      .     N/A           421      .     N/A          683      .
.............................................................................
My conclusion from the above numbers: the performance difference between
STGT and SCST is small for a Gigabit Ethernet network. The faster the
network technology, the larger the difference between SCST and STGT.
Bart.
* Re: Performance of SCST versus STGT
2008-01-22 11:33 ` Vladislav Bolkhovitin
[not found] ` <4795D4A7.5000105-d+Crzxg7Rs0@public.gmane.org>
@ 2008-01-22 12:33 ` Bart Van Assche
1 sibling, 0 replies; 37+ messages in thread
From: Bart Van Assche @ 2008-01-22 12:33 UTC (permalink / raw)
To: Vladislav Bolkhovitin
Cc: Pete Wyckoff, FUJITA Tomonori, stgt-devel, linux-scsi, scst-devel,
James.Bottomley, erezz
On Jan 22, 2008 12:33 PM, Vladislav Bolkhovitin <vst@vlnb.net> wrote:
>
> What are the new SRPT/iSER numbers?
You can find the new performance numbers below. These are all numbers
for reading from the remote buffer cache; no actual disk reads were
performed. The read tests have been performed with dd, both for a
block size of 512 bytes and for a block size of 1 MB. The tests with
the small block size say more about latency, while the tests with the
large block size say more about the maximum possible throughput.
.............................................................................
.                           . STGT read     SCST read    . STGT read    SCST read    .
.                           . performance   performance  . performance  performance  .
.                           . (0.5K, MB/s)  (0.5K, MB/s) . (1 MB, MB/s) (1 MB, MB/s) .
.............................................................................
. Ethernet (1 Gb/s network) .      77            78      .      77           89      .
. IPoIB (8 Gb/s network)    .     163           185      .     201          239      .
. iSER (8 Gb/s network)     .     250           N/A      .     360          N/A      .
. SRP (8 Gb/s network)      .     N/A           421      .     N/A          683      .
.............................................................................
My conclusion from the above numbers: the performance difference
between STGT and SCST is small for a Gigabit Ethernet network. The
faster the network technology, the larger the difference between SCST
and STGT.
Bart.
* Re: Performance of SCST versus STGT
[not found] ` <20080122122657R.fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>
2008-01-22 7:50 ` Bart Van Assche
2008-01-22 11:33 ` Vladislav Bolkhovitin
@ 2008-01-22 15:14 ` Bart Van Assche
2 siblings, 0 replies; 37+ messages in thread
From: Bart Van Assche @ 2008-01-22 15:14 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, vst-d+Crzxg7Rs0
On Jan 22, 2008 4:26 AM, FUJITA Tomonori <fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org> wrote:
>
> First, I recommend you to examine iSER stuff more since it has some
> parameters unlike SRP, which effects the performance, IIRC. At least,
> you could get the iSER performances similar to Pete's.
Apparently open-iscsi uses the following defaults:
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].tcp.window_size = 524288
node.conn[0].iscsi.MaxRecvDataSegmentLength = 131072
I have tried to change some of these parameters to a larger value, but
this did not have a noticeable effect (read bandwidth increased less
than 1%):
$ iscsiadm --mode node \
    --targetname iqn.2007-05.com.example:storage.disk2.sys1.xyz \
    --portal 192.168.102.5:3260 --op update \
    -n node.session.iscsi.FirstBurstLength -v 16776192
$ iscsiadm --mode node \
    --targetname iqn.2007-05.com.example:storage.disk2.sys1.xyz \
    --portal 192.168.102.5:3260 --op update \
    -n node.session.iscsi.MaxBurstLength -v 16776192
$ iscsiadm --mode node \
    --targetname iqn.2007-05.com.example:storage.disk2.sys1.xyz \
    --portal 192.168.102.5:3260 --op update \
    -n "node.conn[0].iscsi.MaxRecvDataSegmentLength" -v 16776192
Bart.
* Re: Performance of SCST versus STGT
[not found] ` <e2e108260801220432l353b1d76xd2707b5e6f336aef-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2008-01-22 15:23 ` Vladislav Bolkhovitin
2008-01-24 7:06 ` Robin Humble
1 sibling, 0 replies; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-22 15:23 UTC (permalink / raw)
To: Bart Van Assche
Cc: James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA, FUJITA Tomonori,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Bart Van Assche wrote:
> On Jan 22, 2008 12:33 PM, Vladislav Bolkhovitin <vst-d+Crzxg7Rs0@public.gmane.org> wrote:
>>
>> What are the new SRPT/iSER numbers?
>
> You can find the new performance numbers below. These are all numbers
> for reading from the remote buffer cache, no actual disk reads were
> performed. The read tests have been performed with dd, both for a block
> size of 512 bytes and of 1 MB. The tests with a small block size say
> more about latency, while the tests with a large block size say more
> about the maximum achievable throughput.
If you want to compare the performance of 512-byte vs 1 MB blocks, your
experiment isn't fully correct: you should use dd's "iflag=direct"
option for that.
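For reference, a direct-I/O variant of the read test might look like
this (a sketch; /dev/sde is the device name from the earlier tests, and
both commands read 2 GB):
# 1 MB blocks, bypassing the initiator's page cache:
dd if=/dev/sde of=/dev/null bs=1M count=2000 iflag=direct
# 512-byte blocks, same amount of data, many more commands:
dd if=/dev/sde of=/dev/null bs=512 count=4000000 iflag=direct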
> .............................................................................................
> .                           . STGT read      SCST read     . STGT read      SCST read     .
> .                           . performance    performance   . performance    performance   .
> .                           . (0.5K, MB/s)   (0.5K, MB/s)  . (1 MB, MB/s)   (1 MB, MB/s)  .
> .............................................................................................
> . Ethernet (1 Gb/s network) .      77             78       .      77             89       .
> . IPoIB (8 Gb/s network)    .     163            185       .     201            239       .
> . iSER (8 Gb/s network)     .     250            N/A       .     360            N/A       .
> . SRP (8 Gb/s network)      .     N/A            421       .     N/A            683       .
> .............................................................................................
>
> My conclusion from the above numbers: the performance difference between
> STGT and SCST is small for a Gigabit Ethernet network. The faster the
> network technology, the larger the difference between SCST and STGT.
This is what I expected.
> Bart.
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: Performance of SCST versus STGT
[not found] ` <e2e108260801220432l353b1d76xd2707b5e6f336aef-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2008-01-22 15:23 ` Vladislav Bolkhovitin
@ 2008-01-24 7:06 ` Robin Humble
2008-01-24 10:36 ` [Stgt-devel] " Bart Van Assche
2008-01-24 16:16 ` [Stgt-devel] " Bart Van Assche
1 sibling, 2 replies; 37+ messages in thread
From: Robin Humble @ 2008-01-24 7:06 UTC (permalink / raw)
To: Bart Van Assche
Cc: FUJITA Tomonori, stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
Vladislav Bolkhovitin
On Tue, Jan 22, 2008 at 01:32:08PM +0100, Bart Van Assche wrote:
>On Jan 22, 2008 12:33 PM, Vladislav Bolkhovitin <vst-d+Crzxg7Rs0@public.gmane.org> wrote:
>> What are the new SRPT/iSER numbers?
>You can find the new performance numbers below. These are all numbers for
>reading from the remote buffer cache, no actual disk reads were performed.
>The read tests have been performed with dd, both for a block size of 512
>bytes and of 1 MB. The tests with a small block size say more about latency,
>while the tests with a large block size say more about the maximum achievable
>throughput.
>
>.............................................................................................
>. . STGT read SCST read . STGT read SCST read .
>. . performance performance . performance performance .
>. . (0.5K, MB/s) (0.5K, MB/s) . (1 MB, MB/s) (1 MB, MB/s) .
>.............................................................................................
>. Ethernet (1 Gb/s network) . 77 78 . 77 89 .
>. IPoIB (8 Gb/s network) . 163 185 . 201 239 .
>. iSER (8 Gb/s network) . 250 N/A . 360 N/A .
>. SRP (8 Gb/s network) . N/A 421 . N/A 683 .
>............................................................................................
how are write speeds with SCST SRP?
for some kernels and tests tgt writes at >2x the read speed.
also I see much higher speeds than what you report in my DDR 4x IB tgt
testing... which could be taken to imply that tgt is scaling quite
nicely on the faster fabric?
ib_write_bw of 1473 MB/s
ib_read_bw of 1378 MB/s
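for reference, these perftest tools run as client/server pairs, roughly
like this (the target's IPoIB address here is just a placeholder):
 # on the target (server) side:
 ib_write_bw
 # on the initiator (client) side:
 ib_write_bw 192.168.102.5
 # same pattern for ib_read_bw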
iSER to 7G ramfs, x86_64, centos4.6, 2.6.22 kernels, git tgtd,
initiator end booted with mem=512M, target with 8G ram
direct i/o dd
write/read 800/751 MB/s
dd if=/dev/zero of=/dev/sdc bs=1M count=5000 oflag=direct
dd of=/dev/null if=/dev/sdc bs=1M count=5000 iflag=direct
buffered i/o dd
write/read 1109/350 MB/s
dd if=/dev/zero of=/dev/sdc bs=1M count=5000
dd of=/dev/null if=/dev/sdc bs=1M count=5000
buffered i/o lmdd
write/read 682/438 MB/s
lmdd if=internal of=/dev/sdc bs=1M count=5000
lmdd of=internal if=/dev/sdc bs=1M count=5000
which goes to show that
a) buffered i/o makes reads suck and writes fly,
b) most benchmarks are unreliable,
c) at these high speeds you get all sorts of weird effects which can
   easily vary with kernel, OS, ... and
d) IMHO we really shouldn't get too caught up in these very artificial
   tests to ramdisks/ram, because it's the speed of real applications
   to actual spinning rust that matters.
having said that, if you know of a way to clock my IB cards down to your
SDR rates then let me know and I'll be happy to re-run the tests.
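one cheap way to make the buffered i/o numbers more repeatable (a
sketch, assuming 2.6.16+ kernels on both ends) is to flush the page
cache between runs:
 # as root, on both initiator and target, between runs:
 sync                               # write out dirty pages first
 echo 3 > /proc/sys/vm/drop_caches  # drop page cache, dentries and inodes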
cheers,
robin
>My conclusion from the above numbers: the performance difference between
>STGT and SCST is small for a Gigabit Ethernet network. The faster the
>network technology, the larger the difference between SCST and STGT.
>
>Bart.
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Stgt-devel] Performance of SCST versus STGT
2008-01-24 7:06 ` Robin Humble
@ 2008-01-24 10:36 ` Bart Van Assche
2008-01-24 11:10 ` Vladislav Bolkhovitin
[not found] ` <e2e108260801240236o2273be0bw24a2a61dcc781222-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2008-01-24 16:16 ` [Stgt-devel] " Bart Van Assche
1 sibling, 2 replies; 37+ messages in thread
From: Bart Van Assche @ 2008-01-24 10:36 UTC (permalink / raw)
To: Robin Humble
Cc: Vladislav Bolkhovitin, James.Bottomley, stgt-devel, linux-scsi,
FUJITA Tomonori, scst-devel
On Jan 24, 2008 8:06 AM, Robin Humble <robin.humble+stgt@anu.edu.au> wrote:
> On Tue, Jan 22, 2008 at 01:32:08PM +0100, Bart Van Assche wrote:
> >
> >.............................................................................................
> >. . STGT read SCST read . STGT read SCST read .
> >. . performance performance . performance performance .
> >. . (0.5K, MB/s) (0.5K, MB/s) . (1 MB, MB/s) (1 MB, MB/s) .
> >.............................................................................................
> >. Ethernet (1 Gb/s network) . 77 78 . 77 89 .
> >. IPoIB (8 Gb/s network) . 163 185 . 201 239 .
> >. iSER (8 Gb/s network) . 250 N/A . 360 N/A .
> >. SRP (8 Gb/s network) . N/A 421 . N/A 683 .
> >............................................................................................
>
> how are write speeds with SCST SRP?
> for some kernels and tests tgt writes at >2x the read speed.
>
> also I see much higher speeds than what you report in my DDR 4x IB tgt
> testing... which could be taken to imply that tgt is scaling quite
> nicely on the faster fabric?
> ib_write_bw of 1473 MB/s
> ib_read_bw of 1378 MB/s
>
> iSER to 7G ramfs, x86_64, centos4.6, 2.6.22 kernels, git tgtd,
> initiator end booted with mem=512M, target with 8G ram
>
> direct i/o dd
> write/read 800/751 MB/s
> dd if=/dev/zero of=/dev/sdc bs=1M count=5000 oflag=direct
> dd of=/dev/null if=/dev/sdc bs=1M count=5000 iflag=direct
>
> buffered i/o dd
> write/read 1109/350 MB/s
> dd if=/dev/zero of=/dev/sdc bs=1M count=5000
> dd of=/dev/null if=/dev/sdc bs=1M count=5000
Hello Robin,
The tests I performed were read performance tests with dd and with
buffered I/O. For this test you obtained 350 MB/s with STGT on a DDR
4x InfiniBand network, while I obtained 360 MB/s on a SDR 4x
InfiniBand network. I don't think that we can call this "scaling up"
...
Regarding write performance: the write tests were performed with a
real target (three disks in RAID-0, write bandwidth about 100 MB/s). I
have not yet published these numbers because it is not yet clear to me
how much disk write speed, InfiniBand transfer speed and target write
buffering each contribute to the results. The results I obtained in
the write tests (dd, buffered I/O) show the same trend as for the read
tests: for large data transfers over a Gigabit Ethernet network the
results for STGT and SCST are similar. For small transfer sizes (512
bytes) or fast network technology (SRP / iSER) the write performance
of SCST is significantly better than that of STGT.
Bart.
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: Performance of SCST versus STGT
2008-01-24 10:36 ` [Stgt-devel] " Bart Van Assche
@ 2008-01-24 11:10 ` Vladislav Bolkhovitin
[not found] ` <4798720E.4020802-d+Crzxg7Rs0@public.gmane.org>
[not found] ` <e2e108260801240236o2273be0bw24a2a61dcc781222-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
1 sibling, 1 reply; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-24 11:10 UTC (permalink / raw)
To: Bart Van Assche
Cc: Robin Humble, FUJITA Tomonori, stgt-devel, linux-scsi,
James.Bottomley, scst-devel
Bart Van Assche wrote:
> On Jan 24, 2008 8:06 AM, Robin Humble <robin.humble+stgt@anu.edu.au> wrote:
>
>>On Tue, Jan 22, 2008 at 01:32:08PM +0100, Bart Van Assche wrote:
>>
>>>.............................................................................................
>>>. . STGT read SCST read . STGT read SCST read .
>>>. . performance performance . performance performance .
>>>. . (0.5K, MB/s) (0.5K, MB/s) . (1 MB, MB/s) (1 MB, MB/s) .
>>>.............................................................................................
>>>. Ethernet (1 Gb/s network) . 77 78 . 77 89 .
>>>. IPoIB (8 Gb/s network) . 163 185 . 201 239 .
>>>. iSER (8 Gb/s network) . 250 N/A . 360 N/A .
>>>. SRP (8 Gb/s network) . N/A 421 . N/A 683 .
>>>............................................................................................
>>
>>how are write speeds with SCST SRP?
>>for some kernels and tests tgt writes at >2x the read speed.
Robin,
There is a fundamental difference between regular dd-like reads and
writes: reads are synchronous, i.e. latency sensitive, while writes are
asynchronous, i.e. latency insensitive. You should use O_DIRECT dd
writes for a fair comparison.
Vlad
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: Performance of SCST versus STGT
[not found] ` <e2e108260801240236o2273be0bw24a2a61dcc781222-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2008-01-24 11:32 ` Robin Humble
[not found] ` <20080124113215.GB26751-Td5ZOp7sT3Xw02mFwxTg32+DJq1SqhBbsOSz5zK2v9k@public.gmane.org>
0 siblings, 1 reply; 37+ messages in thread
From: Robin Humble @ 2008-01-24 11:32 UTC (permalink / raw)
To: Bart Van Assche
Cc: FUJITA Tomonori, stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
Vladislav Bolkhovitin
On Thu, Jan 24, 2008 at 11:36:45AM +0100, Bart Van Assche wrote:
>On Jan 24, 2008 8:06 AM, Robin Humble <robin.humble+stgt-FCV4sgi5zeUQrrorzV6ljw@public.gmane.org> wrote:
>> On Tue, Jan 22, 2008 at 01:32:08PM +0100, Bart Van Assche wrote:
>> >.............................................................................................
>> >. . STGT read SCST read . STGT read SCST read .
>> >. . performance performance . performance performance .
>> >. . (0.5K, MB/s) (0.5K, MB/s) . (1 MB, MB/s) (1 MB, MB/s) .
>> >.............................................................................................
>> >. Ethernet (1 Gb/s network) . 77 78 . 77 89 .
>> >. IPoIB (8 Gb/s network) . 163 185 . 201 239 .
>> >. iSER (8 Gb/s network) . 250 N/A . 360 N/A .
>> >. SRP (8 Gb/s network) . N/A 421 . N/A 683 .
>> >............................................................................................
>>
>> how are write speeds with SCST SRP?
>> for some kernels and tests tgt writes at >2x the read speed.
>>
>> also I see much higher speeds than what you report in my DDR 4x IB tgt
>> testing... which could be taken to imply that tgt is scaling quite
>> nicely on the faster fabric?
>> ib_write_bw of 1473 MB/s
>> ib_read_bw of 1378 MB/s
>>
>> iSER to 7G ramfs, x86_64, centos4.6, 2.6.22 kernels, git tgtd,
>> initiator end booted with mem=512M, target with 8G ram
>>
>> direct i/o dd
>> write/read 800/751 MB/s
>> dd if=/dev/zero of=/dev/sdc bs=1M count=5000 oflag=direct
>> dd of=/dev/null if=/dev/sdc bs=1M count=5000 iflag=direct
>>
>> buffered i/o dd
>> write/read 1109/350 MB/s
>> dd if=/dev/zero of=/dev/sdc bs=1M count=5000
>> dd of=/dev/null if=/dev/sdc bs=1M count=5000
>>
>> buffered i/o lmdd
>> write/read 682/438 MB/s
>> lmdd if=internal of=/dev/sdc bs=1M count=5000
>> lmdd of=internal if=/dev/sdc bs=1M count=5000
>The tests I performed were read performance tests with dd and with
>buffered I/O. For this test you obtained 350 MB/s with STGT on a DDR
... and 1.1GB/s writes :)
presumably because buffer aggregation works well.
>4x InfiniBand network, while I obtained 360 MB/s on a SDR 4x
>InfiniBand network. I don't think that we can call this "scaling up"
>...
the direct i/o read speed being twice the buffered i/o speed would seem
to imply that Linux's page cache is being slow and confused with this
particular set of kernel + OS + OFED versions.
I doubt that this result actually says that much about tgt really.
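if the page cache is the suspect, one knob that directly affects
buffered sequential reads is the block device readahead setting (the
device name and value below are just examples):
 blockdev --getra /dev/sdc          # current readahead, in 512-byte sectors
 blockdev --setra 16384 /dev/sdc    # try 8 MB readahead, then re-run the dd test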
>Regarding write performance: the write tests were performed with a
>real target (three disks in RAID-0, write bandwidth about 100 MB/s). I
I'd be interested to see ramdisk writes.
cheers,
robin
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: Performance of SCST versus STGT
[not found] ` <4798720E.4020802-d+Crzxg7Rs0@public.gmane.org>
@ 2008-01-24 11:40 ` Robin Humble
[not found] ` <20080124114027.GC26751-Td5ZOp7sT3Xw02mFwxTg32+DJq1SqhBbsOSz5zK2v9k@public.gmane.org>
0 siblings, 1 reply; 37+ messages in thread
From: Robin Humble @ 2008-01-24 11:40 UTC (permalink / raw)
To: Vladislav Bolkhovitin
Cc: FUJITA Tomonori, stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Thu, Jan 24, 2008 at 02:10:06PM +0300, Vladislav Bolkhovitin wrote:
>> On Jan 24, 2008 8:06 AM, Robin Humble <robin.humble+stgt-FCV4sgi5zeUQrrorzV6ljw@public.gmane.org> wrote:
>>> how are write speeds with SCST SRP?
>>> for some kernels and tests tgt writes at >2x the read speed.
>
> There is a fundamental difference between regular dd-like reads and writes:
> reads are sync, i.e. latency sensitive, but writes are async, i.e. latency
> insensitive. You should use O_DIRECT dd writes for the fair comparison.
I agree, although the vast majority of applications don't use O_DIRECT.
anyway, the direct i/o results were in the email:
direct i/o dd
write/read 800/751 MB/s
dd if=/dev/zero of=/dev/sdc bs=1M count=5000 oflag=direct
dd of=/dev/null if=/dev/sdc bs=1M count=5000 iflag=direct
I couldn't find a direct i/o option for lmdd.
cheers,
robin
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: Performance of SCST versus STGT
[not found] ` <20080124113215.GB26751-Td5ZOp7sT3Xw02mFwxTg32+DJq1SqhBbsOSz5zK2v9k@public.gmane.org>
@ 2008-01-24 12:40 ` Vladislav Bolkhovitin
0 siblings, 0 replies; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-24 12:40 UTC (permalink / raw)
To: Robin Humble
Cc: FUJITA Tomonori, stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Robin Humble wrote:
> On Thu, Jan 24, 2008 at 11:36:45AM +0100, Bart Van Assche wrote:
>
>>On Jan 24, 2008 8:06 AM, Robin Humble <robin.humble+stgt-FCV4sgi5zeUQrrorzV6ljw@public.gmane.org> wrote:
>>
>>>On Tue, Jan 22, 2008 at 01:32:08PM +0100, Bart Van Assche wrote:
>>>
>>>>.............................................................................................
>>>>. . STGT read SCST read . STGT read SCST read .
>>>>. . performance performance . performance performance .
>>>>. . (0.5K, MB/s) (0.5K, MB/s) . (1 MB, MB/s) (1 MB, MB/s) .
>>>>.............................................................................................
>>>>. Ethernet (1 Gb/s network) . 77 78 . 77 89 .
>>>>. IPoIB (8 Gb/s network) . 163 185 . 201 239 .
>>>>. iSER (8 Gb/s network) . 250 N/A . 360 N/A .
>>>>. SRP (8 Gb/s network) . N/A 421 . N/A 683 .
>>>>............................................................................................
>>>
>>>how are write speeds with SCST SRP?
>>>for some kernels and tests tgt writes at >2x the read speed.
>>>
>>>also I see much higher speeds than what you report in my DDR 4x IB tgt
>>>testing... which could be taken to imply that tgt is scaling quite
>>>nicely on the faster fabric?
>>> ib_write_bw of 1473 MB/s
>>> ib_read_bw of 1378 MB/s
>>>
>>>iSER to 7G ramfs, x86_64, centos4.6, 2.6.22 kernels, git tgtd,
>>>initiator end booted with mem=512M, target with 8G ram
>>>
>>> direct i/o dd
>>> write/read 800/751 MB/s
>>> dd if=/dev/zero of=/dev/sdc bs=1M count=5000 oflag=direct
>>> dd of=/dev/null if=/dev/sdc bs=1M count=5000 iflag=direct
>>>
>>> buffered i/o dd
>>> write/read 1109/350 MB/s
>>> dd if=/dev/zero of=/dev/sdc bs=1M count=5000
>>> dd of=/dev/null if=/dev/sdc bs=1M count=5000
>>>
>>>buffered i/o lmdd
>>> write/read 682/438 MB/s
>>> lmdd if=internal of=/dev/sdc bs=1M count=5000
>>> lmdd of=internal if=/dev/sdc bs=1M count=5000
>
>
>>The tests I performed were read performance tests with dd and with
>>buffered I/O. For this test you obtained 350 MB/s with STGT on a DDR
>
>
> ... and 1.1GB/s writes :)
> presumably because buffer aggregation works well.
>
>
>>4x InfiniBand network, while I obtained 360 MB/s on a SDR 4x
>>InfiniBand network. I don't think that we can call this "scaling up"
>>...
>
>
> the direct i/o read speed being twice the buffered i/o speed would seem
> to imply that Linux's page cache is being slow and confused with this
> particular set of kernel + OS + OFED versions.
> I doubt that this result actually says that much about tgt really.
Buffered dd read is actually one of the best benchmarks if you want to
compare STGT vs SCST, because it is single threaded with one outstanding
command most of the time, i.e. it is a latency bound workload. Plus,
most applications reading files do exactly what dd does.
Both SCST and STGT suffer equally from possible problems on the
initiator, but SCST bears them much better, because it has much less
processing latency (e.g., there are no extra user<->kernel space
switches and other related overhead).
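To put rough numbers on that: if a 1 MB buffered read is treated as
fully serialized, the per-command overhead implied by the 1 MB columns
above and the ~933 MB/s wire rate reported elsewhere in this thread is
1/R - 1/W. A back-of-envelope sketch only (readahead granularity and
any pipelining are ignored):
awk 'BEGIN {
  W = 933                                   # MB/s, approximate wire rate
  printf "STGT+iSER: %.2f ms per 1 MB read\n", (1/360 - 1/W) * 1000
  printf "SCST+SRP:  %.2f ms per 1 MB read\n", (1/683 - 1/W) * 1000
}'
# prints roughly 1.71 ms (STGT+iSER) and 0.39 ms (SCST+SRP)
Under these crude assumptions the per-command processing gap, not the
wire rate, dominates the 1 MB results.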
>>Regarding write performance: the write tests were performed with a
>>real target (three disks in RAID-0, write bandwidth about 100 MB/s). I
>
>
> I'd be interested to see ramdisk writes.
>
> cheers,
> robin
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: Performance of SCST versus STGT
[not found] ` <20080124114027.GC26751-Td5ZOp7sT3Xw02mFwxTg32+DJq1SqhBbsOSz5zK2v9k@public.gmane.org>
@ 2008-01-24 12:41 ` Vladislav Bolkhovitin
0 siblings, 0 replies; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-24 12:41 UTC (permalink / raw)
To: Robin Humble
Cc: FUJITA Tomonori, stgt-devel-0fE9KPoRgkgATYTw5x5z8w,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
James.Bottomley-JuX6DAaQMKPCXq6kfMZ53/egYHeGw8Jk,
scst-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Robin Humble wrote:
> On Thu, Jan 24, 2008 at 02:10:06PM +0300, Vladislav Bolkhovitin wrote:
>
>>>On Jan 24, 2008 8:06 AM, Robin Humble <robin.humble+stgt-FCV4sgi5zeUQrrorzV6ljw@public.gmane.org> wrote:
>>>
>>>>how are write speeds with SCST SRP?
>>>>for some kernels and tests tgt writes at >2x the read speed.
>>
>>There is a fundamental difference between regular dd-like reads and writes:
>>reads are sync, i.e. latency sensitive, but writes are async, i.e. latency
>>insensitive. You should use O_DIRECT dd writes for the fair comparison.
>
> I agree, although the vast majority of applications don't use O_DIRECT.
Sorry, it isn't about O_DIRECT usage. It's about whether or not the
workload is latency bound.
> anyway, the direct i/o results were in the email:
>
> direct i/o dd
> write/read 800/751 MB/s
> dd if=/dev/zero of=/dev/sdc bs=1M count=5000 oflag=direct
> dd of=/dev/null if=/dev/sdc bs=1M count=5000 iflag=direct
>
> I couldn't find a direct i/o option for lmdd.
>
> cheers,
> robin
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Stgt-devel] Performance of SCST versus STGT
2008-01-24 7:06 ` Robin Humble
2008-01-24 10:36 ` [Stgt-devel] " Bart Van Assche
@ 2008-01-24 16:16 ` Bart Van Assche
2008-01-24 19:54 ` Vladislav Bolkhovitin
1 sibling, 1 reply; 37+ messages in thread
From: Bart Van Assche @ 2008-01-24 16:16 UTC (permalink / raw)
To: Robin Humble
Cc: Vladislav Bolkhovitin, James.Bottomley, stgt-devel, linux-scsi,
FUJITA Tomonori, scst-devel
On Jan 24, 2008 8:06 AM, Robin Humble <robin.humble+stgt@anu.edu.au> wrote:
> On Tue, Jan 22, 2008 at 01:32:08PM +0100, Bart Van Assche wrote:
> >
> >.............................................................................................
> >. . STGT read SCST read . STGT read SCST read .
> >. . performance performance . performance performance .
> >. . (0.5K, MB/s) (0.5K, MB/s) . (1 MB, MB/s) (1 MB, MB/s) .
> >.............................................................................................
> >. Ethernet (1 Gb/s network) . 77 78 . 77 89 .
> >. IPoIB (8 Gb/s network) . 163 185 . 201 239 .
> >. iSER (8 Gb/s network) . 250 N/A . 360 N/A .
> >. SRP (8 Gb/s network) . N/A 421 . N/A 683 .
> >............................................................................................
>
Results with /dev/ram0 configured as backing store on the target (buffered I/O):
              Read          Write         Read          Write
              performance   performance   performance   performance
              (0.5K, MB/s)  (0.5K, MB/s)  (1 MB, MB/s)  (1 MB, MB/s)
STGT + iSER       250            48           349           781
SCST + SRP        411            66           659           746
Results with /dev/ram0 configured as backing store on the target (direct I/O):
              Read          Write         Read          Write
              performance   performance   performance   performance
              (0.5K, MB/s)  (0.5K, MB/s)  (1 MB, MB/s)  (1 MB, MB/s)
STGT + iSER       7.9           9.8           589           647
SCST + SRP       12.3           9.7           811           794
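For reference, the STGT side of such a /dev/ram0 setup looks roughly as
follows (a sketch: the IQN is made up, the target must be booted with a
ramdisk_size= kernel parameter large enough for the test, and the
SCST/SRP side is configured differently):
# e.g. boot with ramdisk_size=4194304 (KB) to get a 4 GB /dev/ram0
tgtadm --lld iscsi --op new --mode target --tid 1 \
       -T iqn.2008-01.com.example:ram0
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/ram0
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL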
Bart.
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: Performance of SCST versus STGT
2008-01-24 16:16 ` [Stgt-devel] " Bart Van Assche
@ 2008-01-24 19:54 ` Vladislav Bolkhovitin
2008-01-25 7:24 ` Bart Van Assche
0 siblings, 1 reply; 37+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-24 19:54 UTC (permalink / raw)
To: Bart Van Assche
Cc: Robin Humble, James.Bottomley, stgt-devel, linux-scsi,
FUJITA Tomonori, scst-devel
Bart Van Assche wrote:
> On Jan 24, 2008 8:06 AM, Robin Humble <robin.humble+stgt@anu.edu.au> wrote:
>
>>On Tue, Jan 22, 2008 at 01:32:08PM +0100, Bart Van Assche wrote:
>>
>>>.............................................................................................
>>>. . STGT read SCST read . STGT read SCST read .
>>>. . performance performance . performance performance .
>>>. . (0.5K, MB/s) (0.5K, MB/s) . (1 MB >MB/s) (1 MB, MB/s) .
>>>.............................................................................................
>>>. Ethernet (1 Gb/s network) . 77 78 . 77 89 .
>>>. IPoIB (8 Gb/s network) . 163 185 . 201 239 .
>>>. iSER (8 Gb/s network) . 250 N/A . 360 N/A .
>>>. SRP (8 Gb/s network) . N/A 421 . N/A 683 .
>>>............................................................................................
>>
>
> Results with /dev/ram0 configured as backing store on the target (buffered I/O):
> Read Write Read Write
> performance performance performance performance
> (0.5K, MB/s) (0.5K, MB/s) (1 MB, MB/s) (1 MB, MB/s)
> STGT + iSER 250 48 349 781
> SCST + SRP 411 66 659 746
ib_rdma_bw now reports 933 MB/s on the same system, correct? That
~250 MB/s difference is what you will gain with zero-copy I/O
implemented, and what STGT with its current architecture has no chance
to achieve.
> Results with /dev/ram0 configured as backing store on the target (direct I/O):
> Read Write Read Write
> performance performance performance performance
> (0.5K, MB/s) (0.5K, MB/s) (1 MB, MB/s) (1 MB, MB/s)
> STGT + iSER 7.9 9.8 589 647
> SCST + SRP 12.3 9.7 811 794
>
> Bart.
>
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: Performance of SCST versus STGT
2008-01-24 19:54 ` Vladislav Bolkhovitin
@ 2008-01-25 7:24 ` Bart Van Assche
0 siblings, 0 replies; 37+ messages in thread
From: Bart Van Assche @ 2008-01-25 7:24 UTC (permalink / raw)
To: Vladislav Bolkhovitin
Cc: Robin Humble, James.Bottomley, stgt-devel, linux-scsi,
FUJITA Tomonori, scst-devel
On Jan 24, 2008 8:54 PM, Vladislav Bolkhovitin <vst@vlnb.net> wrote:
> Ib_rdma_bw now reports 933 MB/s on the same system, correct? Those
> ~250MB/s difference is what you will gain with zero-copy IO implemented
> and what STGT with the current architecture has no chance to achieve.
Yes, that's correct:
* ib_rdma_bw and ib_write_bw report 933 MB/s between the
two test systems.
* ib_read_bw reports 905 MB/s.
* ib_rdma_lat reports 3.1 microseconds.
* ib_read_lat reports 7.5 microseconds.
Bart.
^ permalink raw reply [flat|nested] 37+ messages in thread
end of thread
Thread overview: 37+ messages
2008-01-17 9:27 Performance of SCST versus STGT Bart Van Assche
2008-01-17 9:40 ` FUJITA Tomonori
2008-01-17 9:48 ` Vladislav Bolkhovitin
2008-01-17 10:05 ` FUJITA Tomonori
[not found] ` <20080117190558K.fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>
2008-01-17 10:34 ` Vladislav Bolkhovitin
[not found] ` <478F2F46.9040103-d+Crzxg7Rs0@public.gmane.org>
2008-01-17 12:29 ` Robin Humble
2008-01-17 13:44 ` [Scst-devel] [Stgt-devel] " Vladislav Bolkhovitin
[not found] ` <20080117122956.GA3567-Td5ZOp7sT3Xw02mFwxTg32+DJq1SqhBbsOSz5zK2v9k@public.gmane.org>
2008-01-17 14:43 ` Bart Van Assche
2008-01-17 17:45 ` Pete Wyckoff
[not found] ` <20080117174542.GC29650-pxmRpbKlMIQ@public.gmane.org>
2008-01-18 10:30 ` Bart Van Assche
2008-01-18 12:08 ` Vladislav Bolkhovitin
2008-01-20 9:36 ` Bart Van Assche
[not found] ` <e2e108260801200136g7f17b8a0g89a54cc1d73bbc34-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2008-01-21 12:07 ` Vladislav Bolkhovitin
2008-01-22 3:26 ` FUJITA Tomonori
[not found] ` <20080122122657R.fujita.tomonori-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>
2008-01-22 7:50 ` Bart Van Assche
2008-01-22 11:33 ` Vladislav Bolkhovitin
[not found] ` <4795D479.1080805-d+Crzxg7Rs0@public.gmane.org>
2008-01-22 11:48 ` FUJITA Tomonori
2008-01-22 12:20 ` Vladislav Bolkhovitin
2008-01-22 15:14 ` Bart Van Assche
2008-01-22 10:04 ` Bart Van Assche
2008-01-22 11:33 ` Vladislav Bolkhovitin
[not found] ` <4795D4A7.5000105-d+Crzxg7Rs0@public.gmane.org>
2008-01-22 12:32 ` Bart Van Assche
[not found] ` <e2e108260801220432l353b1d76xd2707b5e6f336aef-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2008-01-22 15:23 ` Vladislav Bolkhovitin
2008-01-24 7:06 ` Robin Humble
2008-01-24 10:36 ` [Stgt-devel] " Bart Van Assche
2008-01-24 11:10 ` Vladislav Bolkhovitin
[not found] ` <4798720E.4020802-d+Crzxg7Rs0@public.gmane.org>
2008-01-24 11:40 ` Robin Humble
[not found] ` <20080124114027.GC26751-Td5ZOp7sT3Xw02mFwxTg32+DJq1SqhBbsOSz5zK2v9k@public.gmane.org>
2008-01-24 12:41 ` Vladislav Bolkhovitin
[not found] ` <e2e108260801240236o2273be0bw24a2a61dcc781222-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2008-01-24 11:32 ` Robin Humble
[not found] ` <20080124113215.GB26751-Td5ZOp7sT3Xw02mFwxTg32+DJq1SqhBbsOSz5zK2v9k@public.gmane.org>
2008-01-24 12:40 ` Vladislav Bolkhovitin
2008-01-24 16:16 ` [Stgt-devel] " Bart Van Assche
2008-01-24 19:54 ` Vladislav Bolkhovitin
2008-01-25 7:24 ` Bart Van Assche
2008-01-22 12:33 ` Bart Van Assche
2008-01-17 14:22 ` Erez Zilber
[not found] ` <478F64A0.6020201-hKgKHo2Ms0F+cjeuK/JdrQ@public.gmane.org>
2008-01-17 14:32 ` Vladislav Bolkhovitin
[not found] ` <478F6708.30604-d+Crzxg7Rs0@public.gmane.org>
2008-01-17 14:46 ` Erez Zilber