public inbox for linux-kernel@vger.kernel.org
* [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
@ 2001-01-29 21:27 Jeff V. Merkey
  2001-01-29 23:49 ` Jeff V. Merkey
  0 siblings, 1 reply; 12+ messages in thread
From: Jeff V. Merkey @ 2001-01-29 21:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: jmerkey

[-- Attachment #1: Type: text/plain, Size: 2747 bytes --]



Linux Kernel,

The RPM versions of the Dolphin PCI-SCI (Scalable Coherent Interface) 
adapter drivers have been posted at vger.timpanogas.org/sci.  This release
supports the following SCI adapters: PSB32, PSB64, and PSB66, covering 
both the 32-bit and 64-bit PCI versions of the Dolphin PCI-SCI adapters.  
This RPM version supports i386, Alpha, Sparc, and Sparc64 architecture 
systems.

SCI is a high-performance clustering interconnect fabric that allows 
a Linux system to move mapped regions of memory (such as a distributed 
cache) between nodes, either via push/pull DMA transfers or by 
remotely mapping R/W access to pages of memory in another cluster 
node.  This version of the adapters supports a scalable fabric on 
BiCMOS chipsets that operates with a maximum data transfer rate of 
500 Megabytes/second.  Obviously, PCI bandwidth limits how much 
throughput you can actually get on a live Linux system.  On an AMD 
Athlon 550 MHz system, we are seeing about 65 Megabytes/second with 
DMA and about 45 Megabytes/second throughput with memory copying 
between nodes with this version of the drivers/adapters.

Linux 2.2 vs. 2.4 performance is telling, with Linux 2.4 clearly 
providing significantly higher performance for NUMA 
operations with the Dolphin PCI-SCI adapters.   Please review 
the attached performance figures below for a comparison.
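The throughput columns in the attached logs follow directly from segment size over latency; a quick awk sketch against the 64 KB rows of the first runs (decimal megabytes, as dma_bench reports them):

```shell
#!/bin/sh
# Sanity check on the attached dma_bench figures:
#   throughput (MBytes/s) = bytes transferred / latency
# Numbers are the 65536-byte rows from the first run of each log.
awk 'BEGIN {
    printf "2.2.18: %.2f MBytes/s\n", 65536 / 1440e-6 / 1e6;  # 64 KB in 1440 us
    printf "2.4.0 : %.2f MBytes/s\n", 65536 / 1009e-6 / 1e6;  # 64 KB in 1009 us
}'
```

Both results reproduce the 45.51 and 64.95 MBytes/s figures in the logs, so the tables are internally consistent.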

This release removes the requirement for the bigphysarea patch
for these drivers, which allows them to run on any "shrink-wrapped" 
Linux version.  We have also packaged these drivers in both tar.gz 
and RPM format, and a source RPM is provided under the GNU General 
Public License, which will allow these drivers to be easily included 
in any commercial Linux distribution that uses RPM.  We have tested 
this RPM package on RedHat, SuSE, Caldera, and Ute Linux.

This release has been tested against and fully supports the Linux 2.4 
kernel.  It corrects several bugs and kernel Oopses seen with previous 
versions on Linux 2.4 kernels.

Please feel free to post any bugs or problems with these drivers to 
linux-kernel@vger.kernel.org or report them to jmerkey@timpanogas.org. 
I will be maintaining the RPM release of the Dolphin PCI-SCI drivers,
so please direct any reports to me and I will forward them on to
Dolphin ICS if they are related to hardware bugs.   For additional
useful information about SCI, please visit Dolphin's website
at www.dolphinics.com.

The next release of these drivers will contain a fast-path sockets 
and TCP/IP interface to allow IP routing, and support for LVS and 
distributed NUMA support for the M2FS Clustered File System on the 
Dolphin adapters.

Jeff Merkey
Chief Engineer, TRG



[-- Attachment #2: linux22-1.sci --]
[-- Type: text/plain, Size: 2583 bytes --]


Linux 2.2.18 Performance Benchmarks
------------------------------------

 /opt/DIS/bin/dma_bench program compiled Jan 28 2001 : 23:43:03

Test parameters for client 
----------------------------

Local nodeId      : 8
Remote nodeId     : 4
Local adapter no. : 0
Block size        : 65536
Loops to execute  : 3
ILoops to execute : 1000
Key Offset        : 0
Direction	  : PUSH
----------------------------

Local segment (id=0x80400, size=65536) is created. 
Local segment (id=0x80400, size=65536) is created. 
Local segment (id=0x80400) is mapped to user space.
Trying to connect to remote segment (id=0x40800) at node 4
Remote segment (id=0x40800) is connected.

-- Starting the data transfer -- 


----------------------------------------------------
Segment size:	Latency:		Throughput:
----------------------------------------------------
64		    112.00 us		    0.57 MBytes/s
128		    113.00 us		    1.13 MBytes/s
256		    115.00 us		    2.23 MBytes/s
512		    120.00 us		    4.27 MBytes/s
1024		    130.00 us		    7.88 MBytes/s
2048		    150.00 us		   13.65 MBytes/s
4096		    188.00 us		   21.79 MBytes/s
8192		    266.00 us		   30.80 MBytes/s
16384		    423.00 us		   38.73 MBytes/s
32768		    758.00 us		   43.23 MBytes/s
65536		   1440.00 us		   45.51 MBytes/s
----------------------------------------------------
Segment size:	Latency:		Throughput:
----------------------------------------------------
64		    112.00 us		    0.57 MBytes/s
128		    113.00 us		    1.13 MBytes/s
256		    115.00 us		    2.23 MBytes/s
512		    120.00 us		    4.27 MBytes/s
1024		    130.00 us		    7.88 MBytes/s
2048		    150.00 us		   13.65 MBytes/s
4096		    188.00 us		   21.79 MBytes/s
8192		    266.00 us		   30.80 MBytes/s
16384		    423.00 us		   38.73 MBytes/s
32768		    758.00 us		   43.23 MBytes/s
65536		   1439.00 us		   45.54 MBytes/s
----------------------------------------------------
Segment size:	Latency:		Throughput:
----------------------------------------------------
64		    112.00 us		    0.57 MBytes/s
128		    112.00 us		    1.14 MBytes/s
256		    115.00 us		    2.23 MBytes/s
512		    120.00 us		    4.27 MBytes/s
1024		    129.00 us		    7.94 MBytes/s
2048		    150.00 us		   13.65 MBytes/s
4096		    188.00 us		   21.79 MBytes/s
8192		    266.00 us		   30.80 MBytes/s
16384		    423.00 us		   38.73 MBytes/s
32768		    758.00 us		   43.23 MBytes/s
65536		   1439.00 us		   45.54 MBytes/s
Node 8 triggering interrupt

Interrupt message sent to remote node

DMA transfer done!
The segment is disconnected
The local segment is unmapped
The local segment is removed

[-- Attachment #3: linux24-1.sci --]
[-- Type: text/plain, Size: 2582 bytes --]


Linux 2.4.0 Performance Benchmarks
------------------------------------

 /opt/DIS/bin/dma_bench program compiled Jan 28 2001 : 22:41:53

Test parameters for client 
----------------------------

Local nodeId      : 4
Remote nodeId     : 8
Local adapter no. : 0
Block size        : 65536
Loops to execute  : 3
ILoops to execute : 1000
Key Offset        : 0
Direction	  : PUSH
----------------------------

Local segment (id=0x40800, size=65536) is created. 
Local segment (id=0x40800, size=65536) is created. 
Local segment (id=0x40800) is mapped to user space.
Trying to connect to remote segment (id=0x80400) at node 8
Remote segment (id=0x80400) is connected.

-- Starting the data transfer -- 


----------------------------------------------------
Segment size:	Latency:		Throughput:
----------------------------------------------------
64		     52.00 us		    1.23 MBytes/s
128		     52.00 us		    2.46 MBytes/s
256		     54.00 us		    4.74 MBytes/s
512		     58.00 us		    8.83 MBytes/s
1024		     65.00 us		   15.75 MBytes/s
2048		     80.00 us		   25.60 MBytes/s
4096		    110.00 us		   37.24 MBytes/s
8192		    170.00 us		   48.19 MBytes/s
16384		    290.00 us		   56.50 MBytes/s
32768		    532.00 us		   61.59 MBytes/s
65536		   1009.00 us		   64.95 MBytes/s
----------------------------------------------------
Segment size:	Latency:		Throughput:
----------------------------------------------------
64		     52.00 us		    1.23 MBytes/s
128		     52.00 us		    2.46 MBytes/s
256		     54.00 us		    4.74 MBytes/s
512		     58.00 us		    8.83 MBytes/s
1024		     65.00 us		   15.75 MBytes/s
2048		     80.00 us		   25.60 MBytes/s
4096		    110.00 us		   37.24 MBytes/s
8192		    170.00 us		   48.19 MBytes/s
16384		    290.00 us		   56.50 MBytes/s
32768		    531.00 us		   61.71 MBytes/s
65536		   1011.00 us		   64.82 MBytes/s
----------------------------------------------------
Segment size:	Latency:		Throughput:
----------------------------------------------------
64		     52.00 us		    1.23 MBytes/s
128		     52.00 us		    2.46 MBytes/s
256		     54.00 us		    4.74 MBytes/s
512		     58.00 us		    8.83 MBytes/s
1024		     65.00 us		   15.75 MBytes/s
2048		     80.00 us		   25.60 MBytes/s
4096		    110.00 us		   37.24 MBytes/s
8192		    170.00 us		   48.19 MBytes/s
16384		    290.00 us		   56.50 MBytes/s
32768		    532.00 us		   61.59 MBytes/s
65536		   1012.00 us		   64.76 MBytes/s
Node 4 triggering interrupt

Interrupt message sent to remote node

DMA transfer done!
The segment is disconnected
The local segment is unmapped
The local segment is removed

[-- Attachment #4: NOTES --]
[-- Type: text/plain, Size: 9999 bytes --]


Copyright (c) 1990-2001 Dolphin Interconnect Solutions, Inc.
All Rights Reserved.


RPM (RedHat Package Manager) installation notes for DIS PCI-SCI Adapters 
------------------------------------------------------------------------

This release contains the Dolphin PCI-SCI adapter drivers for the PSB32, 
PSB64, and PSB66 adapters in RPM (RedHat Package Manager) package 
format.  

These packages are built using the RPM build utility.  The RPM format
is 3.04, and is compatible with RPM versions 3 and 4.  RPM
packages are provided in both source (.src.rpm) and binary forms
(.i386.rpm, .alpha.rpm, etc.).  A binary RPM package contains the
drivers and driver installation scripts needed to install or 
remove a version of the PCI-SCI drivers from your Linux systems.

The source RPM package contains the source code and build scripts
needed to rebuild the binary RPM packages for a target system. If 
you are using modversioned kernels, you will need to rebuild the 
binary packages for your system against the source RPM package.

All commercial Linux distributions use the RPM tools to administer 
packages and verify dependencies and version matches required to 
support a particular package.  If you are using a system that does 
not support RPM, please visit Dolphin's Website at www.dolphinics.com
for a copy of the latest .tar.gz or tar version of the PCI-SCI 
drivers for your Unix or Linux system.  

This release consists of the following RPM packages (note: %{version} is 
the RPM package version-release number):

RPM source package:

pci-sci-%{version}.src.rpm

RPM binary packages:

Intel X86:

pci-sci-PSB32-%{version}.i386.rpm
pci-sci-PSB64-%{version}.i386.rpm
pci-sci-PSB66-%{version}.i386.rpm

Sparc:

pci-sci-PSB32-%{version}.sparc.rpm
pci-sci-PSB64-%{version}.sparc.rpm
pci-sci-PSB66-%{version}.sparc.rpm
pci-sci-PSB32-%{version}.sparc64.rpm
pci-sci-PSB64-%{version}.sparc64.rpm
pci-sci-PSB66-%{version}.sparc64.rpm

Alpha:

pci-sci-PSB32-%{version}.alpha.rpm
pci-sci-PSB64-%{version}.alpha.rpm
pci-sci-PSB66-%{version}.alpha.rpm


Installing RPM source and binary packages
-----------------------------------------

It is recommended that you rebuild the .src.rpm if you are using a kernel
built with modversions enabled, and install the binary RPM packages created
from this build.  You must have a valid and properly configured 
kernel source tree located in the /usr/src/linux directory 
on your system in order to rebuild the PCI-SCI drivers.  Please consult 
the /usr/src/linux/Documentation directory or the /usr/src/linux/README 
file for specific instructions on setting up your kernel build tree.
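A quick sanity check that the tree is in place before rebuilding (a hypothetical helper, not part of this package; a configured 2.2/2.4-era tree carries a top-level Makefile and a .config file):

```shell
#!/bin/sh
# Hypothetical helper: check that a kernel source tree looks configured.
# A configured 2.2/2.4-era tree has a top-level Makefile and a .config.
check_kernel_tree() {
    dir="${1:-/usr/src/linux}"
    [ -f "$dir/Makefile" ] && [ -f "$dir/.config" ]
}

if check_kernel_tree /usr/src/linux; then
    echo "/usr/src/linux looks configured"
else
    echo "configure the tree first (see Documentation/ and the README)"
fi
```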

To rebuild a source RPM on a target system, enter the following command:

rpm --rebuild pci-sci-%{version}.src.rpm 

This command will install the source files into the RPM build tree 
on your system and rebuild all the binary packages.  This tree is always 
found at /usr/src/($distribution)/ and contains BUILD, SOURCES, 
SPECS, SRPMS, and RPMS subdirectories, where ($distribution) is the name 
of your Linux distribution.  For example, if your Linux distribution is 
Red Hat, then this directory would be located at /usr/src/redhat/.  
If your distribution is Caldera Linux, then this directory would be 
located at /usr/src/OpenLinux/.  It's a good idea to simply list 
the contents of /usr/src/ to locate the particular RPM build tree
for your commercial Linux distribution.
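Listing /usr/src/ can also be scripted; a small sketch (the helper name is mine, not part of the RPM tools) that looks for the standard RPMS and SPECS subdirectories:

```shell
#!/bin/sh
# Sketch: locate the RPM build tree under a base directory by looking for
# the standard RPMS/ and SPECS/ subdirectories.  The tree's name varies by
# vendor (redhat/, OpenLinux/, packages/, ...).
find_rpm_tree() {
    base="${1:-/usr/src}"
    for d in "$base"/*/; do
        if [ -d "${d}RPMS" ] && [ -d "${d}SPECS" ]; then
            echo "${d%/}"
            return 0
        fi
    done
    return 1
}

find_rpm_tree /usr/src || echo "no RPM build tree found under /usr/src"
```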

The recompiled binary RPM packages will be found after the build 
finishes at /usr/src/($distribution)/RPMS/($arch).  ($arch) is 
the architecture of the system on which you are building.  If you 
are rebuilding the source RPM package on an Intel System on 
RedHat 6.2, then this directory would be /usr/src/redhat/RPMS/i386.
If you are rebuilding the source RPM package on an Alpha System on 
RedHat 6.2, then this directory would be /usr/src/redhat/RPMS/alpha.

To install either the pre-built or the rebuilt RPM binary packages,
simply type the following command, depending on which PCI-SCI adapter 
is in your system (PSB32, PSB64, PSB66):

rpm -i pci-sci-PSB64-%{version}.i386.rpm

You can also force RPM to overwrite previously installed versions 
of the pci-sci drivers with this command:

rpm -i --force pci-sci-PSB64-%{version}.i386.rpm

You should see several messages on the system console detailing 
the progress of the PCI-SCI driver installation.  They should
look something like this:

#
#
# rpm -i pci-sci-PSB64-1.0-1.i386.rpm
pci-sci reports this system has 1 processor(s) (LINUX)
Installing Dolphin PCI-SCI adapter drivers ...
Command:add
Adapter:PSB64
SMP: NOSMP
Creating /etc/rc.d/init.d/sci_irm_sisci for PSB64-2.2.18-27-NOSMP
Reading IRM driver configuration informaiton from /opt/DIS-1.0/sbin/..//lib/modules/pcisci.conf

Done.
#
#

The /var/log/messages file should also contain entries similar to the 
following if your SCI hardware initialized properly:

Jan 24 22:37:52 nwfs kernel: SCI Driver : Linux SMP support disabled 
Jan 24 22:37:52 nwfs kernel: SCI Driver : using MTRR 
Jan 24 22:37:52 nwfs kernel: PCI SCI Bridge - device id 0xd665 found 
Jan 24 22:37:52 nwfs kernel: 1 supported PCI-SCI bridges (PSB's) found on the system 
Jan 24 22:37:52 nwfs kernel: Define PSB 1 key: Bus:  0 DevFn: 72 
Jan 24 22:37:52 nwfs kernel: Key 1: Key: (Bus:  0,DevFn: 72), Device No. 1, irq 9 
Jan 24 22:37:52 nwfs kernel: Mapping address space CSR space: phaddr f3100000 sz 32768 out of 32768 
Jan 24 22:37:52 nwfs kernel: Mapping address space CSR space: vaddr c40ae000 
Jan 24 22:37:52 nwfs kernel: Mapping address space IO space: phaddr d0000000 sz 268435456 out of 268435456 
Jan 24 22:37:52 nwfs kernel: Mapping address space IO space: vaddr c40b7000 
Jan 24 22:37:52 nwfs kernel: SCI Driver : version Dolphin IRM 1.10.2 ( Beta 1 2000-12-18 ) initializing 
Jan 24 22:37:52 nwfs kernel: SCI Driver : 32 bit mode Compiled Jan 24 2001 : 22:35:50 
Jan 24 22:37:52 nwfs kernel: Interrupt 9 function registered 
Jan 24 22:37:52 nwfs kernel: SCI Adapter 0 : Driver attaching 
Jan 24 22:37:52 nwfs kernel: osif_big_alloc 8192 align 0 <NULL> 
Jan 24 22:37:52 nwfs kernel: osif_big_alloc 65792 align 0 SW packet buffers 
Jan 24 22:37:52 nwfs kernel: SCI Adapter 0 : NodeId is 4 ( 0x4 ) Serial no : 100963 (0x18a63) 
Jan 24 22:37:52 nwfs kernel: osif_big_alloc 8192 align 2 ma_allocBuffer 
Jan 24 22:37:52 nwfs kernel: osif_big_alloc 520700 align 0 MbxMsgQueueDescArray 
Jan 24 22:37:52 nwfs kernel: osif_big_alloc 8192 align 2 ma_allocBuffer 
Jan 24 22:37:52 nwfs kernel: SCI driver : successfully registerd. Major number = 254 


Removing and Querying RPM packages
----------------------------------

To remove the PCI-SCI drivers from your system, enter the following 
command, using the adapter type installed in your system:

rpm -e pci-sci-PSB64

Messages similar to the following should be displayed on the system
console:

#
#
# rpm -e pci-sci-PSB64
pci-sci reports this system has 1 processor(s) (LINUX)
Command:rem
Adapter:PSB64
SMP: NOSMP
Removing the IRM and SISCI drivers from the system
Removing SCI device nodes and unloading drivers
Done.
#
#

The /var/log/messages file should also contain entries similar to the 
following if your SCI hardware shut down properly:

Jan 24 22:38:03 nwfs kernel: osif_big_free 8192 ma_allocBuffer 
Jan 24 22:38:03 nwfs kernel: osif_big_free 520700 MbxMsgQueueDescArray 
Jan 24 22:38:03 nwfs kernel: osif_big_free 8192 ma_allocBuffer 
Jan 24 22:38:04 nwfs kernel: osif_big_free 65792 SW packet buffers 
Jan 24 22:38:04 nwfs kernel: osif_big_free 8192 <NULL> 
Jan 24 22:38:04 nwfs kernel: SCI Adapter 0 : Adapter terminated 
Jan 24 22:38:04 nwfs kernel: SCI Driver : version Dolphin IRM 1.10.2 ( Beta 1 2000-12-18 ) terminating 
Jan 24 22:38:04 nwfs kernel: Trying to free free IRQ9 


Querying RPM information
------------------------

If you need to query whether the RPM packages are installed on 
your system, type the following command:
  
rpm -q pci-sci-PSB64

This should produce output similar to the following:

#
#
# rpm -q pci-sci-PSB64
pci-sci-PSB64-1.0-1
#
#

Additional useful information concerning the options and versions of 
the RPM tools can be found by visiting the RPM website at www.rpm.org 
or by typing 'info rpm' or 'man rpm' from your system console.  The 
man and info pages for RPM contain detailed information about 
using the RPM tools.

rpm --help | more

will also provide additional information about the RPM query options
that are available.

Rebuilding the RPM from the pci-sci-%{version}.spec file
--------------------------------------------------------

If you want to rebuild your .src.rpm and binary rpm packages from 
the RPM build tree, then you can change directory (cd) into the 
/usr/src/$(distribution)/SPECS directory.  This directory will 
contain the rpm .spec files for any installed .src.rpm packages,
installed either via the rpm -i <rpm name>.src.rpm command or the 
rpm --rebuild <rpm name>.src.rpm command.  An RPM spec file is 
a file containing install scripts and build instructions for 
an RPM package or set of packages, along with versioning and 
dependency information.

To rebuild the PCI-SCI drivers from the SPECS directory, type
the following command:

rpm -ba pci-sci.spec

This command will trigger a complete rebuild of the .src.rpm and 
binary rpms for the PCI-SCI package.


Increasing the Number of SCI Nodes Supported
--------------------------------------------

This version of the PCI-SCI adapter drivers supports up to 256 
clustered SCI nodes.  To increase the number of nodes supported,
you will need to change the following line, found at line 36 of 
the src/IRM/drv/src/id.h file in the pci-sci-1.1.tar.gz tarball 
contained in the source RPM.  This tarball will normally be 
located in the /usr/src/$(distribution)/SOURCES directory.

#define          MAX_NODE_IDS        256

This value can be increased up to 1024 nodes.  Increasing this value 
will also increase the amount of memory needed to set up the SCI 
mbx structures from 130K to 520K.
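The edit can be scripted against the unpacked source; a hypothetical sketch (the function name is mine, and the id.h path is the one given above):

```shell
#!/bin/sh
# Hypothetical sketch: bump MAX_NODE_IDS from 256 to 1024 in the driver
# source.  Pass the path to id.h wherever the source tarball was unpacked.
bump_max_nodes() {
    f="$1"
    sed -e 's/^\(#define[[:space:]]*MAX_NODE_IDS[[:space:]]*\)256/\11024/' \
        "$f" > "$f.new" && mv "$f.new" "$f"
}
# Example: bump_max_nodes src/IRM/drv/src/id.h
```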



* Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
  2001-01-29 21:27 [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released Jeff V. Merkey
@ 2001-01-29 23:49 ` Jeff V. Merkey
  2001-01-30  4:41   ` Todd
  0 siblings, 1 reply; 12+ messages in thread
From: Jeff V. Merkey @ 2001-01-29 23:49 UTC (permalink / raw)
  To: linux-kernel; +Cc: jmerkey


Relative to some performance questions folks have asked, the SCI 
adapters are limited by PCI bus speeds.  If your system supports 
64-bit PCI you get much higher numbers.  If you have a system 
that supports 100+ Megabyte/second PCI throughput, the SCI 
adapters will exploit it.

This test was performed on a 32-bit PCI system with a PCI bus
architecture that's limited to 70 MB/S.

Jeff

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at http://www.tux.org/lkml/


* Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
  2001-01-29 23:49 ` Jeff V. Merkey
@ 2001-01-30  4:41   ` Todd
  2001-01-30 17:19     ` Jeff V. Merkey
  0 siblings, 1 reply; 12+ messages in thread
From: Todd @ 2001-01-30  4:41 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: linux-kernel

folx,

i must be missing something here.  i'm not aware of a PCI bus that only
supports 70 MBps but i am probably ignorant.  this is why i was confused
by jeff's performance numbers.  33MHz 32-bit PCI busses should do around
120MB/s (just do the math 33*32/8 allowing for some overhead of PCI bus
negotiation), much greater than the numbers jeff is reporting.  66 MHz
64bit busses should do on the order of 500MB/s.
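The raw bus math above works out as follows (theoretical burst rates, before any PCI protocol overhead):

```shell
#!/bin/sh
# Theoretical PCI burst bandwidth: clock (MHz) * width (bits) / 8 -> MB/s.
awk 'BEGIN {
    printf "32-bit/33MHz: %d MB/s\n", 33 * 32 / 8;   # before overhead
    printf "64-bit/66MHz: %d MB/s\n", 66 * 64 / 8;   # before overhead
}'
```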

the performance numbers that jeff is reporting are not very impressive
even for the slowest PCI bus.  we're seeing 993 Mbps (124MB/s) using the
alteon acenic gig-e cards on 32-bit cards on a 66MHz bus.  i would expect
to get somewhat slower on a 33MHz bus but not catastrophically so
(certainly nothing as slow as 60MB/s or 480Mb/s).

what am i misunderstanding here?

todd

On Mon, 29 Jan 2001, Jeff V. Merkey wrote:

> Date: Mon, 29 Jan 2001 16:49:53 -0700
> From: Jeff V. Merkey <jmerkey@vger.timpanogas.org>
> To: linux-kernel@vger.kernel.org
> Cc: jmerkey@timpanogas.org
> Subject: Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
>
>
> Relative to some performance questions folks have asked, the SCI
> adapters are limited by PCI bus speeds.  If your system supports
> 64-bit PCI you get much higher numbers.  If you have a system
> that supports 100+ Megabyte/second PCI throughput, the SCI
> adapters will exploit it.
>
> This test was performed in on a 32-bit PCI system with a PCI bus
> architecture that's limited to 70 MB/S.
>
> Jeff
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> Please read the FAQ at http://www.tux.org/lkml/
>

=========================================================
Todd Underwood, todd@unm.edu

criticaltv.com
news, analysis and criticism.  about tv.
and other stuff.

=========================================================



* Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
  2001-01-30 17:19     ` Jeff V. Merkey
@ 2001-01-30 17:07       ` Todd
  2001-01-30 18:07         ` Jeff V. Merkey
  2001-01-30 17:22       ` Pekka Pietikainen
  2001-01-30 17:32       ` Jeff V. Merkey
  2 siblings, 1 reply; 12+ messages in thread
From: Todd @ 2001-01-30 17:07 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: linux-kernel, jmerkey

folx,

On Tue, 30 Jan 2001, Jeff V. Merkey wrote:
> What numbers does G-Enet provide
> doing userspace -> userspace transfers, and at what processor
> overhead?

using stock 2.4 kernel and alteon acenic cards with stock firmware we're
seeing 993 Mbps userspace->userspace (running netperf UDP_STREAM tests,
which run as userspace client and server) with 88% CPU utilization.

Using a modified version of the firmware that we wrote we're getting
993Mbps with 55% CPU utilization.

> I posted the **ACCURATE** numbers from my test, but I did clarify that I
> was using a system with a limp PCI bus.
>
> Jeff

i appreciate that.  i'm just trying to figure out why the numbers are so
low compared to the network speed you mentioned.

todd



* Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
  2001-01-30 17:32       ` Jeff V. Merkey
@ 2001-01-30 17:11         ` Todd
  2001-01-30 17:49         ` Jeff V. Merkey
  1 sibling, 0 replies; 12+ messages in thread
From: Todd @ 2001-01-30 17:11 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: linux-kernel, jmerkey

jeff,

On Tue, 30 Jan 2001, Jeff V. Merkey wrote:

> On 32 bit PCI, the average we are seeing going userpace -> userspace is
> 120-140 MB/S ranges in those systems that have a PCI bus with
> bridge chipsets that can support these data rates.
>
> That's 2 x G-Enet.

good numbers.  not really 2 x gig-e, though, is it.  we're getting 993Mbps
(124MB/s) on alteon acenic gig-e adapters on 32bit/66MHz pci machines.
i'm not saying that the numbers aren't good or that the technology doesn't
sound promising (SCI is definitely cool stuff).  i'm just trying to put
the numbers in perspective and show that the targets should be higher.

i like your argument about cpu utilization. that's really the key.
anything that moves significant traffic and can reduce cpu utilization
will help enormously.  unfortunately, you've not posted any cpu numbers.

this is analogous to transmeta's complaint about battery life and
processor speed:  no one benchmarks both metrics at the same time.  if
they did, intel chips would not look nearly as good as they do.
similarly, most people are still not posting bandwidth numbers along with
cpu utilization numbers.  if they did, lots of fast networks would be less
appealing.

todd



* Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
  2001-01-30  4:41   ` Todd
@ 2001-01-30 17:19     ` Jeff V. Merkey
  2001-01-30 17:07       ` Todd
                         ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Jeff V. Merkey @ 2001-01-30 17:19 UTC (permalink / raw)
  To: Todd; +Cc: linux-kernel, jmerkey

On Mon, Jan 29, 2001 at 09:41:21PM -0700, Todd wrote:

Todd,

I ran the tests on a box that has a limit of 70 MB/S PCI throughput.  
There are GaAs (Gallium Arsenide) implementations of SCI that run 
well into the gigabyte-per-second range, and I've seen some of this 
hardware.  The NUMA-Q chipsets in Sequent's cluster boxes are actually 
SCI.  Sun Microsystems uses SCI as the clustering interconnect for their 
Sparc servers.  The adapters supported by the drivers I posted are a 
BiCMOS implementation of the SCI LC3 chipsets, and even though they are 
BiCMOS, the link speed on the back end is still 500 MB/S --
very respectable.

The PCI-SCI cards these drivers support have been clocked at up 
to 140 MB/S+ in PCI systems that have enough bandwidth to actually 
push this much data.  These boards can pump up to 500 MB/S over the 
SCI fabric; however, current PCI technology doesn't allow you to 
push this much data.  I tested on the D320 chipsets; the newer 
D330 chipsets on the PSB66 cards support the 66 MHz bus and have been 
measured up to the maximum PCI speeds.

The PCI-SCI adapters run circles around G-Enet on systems that can 
really pump this much data through the PCI bus.  Also, the numbers I 
posted are doing push/pull DMA transfers between user space -> user space 
in another system with **NO COPYING**.  Ethernet and LAN networking always 
copies data into userspace -- SCI has the ability to dump it directly 
into user space pages without copying.  That's what is cool about SCI: 
you can pump this data around with almost no processor utilization -- 
important on a cluster if you are doing computational stuff -- you need 
every cycle you can squeeze, and don't want to waste them copying 
data all over the place.  Sure, G-Enet can pump 124 MB/S, but the 
processor utilization will be high, and there will be lots of 
copying going on in the system.   What numbers does G-Enet provide 
doing userspace -> userspace transfers, and at what processor 
overhead?  These are the types of things that are the metrics for 
a good comparison.  Also, G-Enet has bandwidth limitations; the 
SCI standard does not -- it's only limited by the laws of physics
(which are being reached in the Dolphin labs in Norway).

The GaAs SCI technology I have seen has hop latencies in the SCI 
switches of 16 nanoseconds to route a packet, with transfer rates into
the gigabytes per second -- very fast and low latency.

These cards will use whatever PCI bandwidth is present in the host 
system, up to 500 MB/S.  As the PCI bus gets better, it's nice to know 
SCI is something that will keep its value, since 500 MB/S gives us 
a lot of room to grow into.

I could ask Dolphin for a GaAs version of the LC3 card (one board would
cost the equivalent of the income of a small third-world nation), and 
rerun the tests on a Sparc system or Sequent system, and watch the 
G-Enet system suck wind in comparison.

:-)

I posted the **ACCURATE** numbers from my test, but I did clarify that I 
was using a system with a limp PCI bus.

Jeff


> folx,
> 
> i must be missing something here.  i'm not aware of a PCI bus that only
> supports 70 MBps but i am probably ignorant.  this is why i was confused
> by jeff's performance numbers.  33MHz 32-bit PCI busses should do around
> 120MB/s (just do the math 33*32/8 allowing for some overhead of PCI bus
> negotiation), much greater than the numbers jeff is reporting.  66 MHz
> 64bit busses should do on the order of 500MB/s.
> 
> the performance numbers that jeff is reporting are not very impressive
> even for the slowest PCI bus.  we're seeing 993 Mbps (124MB/s) using the
> alteon acenic gig-e cards on 32-bit cards on a 66MHz bus.  i would expect
> to get somewhat slower on a 33MHz bus but not catastrophically so
> (certainly nothing as slow as 60MB/s or 480Mb/s).
> 
> what am i misunderstanding here?
> 
> todd
> 
> On Mon, 29 Jan 2001, Jeff V. Merkey wrote:
> 
> > Date: Mon, 29 Jan 2001 16:49:53 -0700
> > From: Jeff V. Merkey <jmerkey@vger.timpanogas.org>
> > To: linux-kernel@vger.kernel.org
> > Cc: jmerkey@timpanogas.org
> > Subject: Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
> >
> >
> > Relative to some performance questions folks have asked, the SCI
> > adapters are limited by PCI bus speeds.  If your system supports
> > 64-bit PCI you get much higher numbers.  If you have a system
> > that supports 100+ Megabyte/second PCI throughput, the SCI
> > adapters will exploit it.
> >
> > This test was performed in on a 32-bit PCI system with a PCI bus
> > architecture that's limited to 70 MB/S.
> >
> > Jeff
> >
> > -
> > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > the body of a message to majordomo@vger.kernel.org
> > Please read the FAQ at http://www.tux.org/lkml/
> >
> 
> =========================================================
> Todd Underwood, todd@unm.edu
> 
> criticaltv.com
> news, analysis and criticism.  about tv.
> and other stuff.
> 
> =========================================================


* Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
  2001-01-30 17:19     ` Jeff V. Merkey
  2001-01-30 17:07       ` Todd
@ 2001-01-30 17:22       ` Pekka Pietikainen
  2001-01-30 18:18         ` Matti Aarnio
  2001-01-30 19:01         ` Jeff V. Merkey
  2001-01-30 17:32       ` Jeff V. Merkey
  2 siblings, 2 replies; 12+ messages in thread
From: Pekka Pietikainen @ 2001-01-30 17:22 UTC (permalink / raw)
  To: Jeff V. Merkey; +Cc: Todd, linux-kernel, jmerkey

On Tue, Jan 30, 2001 at 10:19:58AM -0700, Jeff V. Merkey wrote:
> On Mon, Jan 29, 2001 at 09:41:21PM -0700, Todd wrote:
> 
> Sparc servers.  The adapters these drivers I posted support are a bi-CMOS 
> implementation of the SCI LC3 chipsets, and even though they are 
> bi-CMOS, the Link speed on the back end is still 500 MB/S --
> very respectable.
Sounds impressive (and expensive)
> 
> in another system with **NO COPYING**.  Ethernet and LAN networking always 
> copies data into userspace -- SCI has the ability to dump it directly 
> into user space pages without copying.  That's what is cool about SCI, 
Well, my GigE card does that too. Not with TCP, though :)
(see http://oss.sgi.com/projects/stp)
> processor utilitzation will be high, and there will be lots of 
> copying going on in the system.   What numbers does G-Enet provide 
> doing userspace -> userspace transfers, and at what processor 
> overhead?  These are the types of things that are the metrics for 
What I get is 102MB/s with 4% CPU use on a dual pIII/500 32/66 box sending to
> a dual pII/450 32/33 box (about 10 MB/s less the other way around, so 
I'm assuming I'd get somewhat more with real 64/66 PCI buses on both 
machines) 

> I could ask Dolphin for a GaAs version of the LC3 card (one board would
> cost the equivalent of the income of a small third world nation), and 
> rerun the tests on a Sparc or Sequent system, and watch the G-Enet
> system suck wind in comparison.  
Or you can buy an Alteon-based Netgear 620 for under $300. It all 
depends on your budget and needs :)

-- 
Pekka Pietikainen



-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
  2001-01-30 17:19     ` Jeff V. Merkey
  2001-01-30 17:07       ` Todd
  2001-01-30 17:22       ` Pekka Pietikainen
@ 2001-01-30 17:32       ` Jeff V. Merkey
  2001-01-30 17:11         ` Todd
  2001-01-30 17:49         ` Jeff V. Merkey
  2 siblings, 2 replies; 12+ messages in thread
From: Jeff V. Merkey @ 2001-01-30 17:32 UTC (permalink / raw)
  To: Todd; +Cc: linux-kernel, jmerkey

On Tue, Jan 30, 2001 at 10:19:58AM -0700, Jeff V. Merkey wrote:
> On Mon, Jan 29, 2001 at 09:41:21PM -0700, Todd wrote:

Todd,

I just got back some more numbers from Dolphin.  The newer D330 
LC3 chipsets are running at 667 MB/S Link speed, and on a 
Serverworks HE system, we are seeing 240 MB/S throughput via 
the newer PCI-SCI adapters.   I have some D330 adapters on 
the way here, and will post newer numbers after the Alpha
changes are rolled in and I repost the drivers sometime 
next week. 

On 32-bit PCI, the average we are seeing going userspace -> userspace is 
in the 120-140 MB/S range on systems that have a PCI bus with 
bridge chipsets that can support these data rates.   

That's 2 x G-Enet.  

:-)

Jeff

> 
> Todd,
> 
> I ran the tests on a box that has a limit of 70MB/S PCI throughput.  
> There are GaAs (Gallium Arsenide) implementations of SCI that run 
> well into the gigabyte-per-second range, and I've seen some of this hardware.  
> The NUMA-Q chipsets in Sequent's cluster boxes are actually SCI.  
> Sun Microsystems uses SCI as the clustering interconnect for their 
> Sparc servers.  The adapters supported by the drivers I posted are a bi-CMOS 
> implementation of the SCI LC3 chipsets, and even though they are 
> bi-CMOS, the link speed on the back end is still 500 MB/S --
> very respectable.
> 
> The PCI-SCI cards these drivers support have been clocked up 
> to 140 MB/S+ on those systems that have enough bandwidth to actually 
> push this much data.  These boards can pump up to 500 MB/S over the 
> SCI fabric; however, current PCI technology doesn't allow you to 
> push this much data.  I also tested on the D320 chipsets; the newer 
> D330 chipsets on the PSB66 cards support the 66 MHz bus and have been 
> measured up to the max PCI speeds.
> 
> The PCI-SCI adapters run circles around G-Enet on systems that can 
> really pump this much data through the PCI bus.  Also, the numbers I 
> posted are doing push/pull DMA transfers between user_space -> user_space 
> in another system with **NO COPYING**.  Ethernet and LAN networking always 
> copies data into userspace -- SCI has the ability to dump it directly 
> into user space pages without copying.  That's what is cool about SCI: 
> you can pump this data around with almost no processor utilization -- 
> important on a cluster if you are doing computational stuff -- you need 
> every cycle you can squeeze, and don't want to waste them copying 
> data all over the place.  Sure, G-Enet can pump 124 MB/S, but the 
> processor utilization will be high, and there will be lots of 
> copying going on in the system.   What numbers does G-Enet provide 
> doing userspace -> userspace transfers, and at what processor 
> overhead?  These are the types of things that are the metrics for 
> a good comparison.  Also, G-Enet has bandwidth limitations; the 
> SCI standard does not -- it's only limited by the laws of physics
> (which are being reached in the Dolphin Labs in Norway).
> 
> The GaAs SCI technology I have seen has hop latencies in the SCI 
> switches @ 16 nanoseconds to route a packet, with transfer rates into
> the Gigabytes per second -- very fast and low latency.   
> 
> These cards will use whatever PCI bandwidth is present in the host 
> system, up to 500 MB/S.  As the PCI bus gets better, it's nice to know 
> SCI will keep its value, since 500 MB/S gives us 
> a lot of room to grow into.    
> 
> I could ask Dolphin for a GaAs version of the LC3 card (one board would
> cost the equivalent of the income of a small third world nation), and 
> rerun the tests on a Sparc or Sequent system, and watch the G-Enet
> system suck wind in comparison.  
> 
> :-)
> 
> I posted the **ACCURATE** numbers from my test, but I did clarify that I 
> was using a system with a limp PCI bus.
> 
> Jeff
> 
> 
> > folx,
> > 
> > i must be missing something here.  i'm not aware of a PCI bus that only
> > supports 70 MBps but i am probably ignorant.  this is why i was confused
> > by jeff's performance numbers.  33MHz 32-bit PCI busses should do around
> > 120MB/s (just do the math 33*32/8 allowing for some overhead of PCI bus
> > negotiation), much greater than the numbers jeff is reporting.  66 MHz
> > 64bit busses should do on the order of 500MB/s.
> > 
> > the performance numbers that jeff is reporting are not very impressive
> > even for the slowest PCI bus.  we're seeing 993 Mbps (124MB/s) using the
> > alteon acenic gig-e cards on 32-bit cards on a 66MHz bus.  i would expect
> > to get somewhat slower on a 33MHz bus but not catastrophically so
> > (certainly nothing as slow as 60MB/s or 480Mb/s).
> > 
> > what am i misunderstanding here?
> > 
> > todd
> > 
> > On Mon, 29 Jan 2001, Jeff V. Merkey wrote:
> > 
> > > Date: Mon, 29 Jan 2001 16:49:53 -0700
> > > From: Jeff V. Merkey <jmerkey@vger.timpanogas.org>
> > > To: linux-kernel@vger.kernel.org
> > > Cc: jmerkey@timpanogas.org
> > > Subject: Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
> > >
> > >
> > > Relative to some performance questions folks have asked, the SCI
> > > adapters are limited by PCI bus speeds.  If your system supports
> > > 64-bit PCI you get much higher numbers.  If you have a system
> > > that supports 100+ Megabyte/second PCI throughput, the SCI
> > > adapters will exploit it.
> > >
> > > This test was performed on a 32-bit PCI system with a PCI bus
> > > architecture that's limited to 70 MB/S.
> > >
> > > Jeff
> > >
> > > -
> > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > the body of a message to majordomo@vger.kernel.org
> > > Please read the FAQ at http://www.tux.org/lkml/
> > >
> > 
> > =========================================================
> > Todd Underwood, todd@unm.edu
> > 
> > criticaltv.com
> > news, analysis and criticism.  about tv.
> > and other stuff.
> > 
> > =========================================================
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> Please read the FAQ at http://www.tux.org/lkml/
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
  2001-01-30 17:32       ` Jeff V. Merkey
  2001-01-30 17:11         ` Todd
@ 2001-01-30 17:49         ` Jeff V. Merkey
  1 sibling, 0 replies; 12+ messages in thread
From: Jeff V. Merkey @ 2001-01-30 17:49 UTC (permalink / raw)
  To: Todd; +Cc: linux-kernel, jmerkey

On Tue, Jan 30, 2001 at 10:32:08AM -0700, Jeff V. Merkey wrote:
> On Tue, Jan 30, 2001 at 10:19:58AM -0700, Jeff V. Merkey wrote:
> > On Mon, Jan 29, 2001 at 09:41:21PM -0700, Todd wrote:

Also, the numbers were provided more as a comparison between 2.2.X
kernels and 2.4.X kernels with SCI.  You will note from the numbers 
that 2.2.X hits a wall relative to scaling, while 2.4.X Linux just 
kept on scaling.  I could have increased the packet size above 
65536 bytes, and 2.4.X would have kept scaling until it hit the wall
relative to the bandwidth constraints in the box (these el-cheapo
AMD boxes don't have the best PCI implementations out there).  

My testing leaves little doubt that 2.4.X is hot sh_t -- it will scale 
very well with SCI, and is one more reason folks should move to it 
as soon as possible.

:-)

Jeff

> 
> Todd,
> 
> I just got back some more numbers from Dolphin.  The newer D330 
> LC3 chipsets are running at 667 MB/S Link speed, and on a 
> Serverworks HE system, we are seeing 240 MB/S throughput via 
> the newer PCI-SCI adapters.   I have some D330 adapters on 
> the way here, and will post newer numbers after the Alpha
> changes are rolled in and I repost the drivers sometime 
> next week. 
> 
> On 32-bit PCI, the average we are seeing going userspace -> userspace is 
> in the 120-140 MB/S range on systems that have a PCI bus with 
> bridge chipsets that can support these data rates.   
> 
> That's 2 x G-Enet.  
> 
> :-)
> 
> Jeff
> 
> > 
> > Todd,
> > 
> > I ran the tests on a box that has a limit of 70MB/S PCI throughput.  
> > There are GaAs (Gallium Arsenide) implementations of SCI that run 
> > well into the gigabyte-per-second range, and I've seen some of this hardware.  
> > The NUMA-Q chipsets in Sequent's cluster boxes are actually SCI.  
> > Sun Microsystems uses SCI as the clustering interconnect for their 
> > Sparc servers.  The adapters supported by the drivers I posted are a bi-CMOS 
> > implementation of the SCI LC3 chipsets, and even though they are 
> > bi-CMOS, the link speed on the back end is still 500 MB/S --
> > very respectable.
> > 
> > The PCI-SCI cards these drivers support have been clocked up 
> > to 140 MB/S+ on those systems that have enough bandwidth to actually 
> > push this much data.  These boards can pump up to 500 MB/S over the 
> > SCI fabric; however, current PCI technology doesn't allow you to 
> > push this much data.  I also tested on the D320 chipsets; the newer 
> > D330 chipsets on the PSB66 cards support the 66 MHz bus and have been 
> > measured up to the max PCI speeds.
> > 
> > The PCI-SCI adapters run circles around G-Enet on systems that can 
> > really pump this much data through the PCI bus.  Also, the numbers I 
> > posted are doing push/pull DMA transfers between user_space -> user_space 
> > in another system with **NO COPYING**.  Ethernet and LAN networking always 
> > copies data into userspace -- SCI has the ability to dump it directly 
> > into user space pages without copying.  That's what is cool about SCI: 
> > you can pump this data around with almost no processor utilization -- 
> > important on a cluster if you are doing computational stuff -- you need 
> > every cycle you can squeeze, and don't want to waste them copying 
> > data all over the place.  Sure, G-Enet can pump 124 MB/S, but the 
> > processor utilization will be high, and there will be lots of 
> > copying going on in the system.   What numbers does G-Enet provide 
> > doing userspace -> userspace transfers, and at what processor 
> > overhead?  These are the types of things that are the metrics for 
> > a good comparison.  Also, G-Enet has bandwidth limitations; the 
> > SCI standard does not -- it's only limited by the laws of physics
> > (which are being reached in the Dolphin Labs in Norway).
> > 
> > The GaAs SCI technology I have seen has hop latencies in the SCI 
> > switches @ 16 nanoseconds to route a packet, with transfer rates into
> > the Gigabytes per second -- very fast and low latency.   
> > 
> > These cards will use whatever PCI bandwidth is present in the host 
> > system, up to 500 MB/S.  As the PCI bus gets better, it's nice to know 
> > SCI will keep its value, since 500 MB/S gives us 
> > a lot of room to grow into.    
> > 
> > I could ask Dolphin for a GaAs version of the LC3 card (one board would
> > cost the equivalent of the income of a small third world nation), and 
> > rerun the tests on a Sparc or Sequent system, and watch the G-Enet
> > system suck wind in comparison.  
> > 
> > :-)
> > 
> > I posted the **ACCURATE** numbers from my test, but I did clarify that I 
> > was using a system with a limp PCI bus.
> > 
> > Jeff
> > 
> > 
> > > folx,
> > > 
> > > i must be missing something here.  i'm not aware of a PCI bus that only
> > > supports 70 MBps but i am probably ignorant.  this is why i was confused
> > > by jeff's performance numbers.  33MHz 32-bit PCI busses should do around
> > > 120MB/s (just do the math 33*32/8 allowing for some overhead of PCI bus
> > > negotiation), much greater than the numbers jeff is reporting.  66 MHz
> > > 64bit busses should do on the order of 500MB/s.
> > > 
> > > the performance numbers that jeff is reporting are not very impressive
> > > even for the slowest PCI bus.  we're seeing 993 Mbps (124MB/s) using the
> > > alteon acenic gig-e cards on 32-bit cards on a 66MHz bus.  i would expect
> > > to get somewhat slower on a 33MHz bus but not catastrophically so
> > > (certainly nothing as slow as 60MB/s or 480Mb/s).
> > > 
> > > what am i misunderstanding here?
> > > 
> > > todd
> > > 
> > > On Mon, 29 Jan 2001, Jeff V. Merkey wrote:
> > > 
> > > > Date: Mon, 29 Jan 2001 16:49:53 -0700
> > > > From: Jeff V. Merkey <jmerkey@vger.timpanogas.org>
> > > > To: linux-kernel@vger.kernel.org
> > > > Cc: jmerkey@timpanogas.org
> > > > Subject: Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
> > > >
> > > >
> > > > Relative to some performance questions folks have asked, the SCI
> > > > adapters are limited by PCI bus speeds.  If your system supports
> > > > 64-bit PCI you get much higher numbers.  If you have a system
> > > > that supports 100+ Megabyte/second PCI throughput, the SCI
> > > > adapters will exploit it.
> > > >
> > > > This test was performed on a 32-bit PCI system with a PCI bus
> > > > architecture that's limited to 70 MB/S.
> > > >
> > > > Jeff
> > > >
> > > > -
> > > > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > > > the body of a message to majordomo@vger.kernel.org
> > > > Please read the FAQ at http://www.tux.org/lkml/
> > > >
> > > 
> > > =========================================================
> > > Todd Underwood, todd@unm.edu
> > > 
> > > criticaltv.com
> > > news, analysis and criticism.  about tv.
> > > and other stuff.
> > > 
> > > =========================================================
> > -
> > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > the body of a message to majordomo@vger.kernel.org
> > Please read the FAQ at http://www.tux.org/lkml/
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
  2001-01-30 17:07       ` Todd
@ 2001-01-30 18:07         ` Jeff V. Merkey
  0 siblings, 0 replies; 12+ messages in thread
From: Jeff V. Merkey @ 2001-01-30 18:07 UTC (permalink / raw)
  To: Todd; +Cc: linux-kernel, jmerkey

On Tue, Jan 30, 2001 at 10:07:07AM -0700, Todd wrote:
> folx,
> 
> On Tue, 30 Jan 2001, Jeff V. Merkey wrote:
> > What numbers does G-Enet provide
> > doing userspace -> userspace transfers, and at what processor
> > overhead?
> 
> using stock 2.4 kernel and alteon acenic cards with stock firmware we're
> seeing 993 Mbps userspace->userspace (running netperf UDP_STREAM tests,
> which run as userspace client and server) with 88% CPU utilization.
> 
> Using a modified version of the firmware that we wrote we're getting
> 993 Mbps with 55% CPU utilization.

I was at 2% utilization running the PCI-SCI tests on a D320 PSB64 adapter.
I doubt you would get good numbers on the box I used with G-Enet -- this 
box is limited to 70 MB/S PCI throughput.  

> 
> > I posted the **ACCURATE** numbers from my test, but I did clarify that I
> > was using a system with a limp PCI bus.
> >
> > Jeff
> 
> i appreciate that.  i'm just trying to figure out why the numbers are so
> low compared to the network speed you mentioned.

Because I used an el-cheapo AMD box.  I provided the info more for a 
2.2.X/2.4.X comparison than anything else.  

Jeff

> 
> todd
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
  2001-01-30 17:22       ` Pekka Pietikainen
@ 2001-01-30 18:18         ` Matti Aarnio
  2001-01-30 19:01         ` Jeff V. Merkey
  1 sibling, 0 replies; 12+ messages in thread
From: Matti Aarnio @ 2001-01-30 18:18 UTC (permalink / raw)
  To: Pekka Pietikainen; +Cc: Jeff V. Merkey, linux-kernel

On Tue, Jan 30, 2001 at 07:22:48PM +0200, Pekka Pietikainen wrote:
> On Tue, Jan 30, 2001 at 10:19:58AM -0700, Jeff V. Merkey wrote:
> > On Mon, Jan 29, 2001 at 09:41:21PM -0700, Todd wrote:
> > 
> > Sparc servers.  The adapters supported by the drivers I posted are a bi-CMOS 
> > implementation of the SCI LC3 chipsets, and even though they are 
> > bi-CMOS, the Link speed on the back end is still 500 MB/S --
> > very respectable.
> Sounds impressive (and expensive)

  Impressive yes, expensive ?  Everything is relative.

  Well, you can probably buy a truck-load of cheap 100BaseT cards
  for the price of a Sun UPA-connected SCI interface, but there is
  no point in connecting SCI anywhere but into the system core bus.

  PCI is a "mediocre speed" IO-bus, never forget that.
  (But there is nothing better yet!  66+ MHz 64-bit PCI-X gives some
   hope for faster IO-busses, but is still quite inadequate.)

  People who want to use SCI have serious non-ccNUMA PVM programs
  (Beowulf-like) where interconnect message latency may well be
  the difference between a successful system and failure.
  (Even when optimization is pushed to the extreme and interconnect
  messages are made as small and infrequent as possible for the
  given problem.  Reminds me of solving partial differential equations
  via the FFT method on a parallel system -- interconnect memory access
  speeds ruled the result.  Nothing beats the Cray T3E there yet.)

  Giga-Ethernet has 9.9 Gbit/sec mode, as well as something close
  to 40 Gbit/sec.  Those may yet be superior interconnects - or
  maybe not.  Throwing around Ethernet frames is a non-trivial task
  compared to SCI.   But what may yet emerge are chipsets and system
  busses able to sustain that kind of traffic, and optimize operation
  for SCI.   40GE uses bit-serial optical transmission at 40+ GHz ...
  (E.g. 18/16 encoded codewords, or some such.  FE uses 5/4 encoding,
   if I remember correctly.)

> -- 
> Pekka Pietikainen

/Matti Aarnio
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released
  2001-01-30 17:22       ` Pekka Pietikainen
  2001-01-30 18:18         ` Matti Aarnio
@ 2001-01-30 19:01         ` Jeff V. Merkey
  1 sibling, 0 replies; 12+ messages in thread
From: Jeff V. Merkey @ 2001-01-30 19:01 UTC (permalink / raw)
  To: Pekka Pietikainen; +Cc: Todd, linux-kernel, jmerkey

On Tue, Jan 30, 2001 at 07:22:48PM +0200, Pekka Pietikainen wrote:
> On Tue, Jan 30, 2001 at 10:19:58AM -0700, Jeff V. Merkey wrote:
> > On Mon, Jan 29, 2001 at 09:41:21PM -0700, Todd wrote:
> > 
> > Sparc servers.  The adapters supported by the drivers I posted are a bi-CMOS 
> > implementation of the SCI LC3 chipsets, and even though they are 
> > bi-CMOS, the Link speed on the back end is still 500 MB/S --
> > very respectable.
> Sounds impressive (and expensive)

??? - Cheap PCI boards I think are the idea here.

> > 
> > in another system with **NO COPYING**.  Ethernet and LAN networking always 
> > copies data into userspace -- SCI has the ability to dump it directly 
> > into user space pages without copying.  That's what is cool about SCI, 
> Well, my GigE card does that too. Not with TCP, though :)
> (see http://oss.sgi.com/projects/stp)
> > processor utilization will be high, and there will be lots of 
> > copying going on in the system.   What numbers does G-Enet provide 
> > doing userspace -> userspace transfers, and at what processor 
> > overhead?  These are the types of things that are the metrics for 
> What I get is 102MB/s with 4% CPU use on a dual pIII/500 32/66 box sending to
> a dual pII/450 32/33 box (about 10M/s less the other way around, so 
> I'm assuming I'd get somewhat more with real 64/66 PCI buses on both 
> machines) 
> 
> > I could ask Dolphin for a GaAs version of the LC3 card (one board would
> > cost the equivalent of the income of a small third world nation), and 
> > rerun the tests on a Sparc or Sequent system, and watch the G-Enet
> > system suck wind in comparison.  
> Or you can buy an Alteon-based Netgear 620 for under $300. It all 
> depends on your budget and needs :)

Hummm...  The SCI adapters are very close to the same price.  They don't
need switches and can be cabled via fiber or copper in stack-mount 
boxes containing dozens (or hundreds) of systems, fault tolerantly.  
Not requiring a hub is a plus ..... it could save someone $500K in expenses
for a big cluster.  Switch vendors love G-Enet ... customers have to pay 
more for it since the high-end switches are so costly.

Jeff

> 
> -- 
> Pekka Pietikainen
> 
> 
> 
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> Please read the FAQ at http://www.tux.org/lkml/
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2001-01-30 18:18 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2001-01-29 21:27 [ANNOUNCE] Dolphin PCI-SCI RPM Drivers 1.1-4 released Jeff V. Merkey
2001-01-29 23:49 ` Jeff V. Merkey
2001-01-30  4:41   ` Todd
2001-01-30 17:19     ` Jeff V. Merkey
2001-01-30 17:07       ` Todd
2001-01-30 18:07         ` Jeff V. Merkey
2001-01-30 17:22       ` Pekka Pietikainen
2001-01-30 18:18         ` Matti Aarnio
2001-01-30 19:01         ` Jeff V. Merkey
2001-01-30 17:32       ` Jeff V. Merkey
2001-01-30 17:11         ` Todd
2001-01-30 17:49         ` Jeff V. Merkey

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox