linux-scsi.vger.kernel.org archive mirror
* [IP over FC] qla2xipsrc perf (qla2x00ip driver)
@ 2002-10-16 15:18 Fabien Salvi
From: Fabien Salvi @ 2002-10-16 15:18 UTC (permalink / raw)
  To: Linux SCSI list, Linux Net list

Hello,

Sorry for cross-posting to the two lists, but I think this topic
concerns both...

I've run some tests with IP over FC.

Here is the setup:

- 2 high-performance servers with SCSI RAID disks, 1 GB RAM, dual PIII
processors, and an SMP-enabled Linux 2.4.18 kernel
- 2 QLogic QLA2200F HBAs
- 1 QLogic SANbox2 FC switch
- 2 Intel optical Gigabit Ethernet NICs

The two servers are connected to the FC switch with optical links, and
there is a crossover cable between the two Gigabit NICs.

First, it's really great to use the FC switch for IP data as well; it's
useful when mixing NFS access with data on an external FC RAID
controller (for the moment, shared file systems are not as reliable as
we would like, and GFS is no longer open source).
For example, each server can have some data of its own and some data
(e.g. web data) shared by both, and with a failover script the
surviving server can mount the dead server's partition, so there is no
service outage (see the sketch below).

Compiling and loading the QLogic driver was no problem, and the tests
haven't shown any reliability issues.

The only thing that surprises me is the performance of IP over FC.

* With IP over FC:
I get around 21 MB/s transferring a big file (600 MB) by FTP.

* With IP on Gigabit Ethernet:
I get around 33 MB/s with the same test under similar conditions (same
file, same destination directory).
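
For reference, the throughput was measured with a plain FTP put; the
host name and file name below are illustrative:

    $ ftp server2
    ftp> bin
    ftp> put bigfile
    (the client prints the transfer rate when the put completes)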

I repeated the tests many times to make sure they were not skewed by
disk I/O or swap activity.
There was no other activity on the FC switch.

So, why do I see these performance issues?
Are they inherent to the IP-over-FC architecture, or to the QLogic
driver? Is there any tuning I can do?

Thanks in advance for your help...

-- 
Fabien


* RE: [IP over FC] qla2xipsrc perf (qla2x00ip driver)
@ 2002-10-16 15:37 Dheeraj Pandey
From: Dheeraj Pandey @ 2002-10-16 15:37 UTC (permalink / raw)
  To: 'Fabien Salvi', Linux SCSI list, Linux Net list


...
> * With IP over FC:
> I get around 21 MB/s transferring a big file (600 MB) by FTP.
> 
> * With IP on Gigabit Ethernet:
> I get around 33 MB/s with the same test under similar conditions (same
> file, same destination directory).

Let us identify the bottleneck first. Is it CPU0? Try observing "mpstat
-P ALL" and see which of the CPUs is loaded, if at all.

It's important to identify which resource saturates in the IP-over-FC
case but not in the other.
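
For example (mpstat comes with the sysstat package; run this on the
server while the transfer is in progress):

    # one-second snapshots of per-CPU utilization
    mpstat -P ALL 1

    # per-device interrupt counts, to compare before and after a run
    cat /proc/interrupts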

Dheeraj


* Re: [IP over FC] qla2xipsrc perf (qla2x00ip driver)
@ 2002-10-16 17:20 Fabien Salvi
From: Fabien Salvi @ 2002-10-16 17:20 UTC (permalink / raw)
  To: Dheeraj Pandey; +Cc: Linux SCSI list, Linux Net list

Dheeraj Pandey wrote:
> 
> ...
> > * With IP over FC:
> > I get around 21 MB/s transferring a big file (600 MB) by FTP.
> >
> > * With IP on Gigabit Ethernet:
> > I get around 33 MB/s with the same test under similar conditions
> > (same file, same destination directory).
> 
> Let us identify the bottleneck first. Is it CPU0? Try observing "mpstat
> -P ALL" and see which of the CPUs is loaded, if at all.
> 
> It's important to identify which resource saturates in the IP-over-FC
> case but not in the other.

Yes, you're right.

I should have given more information about CPU utilization...
I don't think there is really a CPU bottleneck.
I didn't use mpstat for that, just top (thanks for the tip; I must
admit I didn't know about mpstat...).

So, here are the results with IP over FC:
19:11:39     CPU   %user   %nice %system   %idle    intr/s
19:11:40     all    0.00    0.00   12.50   87.50   8033.00
19:11:40       0    0.00    0.00    4.00   96.00   8033.00
19:11:40       1    0.00    0.00   21.00   79.00   8033.00


And with Gigabit Ethernet:

19:14:49     CPU   %user   %nice %system   %idle    intr/s
19:14:50     all    0.00    0.00   26.00   74.00  14432.00
19:14:50       0    0.00    0.00   40.00   60.00  14432.00
19:14:50       1    0.00    0.00   12.00   88.00  14432.00

Thanks in advance for your help!

-- 
Fabien SALVI      Centre de Ressources Informatiques
                  Archamps, France -- http://www.cri74.org
                  PingOO GNU/linux distribution : http://www.pingoo.org


* RE: [IP over FC] qla2xipsrc perf (qla2x00ip driver)
@ 2002-10-16 17:48 Dheeraj Pandey
From: Dheeraj Pandey @ 2002-10-16 17:48 UTC (permalink / raw)
  To: 'Fabien Salvi'; +Cc: Linux SCSI list, Linux Net list


 
> > It's important to identify which resource saturates in the
> > IP-over-FC case but not in the other.
> 
> Yes, you're right.
> 
> I should have given more information about CPU utilization...
> I don't think there is really a CPU bottleneck.
> I didn't use mpstat for that, just top (thanks for the tip; I must
> admit I didn't know about mpstat...).
> 
> So, here are the results with IP over FC:
> 19:11:39     CPU   %user   %nice %system   %idle    intr/s
> 19:11:40     all    0.00    0.00   12.50   87.50   8033.00
> 19:11:40       0    0.00    0.00    4.00   96.00   8033.00
> 19:11:40       1    0.00    0.00   21.00   79.00   8033.00
> 
> 
> And with Gigabit Ethernet:
> 
> 19:14:49     CPU   %user   %nice %system   %idle    intr/s
> 19:14:50     all    0.00    0.00   26.00   74.00  14432.00
> 19:14:50       0    0.00    0.00   40.00   60.00  14432.00
> 19:14:50       1    0.00    0.00   12.00   88.00  14432.00

It already looks like the FC case generates roughly 45% fewer
interrupts (8033/s vs. 14432/s). That could be because the FC packets
are larger than the GigE ones. Were you using jumbo frames for GigE?
How do the FC packet sizes compare?
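
(To check or raise the frame size, something along these lines, with an
illustrative interface name; jumbo frames need NIC and driver support,
and with a crossover cable no switch is involved:)

    ifconfig eth1            # shows the current MTU
    ifconfig eth1 mtu 9000   # enable jumbo frames, if supported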

By the way, is the FTP a read or a write? That is, is it TX traffic or
RX traffic on the nodes? You might *not* want to ignore the lower CPU
utilization and fewer interrupts; that observation might hold the key.

Is it also possible to test the network bandwidth independently of any
disk I/O on the servers? That would narrow down the number of variables
in the system (see the sketch below).

Dheeraj


* Re: [IP over FC] qla2xipsrc perf (qla2x00ip driver)
@ 2002-10-17  8:49 Fabien Salvi
From: Fabien Salvi @ 2002-10-17  8:49 UTC (permalink / raw)
  To: Dheeraj Pandey; +Cc: Linux SCSI list, Linux Net list

Dheeraj Pandey wrote:
> 

> > So, here are the results with IP over FC:
> > 19:11:39     CPU   %user   %nice %system   %idle    intr/s
> > 19:11:40     all    0.00    0.00   12.50   87.50   8033.00
> > 19:11:40       0    0.00    0.00    4.00   96.00   8033.00
> > 19:11:40       1    0.00    0.00   21.00   79.00   8033.00
> >
> >
> > And with Gigabit Ethernet:
> >
> > 19:14:49     CPU   %user   %nice %system   %idle    intr/s
> > 19:14:50     all    0.00    0.00   26.00   74.00  14432.00
> > 19:14:50       0    0.00    0.00   40.00   60.00  14432.00
> > 19:14:50       1    0.00    0.00   12.00   88.00  14432.00
> 
> It already looks like the FC case generates roughly 45% fewer
> interrupts (8033/s vs. 14432/s). That could be because the FC packets
> are larger than the GigE ones. Were you using jumbo frames for GigE?
> How do the FC packet sizes compare?

Yes, the MTU for GigE is 1500 and for FC it is 4096.
I tried decreasing the FC MTU to 2000 and increasing it to 8192, but
each time the transfer dropped to half of the maximum (10 MB/s).
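
(For the record, the MTU changes were made along these lines; the FC IP
interface name below is hypothetical, since it depends on how the
qla2x00ip driver registers the interface:)

    ifconfig fc0 mtu 2000    # then re-run the FTP test
    ifconfig fc0 mtu 8192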

> By the way, is the FTP a read or a write? That is, is it TX traffic or
> RX traffic on the nodes? You might *not* want to ignore the lower CPU
> utilization and fewer interrupts; that observation might hold the key.
> 
> Is it also possible to test the network bandwidth independently of any
> disk I/O on the servers? That would narrow down the number of variables
> in the system.

Yes, you're right.
I'm going to run some tests with ramdisks to keep disk I/O out of the
picture (along the lines of the sketch below)...
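
A tmpfs mount gives a RAM-backed directory to serve the test file from;
the size and paths here are illustrative:

    mkdir -p /mnt/ram
    mount -t tmpfs -o size=700m tmpfs /mnt/ram
    cp /data/bigfile /mnt/ram/    # then FTP it from there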

-- 
Fabien SALVI      Centre de Ressources Informatiques
                  Archamps, France -- http://www.cri74.org
                  PingOO GNU/linux distribution : http://www.pingoo.org

