public inbox for linux-kernel@vger.kernel.org
* high latency NFS
@ 2008-07-24 17:11 Michael Shuey
  2008-07-30 19:21 ` J. Bruce Fields
                   ` (2 more replies)
  0 siblings, 3 replies; 22+ messages in thread
From: Michael Shuey @ 2008-07-24 17:11 UTC (permalink / raw)
  To: linux-kernel

I'm currently toying with Linux's NFS, to see just how fast it can go in a 
high-latency environment.  Right now, I'm simulating a 100ms delay between 
client and server with netem (just 100ms on the outbound packets from the 
client, rather than 50ms each way).  Oddly enough, I'm running into 
performance problems. :-)
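For reference, the delay is injected with something like the following (the interface name eth0 is illustrative; substitute the client's actual NIC):

```shell
# Add 100 ms of delay to all outbound packets on the client (requires root).
tc qdisc add dev eth0 root netem delay 100ms

# Inspect, and later remove, the qdisc.
tc qdisc show dev eth0
tc qdisc del dev eth0 root netem
```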

According to iozone, my server can sustain about 90/85 MB/s (reads/writes) 
without any latency added.  After a pile of tweaks, and injecting 100ms of 
netem latency, I'm getting 6/40 MB/s (reads/writes).  I'd really like to 
know why writes are now so much faster than reads, and what sort of things 
might boost the read throughput.  Any suggestions?
The read throughput seems to be proportional to the latency - adding only 
10ms of delay gives 61 MB/s reads, in limited testing (need to look at it 
further).  While that's to be expected, to some extent, I'm hoping there's 
some form of readahead that can help me out here (assume big sequential 
reads).

iozone is reading/writing a file twice the size of memory on the client with 
a 32k block size.  I've tried raising this as high as 16 MB, but I still 
see around 6 MB/sec reads.
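A sketch of the iozone invocation (the file size and mount path here are placeholders; the actual run uses a file twice the size of client memory):

```shell
# -i 0 = sequential write test, -i 1 = sequential read test
# -s = file size (twice client RAM; 4g is a placeholder), -r = record size
iozone -i 0 -i 1 -s 4g -r 32k -f /mnt/nfs/iozone.tmp
```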

I'm using a 2.6.9 derivative (yes, I'm a RHEL4 fan).  Testing with a stock 
2.6, client and server, is the next order of business.

NFS mount is tcp, version 3.  rsize/wsize are 32k.  Both client and server 
have had tcp_rmem, tcp_wmem, wmem_max, rmem_max, wmem_default, and 
rmem_default tuned - tuning values are 12500000 for defaults (and minimum 
window sizes), 25000000 for the maximums.  Inefficient, yes, but I'm not 
concerned with memory efficiency at the moment.
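For completeness, the mount and TCP tuning described above look roughly like this (server name and paths are placeholders; tcp_rmem/tcp_wmem take min/default/max in bytes):

```shell
# NFSv3 over TCP with 32k transfer sizes (hypothetical server and export)
mount -t nfs -o tcp,vers=3,rsize=32768,wsize=32768 server:/export /mnt/nfs

# TCP buffer tuning, applied on both client and server:
# min and default = 12500000 bytes, max = 25000000 bytes
sysctl -w net.ipv4.tcp_rmem="12500000 12500000 25000000"
sysctl -w net.ipv4.tcp_wmem="12500000 12500000 25000000"
sysctl -w net.core.rmem_default=12500000
sysctl -w net.core.wmem_default=12500000
sysctl -w net.core.rmem_max=25000000
sysctl -w net.core.wmem_max=25000000
```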

Both client and server kernels have been modified to provide 
larger-than-normal RPC slot tables.  I allow a max of 1024, but I've found 
that actually enabling more than 490 entries in /proc causes mount to 
complain it can't allocate memory and die.  That was somewhat surprising, 
given I had 122 GB of free memory at the time...
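The tunable in question is the standard sunrpc slot-table entry in /proc (a stock kernel caps this much lower; the 1024 ceiling comes from the kernel modification mentioned above):

```shell
# Raise the RPC slot table; 490 is the largest value that mounted
# successfully here (anything higher made mount fail with an
# out-of-memory complaint).
echo 490 > /proc/sys/sunrpc/tcp_slot_table_entries
```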

I've also applied a couple patches to allow the NFS readahead to be a 
tunable number of RPC slots.  Currently, I set this to 489 on client and 
server (so it's one less than the max number of RPC slots).  Bandwidth 
delay product math says 380ish slots should be enough to keep a gigabit 
line full, so I suspect something else is preventing me from seeing the 
readahead I expect.
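The bandwidth-delay-product math above can be checked with a little shell arithmetic (gigabit link, 100 ms of delay, one 32 KiB read per RPC slot):

```shell
# Bytes in flight needed to fill a 1 Gbit/s link at 100 ms delay
bdp_bytes=$(( 1000000000 / 8 * 100 / 1000 ))   # 12.5 MB
# Each RPC slot carries one 32 KiB read, so slots needed to cover the BDP:
slots=$(( bdp_bytes / 32768 ))
echo "BDP: $bdp_bytes bytes, slots: $slots"
```

which lands right around the 380 figure quoted above.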

FYI, client and server are connected via gigabit ethernet.  There are a couple 
of routers in the way, but they talk at 10gigE and can route at wire speed.  
Traffic is IPv4, path MTU size is 9000 bytes.

Is there anything I'm missing?

-- 
Mike Shuey
Purdue University/ITaP


end of thread, other threads:[~2008-08-05 10:58 UTC | newest]

Thread overview: 22+ messages
2008-07-24 17:11 high latency NFS Michael Shuey
2008-07-30 19:21 ` J. Bruce Fields
2008-07-30 21:40   ` Shehjar Tikoo
2008-07-31  2:35     ` Michael Shuey
2008-07-31  3:15       ` J. Bruce Fields
2008-07-31  7:03         ` Neil Brown
2008-08-01  7:23           ` Dave Chinner
2008-08-01 19:15             ` J. Bruce Fields
2008-08-04  0:32               ` Dave Chinner
2008-08-04  1:11                 ` J. Bruce Fields
2008-08-04  2:14                   ` Dave Chinner
2008-08-04  9:18                   ` Bernd Schubert
2008-08-04  9:25                     ` Greg Banks
2008-08-04  1:29                 ` NeilBrown
2008-08-04  6:42                   ` Greg Banks
2008-08-04 19:07                     ` J. Bruce Fields
2008-08-05 10:51                       ` Greg Banks
2008-08-01 19:23             ` J. Bruce Fields
2008-08-04  0:38               ` Dave Chinner
2008-08-04  8:04   ` Greg Banks
2008-07-31  0:07 ` Lee Revell
2008-07-31 18:06 ` Enrico Weigelt
