* Streaming perf problem on 10g
From: Shehjar Tikoo @ 2010-11-03 17:58 UTC
To: linux-nfs

Hi All,

I am running into a performance problem with 2.6.32-23 Ubuntu Lucid on
both client and server.

The disk is an SSD performing at 1.4-1.6 Gbps for a dd of a 6 GB file
in 64 KB blocks. The network is performing fine, with many Gbps of
iperf throughput. Yet the dd write performance over the NFS mount point
ranges from 96-105 Mbps for a 6 GB file in 64 KB blocks.

I've tried changing tcp_slot_table_entries and the wsize, but there is
negligible gain from either.

Does it sound like a client-side inefficiency?

Thanks
-Shehjar
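For reference, the two knobs mentioned above are usually adjusted as in
the sketch below on a client of this vintage. The values are
illustrative, not recommendations, and the export path and mount point
are hypothetical:

    # Raise the RPC slot table; applies to transports created afterwards
    echo 128 > /proc/sys/sunrpc/tcp_slot_table_entries

    # Request a 1 MB read/write size explicitly at mount time
    mount -o wsize=1048576,rsize=1048576 server:/export /tmp/testmount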
* Re: Streaming perf problem on 10g
From: Joe Landman @ 2010-11-03 18:33 UTC
To: Shehjar Tikoo; +Cc: linux-nfs

On 11/03/2010 01:58 PM, Shehjar Tikoo wrote:
> Hi All,
>
> I am running into a performance problem with 2.6.32-23 Ubuntu Lucid
> on both client and server.
>
> The disk is an SSD performing at 1.4-1.6 Gbps for a dd of a 6 GB file
> in 64 KB blocks.

If the size of this file is comparable to or smaller than the client
or server RAM, this number is meaningless.

> The network is performing fine, with many Gbps of iperf throughput.

GbE gets you 1 Gbps. 10GbE may get you 3-10 Gbps, depending upon many
things. What are your numbers?

> Yet the dd write performance over the NFS mount point ranges from
> 96-105 Mbps for a 6 GB file in 64 KB blocks.

Sounds like you are writing over the gigabit, and not the 10GbE
interface.

> I've tried changing tcp_slot_table_entries and the wsize, but there
> is negligible gain from either.
>
> Does it sound like a client-side inefficiency?

Nope.

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@scalableinformatics.com
web  : http://scalableinformatics.com
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
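One way to test Joe's hypothesis that the writes are leaving over the
gigabit link rather than the 10GbE one is to check which local
interface routes to the NFS server, then measure raw TCP throughput on
that exact path. The server address below is a placeholder:

    # Which interface does traffic to the NFS server actually use?
    ip route get 192.168.10.5

    # Raw TCP throughput on the same path; run 'iperf -s' on the server first
    iperf -c 192.168.10.5 -t 30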
* Re: Streaming perf problem on 10g
From: fibreraid @ 2010-11-03 18:47 UTC
To: Joe Landman; +Cc: Shehjar Tikoo, linux-nfs

Hi Shehjar,

Can you provide the exact dd command you are running, both locally and
for the NFS mount?

-Tommy

On Wed, Nov 3, 2010 at 11:33 AM, Joe Landman <joe.landman@gmail.com> wrote:
> On 11/03/2010 01:58 PM, Shehjar Tikoo wrote:
>> Hi All,
>>
>> I am running into a performance problem with 2.6.32-23 Ubuntu Lucid
>> on both client and server.
>>
>> The disk is an SSD performing at 1.4-1.6 Gbps for a dd of a 6 GB file
>> in 64 KB blocks.
>
> If the size of this file is comparable to or smaller than the client
> or server RAM, this number is meaningless.
>
>> The network is performing fine, with many Gbps of iperf throughput.
>
> GbE gets you 1 Gbps. 10GbE may get you 3-10 Gbps, depending upon many
> things. What are your numbers?
>
>> Yet the dd write performance over the NFS mount point ranges from
>> 96-105 Mbps for a 6 GB file in 64 KB blocks.
>
> Sounds like you are writing over the gigabit, and not the 10GbE
> interface.
>
>> I've tried changing tcp_slot_table_entries and the wsize, but there
>> is negligible gain from either.
>>
>> Does it sound like a client-side inefficiency?
>
> Nope.
>
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics Inc.
> email: landman@scalableinformatics.com
> web  : http://scalableinformatics.com
> phone: +1 734 786 8423 x121
> fax  : +1 866 888 3112
> cell : +1 734 612 4615
* Re: Streaming perf problem on 10g
From: Shehjar Tikoo @ 2010-11-04 8:20 UTC
To: fibreraid@gmail.com; +Cc: Joe Landman, linux-nfs

fibreraid@gmail.com wrote:
> Hi Shehjar,
>
> Can you provide the exact dd command you are running, both locally
> and for the NFS mount?

On the SSD:

# dd if=/dev/zero of=bigfile4 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 0.690624 s, 1.5 GB/s
# dd if=/dev/zero of=bigfile4 bs=1M count=1000 oflag=direct
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 1.72764 s, 607 MB/s

The SSD file system is ext4, mounted as
(rw,noatime,nodiratime,data=writeback).

Here is another strangeness: using oflag=direct gives better
performance.

On the NFS mount:

# dd if=/dev/zero of=/tmp/testmount/bigfile3 bs=1M count=1000 oflag=direct
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.7063 s, 283 MB/s
# rm /tmp/testmount/bigfile3
# dd if=/dev/zero of=/tmp/testmount/bigfile3 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 9.66876 s, 108 MB/s

The kernel on both server and client is 2.6.32-23, so I think this
regression might be in play:

http://thread.gmane.org/gmane.comp.file-systems.ext4/20360

Thanks
-Shehjar

> -Tommy
>
> On Wed, Nov 3, 2010 at 11:33 AM, Joe Landman <joe.landman@gmail.com>
> wrote:
>> On 11/03/2010 01:58 PM, Shehjar Tikoo wrote:
>>> Hi All,
>>>
>>> I am running into a performance problem with 2.6.32-23 Ubuntu Lucid
>>> on both client and server.
>>>
>>> The disk is an SSD performing at 1.4-1.6 Gbps for a dd of a 6 GB
>>> file in 64 KB blocks.
>>
>> If the size of this file is comparable to or smaller than the client
>> or server RAM, this number is meaningless.
>>
>>> The network is performing fine, with many Gbps of iperf throughput.
>>
>> GbE gets you 1 Gbps. 10GbE may get you 3-10 Gbps, depending upon
>> many things. What are your numbers?
>>
>>> Yet the dd write performance over the NFS mount point ranges from
>>> 96-105 Mbps for a 6 GB file in 64 KB blocks.
>>
>> Sounds like you are writing over the gigabit, and not the 10GbE
>> interface.
>>
>>> I've tried changing tcp_slot_table_entries and the wsize, but there
>>> is negligible gain from either.
>>>
>>> Does it sound like a client-side inefficiency?
>>
>> Nope.
>>
>> --
>> Joseph Landman, Ph.D
>> Founder and CEO
>> Scalable Informatics Inc.
>> email: landman@scalableinformatics.com
>> web  : http://scalableinformatics.com
>> phone: +1 734 786 8423 x121
>> fax  : +1 866 888 3112
>> cell : +1 734 612 4615
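As Joe pointed out, a 1 GB buffered write fits easily in RAM, so the
1.5 GB/s local figure largely measures the page cache. A sketch of a
cache-resistant variant of the same test, assuming GNU dd (the size is
illustrative and should exceed RAM):

    # conv=fdatasync flushes to stable storage before dd reports its rate
    dd if=/dev/zero of=bigfile4 bs=1M count=8192 conv=fdatasync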
* Re: Streaming perf problem on 10g
From: fibreraid @ 2010-11-05 11:43 UTC
To: Shehjar Tikoo; +Cc: Joe Landman, linux-nfs

Hi Shehjar,

Have you tested with another file system besides ext4, like XFS or
ReiserFS?

How many SSDs are in the configuration? What is the storage controller
(SAS, SATA, PCIe direct-connect)? 1.5 GB/s is a lot of speed; that
looks like at least 8 SSDs, but please confirm.

Also, you are not copying enough data in this test. How much DRAM is in
the server with the SSD? I would run dd with an I/O amount at least
double or triple the amount of memory in the system; 1 GB is not
enough. A sizing sketch follows the quoted thread below.

-Tommy

On Thu, Nov 4, 2010 at 1:20 AM, Shehjar Tikoo <shehjart@gluster.com> wrote:
> fibreraid@gmail.com wrote:
>> Hi Shehjar,
>>
>> Can you provide the exact dd command you are running, both locally
>> and for the NFS mount?
>
> On the SSD:
>
> # dd if=/dev/zero of=bigfile4 bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 0.690624 s, 1.5 GB/s
> # dd if=/dev/zero of=bigfile4 bs=1M count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 1.72764 s, 607 MB/s
>
> The SSD file system is ext4, mounted as
> (rw,noatime,nodiratime,data=writeback).
>
> Here is another strangeness: using oflag=direct gives better
> performance.
>
> On the NFS mount:
>
> # dd if=/dev/zero of=/tmp/testmount/bigfile3 bs=1M count=1000 oflag=direct
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 3.7063 s, 283 MB/s
> # rm /tmp/testmount/bigfile3
> # dd if=/dev/zero of=/tmp/testmount/bigfile3 bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 9.66876 s, 108 MB/s
>
> The kernel on both server and client is 2.6.32-23, so I think this
> regression might be in play:
>
> http://thread.gmane.org/gmane.comp.file-systems.ext4/20360
>
> Thanks
> -Shehjar
>
>> -Tommy
>>
>> On Wed, Nov 3, 2010 at 11:33 AM, Joe Landman <joe.landman@gmail.com>
>> wrote:
>>> On 11/03/2010 01:58 PM, Shehjar Tikoo wrote:
>>>> Hi All,
>>>>
>>>> I am running into a performance problem with 2.6.32-23 Ubuntu
>>>> Lucid on both client and server.
>>>>
>>>> The disk is an SSD performing at 1.4-1.6 Gbps for a dd of a 6 GB
>>>> file in 64 KB blocks.
>>>
>>> If the size of this file is comparable to or smaller than the
>>> client or server RAM, this number is meaningless.
>>>
>>>> The network is performing fine, with many Gbps of iperf
>>>> throughput.
>>>
>>> GbE gets you 1 Gbps. 10GbE may get you 3-10 Gbps, depending upon
>>> many things. What are your numbers?
>>>
>>>> Yet the dd write performance over the NFS mount point ranges from
>>>> 96-105 Mbps for a 6 GB file in 64 KB blocks.
>>>
>>> Sounds like you are writing over the gigabit, and not the 10GbE
>>> interface.
>>>
>>>> I've tried changing tcp_slot_table_entries and the wsize, but
>>>> there is negligible gain from either.
>>>>
>>>> Does it sound like a client-side inefficiency?
>>>
>>> Nope.
>>>
>>> --
>>> Joseph Landman, Ph.D
>>> Founder and CEO
>>> Scalable Informatics Inc.
>>> email: landman@scalableinformatics.com
>>> web  : http://scalableinformatics.com
>>> phone: +1 734 786 8423 x121
>>> fax  : +1 866 888 3112
>>> cell : +1 734 612 4615
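A minimal sketch of the sizing Tommy describes, writing roughly three
times physical RAM so the page cache cannot absorb the test. It assumes
GNU dd on a Linux client; the target path is hypothetical:

    # Derive a ~3x-RAM test size from /proc/meminfo (MemTotal is in kB)
    ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
    count=$(( ram_kb * 3 / 1024 ))   # number of 1 MiB blocks
    # conv=fdatasync flushes before dd reports its rate
    dd if=/dev/zero of=/mnt/ssd/bigfile bs=1M count=$count conv=fdatasync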
End of thread, other threads: [~2010-11-05 11:43 UTC | newest]

Thread overview (5 messages):
2010-11-03 17:58 Streaming perf problem on 10g  Shehjar Tikoo
2010-11-03 18:33 ` Joe Landman
2010-11-03 18:47   ` fibreraid
2010-11-04  8:20     ` Shehjar Tikoo
2010-11-05 11:43       ` fibreraid