public inbox for linux-rdma@vger.kernel.org
* nfs-rdma performance
@ 2014-06-12 19:54 Mark Lehrer
       [not found] ` <CADvaNzXHOafuzav9RwB2SHXma7EUCxMQX6sNEL6r5-a_e37pLQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 5+ messages in thread
From: Mark Lehrer @ 2014-06-12 19:54 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Awesome work on nfs-rdma in the later kernels!  I had been having
panic problems for a while and now things appear to be quite reliable.

Now that things are more reliable, I would like to help work on speed
issues.  On this same hardware with SMB Direct and the standard
storage review 8k 70/30 test, I get combined read & write performance
of around 2.5GB/sec.  With nfs-rdma it is pushing about 850MB/sec.
This is simply an unacceptable difference.
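The "8k 70/30" mix mentioned above can be approximated with an fio job along these lines; the queue depth, job count, and target path are illustrative assumptions, not taken from the thread:

```shell
# Approximate the 8k, 70% read / 30% write mixed workload over an
# NFS mount. All parameter values here are assumptions; tune to match
# the actual StorageReview job definition.
fio --name=8k-70-30 --rw=randrw --rwmixread=70 --bs=8k \
    --ioengine=libaio --direct=1 --iodepth=16 --numjobs=8 \
    --time_based --runtime=60 --size=1G \
    --directory=/mnt/nfs-rdma
```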

I'm using the standard settings -- connected mode, 65520 byte MTU,
nfs-server-side "async", lots of nfsd's, and nfsver=3 with large
buffers.  Does anyone have any tuning suggestions and/or places to
start looking for bottlenecks?
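Those settings roughly correspond to a client mount and server export like the following sketch; hostnames, paths, and the nfsd thread count are placeholders, not taken from the thread:

```shell
# Client side: NFSv3 over RDMA with large transfer sizes.
# Port 20049 is the IANA-assigned NFS/RDMA port.
mount -t nfs -o rdma,port=20049,vers=3,rsize=1048576,wsize=1048576 \
    server:/export /mnt/nfs-rdma

# Server side (/etc/exports), with the "async" option described above:
#   /export  *(rw,async,no_root_squash)

# Run more nfsd threads:
echo 64 > /proc/fs/nfsd/threads
```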

Thanks,
Mark
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: nfs-rdma performance
       [not found] ` <CADvaNzXHOafuzav9RwB2SHXma7EUCxMQX6sNEL6r5-a_e37pLQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-06-12 20:00   ` Steve Wise
       [not found]     ` <539A06C3.1030909-7bPotxP6k4+P2YhJcF5u+vpXobYPEAuW@public.gmane.org>
  2014-06-12 20:14   ` Wendy Cheng
  1 sibling, 1 reply; 5+ messages in thread
From: Steve Wise @ 2014-06-12 20:00 UTC (permalink / raw)
  To: Mark Lehrer, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 6/12/2014 2:54 PM, Mark Lehrer wrote:
> Awesome work on nfs-rdma in the later kernels!  I had been having
> panic problems for a while and now things appear to be quite reliable.
>
> Now that things are more reliable, I would like to help work on speed
> issues.  On this same hardware with SMB Direct and the standard
> storage review 8k 70/30 test, I get combined read & write performance
> of around 2.5GB/sec.  With nfs-rdma it is pushing about 850MB/sec.
> This is simply an unacceptable difference.
>
> I'm using the standard settings -- connected mode, 65520 byte MTU,
> nfs-server-side "async", lots of nfsd's, and nfsver=3 with large
> buffers.  Does anyone have any tuning suggestions and/or places to
> start looking for bottlenecks?

What RDMA device?

Steve.


* Re: nfs-rdma performance
       [not found] ` <CADvaNzXHOafuzav9RwB2SHXma7EUCxMQX6sNEL6r5-a_e37pLQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2014-06-12 20:00   ` Steve Wise
@ 2014-06-12 20:14   ` Wendy Cheng
  1 sibling, 0 replies; 5+ messages in thread
From: Wendy Cheng @ 2014-06-12 20:14 UTC (permalink / raw)
  To: Mark Lehrer; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org

On Thu, Jun 12, 2014 at 12:54 PM, Mark Lehrer <lehrer-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
>
> Awesome work on nfs-rdma in the later kernels!  I had been having
> panic problems for a while and now things appear to be quite reliable.
>
> Now that things are more reliable, I would like to help work on speed
> issues.  On this same hardware with SMB Direct and the standard
> storage review 8k 70/30 test, I get combined read & write performance
> of around 2.5GB/sec.  With nfs-rdma it is pushing about 850MB/sec.
> This is simply an unacceptable difference.
>
> I'm using the standard settings -- connected mode, 65520 byte MTU,
> nfs-server-side "async", lots of nfsd's, and nfsver=3 with large
> buffers.  Does anyone have any tuning suggestions and/or places to
> start looking for bottlenecks?
>

There is a tunable called "xprt_rdma_slot_table_entries". Increasing
that seemed to help a lot for me last year. Be aware that this tunable
is enclosed inside "#ifdef RPC_DEBUG", so you might need to tweak the
source and rebuild the kmod.
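On kernels built with RPC_DEBUG, the tunable Wendy mentions is exposed as a sysctl; the path and default value below are what kernels of that era used, so verify against your own build:

```shell
# Inspect and raise the RPC/RDMA slot table (the default was 32).
# Only present when the kernel was built with RPC_DEBUG.
sysctl sunrpc.rdma_slot_table_entries
sysctl -w sunrpc.rdma_slot_table_entries=128
# The new value applies to mounts established after the change.
```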


-- Wendy


* Re: nfs-rdma performance
       [not found]     ` <539A06C3.1030909-7bPotxP6k4+P2YhJcF5u+vpXobYPEAuW@public.gmane.org>
@ 2014-06-12 23:06       ` Mark Lehrer
       [not found]         ` <CADvaNzUbqwvXFiDPNr4f-DC8s7kf7t=N0UXkwUXrKDPn6WgMAQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 5+ messages in thread
From: Mark Lehrer @ 2014-06-12 23:06 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA

I am using ConnectX-3 HCAs and Dell R720 servers.

On Thu, Jun 12, 2014 at 2:00 PM, Steve Wise <swise-7bPotxP6k4+P2YhJcF5u+vpXobYPEAuW@public.gmane.org> wrote:
> On 6/12/2014 2:54 PM, Mark Lehrer wrote:
>>
>> Awesome work on nfs-rdma in the later kernels!  I had been having
>> panic problems for a while and now things appear to be quite reliable.
>>
>> Now that things are more reliable, I would like to help work on speed
>> issues.  On this same hardware with SMB Direct and the standard
>> storage review 8k 70/30 test, I get combined read & write performance
>> of around 2.5GB/sec.  With nfs-rdma it is pushing about 850MB/sec.
>> This is simply an unacceptable difference.
>>
>> I'm using the standard settings -- connected mode, 65520 byte MTU,
>> nfs-server-side "async", lots of nfsd's, and nfsver=3 with large
>> buffers.  Does anyone have any tuning suggestions and/or places to
>> start looking for bottlenecks?
>
>
> What RDMA device?
>
> Steve.


* Re: nfs-rdma performance
       [not found]         ` <CADvaNzUbqwvXFiDPNr4f-DC8s7kf7t=N0UXkwUXrKDPn6WgMAQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-06-13  7:25           ` Shirley Ma
  0 siblings, 0 replies; 5+ messages in thread
From: Shirley Ma @ 2014-06-13  7:25 UTC (permalink / raw)
  To: Mark Lehrer, linux-rdma-u79uwXL29TY76Z2rM5mHXA


On 06/12/2014 04:06 PM, Mark Lehrer wrote:
> I am using ConnectX-3 HCAs and Dell R720 servers.
> 
> On Thu, Jun 12, 2014 at 2:00 PM, Steve Wise <swise-7bPotxP6k4+P2YhJcF5u+vpXobYPEAuW@public.gmane.org> wrote:
>> On 6/12/2014 2:54 PM, Mark Lehrer wrote:
>>>
>>> Awesome work on nfs-rdma in the later kernels!  I had been having
>>> panic problems for a while and now things appear to be quite reliable.
>>>
>>> Now that things are more reliable, I would like to help work on speed
>>> issues.  On this same hardware with SMB Direct and the standard
>>> storage review 8k 70/30 test, I get combined read & write performance
>>> of around 2.5GB/sec.  With nfs-rdma it is pushing about 850MB/sec.
>>> This is simply an unacceptable difference.

I was able to get close to 2.5GB/s with ConnectX-2 for direct I/O. What's your test case and wsize/rsize? Did you collect /proc/interrupts, CPU usage, and profiling data?
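One way to collect the data Shirley asks about is sketched below; the HCA driver name in the grep pattern is an assumption, and mpstat comes from the sysstat package:

```shell
# Negotiated rsize/wsize and transport options, per NFS mount:
nfsstat -m
# Per-CPU interrupt distribution for the HCA (driver name assumed):
grep -i mlx /proc/interrupts
# Per-CPU utilization sampled every 5 seconds during the test run:
mpstat -P ALL 5
# System-wide call-graph profile for 30 seconds; view with "perf report":
perf record -a -g -- sleep 30
```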

>>>
>>> I'm using the standard settings -- connected mode, 65520 byte MTU,
>>> nfs-server-side "async", lots of nfsd's, and nfsver=3 with large
>>> buffers.  Does anyone have any tuning suggestions and/or places to
>>> start looking for bottlenecks?
>>
>>
>> What RDMA device?
>>
>> Steve.


end of thread, other threads:[~2014-06-13  7:25 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-06-12 19:54 nfs-rdma performance Mark Lehrer
     [not found] ` <CADvaNzXHOafuzav9RwB2SHXma7EUCxMQX6sNEL6r5-a_e37pLQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-06-12 20:00   ` Steve Wise
     [not found]     ` <539A06C3.1030909-7bPotxP6k4+P2YhJcF5u+vpXobYPEAuW@public.gmane.org>
2014-06-12 23:06       ` Mark Lehrer
     [not found]         ` <CADvaNzUbqwvXFiDPNr4f-DC8s7kf7t=N0UXkwUXrKDPn6WgMAQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-06-13  7:25           ` Shirley Ma
2014-06-12 20:14   ` Wendy Cheng
