From: Anna Schumaker <Anna.Schumaker@Netapp.com>
To: Chuck Lever <chuck.lever@oracle.com>
Cc: linux-rdma <linux-rdma@vger.kernel.org>,
Linux NFS Mailing List <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH 5/9] SUNRPC: Initialize rpc_rqst outside of xprt->reserve_lock
Date: Wed, 7 Mar 2018 15:32:58 -0500
Message-ID: <48c31eb1-4acf-534b-57d7-7a37e5cbe3b4@Netapp.com>
In-Reply-To: <6C6A09E2-88A2-4968-89CD-8F6DF959B815@oracle.com>
On 03/07/2018 03:23 PM, Chuck Lever wrote:
>
>
>> On Mar 7, 2018, at 3:00 PM, Anna Schumaker <Anna.Schumaker@netapp.com> wrote:
>>
>>
>>
>> On 03/06/2018 05:30 PM, Chuck Lever wrote:
>>>
>>>
>>>> On Mar 6, 2018, at 5:07 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>
>>>>
>>>>
>>>>> On Mar 6, 2018, at 5:02 PM, Anna Schumaker <Anna.Schumaker@netapp.com> wrote:
>>>>>
>>>>> Hi Chuck,
>>>>>
>>>>> I'm seeing a huge performance hit with this patch. I'm just running cthon over TCP, and it goes from finishing in 22 seconds to taking well over 5 minutes. I only seem to see this on the read and write tests; basic test5, for example, takes a minute to finish:
>>>>>
>>>>> ./test5: read and write
>>>>> wrote 1048576 byte file 10 times in 60.35 seconds (173737 bytes/sec)
>>>>> read 1048576 byte file 10 times in 0.0 seconds (-2147483648 bytes/sec)
>>>>> ./test5 ok.
>>>>
>>>> OK. This looks like write is impacted, but this test doesn't
>>>> actually perform any reads on the wire. Try iozone with -I,
>>>> maybe? That would show results for both read and write.
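An illustrative iozone invocation along those lines (the flags other than -I are examples, not from the thread; -I opens the test file with O_DIRECT, so reads actually go to the server instead of being satisfied from the client's page cache):

```shell
# Write (-i 0) and read (-i 1) a 1 MB file in 64 KB records over the
# NFS mount, bypassing the client page cache with O_DIRECT (-I).
iozone -I -i 0 -i 1 -s 1m -r 64k -f /mnt/nfs/iozone.tmp
```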
>>>
>>> Hum.
>>>
>>> Stock v4.16-rc4:
>>>
>>> ./test5: read and write
>>> wrote 1048576 byte file 10 times in 0.2 seconds (350811642 bytes/sec)
>>> read 1048576 byte file 10 times in 0.0 seconds (-2147483648 bytes/sec)
>>> ./test5 ok.
>>>
>>>
>>> v4.16-rc4 with my full set of patches:
>>>
>>> ./test5: read and write
>>> wrote 1048576 byte file 10 times in 0.2 seconds (354236681 bytes/sec)
>>> read 1048576 byte file 10 times in 0.0 seconds (-2147483648 bytes/sec)
>>> ./test5 ok.
>>>
>>> I don't see a regression here. Let me know how it goes!
>>
>> I'm using rc4 too, so maybe it's something different in my setup?
>
> What is your setup, exactly? I assume your client is maybe a
> single CPU guest, and the server is the same, and both are
> hosted on one system?
The client is a single-CPU kvm guest with 1 GB of RAM; the server is also a kvm guest on the same system, with 2 CPUs and 4 GB of RAM.
>
>
>> Making this change fixes the issue for me:
>>
>> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
>> index a394b4635f8e..273847f7e455 100644
>> --- a/net/sunrpc/xprt.c
>> +++ b/net/sunrpc/xprt.c
>> @@ -987,8 +987,6 @@ bool xprt_prepare_transmit(struct rpc_task *task)
>> task->tk_status = -EAGAIN;
>> goto out_unlock;
>> }
>> - if (likely(!bc_prealloc(req)))
>> - req->rq_xid = xprt_alloc_xid(xprt);
>> ret = true;
>> out_unlock:
>> spin_unlock_bh(&xprt->transport_lock);
>> @@ -1315,6 +1313,7 @@ void xprt_request_init(struct rpc_task *task)
>> req->rq_task = task;
>> req->rq_xprt = xprt;
>> req->rq_buffer = NULL;
>> + req->rq_xid = xprt_alloc_xid(xprt);
>
> xprt_alloc_xid is just
>
> static inline __be32 xprt_alloc_xid(struct rpc_xprt *xprt)
> {
>         return (__force __be32)xprt->xid++;
> }
>
> I don't believe the new call site for xprt_request_init is
> adequately serialized for this to be safe in general. That's why
> I'm calling xprt_alloc_xid in xprt_prepare_transmit, behind the
> transport_lock.
This makes sense.
>
> However, I think we need to explain why that helps your performance
> issue, because it doesn't make sense to me that this would make a
> difference. Why did you think to try this change? Is there evidence
> on the wire of XID re-use, for example?
I selectively reverted parts of your original patch until I found the parts that kill my performance.
>
>
>> req->rq_connect_cookie = xprt->connect_cookie - 1;
>> req->rq_bytes_sent = 0;
>> req->rq_snd_buf.len = 0;
>>
>>
>> Anna
>>
>>>
>>>
>>>>> I haven't dug into this too deeply, but my best guess is that it's due to adding a call to xprt_request_init() in net/sunrpc/clnt.c:call_reserveresult()
>>>>
>>>> It wasn't added there, it was moved from xprt_alloc_slot. So,
>>>> it's not new work per-RPC.
>>>>
>>>> Any additional information would be appreciated!
>>>>
>>>>
>>>>> Thoughts?
>>>>> Anna
>>>>>
>>>>> On 03/05/2018 03:13 PM, Chuck Lever wrote:
>>>>>> alloc_slot is a transport-specific op, but initializing an rpc_rqst
>>>>>> is common to all transports. Move initialization to common code in
>>>>>> preparation for adding a transport-specific alloc_slot to xprtrdma.
>>>>>>
>>>>>> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
>>>>>> ---
>>>>>> include/linux/sunrpc/xprt.h | 1 +
>>>>>> net/sunrpc/clnt.c | 1 +
>>>>>> net/sunrpc/xprt.c | 12 +++++++-----
>>>>>> 3 files changed, 9 insertions(+), 5 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
>>>>>> index 5fea0fb..9784e28 100644
>>>>>> --- a/include/linux/sunrpc/xprt.h
>>>>>> +++ b/include/linux/sunrpc/xprt.h
>>>>>> @@ -324,6 +324,7 @@ struct xprt_class {
>>>>>> struct rpc_xprt *xprt_create_transport(struct xprt_create *args);
>>>>>> void xprt_connect(struct rpc_task *task);
>>>>>> void xprt_reserve(struct rpc_task *task);
>>>>>> +void xprt_request_init(struct rpc_task *task);
>>>>>> void xprt_retry_reserve(struct rpc_task *task);
>>>>>> int xprt_reserve_xprt(struct rpc_xprt *xprt, struct rpc_task *task);
>>>>>> int xprt_reserve_xprt_cong(struct rpc_xprt *xprt, struct rpc_task *task);
>>>>>> diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
>>>>>> index 6e432ec..226f558 100644
>>>>>> --- a/net/sunrpc/clnt.c
>>>>>> +++ b/net/sunrpc/clnt.c
>>>>>> @@ -1546,6 +1546,7 @@ void rpc_force_rebind(struct rpc_clnt *clnt)
>>>>>> task->tk_status = 0;
>>>>>> if (status >= 0) {
>>>>>> if (task->tk_rqstp) {
>>>>>> + xprt_request_init(task);
>>>>>> task->tk_action = call_refresh;
>>>>>> return;
>>>>>> }
>>>>>> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
>>>>>> index 70f0050..a394b46 100644
>>>>>> --- a/net/sunrpc/xprt.c
>>>>>> +++ b/net/sunrpc/xprt.c
>>>>>> @@ -66,7 +66,7 @@
>>>>>> * Local functions
>>>>>> */
>>>>>> static void xprt_init(struct rpc_xprt *xprt, struct net *net);
>>>>>> -static void xprt_request_init(struct rpc_task *, struct rpc_xprt *);
>>>>>> +static __be32 xprt_alloc_xid(struct rpc_xprt *xprt);
>>>>>> static void xprt_connect_status(struct rpc_task *task);
>>>>>> static int __xprt_get_cong(struct rpc_xprt *, struct rpc_task *);
>>>>>> static void __xprt_put_cong(struct rpc_xprt *, struct rpc_rqst *);
>>>>>> @@ -987,6 +987,8 @@ bool xprt_prepare_transmit(struct rpc_task *task)
>>>>>> task->tk_status = -EAGAIN;
>>>>>> goto out_unlock;
>>>>>> }
>>>>>> + if (likely(!bc_prealloc(req)))
>>>>>> + req->rq_xid = xprt_alloc_xid(xprt);
>>>>>> ret = true;
>>>>>> out_unlock:
>>>>>> spin_unlock_bh(&xprt->transport_lock);
>>>>>> @@ -1163,10 +1165,10 @@ void xprt_alloc_slot(struct rpc_xprt *xprt, struct rpc_task *task)
>>>>>> out_init_req:
>>>>>> xprt->stat.max_slots = max_t(unsigned int, xprt->stat.max_slots,
>>>>>> xprt->num_reqs);
>>>>>> + spin_unlock(&xprt->reserve_lock);
>>>>>> +
>>>>>> task->tk_status = 0;
>>>>>> task->tk_rqstp = req;
>>>>>> - xprt_request_init(task, xprt);
>>>>>> - spin_unlock(&xprt->reserve_lock);
>>>>>> }
>>>>>> EXPORT_SYMBOL_GPL(xprt_alloc_slot);
>>>>>>
>>>>>> @@ -1303,8 +1305,9 @@ static inline void xprt_init_xid(struct rpc_xprt *xprt)
>>>>>> xprt->xid = prandom_u32();
>>>>>> }
>>>>>>
>>>>>> -static void xprt_request_init(struct rpc_task *task, struct rpc_xprt *xprt)
>>>>>> +void xprt_request_init(struct rpc_task *task)
>>>>>> {
>>>>>> + struct rpc_xprt *xprt = task->tk_xprt;
>>>>>> struct rpc_rqst *req = task->tk_rqstp;
>>>>>>
>>>>>> INIT_LIST_HEAD(&req->rq_list);
>>>>>> @@ -1312,7 +1315,6 @@ static void xprt_request_init(struct rpc_task *task, struct rpc_xprt *xprt)
>>>>>> req->rq_task = task;
>>>>>> req->rq_xprt = xprt;
>>>>>> req->rq_buffer = NULL;
>>>>>> - req->rq_xid = xprt_alloc_xid(xprt);
>>>>>> req->rq_connect_cookie = xprt->connect_cookie - 1;
>>>>>> req->rq_bytes_sent = 0;
>>>>>> req->rq_snd_buf.len = 0;
>>>>>>
>>>>
>>>> --
>>>> Chuck Lever
>>>>
>>>>
>>>>
>>>
>
>
>