From: Jeff Layton <jeff.layton@primarydata.com>
To: Tejun Heo <tj@kernel.org>
Cc: linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org,
Al Viro <viro@zeniv.linux.org.uk>
Subject: Re: [RFC PATCH 00/14] nfsd/sunrpc: add support for a workqueue-based nfsd
Date: Tue, 2 Dec 2014 14:26:27 -0500
Message-ID: <20141202142627.6f59f693@tlielax.poochiereds.net>
In-Reply-To: <20141202191814.GK10918@htj.dyndns.org>
On Tue, 2 Dec 2014 14:18:14 -0500
Tejun Heo <tj@kernel.org> wrote:
> Hello, Jeff.
>
> On Tue, Dec 02, 2014 at 01:24:09PM -0500, Jeff Layton wrote:
> > 2) get some insight about the latency from those with a better
> > understanding of the CMWQ code. Any thoughts as to why we might be
> > seeing such high latency here? Any ideas of what we can do about it?
>
> The latency is probably from concurrency management. Work items which
> participate in concurrency management (the ones on per-cpu workqueues
> w/o WQ_CPU_INTENSIVE set) tend to get penalized on latency side quite
> a bit as the "run" durations for all such work items end up being
> serialized on the cpu. Setting WQ_CPU_INTENSIVE on the workqueue
> disables concurrency management and so does making the workqueue
> unbound. If strict cpu locality is likely to be beneficial and each
> work item isn't likely to consume huge amount of cpu cycles,
> WQ_CPU_INTENSIVE would fit better; otherwise, WQ_UNBOUND to let the
> scheduler do its thing.
>
> Thanks.
>
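The distinction Tejun draws maps to a flag choice at workqueue allocation time. A minimal kernel-side sketch (the queue names and the init function are hypothetical, for illustration only; neither reflects the actual patch series):

```c
#include <linux/init.h>
#include <linux/workqueue.h>

/* Per-cpu queue exempt from concurrency management: work items keep
 * CPU locality but are expected not to burn many cycles each. */
static struct workqueue_struct *example_cpu_wq;

/* Unbound queue: also exempt from concurrency management; the
 * scheduler places workers freely, and unbound workqueues prefer
 * the submitting CPU's NUMA node. */
static struct workqueue_struct *example_unbound_wq;

static int __init example_wq_init(void)
{
	example_cpu_wq = alloc_workqueue("example-cpu",
					 WQ_CPU_INTENSIVE, 0);
	if (!example_cpu_wq)
		return -ENOMEM;

	example_unbound_wq = alloc_workqueue("example", WQ_UNBOUND, 0);
	if (!example_unbound_wq) {
		destroy_workqueue(example_cpu_wq);
		return -ENOMEM;
	}
	return 0;
}
```

Either flag takes the work items out of the serialized per-cpu "run" accounting Tejun describes; which one fits depends on whether strict CPU locality is worth more than letting the scheduler spread the load.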
Thanks Tejun,
I'm already using WQ_UNBOUND workqueues. If that exempts this code from
the concurrency management, then that's probably not the problem. The
jobs here aren't terribly CPU intensive, but they can sleep for a long
time while waiting on I/O, etc...
I don't think we necessarily need CPU locality (though that's nice to
have, of course), but NUMA affinity will likely be important. It looked
like you had done some work a year or so ago to make unbound workqueues
prefer to queue work on the same NUMA node, which meshes nicely with
what I think we want here.
I'll keep looking at it -- let me know if you have any other thoughts
on the latency...
Cheers!
--
Jeff Layton <jlayton@primarydata.com>
Thread overview: 37+ messages
2014-12-02 18:24 [RFC PATCH 00/14] nfsd/sunrpc: add support for a workqueue-based nfsd Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 01/14] sunrpc: add a new svc_serv_ops struct and move sv_shutdown into it Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 02/14] sunrpc: move sv_function into sv_ops Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 03/14] sunrpc: move sv_module parm " Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 04/14] sunrpc: turn enqueueing a svc_xprt into a svc_serv operation Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 05/14] sunrpc: abstract out svc_set_num_threads to sv_ops Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 06/14] sunrpc: move pool_mode definitions into svc.h Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 07/14] sunrpc: factor svc_rqst allocation and freeing from sv_nrthreads refcounting Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 08/14] sunrpc: set up workqueue function in svc_xprt Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 09/14] sunrpc: add basic support for workqueue-based services Jeff Layton
2014-12-08 20:47 ` J. Bruce Fields
2014-12-08 20:49 ` Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 10/14] nfsd: keep a reference to the fs_struct in svc_rqst Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 11/14] nfsd: add support for workqueue based service processing Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 12/14] sunrpc: keep a cache of svc_rqsts for each NUMA node Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 13/14] sunrpc: add more tracepoints around svc_xprt handling Jeff Layton
2014-12-02 18:24 ` [RFC PATCH 14/14] sunrpc: add tracepoints around svc_sock handling Jeff Layton
2014-12-02 19:18 ` [RFC PATCH 00/14] nfsd/sunrpc: add support for a workqueue-based nfsd Tejun Heo
2014-12-02 19:26 ` Jeff Layton [this message]
2014-12-02 19:29 ` Tejun Heo
2014-12-02 19:26 ` Tejun Heo
2014-12-02 19:46 ` Jeff Layton
2014-12-03 1:11 ` NeilBrown
2014-12-03 1:29 ` Jeff Layton
2014-12-03 15:56 ` Tejun Heo
2014-12-03 16:04 ` Jeff Layton
2014-12-03 19:02 ` Jeff Layton
2014-12-03 19:08 ` Trond Myklebust
2014-12-03 19:20 ` Jeff Layton
2014-12-03 19:59 ` Trond Myklebust
2014-12-03 20:21 ` Jeff Layton
2014-12-03 20:44 ` Trond Myklebust
2014-12-04 11:47 ` Jeff Layton
2014-12-04 17:17 ` Shirley Ma
2014-12-04 17:28 ` Jeff Layton
2014-12-04 17:44 ` Shirley Ma
2014-12-03 16:50 ` Chuck Lever