From: Shaohua Li <shli@kernel.org>
To: NeilBrown <neilb@suse.de>
Cc: stan@hardwarefreak.com, linux-raid@vger.kernel.org,
dan.j.williams@gmail.com
Subject: Re: [patch 2/2 v3]raid5: create multiple threads to handle stripes
Date: Fri, 29 Mar 2013 10:34:07 +0800
Message-ID: <20130329023407.GA19943@kernel.org>
In-Reply-To: <20130328174744.79b04058@notabene.brown>
On Thu, Mar 28, 2013 at 05:47:44PM +1100, NeilBrown wrote:
> On Tue, 12 Mar 2013 19:44:19 -0500 Stan Hoeppner <stan@hardwarefreak.com>
> wrote:
>
> > On 3/11/2013 8:39 PM, NeilBrown wrote:
> > > On Thu, 7 Mar 2013 15:31:23 +0800 Shaohua Li <shli@kernel.org> wrote:
> > ...
> > >>> #echo 1-3 > /sys/block/md0/md/auxth0/cpulist
> > >>> This will bind auxiliary thread 0 to cpu 1-3, and this thread will only handle
> > >>> stripes produced by cpu 1-3. User tool can further change the thread's
> > >>> affinity, but the thread can only handle stripes produced by cpu 1-3 till the
> > >>> sysfs entry is changed again.
> >
> > Would it not be better to use the existing cpusets infrastructure for
> > this, instead of manually binding threads to specific cores or sets of
> > cores?
> >
> > Also, I understand the hot cache issue driving the desire to have a raid
> > thread only process stripes created by its CPU. But what of the
> > scenario where an HPC user pins application threads to cores and needs
> > all the L1/L2 cache? Say this user has a dual socket 24 core NUMA
> > system with 2 NUMA nodes per socket, 4 nodes total. Each NUMA node has
> > 6 cores and shared L3 cache. The user pins 5 processes to 5 cores in
> > each node, and wants to pin a raid thread to the remaining core in each
> > node to handle the write IO generated by the 5 user threads on the node.
> >
> > Does your patch series allow this? Using the above example, if the user
> > creates 4 cpusets, can he assign a raid thread to that set and the
> > thread will execute on any core in the set, and only that set, on any
> > stripes created by any CPU in that set, and only that set?
> >
> > The infrastructure for this already exists, has since 2004. And it
> > seems is more flexible than what you've implemented here. I suggest we
> > make use of it, as it is the kernel standard for doing such things.
> >
> > See: http://man7.org/linux/man-pages/man7/cpuset.7.html
> >
> > > Hi Shaohua,
> > > I still have this sitting in my queue, but I haven't had a chance to look at
> > > it properly yet - I'm sorry about that. I'll try to get to it soon.
> >
>
> Thanks for this feedback. The interface is the thing I am most concerned
> about getting right at this stage, and is exactly what you are commenting on.
>
> The current code allows you to request N separate raid threads, and to tie
> each one to a subset of processors. This tying is in two senses. The
> thread can only run on cpus in the subset, and the requests queued by any
> given processor will preferentially be processed by threads tied to that
> processor.
>
> It does sound a lot like cpusets could be used instead of lists of CPUs.
> However it does merge the two different concepts (where a thread runs, and
> whose requests it handles), which you seem to suggest might not be ideal,
> and maybe it isn't.
>
> A completely general solution might be to allow each thread to handle
> requests from one cpuset, and run on any processor in another cpuset.
> Would that be too much flexibility?
>
> cpusets are a config option, so we would need to only enable multithreads if
> CONFIG_CPUSETS were set. Is this unnecessarily restrictive? Are there any
> other cases of kernel threads binding to cpusets? If there aren't, I'd be a
> bit cautious about being the first, as I have very little familiarity with
> this stuff.
Frankly, I don't like the cpuset way. It might just work, but it's just another
API for controlling process affinity and has no essential difference from my
approach (which sets process affinity directly). Generally we use cpusets
instead of plain process affinity for features like inherited affinity, and
the raid5 threads don't involve those.
> I still like the idea of an 'ioctl' which a process can call and will cause
> it to start handling requests.
> The process could bind itself to whatever cpu or cpuset it wanted to, then
> could call the ioctl on the relevant md array, and pass in a bitmap of cpus
> which indicate which requests it wants to be responsible for. The current
> kernel thread will then only handle requests that no-one else has put their
> hand up for. This leaves all the details of configuration in user-space
> (where I think it belongs).
The 'ioctl' way is interesting, but there are some questions we need to answer:
1. How does the kernel know whether a process will handle a given cpu's
requests before the 'ioctl' is called? I suppose you want two ioctls: one
tells the kernel that the process handles requests from the cpus of a cpumask;
the other does the request handling. The process must sleep in that ioctl
waiting for requests.
2. If the process is killed in the middle, how does the kernel know? Do you
want to hook into the task-management code? Even for a normal exit, we would
need another ioctl to tell the kernel the process is going away.
The only real difference between that approach and mine is whether the
request-handling task lives in userspace or kernel space. Either way, you have
to set the task's affinity and use an ioctl/sysfs knob to control which cpus'
requests it handles.
Thanks,
Shaohua
Thread overview: 29+ messages
2012-08-09 8:58 [patch 2/2 v3]raid5: create multiple threads to handle stripes Shaohua Li
2012-08-11 8:45 ` Jianpeng Ma
2012-08-13 0:21 ` Shaohua Li
2012-08-13 1:06 ` Jianpeng Ma
2012-08-13 2:13 ` Shaohua Li
2012-08-13 2:20 ` Shaohua Li
2012-08-13 2:25 ` Jianpeng Ma
2012-08-13 4:21 ` NeilBrown
2012-08-14 10:39 ` Jianpeng Ma
2012-08-15 3:51 ` Shaohua Li
2012-08-15 6:21 ` Jianpeng Ma
2012-08-15 8:04 ` Shaohua Li
2012-08-15 8:19 ` Jianpeng Ma
2012-09-24 11:15 ` Jianpeng Ma
2012-09-26 1:26 ` NeilBrown
2012-08-13 9:11 ` Jianpeng Ma
2012-08-13 4:29 ` NeilBrown
2012-08-13 6:22 ` Shaohua Li
2013-03-07 7:31 ` Shaohua Li
2013-03-12 1:39 ` NeilBrown
2013-03-13 0:44 ` Stan Hoeppner
2013-03-28 6:47 ` NeilBrown
2013-03-28 16:53 ` Stan Hoeppner
2013-03-29 2:34 ` Shaohua Li [this message]
2013-03-29 9:36 ` Stan Hoeppner
2013-04-01 1:57 ` Shaohua Li
2013-04-01 19:31 ` Stan Hoeppner
2013-04-02 0:39 ` Shaohua Li
2013-04-02 3:12 ` Stan Hoeppner