From: Srinivas Eeda <srinivas.eeda@oracle.com>
To: Miklos Szeredi <miklos@szeredi.hu>
Cc: Linux-Fsdevel <linux-fsdevel@vger.kernel.org>,
fuse-devel <fuse-devel@lists.sourceforge.net>
Subject: Re: [fuse-devel] FUSE: fixes to improve scalability on NUMA systems
Date: Tue, 30 Apr 2013 11:28:56 -0700
Message-ID: <51800D68.9010700@oracle.com>
In-Reply-To: <CAJfpegtBKX4AG-ZFAU7P6bEfF=4koHo8U+d2qCOCrvEP8EbtYw@mail.gmail.com>
Hi Miklos,
thanks a lot for your quick response :)
Miklos Szeredi wrote:
> On Tue, Apr 30, 2013 at 8:17 AM, Srinivas Eeda <srinivas.eeda@oracle.com> wrote:
>
> Why just NUMA? For example see this discussion a while back:
>
The reason I targeted NUMA is that NUMA machines are where I am seeing
significant performance issues. Even on a NUMA system, if I bind all user
threads to a particular NUMA node there is no notable performance
issue. The test I ran was to start multiple (from 4 to 128) "dd
if=/dev/zero of=/dbfsfilesxx bs=1M count=4000" processes on a system with 8
NUMA nodes and 20 cores per node, so 160 CPUs in total.
> http://thread.gmane.org/gmane.comp.file-systems.fuse.devel/11832/
>
That was a good discussion. The problem discussed there is much more
fine-grained than mine. The fix I emailed proposes to bind requests to a
NUMA node, whereas the above discussion proposes to bind requests to a
CPU. Based on your agreement with Anand Avati, I think you prefer to
bind requests to a CPU.
http://article.gmane.org/gmane.comp.file-systems.fuse.devel/11909
The patch I proposed can easily be modified to do that. With my current
system in mind, my patch currently splits each queue into 8 (on an 8-node
NUMA system); with that change each queue would be split into 160. My
libfuse fix currently starts 8 threads and binds one to each NUMA node;
it would instead have to start 160 threads and bind them to CPUs. If you
prefer to see some numbers I can modify the patch and run some tests.
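For illustration only, here is a minimal userspace sketch of the per-node
worker binding, assuming libnuma and pthreads; the function names below are
hypothetical and not taken from the actual libfuse patch:

    /* Sketch: one worker thread per NUMA node, bound with libnuma.
     * Build with: gcc -o workers workers.c -lnuma -lpthread
     */
    #include <numa.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void *worker(void *arg)
    {
            int node = (int)(long)arg;

            /* Restrict this worker to the CPUs of its NUMA node so the
             * requests it services stay node-local. */
            if (numa_run_on_node(node) != 0)
                    perror("numa_run_on_node");

            /* ... read requests from the per-node queue here ... */
            return NULL;
    }

    int main(void)
    {
            int node, nr_nodes;
            pthread_t *tids;

            if (numa_available() < 0) {
                    fprintf(stderr, "NUMA not available\n");
                    return 1;
            }

            nr_nodes = numa_max_node() + 1; /* e.g. 8 on the test system */
            tids = calloc(nr_nodes, sizeof(*tids));
            if (!tids)
                    return 1;

            for (node = 0; node < nr_nodes; node++)
                    pthread_create(&tids[node], NULL, worker,
                                   (void *)(long)node);
            for (node = 0; node < nr_nodes; node++)
                    pthread_join(tids[node], NULL);

            free(tids);
            return 0;
    }

Binding to individual CPUs instead of nodes would mostly amount to creating
one thread per CPU and replacing numa_run_on_node() with
pthread_setaffinity_np() on a single-CPU mask.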
The chances of a process migrating to a different NUMA node are minimal,
so I didn't modify the fuse header to carry a queue id. In the worst
case, where the worker thread gets migrated to a different NUMA node, my
fix will scan all split queues until it finds the request. But if we
split the queues per CPU, there is a high chance that processes migrate
to different CPUs, so I think it would be beneficial to add a cpuid to
the fuse in/out headers.
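Roughly, the per-node split plus the worst-case fallback scan looks like
the sketch below. This is a simplified illustration, not the patch itself;
fuse_numa_queue, nr_nodes and request_find_numa() are hypothetical names:

    /* Hypothetical per-node queue layout and lookup (sketch only). */
    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include <linux/topology.h>
    #include "fuse_i.h"             /* struct fuse_req, as in fs/fuse */

    struct fuse_numa_queue {
            spinlock_t              lock;
            struct list_head        processing;     /* per-node list */
    };

    static struct fuse_req *request_find_numa(struct fuse_numa_queue *queues,
                                              int nr_nodes, u64 unique)
    {
            int start = numa_node_id();     /* node the worker runs on */
            int i;

            /* Fast path: the request is normally on the local node's
             * queue. Worst case (the worker migrated to another node):
             * keep scanning the remaining per-node queues until the
             * request is found. A real implementation would keep
             * nq->lock held, or take a reference, before returning. */
            for (i = 0; i < nr_nodes; i++) {
                    int node = (start + i) % nr_nodes;
                    struct fuse_numa_queue *nq = &queues[node];
                    struct fuse_req *req;

                    spin_lock(&nq->lock);
                    list_for_each_entry(req, &nq->processing, list) {
                            if (req->in.h.unique == unique) {
                                    spin_unlock(&nq->lock);
                                    return req;
                            }
                    }
                    spin_unlock(&nq->lock);
            }
            return NULL;
    }

Carrying a cpuid (or node id) in the fuse out header would let the lookup
go straight to the right queue and skip this scan even after a migration.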
> We should be improving scalability in small steps, each of which makes
> sense and improves the situation. Marking half the fuse_conn
> structure per-cpu or per-node is too large and is probably not even
> the best step.
>
> For example we have various counters protected by fc->lock that could
> be done with per-cpu counters. Similarly, we could have per-cpu lists
> for requests, balancing requests only when necessary. After that we
> could add some heuristics to discourage balancing between numa nodes.
>
> To sum up: improving scalability for fuse would be nice, but don't
> just do it for NUMA and don't do it in one big step.
>
> Thanks,
> Miklos
>
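As a point of reference, the per-cpu counter suggestion maps onto the
kernel's existing percpu_counter API. A minimal sketch follows, assuming a
recent kernel where percpu_counter_init() takes a gfp_t argument;
fuse_conn_counters is a hypothetical container and num_background is used
only as an example of a counter currently protected by fc->lock:

    /* Sketch: replacing an fc->lock protected counter with a
     * percpu_counter. */
    #include <linux/percpu_counter.h>

    struct fuse_conn_counters {
            struct percpu_counter num_background;   /* was int + fc->lock */
    };

    static int counters_init(struct fuse_conn_counters *fcc)
    {
            /* Second argument is the initial value. */
            return percpu_counter_init(&fcc->num_background, 0, GFP_KERNEL);
    }

    static void background_start(struct fuse_conn_counters *fcc)
    {
            percpu_counter_inc(&fcc->num_background); /* lock-free path */
    }

    static void background_end(struct fuse_conn_counters *fcc)
    {
            percpu_counter_dec(&fcc->num_background);
    }

    static bool background_over_limit(struct fuse_conn_counters *fcc,
                                      unsigned int max)
    {
            /* An exact sum is only needed when close to the limit; the
             * cheaper approximate read would do for heuristics. */
            return percpu_counter_sum_positive(&fcc->num_background) >= max;
    }

    static void counters_destroy(struct fuse_conn_counters *fcc)
    {
            percpu_counter_destroy(&fcc->num_background);
    }

The same idea would extend to per-cpu request lists, with balancing only
when a local list runs dry, as suggested above.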