* [PATCH] kvmtool: Add parameter to specify number of threads in thread_pool
@ 2015-06-29 7:45 Andreas Herrmann
From: Andreas Herrmann @ 2015-06-29 7:45 UTC (permalink / raw)
To: Will Deacon; +Cc: kvm@vger.kernel.org
With the current code the number of threads added to the thread_pool
equals the number of online CPUs. Thus on an OcteonIII cn78xx system we
usually have 48 threads per guest just for the thread_pool. IMHO this
is overkill for guests that have only a few vCPUs and/or for guests
pinned to a subset of host CPUs. E.g.
# numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 -k paravirt -d rootfs.ext3 ...
# ps -La | grep threadpool-work | wc -l
48
Don't change the default behaviour (for the sake of compatibility), but
introduce a new parameter ("-t" or "--threads") that allows specifying
the number of threads to be created for the thread_pool:
# numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 --threads 4 -k paravirt -d ...
# ps -La | grep threadpool-work | wc -l
4
Signed-off-by: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
---
builtin-run.c | 2 ++
include/kvm/kvm-config.h | 1 +
util/threadpool.c | 5 ++++-
3 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/builtin-run.c b/builtin-run.c
index 1ee75ad..86de53d 100644
--- a/builtin-run.c
+++ b/builtin-run.c
@@ -131,6 +131,8 @@ void kvm_run_set_wrapper_sandbox(void)
" rootfs"), \
OPT_STRING('\0', "hugetlbfs", &(cfg)->hugetlbfs_path, "path", \
"Hugetlbfs path"), \
+ OPT_INTEGER('t', "threads", &(cfg)->nrthreads, \
+ "Number of threads in thread_pool"), \
\
OPT_GROUP("Kernel options:"), \
OPT_STRING('k', "kernel", &(cfg)->kernel_filename, "kernel", \
diff --git a/include/kvm/kvm-config.h b/include/kvm/kvm-config.h
index 386fa8c..9cc50f5 100644
--- a/include/kvm/kvm-config.h
+++ b/include/kvm/kvm-config.h
@@ -27,6 +27,7 @@ struct kvm_config {
int active_console;
int debug_iodelay;
int nrcpus;
+ int nrthreads;
const char *kernel_cmdline;
const char *kernel_filename;
const char *vmlinux_filename;
diff --git a/util/threadpool.c b/util/threadpool.c
index e64aa26..620fdbd 100644
--- a/util/threadpool.c
+++ b/util/threadpool.c
@@ -124,7 +124,10 @@ static int thread_pool__addthread(void)
int thread_pool__init(struct kvm *kvm)
{
unsigned long i;
- unsigned int thread_count = sysconf(_SC_NPROCESSORS_ONLN);
+ unsigned int thread_count;
+
+ thread_count = kvm->cfg.nrthreads ? kvm->cfg.nrthreads :
+ sysconf(_SC_NPROCESSORS_ONLN);
running = true;
--
1.7.9.5
* Re: [PATCH] kvmtool: Add parameter to specify number of threads in thread_pool
@ 2015-06-29 9:45 ` Will Deacon
From: Will Deacon @ 2015-06-29 9:45 UTC (permalink / raw)
To: Andreas Herrmann; +Cc: kvm@vger.kernel.org
On Mon, Jun 29, 2015 at 08:45:44AM +0100, Andreas Herrmann wrote:
>
> With the current code the number of threads added to the thread_pool
> equals the number of online CPUs. Thus on an OcteonIII cn78xx system we
> usually have 48 threads per guest just for the thread_pool. IMHO this
> is overkill for guests that have only a few vCPUs and/or for guests
> pinned to a subset of host CPUs. E.g.
>
> # numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 -k paravirt -d rootfs.ext3 ...
> # ps -La | grep threadpool-work | wc -l
> 48
>
> Don't change the default behaviour (for the sake of compatibility), but
> introduce a new parameter ("-t" or "--threads") that allows specifying
> the number of threads to be created for the thread_pool:
>
> # numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 --threads 4 -k paravirt -d ...
> # ps -La | grep threadpool-work | wc -l
> 4
We should probably bound this on some minimum value. I assume things go
pear-shaped if you pass --threads 1 (or 0, or -1)?
Will
* Re: [PATCH] kvmtool: Add parameter to specify number of threads in thread_pool
@ 2015-06-29 10:14 ` Andreas Herrmann
From: Andreas Herrmann @ 2015-06-29 10:14 UTC (permalink / raw)
To: Will Deacon; +Cc: kvm@vger.kernel.org
On Mon, Jun 29, 2015 at 10:45:03AM +0100, Will Deacon wrote:
> On Mon, Jun 29, 2015 at 08:45:44AM +0100, Andreas Herrmann wrote:
> >
> > With the current code the number of threads added to the thread_pool
> > equals the number of online CPUs. Thus on an OcteonIII cn78xx system we
> > usually have 48 threads per guest just for the thread_pool. IMHO this
> > is overkill for guests that have only a few vCPUs and/or for guests
> > pinned to a subset of host CPUs. E.g.
> >
> > # numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 -k paravirt -d rootfs.ext3 ...
> > # ps -La | grep threadpool-work | wc -l
> > 48
> >
> > Don't change the default behaviour (for the sake of compatibility), but
> > introduce a new parameter ("-t" or "--threads") that allows specifying
> > the number of threads to be created for the thread_pool:
> >
> > # numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 --threads 4 -k paravirt -d ...
> > # ps -La | grep threadpool-work | wc -l
> > 4
>
> We should probably bound this on some minimum value. I assume things go
> pear-shaped if you pass --threads 1 (or 0, or -1)?
Ouch, yes, range must be checked (esp. for -1).
I think the passed value should be in [1, number of online CPUs].
Andreas
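A minimal sketch of the bound Andreas describes, in plain C; the helper name
sanitize_nrthreads() and its placement are assumptions for illustration only
(the v2 patch below folds an equivalent check into kvm_cmd_run_init()):

    /*
     * Sketch only: sanitize_nrthreads() is a hypothetical helper, not part
     * of kvmtool; v2 below implements the same bound inline.
     */
    #include <unistd.h>

    static unsigned int sanitize_nrthreads(int nrthreads)
    {
    	long nr_online_cpus = sysconf(_SC_NPROCESSORS_ONLN);

    	/*
    	 * Fall back to the number of online CPUs when --threads is
    	 * unset (0) or outside the range [1, nr_online_cpus].
    	 */
    	if (nrthreads < 1 || nrthreads > nr_online_cpus)
    		return nr_online_cpus;
    	return nrthreads;
    }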
* [PATCH v2] kvmtool: Add parameter to specify number of threads in thread_pool
@ 2015-06-29 11:43 ` Andreas Herrmann
From: Andreas Herrmann @ 2015-06-29 11:43 UTC (permalink / raw)
To: Will Deacon; +Cc: kvm@vger.kernel.org
With the current code the number of threads added to the thread_pool
equals the number of online CPUs. So on cn78xx we usually have 48 threads
per guest just for the thread_pool. IMHO this is overkill for guests
that have only a few vCPUs and/or for guests pinned to a subset of
host CPUs. E.g.
# numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 -k paravirt -d rootfs.ext3 ...
# ps -La | grep threadpool-work | wc -l
48
Don't change the default behaviour (for the sake of compatibility), but
introduce a new parameter ("-t" or "--threads") that allows specifying
the number of threads to be created for the thread_pool:
# numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 --threads 4 -k paravirt -d ...
# ps -La | grep threadpool-work | wc -l
4
Signed-off-by: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
---
builtin-run.c | 6 ++++++
include/kvm/kvm-config.h | 1 +
util/threadpool.c | 2 +-
3 files changed, 8 insertions(+), 1 deletion(-)
New in v2: the parameter must be in the range [1, number of online CPUs];
otherwise the default (number of online CPUs) is used.
Andreas
diff --git a/builtin-run.c b/builtin-run.c
index 1ee75ad..40ab9c6 100644
--- a/builtin-run.c
+++ b/builtin-run.c
@@ -131,6 +131,8 @@ void kvm_run_set_wrapper_sandbox(void)
" rootfs"), \
OPT_STRING('\0', "hugetlbfs", &(cfg)->hugetlbfs_path, "path", \
"Hugetlbfs path"), \
+ OPT_INTEGER('t', "threads", &(cfg)->nrthreads, \
+ "Number of threads in thread_pool"), \
\
OPT_GROUP("Kernel options:"), \
OPT_STRING('k', "kernel", &(cfg)->kernel_filename, "kernel", \
@@ -590,6 +592,10 @@ static struct kvm *kvm_cmd_run_init(int argc, const char **argv)
if (!kvm->cfg.network)
kvm->cfg.network = DEFAULT_NETWORK;
+ if (!kvm->cfg.nrthreads || (kvm->cfg.nrthreads < 0) ||
+ ((unsigned int) kvm->cfg.nrthreads > nr_online_cpus))
+ kvm->cfg.nrthreads = nr_online_cpus;
+
memset(real_cmdline, 0, sizeof(real_cmdline));
kvm__arch_set_cmdline(real_cmdline, kvm->cfg.vnc || kvm->cfg.sdl || kvm->cfg.gtk);
diff --git a/include/kvm/kvm-config.h b/include/kvm/kvm-config.h
index 386fa8c..9cc50f5 100644
--- a/include/kvm/kvm-config.h
+++ b/include/kvm/kvm-config.h
@@ -27,6 +27,7 @@ struct kvm_config {
int active_console;
int debug_iodelay;
int nrcpus;
+ int nrthreads;
const char *kernel_cmdline;
const char *kernel_filename;
const char *vmlinux_filename;
diff --git a/util/threadpool.c b/util/threadpool.c
index e64aa26..43595b8 100644
--- a/util/threadpool.c
+++ b/util/threadpool.c
@@ -124,7 +124,7 @@ static int thread_pool__addthread(void)
int thread_pool__init(struct kvm *kvm)
{
unsigned long i;
- unsigned int thread_count = sysconf(_SC_NPROCESSORS_ONLN);
+ unsigned int thread_count = kvm->cfg.nrthreads;
running = true;
--
1.7.9.5
* Re: [PATCH v2] kvmtool: Add parameter to specify number of threads in thread_pool
@ 2015-06-30 14:03 ` Will Deacon
From: Will Deacon @ 2015-06-30 14:03 UTC (permalink / raw)
To: Andreas Herrmann; +Cc: kvm@vger.kernel.org
Hi Andreas,
On Mon, Jun 29, 2015 at 12:43:33PM +0100, Andreas Herrmann wrote:
> With the current code the number of threads added to the thread_pool
> equals the number of online CPUs. So on cn78xx we usually have 48 threads
> per guest just for the thread_pool. IMHO this is overkill for guests
> that have only a few vCPUs and/or for guests pinned to a subset of
> host CPUs. E.g.
>
> # numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 -k paravirt -d rootfs.ext3 ...
> # ps -La | grep threadpool-work | wc -l
> 48
>
> Don't change the default behaviour (for the sake of compatibility), but
> introduce a new parameter ("-t" or "--threads") that allows specifying
> the number of threads to be created for the thread_pool:
>
> # numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 --threads 4 -k paravirt -d ...
> # ps -La | grep threadpool-work | wc -l
> 4
>
> Signed-off-by: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
> ---
> builtin-run.c | 6 ++++++
> include/kvm/kvm-config.h | 1 +
> util/threadpool.c | 2 +-
> 3 files changed, 8 insertions(+), 1 deletion(-)
>
> New in v2: the parameter must be in the range [1, number of online CPUs];
> otherwise the default (number of online CPUs) is used.
I thought some more about this and started to wonder whether we can make
the threadpool self-balancing.
Given the fairly restricted nature of kvmtool's threading model, could
we not start with a small fixed number of threads (but no more than the
number of physical CPUs) and then create new threads on demand when we
detect that there is backlog in the job queue and there are spare CPUs?
I don't think it should be too hard, especially if we ignore the problem
of destroying idle threads, and it would remove the need for a magic
tunable on the cmdline.
Will
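A minimal sketch of the on-demand growth suggested above, using plain
pthreads; the names (pool_grow_locked(), pool_push_job()) and the "grow only
when no worker is idle" heuristic are illustrative assumptions, not kvmtool's
actual thread_pool API:

    /*
     * Sketch only: a pool that starts with one worker and adds another
     * whenever a job arrives while no worker is idle, capped at the
     * number of online CPUs. Idle threads are never destroyed.
     */
    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  job_cond  = PTHREAD_COND_INITIALIZER;
    static unsigned int nr_threads, nr_idle, nr_jobs, max_threads;

    static void *worker(void *arg)
    {
    	pthread_mutex_lock(&pool_lock);
    	for (;;) {
    		while (!nr_jobs) {
    			nr_idle++;
    			pthread_cond_wait(&job_cond, &pool_lock);
    			nr_idle--;
    		}
    		nr_jobs--;
    		pthread_mutex_unlock(&pool_lock);
    		/* ... run the dequeued job outside the lock ... */
    		pthread_mutex_lock(&pool_lock);
    	}
    	return NULL;
    }

    /* Called with pool_lock held: add a worker if the queue is backlogged. */
    static void pool_grow_locked(void)
    {
    	pthread_t tid;

    	if (nr_idle || nr_threads >= max_threads)
    		return;
    	if (pthread_create(&tid, NULL, worker, NULL) == 0)
    		nr_threads++;
    }

    static void pool_push_job(void)
    {
    	pthread_mutex_lock(&pool_lock);
    	nr_jobs++;
    	pool_grow_locked();	/* grow only when no worker is idle */
    	pthread_cond_signal(&job_cond);
    	pthread_mutex_unlock(&pool_lock);
    }

    static void pool_init(void)
    {
    	max_threads = sysconf(_SC_NPROCESSORS_ONLN);
    	pthread_mutex_lock(&pool_lock);
    	pool_grow_locked();	/* start with a single worker */
    	pthread_mutex_unlock(&pool_lock);
    }

Growing only when a job arrives and no worker is idle keeps the pool bounded
by actual demand, while the cap at the online CPU count preserves the current
upper limit without any command-line tunable.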
* [PATCH] kvmtool: Add parameter to specify number of threads in thread_pool
@ 2015-01-06 13:13 Andreas Herrmann
From: Andreas Herrmann @ 2015-01-06 13:13 UTC (permalink / raw)
To: Pekka Enberg; +Cc: kvm
With the current code the number of threads added to the thread_pool
equals the number of online CPUs. IMHO on systems with many CPUs this is
overkill for guests that have only a few vCPUs and/or for guests
pinned to a subset of host CPUs. E.g. on a system with 48 cores:
# numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 -k paravirt -d rootfs.ext3 ...
# ps -La | grep threadpool-work | wc -l
48
Don't change the default behaviour (for the sake of compatibility), but
introduce a new parameter ("-t" or "--threads") that allows specifying
the number of threads to be created for the thread_pool:
# numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 --threads 4 -k paravirt -d ...
# ps -La | grep threadpool-work | wc -l
4
Signed-off-by: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
---
tools/kvm/builtin-run.c | 2 ++
tools/kvm/include/kvm/kvm-config.h | 1 +
tools/kvm/util/threadpool.c | 5 ++++-
3 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/tools/kvm/builtin-run.c b/tools/kvm/builtin-run.c
index 1ee75ad..86de53d 100644
--- a/tools/kvm/builtin-run.c
+++ b/tools/kvm/builtin-run.c
@@ -131,6 +131,8 @@ void kvm_run_set_wrapper_sandbox(void)
" rootfs"), \
OPT_STRING('\0', "hugetlbfs", &(cfg)->hugetlbfs_path, "path", \
"Hugetlbfs path"), \
+ OPT_INTEGER('t', "threads", &(cfg)->nrthreads, \
+ "Number of threads in thread_pool"), \
\
OPT_GROUP("Kernel options:"), \
OPT_STRING('k', "kernel", &(cfg)->kernel_filename, "kernel", \
diff --git a/tools/kvm/include/kvm/kvm-config.h b/tools/kvm/include/kvm/kvm-config.h
index 386fa8c..9cc50f5 100644
--- a/tools/kvm/include/kvm/kvm-config.h
+++ b/tools/kvm/include/kvm/kvm-config.h
@@ -27,6 +27,7 @@ struct kvm_config {
int active_console;
int debug_iodelay;
int nrcpus;
+ int nrthreads;
const char *kernel_cmdline;
const char *kernel_filename;
const char *vmlinux_filename;
diff --git a/tools/kvm/util/threadpool.c b/tools/kvm/util/threadpool.c
index e64aa26..620fdbd 100644
--- a/tools/kvm/util/threadpool.c
+++ b/tools/kvm/util/threadpool.c
@@ -124,7 +124,10 @@ static int thread_pool__addthread(void)
int thread_pool__init(struct kvm *kvm)
{
unsigned long i;
- unsigned int thread_count = sysconf(_SC_NPROCESSORS_ONLN);
+ unsigned int thread_count;
+
+ thread_count = kvm->cfg.nrthreads ? kvm->cfg.nrthreads :
+ sysconf(_SC_NPROCESSORS_ONLN);
running = true;
--
1.7.9.5