* implement runqueue per cpupool
@ 2017-09-12 0:45 anshulmakkar
2017-09-12 0:45 ` [PATCH 1/3] credit2: libxc related changes to add support for " anshulmakkar
` (3 more replies)
0 siblings, 4 replies; 16+ messages in thread
From: anshulmakkar @ 2017-09-12 0:45 UTC (permalink / raw)
To: xen-devel
Cc: jgross, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
dario.faggioli, ian.jackson, marmarek, robert.vanvossen, tim,
josh.whitehead, mengxu, jbeulich
The attached patch series introduces the concept of a runqueue
per cpupool.

A "runqueue" configuration option can be specified with the xl command
when configuring a cpupool. It defines the basis for grouping the
cpus (cpu, core, socket, all) of that cpupool.

The series consists of the following patches:
[PATCH 1/3]: libxc related changes to add support for runqueue per cpupool
[PATCH 2/3]: libxl related changes to add support for runqueue per cpupool
[PATCH 3/3]: xen related changes.

Along similar lines we can also have a per-pool runqueue configuration
parameter for the Credit scheduler. I plan to send a separate patch series
for the Credit-specific implementation.
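As an illustration (not part of the series), a cpupool configuration file
using the proposed option could look roughly like the sketch below. The
"runqueue" key and its values follow the enumeration added in patch 2; the
remaining keys are the existing cpupool configuration syntax:

    # pool1.cfg (hypothetical example)
    name     = "pool1"
    sched    = "credit2"
    cpus     = ["4", "5", "6", "7"]
    runqueue = "core"      # proposed option: cpu, core, socket or all

    # created with: xl cpupool-create pool1.cfg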
* [PATCH 1/3] credit2: libxc related changes to add support for runqueue per cpupool.
2017-09-12 0:45 implement runqueue per cpupool anshulmakkar
@ 2017-09-12 0:45 ` anshulmakkar
2017-09-14 6:42 ` Juergen Gross
2017-09-14 13:28 ` Dario Faggioli
2017-09-12 0:45 ` [PATCH 2/3] credit2: libxl " anshulmakkar
` (2 subsequent siblings)
3 siblings, 2 replies; 16+ messages in thread
From: anshulmakkar @ 2017-09-12 0:45 UTC (permalink / raw)
To: xen-devel
Cc: jgross, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
dario.faggioli, ian.jackson, marmarek, robert.vanvossen, tim,
josh.whitehead, mengxu, jbeulich, anshulmakkar
libxc receives scheduler-specific configuration parameters from
libxl.
Signed-off-by: Anshul Makkar <anshulmakkar@gmail.com>
---
tools/libxc/include/xenctrl.h | 6 +++++-
tools/libxc/xc_cpupool.c | 4 +++-
tools/python/xen/lowlevel/xc/xc.c | 3 ++-
3 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 43151cb..e2157e9 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1077,17 +1077,21 @@ typedef struct xc_cpupoolinfo {
#define XC_CPUPOOL_POOLID_ANY 0xFFFFFFFF
+typedef xen_sysctl_sched_param_t xc_schedparam_t;
+
/**
* Create a new cpupool.
*
* @parm xc_handle a handle to an open hypervisor interface
* @parm ppoolid pointer to the new cpupool id (in/out)
* @parm sched_id id of scheduler to use for pool
+ * @parm sched_param parameter of the scheduler of the cpupool eg. runq for credit2
* return 0 on success, -1 on failure
*/
int xc_cpupool_create(xc_interface *xch,
uint32_t *ppoolid,
- uint32_t sched_id);
+ uint32_t sched_id,
+ xc_schedparam_t * sched_param);
/**
* Destroy a cpupool. Pool must be unused and have no cpu assigned.
diff --git a/tools/libxc/xc_cpupool.c b/tools/libxc/xc_cpupool.c
index fbd8cc9..fb2d183 100644
--- a/tools/libxc/xc_cpupool.c
+++ b/tools/libxc/xc_cpupool.c
@@ -36,7 +36,8 @@ static int do_sysctl_save(xc_interface *xch, struct xen_sysctl *sysctl)
int xc_cpupool_create(xc_interface *xch,
uint32_t *ppoolid,
- uint32_t sched_id)
+ uint32_t sched_id,
+ xc_schedparam_t * sched_params)
{
int err;
DECLARE_SYSCTL;
@@ -46,6 +47,7 @@ int xc_cpupool_create(xc_interface *xch,
sysctl.u.cpupool_op.cpupool_id = (*ppoolid == XC_CPUPOOL_POOLID_ANY) ?
XEN_SYSCTL_CPUPOOL_PAR_ANY : *ppoolid;
sysctl.u.cpupool_op.sched_id = sched_id;
+ sysctl.u.cpupool_op.sched_param = *sched_params;
if ( (err = do_sysctl_save(xch, &sysctl)) != 0 )
return err;
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index aa9f8e4..a83a23f 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1704,6 +1704,7 @@ static PyObject *pyxc_cpupool_create(XcObject *self,
PyObject *kwds)
{
uint32_t cpupool = XC_CPUPOOL_POOLID_ANY, sched = XEN_SCHEDULER_CREDIT;
+ xc_schedparam_t param;
static char *kwd_list[] = { "pool", "sched", NULL };
@@ -1711,7 +1712,7 @@ static PyObject *pyxc_cpupool_create(XcObject *self,
&sched))
return NULL;
- if ( xc_cpupool_create(self->xc_handle, &cpupool, sched) < 0 )
+ if ( xc_cpupool_create(self->xc_handle, &cpupool, sched, &param) < 0 )
return pyxc_error_to_exception(self->xc_handle);
return PyLongOrInt_FromLong(cpupool);
--
2.7.4
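For context, a minimal sketch of how a caller might use the extended libxc
interface (illustrative only; the runq value follows the numbering of the
sysctl union added in patch 3, and error handling is omitted):

    #include <string.h>
    #include <xenctrl.h>

    /* Create a Credit2 cpupool with per-socket runqueues (sketch). */
    static int create_pool(xc_interface *xch, uint32_t *poolid)
    {
        xc_schedparam_t param;

        memset(&param, 0, sizeof(param));
        param.u.sched_credit2.runq = 2;  /* 2 == SOCKET in the proposed numbering */

        *poolid = XC_CPUPOOL_POOLID_ANY;
        return xc_cpupool_create(xch, poolid, XEN_SCHEDULER_CREDIT2, &param);
    }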
* [PATCH 2/3] credit2: libxl related changes to add support for runqueue per cpupool.
2017-09-12 0:45 implement runqueue per cpupool anshulmakkar
2017-09-12 0:45 ` [PATCH 1/3] credit2: libxc related changes to add support for " anshulmakkar
@ 2017-09-12 0:45 ` anshulmakkar
2017-09-14 6:37 ` Juergen Gross
2017-09-12 0:45 ` [PATCH 3/3] credit2: xen " anshulmakkar
2017-09-14 4:21 ` implement " Juergen Gross
3 siblings, 1 reply; 16+ messages in thread
From: anshulmakkar @ 2017-09-12 0:45 UTC (permalink / raw)
To: xen-devel
Cc: jgross, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
dario.faggioli, ian.jackson, marmarek, robert.vanvossen, tim,
josh.whitehead, mengxu, jbeulich, anshulmakkar
Introduces scheduler-specific parameters at the libxl level, which are
passed on to libxc, e.g. the runqueue option for credit2.
Signed-off-by: Anshul Makkar <anshulmakkar@gmail.com>
---
tools/libxl/libxl.h | 2 +-
tools/libxl/libxl_cpupool.c | 15 +++++++++++++--
tools/libxl/libxl_types.idl | 46 ++++++++++++++++++++++++++++++++++-----------
tools/xl/xl_cpupool.c | 16 ++++++++++++++--
4 files changed, 63 insertions(+), 16 deletions(-)
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 91408b4..6617c64 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -2150,7 +2150,7 @@ int libxl_get_freecpus(libxl_ctx *ctx, libxl_bitmap *cpumap);
int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
libxl_scheduler sched,
libxl_bitmap cpumap, libxl_uuid *uuid,
- uint32_t *poolid);
+ uint32_t *poolid, const libxl_scheduler_params *sched_param);
int libxl_cpupool_destroy(libxl_ctx *ctx, uint32_t poolid);
int libxl_cpupool_rename(libxl_ctx *ctx, const char *name, uint32_t poolid);
int libxl_cpupool_cpuadd(libxl_ctx *ctx, uint32_t poolid, int cpu);
diff --git a/tools/libxl/libxl_cpupool.c b/tools/libxl/libxl_cpupool.c
index 85b0688..e3ce7b3 100644
--- a/tools/libxl/libxl_cpupool.c
+++ b/tools/libxl/libxl_cpupool.c
@@ -130,7 +130,7 @@ int libxl_get_freecpus(libxl_ctx *ctx, libxl_bitmap *cpumap)
int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
libxl_scheduler sched,
libxl_bitmap cpumap, libxl_uuid *uuid,
- uint32_t *poolid)
+ uint32_t *poolid, const libxl_scheduler_params *sched_params)
{
GC_INIT(ctx);
int rc;
@@ -138,6 +138,7 @@ int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
xs_transaction_t t;
char *uuid_string;
uint32_t xcpoolid;
+ xc_schedparam_t xc_sched_param;
/* Accept '0' as 'any poolid' for backwards compatibility */
if ( *poolid == LIBXL_CPUPOOL_POOLID_ANY
@@ -151,8 +152,18 @@ int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
GC_FREE;
return ERROR_NOMEM;
}
+ if (sched_params)
+ {
+ xc_sched_param.u.sched_credit2.ratelimit_us =
+ sched_params->u.credit2.ratelimit_us;
+ xc_sched_param.u.sched_credit2.runq = sched_params->u.credit2.runqueue;
+ xc_sched_param.u.sched_credit.tslice_ms = sched_params->u.credit.tslice_ms;
+ xc_sched_param.u.sched_credit.ratelimit_us = sched_params->u.credit.ratelimit_us;
+ }
+ else
+ xc_sched_param.u.sched_credit2.runq = LIBXL_CREDIT2_RUNQUEUE_DEFAULT;
- rc = xc_cpupool_create(ctx->xch, &xcpoolid, sched);
+ rc = xc_cpupool_create(ctx->xch, &xcpoolid, sched, &xc_sched_param);
if (rc) {
LOGEV(ERROR, rc, "Could not create cpupool");
GC_FREE;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 173d70a..f25429d 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -194,6 +194,16 @@ libxl_scheduler = Enumeration("scheduler", [
(9, "null"),
])
+# consistent with sched_credit2.c
+libxl_credit2_runqueue = Enumeration("credit2_runqueue", [
+ (0, "CPU"),
+ (1, "CORE"),
+ (2, "SOCKET"),
+ (3, "NODE"),
+ (4, "ALL"),
+ (5, "DEFAULT"),
+ ])
+
# Consistent with SHUTDOWN_* in sched.h (apart from UNKNOWN)
libxl_shutdown_reason = Enumeration("shutdown_reason", [
(-1, "unknown"),
@@ -326,15 +336,38 @@ libxl_dominfo = Struct("dominfo",[
("domain_type", libxl_domain_type),
], dir=DIR_OUT)
+libxl_sched_credit_params = Struct("sched_credit_params", [
+ ("tslice_ms", integer),
+ ("ratelimit_us", integer),
+ ], dispose_fn=None)
+
+libxl_sched_credit2_params = Struct("sched_credit2_params", [
+ ("ratelimit_us", integer),
+ ("runqueue", libxl_credit2_runqueue),
+ ], dispose_fn=None)
+
+libxl_scheduler_params = Struct("scheduler_params", [
+ ("u", KeyedUnion(None,libxl_scheduler_tpye "scheduler_type",
+ [("credit2", libxl_sched_credit2_params),
+ ("credit", libxl_sched_credit_params),
+ ("null", None),
+ ("arinc653", None),
+ ("rtds", None),
+ ("unknown", None),
+ ("sedf", None),
+ ])),
+ ])
+
libxl_cpupoolinfo = Struct("cpupoolinfo", [
("poolid", uint32),
("pool_name", string),
("sched", libxl_scheduler),
("n_dom", uint32),
- ("cpumap", libxl_bitmap)
+ ("cpumap", libxl_bitmap),
+ ("sched_param", libxl_scheduler_params),
], dir=DIR_OUT)
-libxl_channelinfo = Struct("channelinfo", [
+ibxl_channelinfo = Struct("channelinfo", [
("backend", string),
("backend_id", uint32),
("frontend", string),
@@ -910,15 +943,6 @@ libxl_pcitopology = Struct("pcitopology", [
("node", uint32),
], dir=DIR_OUT)
-libxl_sched_credit_params = Struct("sched_credit_params", [
- ("tslice_ms", integer),
- ("ratelimit_us", integer),
- ], dispose_fn=None)
-
-libxl_sched_credit2_params = Struct("sched_credit2_params", [
- ("ratelimit_us", integer),
- ], dispose_fn=None)
-
libxl_domain_remus_info = Struct("domain_remus_info",[
("interval", integer),
("allow_unsafe", libxl_defbool),
diff --git a/tools/xl/xl_cpupool.c b/tools/xl/xl_cpupool.c
index 273811b..dc419eb 100644
--- a/tools/xl/xl_cpupool.c
+++ b/tools/xl/xl_cpupool.c
@@ -43,6 +43,7 @@ int main_cpupoolcreate(int argc, char **argv)
char *name = NULL;
uint32_t poolid;
libxl_scheduler sched = 0;
+ libxl_scheduler_params sched_params;
XLU_ConfigList *cpus;
XLU_ConfigList *nodes;
int n_cpus, n_nodes, i, n;
@@ -207,16 +208,27 @@ int main_cpupoolcreate(int argc, char **argv)
} else
n_cpus = 0;
+ sched_params.u.credit2.runqueue = LIBXL_CREDIT2_RUNQUEUE_CORE;
+ if (!xlu_cfg_get_string (config, "runqueue", &buf, 0) &&
+ sched == LIBXL_SCHEDULER_CREDIT2) {
+ if ((libxl_credit2_runqueue_from_string(buf, &sched_params.u.credit2.runqueue)) < 0 ) {
+ fprintf(stderr, "Unknown runqueue option\n");
+ sched_params.u.credit2.runqueue = LIBXL_CREDIT2_RUNQUEUE_CORE; /*default CORE */
+ }
+ }
+
libxl_uuid_generate(&uuid);
printf("Using config file \"%s\"\n", config_src);
printf("cpupool name: %s\n", name);
printf("scheduler: %s\n", libxl_scheduler_to_string(sched));
+ if (sched == LIBXL_SCHEDULER_CREDIT2)
+ printf(" runq: %s\n", libxl_credit2_runqueue_to_string(sched_params.u.credit2.runqueue));
printf("number of cpus: %d\n", n_cpus);
if (!dryrun_only) {
poolid = LIBXL_CPUPOOL_POOLID_ANY;
- if (libxl_cpupool_create(ctx, name, sched, cpumap, &uuid, &poolid)) {
+ if (libxl_cpupool_create(ctx, name, sched, cpumap, &uuid, &poolid, &sched_params)) {
fprintf(stderr, "error on creating cpupool\n");
goto out_cfg;
}
@@ -587,7 +599,7 @@ int main_cpupoolnumasplit(int argc, char **argv)
xasprintf(&name, "Pool-node%d", node);
libxl_uuid_generate(&uuid);
poolid = 0;
- if (libxl_cpupool_create(ctx, name, sched, cpumap, &uuid, &poolid)) {
+ if (libxl_cpupool_create(ctx, name, sched, cpumap, &uuid, &poolid, NULL)) {
fprintf(stderr, "error on creating cpupool\n");
goto out;
}
--
2.7.4
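For illustration, a caller of the extended libxl interface could look
roughly like the sketch below (it assumes the signature and IDL types
proposed above; the union key field and error handling are omitted):

    #include <string.h>
    #include <libxl.h>

    /* Create "pool1" on Credit2 with per-core runqueues (sketch). */
    static int create_pool1(libxl_ctx *ctx, libxl_bitmap cpumap)
    {
        libxl_uuid uuid;
        uint32_t poolid = LIBXL_CPUPOOL_POOLID_ANY;
        libxl_scheduler_params params;

        memset(&params, 0, sizeof(params));
        params.u.credit2.runqueue = LIBXL_CREDIT2_RUNQUEUE_CORE;
        params.u.credit2.ratelimit_us = 1000;

        libxl_uuid_generate(&uuid);
        return libxl_cpupool_create(ctx, "pool1", LIBXL_SCHEDULER_CREDIT2,
                                    cpumap, &uuid, &poolid, &params);
    }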
* [PATCH 3/3] credit2: xen related changes to add support for runqueue per cpupool.
2017-09-12 0:45 implement runqueue per cpupool anshulmakkar
2017-09-12 0:45 ` [PATCH 1/3] credit2: libxc related changes to add support for " anshulmakkar
2017-09-12 0:45 ` [PATCH 2/3] credit2: libxl " anshulmakkar
@ 2017-09-12 0:45 ` anshulmakkar
2017-09-14 6:24 ` Juergen Gross
` (2 more replies)
2017-09-14 4:21 ` implement " Juergen Gross
3 siblings, 3 replies; 16+ messages in thread
From: anshulmakkar @ 2017-09-12 0:45 UTC (permalink / raw)
To: xen-devel
Cc: jgross, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
dario.faggioli, ian.jackson, marmarek, robert.vanvossen, tim,
josh.whitehead, mengxu, jbeulich, anshulmakkar
Handles extra scheduler configuration parameters received
at init stage.
Signed-off-by: Anshul Makkar <anshulmakkar@gmail.com>
---
xen/common/cpupool.c | 13 ++++++++-----
xen/common/sched_arinc653.c | 2 +-
xen/common/sched_credit.c | 2 +-
xen/common/sched_credit2.c | 8 +++++++-
xen/common/sched_null.c | 3 ++-
xen/common/sched_rt.c | 2 +-
xen/common/schedule.c | 11 +++++++----
xen/include/public/sysctl.h | 45 +++++++++++++++++++++++++++++----------------
xen/include/xen/sched-if.h | 3 ++-
xen/include/xen/sched.h | 4 +++-
10 files changed, 61 insertions(+), 32 deletions(-)
diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index 9998394..31bace4 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -129,12 +129,13 @@ void cpupool_put(struct cpupool *pool)
* - unknown scheduler
*/
static struct cpupool *cpupool_create(
- int poolid, unsigned int sched_id, int *perr)
+ int poolid, unsigned int sched_id,
+ xen_sysctl_sched_param_t param,
+ int *perr)
{
struct cpupool *c;
struct cpupool **q;
int last = 0;
-
*perr = -ENOMEM;
if ( (c = alloc_cpupool_struct()) == NULL )
return NULL;
@@ -171,7 +172,7 @@ static struct cpupool *cpupool_create(
}
else
{
- c->sched = scheduler_alloc(sched_id, perr);
+ c->sched = scheduler_alloc(sched_id, param, perr);
if ( c->sched == NULL )
{
spin_unlock(&cpupool_lock);
@@ -600,10 +601,11 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
case XEN_SYSCTL_CPUPOOL_OP_CREATE:
{
int poolid;
+ xen_sysctl_sched_param_t param = op->sched_param;
poolid = (op->cpupool_id == XEN_SYSCTL_CPUPOOL_PAR_ANY) ?
CPUPOOLID_NONE: op->cpupool_id;
- c = cpupool_create(poolid, op->sched_id, &ret);
+ c = cpupool_create(poolid, op->sched_id, param, &ret);
if ( c != NULL )
{
op->cpupool_id = c->cpupool_id;
@@ -798,7 +800,8 @@ static int __init cpupool_presmp_init(void)
{
int err;
void *cpu = (void *)(long)smp_processor_id();
- cpupool0 = cpupool_create(0, 0, &err);
+ xen_sysctl_sched_param_t param;
+ cpupool0 = cpupool_create(0, 0, param, &err);
BUG_ON(cpupool0 == NULL);
cpupool_put(cpupool0);
cpu_callback(&cpu_nfb, CPU_ONLINE, cpu);
diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 0b1b849..1f5d0d0 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -343,7 +343,7 @@ arinc653_sched_get(
* </ul>
*/
static int
-a653sched_init(struct scheduler *ops)
+a653sched_init(struct scheduler *ops, xen_sysctl_sched_param_t sched_param)
{
a653sched_priv_t *prv;
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 4fdaa08..44ceaf7 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -2160,7 +2160,7 @@ csched_dump(const struct scheduler *ops)
}
static int
-csched_init(struct scheduler *ops)
+csched_init(struct scheduler *ops, xen_sysctl_sched_param_t sched_param)
{
struct csched_private *prv;
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 0f93ad5..9b5bbbf 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -391,6 +391,7 @@ struct csched2_private {
unsigned int load_precision_shift; /* Precision of load calculations */
unsigned int load_window_shift; /* Lenght of load decaying window */
unsigned int ratelimit_us; /* Rate limiting for this scheduler */
+ unsigned runqueue; /* cpupool has its own type of runq */
cpumask_t active_queues; /* Runqueues with (maybe) active cpus */
struct csched2_runqueue_data *rqd; /* Data of the various runqueues */
@@ -3349,7 +3350,7 @@ csched2_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
}
static int
-csched2_init(struct scheduler *ops)
+csched2_init(struct scheduler *ops, xen_sysctl_sched_param_t sched_param)
{
int i;
struct csched2_private *prv;
@@ -3410,6 +3411,11 @@ csched2_init(struct scheduler *ops)
/* initialize ratelimit */
prv->ratelimit_us = sched_ratelimit_us;
+ /* not need of type checking here if sched_para.type = credit2. Code
+ * block is here means we have type as credit2.
+ */
+ prv->runqueue = sched_param.u.sched_credit2.runq;
+
prv->load_precision_shift = opt_load_precision_shift;
prv->load_window_shift = opt_load_window_shift - LOADAVG_GRANULARITY_SHIFT;
ASSERT(opt_load_window_shift > 0);
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index b4a24ba..cab45c7 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -135,7 +135,8 @@ static inline bool vcpu_check_affinity(struct vcpu *v, unsigned int cpu,
return cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu));
}
-static int null_init(struct scheduler *ops)
+static int null_init(struct scheduler *ops,
+ xen_sysctl_sched_param_t sched_param)
{
struct null_private *prv;
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 0ac5816..6593489 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -624,7 +624,7 @@ rt_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
* Init/Free related code
*/
static int
-rt_init(struct scheduler *ops)
+rt_init(struct scheduler *ops, xen_sysctl_sched_param_t sched_param)
{
int rc = -ENOMEM;
struct rt_private *prv = xzalloc(struct rt_private);
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 8827921..8945cf0 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1676,6 +1676,7 @@ void __init scheduler_init(void)
{
struct domain *idle_domain;
int i;
+ xen_sysctl_sched_param_t param;
open_softirq(SCHEDULE_SOFTIRQ, schedule);
@@ -1704,9 +1705,9 @@ void __init scheduler_init(void)
if ( cpu_schedule_up(0) )
BUG();
register_cpu_notifier(&cpu_schedule_nfb);
-
+
printk("Using scheduler: %s (%s)\n", ops.name, ops.opt_name);
- if ( SCHED_OP(&ops, init) )
+ if ( SCHED_OP(&ops, init, param) )
panic("scheduler returned error on init");
if ( sched_ratelimit_us &&
@@ -1835,7 +1836,9 @@ struct scheduler *scheduler_get_default(void)
return &ops;
}
-struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr)
+struct scheduler *scheduler_alloc(unsigned int sched_id,
+ xen_sysctl_sched_param_t param,
+ int *perr)
{
int i;
struct scheduler *sched;
@@ -1851,7 +1854,7 @@ struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr)
if ( (sched = xmalloc(struct scheduler)) == NULL )
return NULL;
memcpy(sched, schedulers[i], sizeof(*sched));
- if ( (*perr = SCHED_OP(sched, init)) != 0 )
+ if ( (*perr = SCHED_OP(sched, init, param)) != 0 )
{
xfree(sched);
sched = NULL;
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 7830b98..1182422 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -538,7 +538,34 @@ struct xen_sysctl_numainfo {
typedef struct xen_sysctl_numainfo xen_sysctl_numainfo_t;
DEFINE_XEN_GUEST_HANDLE(xen_sysctl_numainfo_t);
+struct xen_sysctl_credit_schedule {
+ /* Length of timeslice in milliseconds */
+#define XEN_SYSCTL_CSCHED_TSLICE_MAX 1000
+#define XEN_SYSCTL_CSCHED_TSLICE_MIN 1
+ unsigned tslice_ms;
+ unsigned ratelimit_us;
+};
+typedef struct xen_sysctl_credit_schedule xen_sysctl_credit_schedule_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_credit_schedule_t);
+
+struct xen_sysctl_credit2_schedule {
+ unsigned ratelimit_us;
+ unsigned runq;
+};
+typedef struct xen_sysctl_credit2_schedule xen_sysctl_credit2_schedule_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_credit2_schedule_t);
+
+
/* XEN_SYSCTL_cpupool_op */
+struct xen_sysctl_sched_param {
+ union {
+ struct xen_sysctl_credit2_schedule sched_credit2;
+ struct xen_sysctl_credit_schedule sched_credit;
+ } u;
+};
+typedef struct xen_sysctl_sched_param xen_sysctl_sched_param_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_sched_param_t);
+
#define XEN_SYSCTL_CPUPOOL_OP_CREATE 1 /* C */
#define XEN_SYSCTL_CPUPOOL_OP_DESTROY 2 /* D */
#define XEN_SYSCTL_CPUPOOL_OP_INFO 3 /* I */
@@ -555,6 +582,8 @@ struct xen_sysctl_cpupool_op {
uint32_t cpu; /* IN: AR */
uint32_t n_dom; /* OUT: I */
struct xenctl_bitmap cpumap; /* OUT: IF */
+ /* IN: scheduler param relevant for cpupool */
+ xen_sysctl_sched_param_t sched_param;
};
typedef struct xen_sysctl_cpupool_op xen_sysctl_cpupool_op_t;
DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpupool_op_t);
@@ -630,22 +659,6 @@ DEFINE_XEN_GUEST_HANDLE(xen_sysctl_arinc653_schedule_t);
#define XEN_SYSCTL_SCHED_RATELIMIT_MAX 500000
#define XEN_SYSCTL_SCHED_RATELIMIT_MIN 100
-struct xen_sysctl_credit_schedule {
- /* Length of timeslice in milliseconds */
-#define XEN_SYSCTL_CSCHED_TSLICE_MAX 1000
-#define XEN_SYSCTL_CSCHED_TSLICE_MIN 1
- unsigned tslice_ms;
- unsigned ratelimit_us;
-};
-typedef struct xen_sysctl_credit_schedule xen_sysctl_credit_schedule_t;
-DEFINE_XEN_GUEST_HANDLE(xen_sysctl_credit_schedule_t);
-
-struct xen_sysctl_credit2_schedule {
- unsigned ratelimit_us;
-};
-typedef struct xen_sysctl_credit2_schedule xen_sysctl_credit2_schedule_t;
-DEFINE_XEN_GUEST_HANDLE(xen_sysctl_credit2_schedule_t);
-
/* XEN_SYSCTL_scheduler_op */
/* Set or get info? */
#define XEN_SYSCTL_SCHEDOP_putinfo 0
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index c4a4935..2272ea0 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -136,7 +136,8 @@ struct scheduler {
int (*global_init) (void);
- int (*init) (struct scheduler *);
+ int (*init) (struct scheduler *,
+ xen_sysctl_sched_param_t);
void (*deinit) (struct scheduler *);
void (*free_vdata) (const struct scheduler *, void *);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 5b8f8c6..43b7dae 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -832,7 +832,9 @@ void cpu_init(void);
struct scheduler;
struct scheduler *scheduler_get_default(void);
-struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
+struct scheduler *scheduler_alloc(unsigned int sched_id,
+ xen_sysctl_sched_param_t param,
+ int *perr);
void scheduler_free(struct scheduler *sched);
int schedule_cpu_switch(unsigned int cpu, struct cpupool *c);
void vcpu_force_reschedule(struct vcpu *v);
--
2.7.4
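To make the new data flow concrete, here is a rough sketch (not part of the
patch) of how the credit2 init path could sanity-check the per-pool runqueue
value instead of trusting the toolstack blindly; the OPT_RUNQUEUE_* names
mirror the existing "credit2_runqueue" boot-parameter handling in
sched_credit2.c and are only an assumption here:

    /* Sketch: validate the toolstack-provided runqueue arrangement. */
    static unsigned int
    csched2_pick_runqueue(const xen_sysctl_sched_param_t *sched_param)
    {
        unsigned int runq = sched_param->u.sched_credit2.runq;

        /* Fall back to per-socket runqueues on out-of-range input. */
        if ( runq > OPT_RUNQUEUE_ALL )
            runq = OPT_RUNQUEUE_SOCKET;

        return runq;
    }

    /* csched2_init() would then do: prv->runqueue = csched2_pick_runqueue(&sched_param); */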
* Re: implement runqueue per cpupool
2017-09-12 0:45 implement runqueue per cpupool anshulmakkar
` (2 preceding siblings ...)
2017-09-12 0:45 ` [PATCH 3/3] credit2: xen " anshulmakkar
@ 2017-09-14 4:21 ` Juergen Gross
3 siblings, 0 replies; 16+ messages in thread
From: Juergen Gross @ 2017-09-14 4:21 UTC (permalink / raw)
To: anshulmakkar, xen-devel
Cc: sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
dario.faggioli, ian.jackson, marmarek, robert.vanvossen, tim,
josh.whitehead, mengxu, jbeulich
On 12/09/17 02:45, anshulmakkar wrote:
> The attached patch series introduces the concept of a runqueue
> per cpupool.
>
> A "runqueue" configuration option can be specified with the xl command
> when configuring a cpupool. It defines the basis for grouping the
> cpus (cpu, core, socket, all) of that cpupool.
>
> The series consists of the following patches:
> [PATCH 1/3]: libxc related changes to add support for runqueue per cpupool
> [PATCH 2/3]: libxl related changes to add support for runqueue per cpupool
> [PATCH 3/3]: xen related changes.
>
> Along similar lines we can also have a per-pool runqueue configuration
> parameter for the Credit scheduler. I plan to send a separate patch series
> for the Credit-specific implementation.
>
This series won't compile when patch 3 isn't applied.
You need to split patch 3 into two: one patch extending the hypercall
interface, which should go in first, and the rest of the hypervisor code
as the last patch.
Juergen
* Re: [PATCH 3/3] credit2: xen related changes to add support for runqueue per cpupool.
2017-09-12 0:45 ` [PATCH 3/3] credit2: xen " anshulmakkar
@ 2017-09-14 6:24 ` Juergen Gross
2017-09-14 10:03 ` Jan Beulich
2017-09-14 14:08 ` Dario Faggioli
2 siblings, 0 replies; 16+ messages in thread
From: Juergen Gross @ 2017-09-14 6:24 UTC (permalink / raw)
To: anshulmakkar, xen-devel
Cc: sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
dario.faggioli, ian.jackson, marmarek, robert.vanvossen, tim,
josh.whitehead, mengxu, jbeulich
On 12/09/17 02:45, anshulmakkar wrote:
> Handles extra scheduler configuration parameters received
> at init stage.
>
> Signed-off-by: Anshul Makkar <anshulmakkar@gmail.com>
> ---
> xen/common/cpupool.c | 13 ++++++++-----
> xen/common/sched_arinc653.c | 2 +-
> xen/common/sched_credit.c | 2 +-
> xen/common/sched_credit2.c | 8 +++++++-
> xen/common/sched_null.c | 3 ++-
> xen/common/sched_rt.c | 2 +-
> xen/common/schedule.c | 11 +++++++----
> xen/include/public/sysctl.h | 45 +++++++++++++++++++++++++++++----------------
> xen/include/xen/sched-if.h | 3 ++-
> xen/include/xen/sched.h | 4 +++-
> 10 files changed, 61 insertions(+), 32 deletions(-)
>
> diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
> index 9998394..31bace4 100644
> --- a/xen/common/cpupool.c
> +++ b/xen/common/cpupool.c
> @@ -129,12 +129,13 @@ void cpupool_put(struct cpupool *pool)
> * - unknown scheduler
> */
> static struct cpupool *cpupool_create(
> - int poolid, unsigned int sched_id, int *perr)
> + int poolid, unsigned int sched_id,
> + xen_sysctl_sched_param_t param,
I'd prefer param to be passed via a pointer. Passing a struct as a
function parameter isn't good practice, especially in case it is passed
down multiple levels.
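Something along these lines (just a sketch), also allowing NULL to mean
"use the default parameters":

    static struct cpupool *cpupool_create(
        int poolid, unsigned int sched_id,
        const xen_sysctl_sched_param_t *param,   /* NULL == defaults */
        int *perr)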
> + int *perr)
> {
> struct cpupool *c;
> struct cpupool **q;
> int last = 0;
> -
> *perr = -ENOMEM;
> if ( (c = alloc_cpupool_struct()) == NULL )
> return NULL;
> @@ -171,7 +172,7 @@ static struct cpupool *cpupool_create(
> }
> else
> {
> - c->sched = scheduler_alloc(sched_id, perr);
> + c->sched = scheduler_alloc(sched_id, param, perr);
> if ( c->sched == NULL )
> {
> spin_unlock(&cpupool_lock);
> @@ -600,10 +601,11 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
> case XEN_SYSCTL_CPUPOOL_OP_CREATE:
> {
> int poolid;
> + xen_sysctl_sched_param_t param = op->sched_param;
>
> poolid = (op->cpupool_id == XEN_SYSCTL_CPUPOOL_PAR_ANY) ?
> CPUPOOLID_NONE: op->cpupool_id;
> - c = cpupool_create(poolid, op->sched_id, &ret);
> + c = cpupool_create(poolid, op->sched_id, param, &ret);
> if ( c != NULL )
> {
> op->cpupool_id = c->cpupool_id;
> @@ -798,7 +800,8 @@ static int __init cpupool_presmp_init(void)
> {
> int err;
> void *cpu = (void *)(long)smp_processor_id();
> - cpupool0 = cpupool_create(0, 0, &err);
> + xen_sysctl_sched_param_t param;
> + cpupool0 = cpupool_create(0, 0, param, &err);
param is uninitialized here!
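E.g. (sketch; with the pointer variant suggested above, NULL could simply
be passed here instead):

    xen_sysctl_sched_param_t param;

    memset(&param, 0, sizeof(param));
    cpupool0 = cpupool_create(0, 0, param, &err);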
> BUG_ON(cpupool0 == NULL);
> cpupool_put(cpupool0);
> cpu_callback(&cpu_nfb, CPU_ONLINE, cpu);
> diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
> index 0b1b849..1f5d0d0 100644
> --- a/xen/common/sched_arinc653.c
> +++ b/xen/common/sched_arinc653.c
> @@ -343,7 +343,7 @@ arinc653_sched_get(
> * </ul>
> */
> static int
> -a653sched_init(struct scheduler *ops)
> +a653sched_init(struct scheduler *ops, xen_sysctl_sched_param_t sched_param)
> {
> a653sched_priv_t *prv;
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> index 4fdaa08..44ceaf7 100644
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -2160,7 +2160,7 @@ csched_dump(const struct scheduler *ops)
> }
>
> static int
> -csched_init(struct scheduler *ops)
> +csched_init(struct scheduler *ops, xen_sysctl_sched_param_t sched_param)
> {
> struct csched_private *prv;
>
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index 0f93ad5..9b5bbbf 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -391,6 +391,7 @@ struct csched2_private {
> unsigned int load_precision_shift; /* Precision of load calculations */
> unsigned int load_window_shift; /* Lenght of load decaying window */
> unsigned int ratelimit_us; /* Rate limiting for this scheduler */
> + unsigned runqueue; /* cpupool has its own type of runq */
>
> cpumask_t active_queues; /* Runqueues with (maybe) active cpus */
> struct csched2_runqueue_data *rqd; /* Data of the various runqueues */
> @@ -3349,7 +3350,7 @@ csched2_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
> }
>
> static int
> -csched2_init(struct scheduler *ops)
> +csched2_init(struct scheduler *ops, xen_sysctl_sched_param_t sched_param)
> {
> int i;
> struct csched2_private *prv;
> @@ -3410,6 +3411,11 @@ csched2_init(struct scheduler *ops)
> /* initialize ratelimit */
> prv->ratelimit_us = sched_ratelimit_us;
>
> + /* not need of type checking here if sched_para.type = credit2. Code
> + * block is here means we have type as credit2.
> + */
> + prv->runqueue = sched_param.u.sched_credit2.runq;
> +
> prv->load_precision_shift = opt_load_precision_shift;
> prv->load_window_shift = opt_load_window_shift - LOADAVG_GRANULARITY_SHIFT;
> ASSERT(opt_load_window_shift > 0);
> diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
> index b4a24ba..cab45c7 100644
> --- a/xen/common/sched_null.c
> +++ b/xen/common/sched_null.c
> @@ -135,7 +135,8 @@ static inline bool vcpu_check_affinity(struct vcpu *v, unsigned int cpu,
> return cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu));
> }
>
> -static int null_init(struct scheduler *ops)
> +static int null_init(struct scheduler *ops,
> + xen_sysctl_sched_param_t sched_param)
> {
> struct null_private *prv;
>
> diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> index 0ac5816..6593489 100644
> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -624,7 +624,7 @@ rt_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
> * Init/Free related code
> */
> static int
> -rt_init(struct scheduler *ops)
> +rt_init(struct scheduler *ops, xen_sysctl_sched_param_t sched_param)
> {
> int rc = -ENOMEM;
> struct rt_private *prv = xzalloc(struct rt_private);
> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> index 8827921..8945cf0 100644
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -1676,6 +1676,7 @@ void __init scheduler_init(void)
> {
> struct domain *idle_domain;
> int i;
> + xen_sysctl_sched_param_t param;
>
> open_softirq(SCHEDULE_SOFTIRQ, schedule);
>
> @@ -1704,9 +1705,9 @@ void __init scheduler_init(void)
> if ( cpu_schedule_up(0) )
> BUG();
> register_cpu_notifier(&cpu_schedule_nfb);
> -
> +
unrelated white space change
> printk("Using scheduler: %s (%s)\n", ops.name, ops.opt_name);
> - if ( SCHED_OP(&ops, init) )
> + if ( SCHED_OP(&ops, init, param) )
Again param not initialized.
> panic("scheduler returned error on init");
>
> if ( sched_ratelimit_us &&
> @@ -1835,7 +1836,9 @@ struct scheduler *scheduler_get_default(void)
> return &ops;
> }
>
> -struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr)
> +struct scheduler *scheduler_alloc(unsigned int sched_id,
> + xen_sysctl_sched_param_t param,
> + int *perr)
> {
> int i;
> struct scheduler *sched;
> @@ -1851,7 +1854,7 @@ struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr)
> if ( (sched = xmalloc(struct scheduler)) == NULL )
> return NULL;
> memcpy(sched, schedulers[i], sizeof(*sched));
> - if ( (*perr = SCHED_OP(sched, init)) != 0 )
> + if ( (*perr = SCHED_OP(sched, init, param)) != 0 )
> {
> xfree(sched);
> sched = NULL;
> diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
> index 7830b98..1182422 100644
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -538,7 +538,34 @@ struct xen_sysctl_numainfo {
> typedef struct xen_sysctl_numainfo xen_sysctl_numainfo_t;
> DEFINE_XEN_GUEST_HANDLE(xen_sysctl_numainfo_t);
>
> +struct xen_sysctl_credit_schedule {
> + /* Length of timeslice in milliseconds */
> +#define XEN_SYSCTL_CSCHED_TSLICE_MAX 1000
> +#define XEN_SYSCTL_CSCHED_TSLICE_MIN 1
> + unsigned tslice_ms;
> + unsigned ratelimit_us;
> +};
> +typedef struct xen_sysctl_credit_schedule xen_sysctl_credit_schedule_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_sysctl_credit_schedule_t);
> +
> +struct xen_sysctl_credit2_schedule {
> + unsigned ratelimit_us;
> + unsigned runq;
While moving those structs mind to use unsigned int?
Juergen
* Re: [PATCH 2/3] credit2: libxl related changes to add support for runqueue per cpupool.
2017-09-12 0:45 ` [PATCH 2/3] credit2: libxl " anshulmakkar
@ 2017-09-14 6:37 ` Juergen Gross
2017-11-16 21:10 ` Anshul Makkar
0 siblings, 1 reply; 16+ messages in thread
From: Juergen Gross @ 2017-09-14 6:37 UTC (permalink / raw)
To: anshulmakkar, xen-devel
Cc: sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
dario.faggioli, ian.jackson, marmarek, robert.vanvossen, tim,
josh.whitehead, mengxu, jbeulich
On 12/09/17 02:45, anshulmakkar wrote:
> Introduces scheduler-specific parameters at the libxl level, which are
> passed on to libxc, e.g. the runqueue option for credit2.
>
> Signed-off-by: Anshul Makkar <anshulmakkar@gmail.com>
> ---
> tools/libxl/libxl.h | 2 +-
> tools/libxl/libxl_cpupool.c | 15 +++++++++++++--
> tools/libxl/libxl_types.idl | 46 ++++++++++++++++++++++++++++++++++-----------
> tools/xl/xl_cpupool.c | 16 ++++++++++++++--
> 4 files changed, 63 insertions(+), 16 deletions(-)
>
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 91408b4..6617c64 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -2150,7 +2150,7 @@ int libxl_get_freecpus(libxl_ctx *ctx, libxl_bitmap *cpumap);
> int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
> libxl_scheduler sched,
> libxl_bitmap cpumap, libxl_uuid *uuid,
> - uint32_t *poolid);
> + uint32_t *poolid, const libxl_scheduler_params *sched_param);
You are modifying an exported libxl function. This requires
compatibility hooks (look e.g. how this is handled in libxl.h for
functions like libxl_get_memory_target)
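The usual pattern is a static inline compatibility wrapper in libxl.h,
guarded by LIBXL_API_VERSION, roughly like the sketch below (the version
numbers are placeholders):

    #if defined(LIBXL_API_VERSION) && LIBXL_API_VERSION < 0x041000
    static inline int libxl_cpupool_create_0x040900(libxl_ctx *ctx,
                                                    const char *name,
                                                    libxl_scheduler sched,
                                                    libxl_bitmap cpumap,
                                                    libxl_uuid *uuid,
                                                    uint32_t *poolid)
    {
        /* Old callers get the default scheduler parameters. */
        return libxl_cpupool_create(ctx, name, sched, cpumap, uuid, poolid, NULL);
    }
    #define libxl_cpupool_create libxl_cpupool_create_0x040900
    #endif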
> int libxl_cpupool_destroy(libxl_ctx *ctx, uint32_t poolid);
> int libxl_cpupool_rename(libxl_ctx *ctx, const char *name, uint32_t poolid);
> int libxl_cpupool_cpuadd(libxl_ctx *ctx, uint32_t poolid, int cpu);
> diff --git a/tools/libxl/libxl_cpupool.c b/tools/libxl/libxl_cpupool.c
> index 85b0688..e3ce7b3 100644
> --- a/tools/libxl/libxl_cpupool.c
> +++ b/tools/libxl/libxl_cpupool.c
> @@ -130,7 +130,7 @@ int libxl_get_freecpus(libxl_ctx *ctx, libxl_bitmap *cpumap)
> int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
> libxl_scheduler sched,
> libxl_bitmap cpumap, libxl_uuid *uuid,
> - uint32_t *poolid)
> + uint32_t *poolid, const libxl_scheduler_params *sched_params)
> {
> GC_INIT(ctx);
> int rc;
> @@ -138,6 +138,7 @@ int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
> xs_transaction_t t;
> char *uuid_string;
> uint32_t xcpoolid;
> + xc_schedparam_t xc_sched_param;
>
> /* Accept '0' as 'any poolid' for backwards compatibility */
> if ( *poolid == LIBXL_CPUPOOL_POOLID_ANY
> @@ -151,8 +152,18 @@ int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
> GC_FREE;
> return ERROR_NOMEM;
> }
> + if (sched_params)
> + {
> + xc_sched_param.u.sched_credit2.ratelimit_us =
> + sched_params->u.credit2.ratelimit_us;
> + xc_sched_param.u.sched_credit2.runq = sched_params->u.credit2.runqueue;
> + xc_sched_param.u.sched_credit.tslice_ms = sched_params->u.credit.tslice_ms;
> + xc_sched_param.u.sched_credit.ratelimit_us = sched_params->u.credit.ratelimit_us;
Don't you need some input parameter validation here?
> + }
> + else
> + xc_sched_param.u.sched_credit2.runq = LIBXL_CREDIT2_RUNQUEUE_DEFAULT;
So you are passing the LIBXL defines down to the hypervisor expecting
they match. I think this is a major layering violation.
>
> - rc = xc_cpupool_create(ctx->xch, &xcpoolid, sched);
> + rc = xc_cpupool_create(ctx->xch, &xcpoolid, sched, &xc_sched_param);
> if (rc) {
> LOGEV(ERROR, rc, "Could not create cpupool");
> GC_FREE;
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 173d70a..f25429d 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -194,6 +194,16 @@ libxl_scheduler = Enumeration("scheduler", [
> (9, "null"),
> ])
>
> +# consistent with sched_credit2.c
> +libxl_credit2_runqueue = Enumeration("credit2_runqueue", [
> + (0, "CPU"),
> + (1, "CORE"),
> + (2, "SOCKET"),
> + (3, "NODE"),
> + (4, "ALL"),
> + (5, "DEFAULT"),
> + ])
> +
> # Consistent with SHUTDOWN_* in sched.h (apart from UNKNOWN)
> libxl_shutdown_reason = Enumeration("shutdown_reason", [
> (-1, "unknown"),
> @@ -326,15 +336,38 @@ libxl_dominfo = Struct("dominfo",[
> ("domain_type", libxl_domain_type),
> ], dir=DIR_OUT)
>
> +libxl_sched_credit_params = Struct("sched_credit_params", [
> + ("tslice_ms", integer),
> + ("ratelimit_us", integer),
> + ], dispose_fn=None)
> +
> +libxl_sched_credit2_params = Struct("sched_credit2_params", [
> + ("ratelimit_us", integer),
> + ("runqueue", libxl_credit2_runqueue),
> + ], dispose_fn=None)
> +
> +libxl_scheduler_params = Struct("scheduler_params", [
> + ("u", KeyedUnion(None,libxl_scheduler_tpye "scheduler_type",
> + [("credit2", libxl_sched_credit2_params),
> + ("credit", libxl_sched_credit_params),
> + ("null", None),
> + ("arinc653", None),
> + ("rtds", None),
> + ("unknown", None),
> + ("sedf", None),
> + ])),
> + ])
> +
> libxl_cpupoolinfo = Struct("cpupoolinfo", [
> ("poolid", uint32),
> ("pool_name", string),
> ("sched", libxl_scheduler),
> ("n_dom", uint32),
> - ("cpumap", libxl_bitmap)
> + ("cpumap", libxl_bitmap),
> + ("sched_param", libxl_scheduler_params),
You need a LIBXL_HAVE_* define in libxl.h to indicate presence of the
new struct member.
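I.e. something like (the name is only an example):

    /* libxl.h */
    #define LIBXL_HAVE_CPUPOOLINFO_SCHED_PARAMS 1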
Juergen
* Re: [PATCH 1/3] credit2: libxc related changes to add support for runqueue per cpupool.
2017-09-12 0:45 ` [PATCH 1/3] credit2: libxc related changes to add support for " anshulmakkar
@ 2017-09-14 6:42 ` Juergen Gross
2017-09-14 12:58 ` Dario Faggioli
2017-09-14 13:28 ` Dario Faggioli
1 sibling, 1 reply; 16+ messages in thread
From: Juergen Gross @ 2017-09-14 6:42 UTC (permalink / raw)
To: anshulmakkar, xen-devel
Cc: sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
dario.faggioli, ian.jackson, marmarek, robert.vanvossen, tim,
josh.whitehead, mengxu, jbeulich
On 12/09/17 02:45, anshulmakkar wrote:
> libxc receives scheduler-specific configuration parameters from
> libxl.
>
> Signed-off-by: Anshul Makkar <anshulmakkar@gmail.com>
> ---
> tools/libxc/include/xenctrl.h | 6 +++++-
> tools/libxc/xc_cpupool.c | 4 +++-
> tools/python/xen/lowlevel/xc/xc.c | 3 ++-
> 3 files changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index 43151cb..e2157e9 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h
> @@ -1077,17 +1077,21 @@ typedef struct xc_cpupoolinfo {
>
> #define XC_CPUPOOL_POOLID_ANY 0xFFFFFFFF
>
> +typedef xen_sysctl_sched_param_t xc_schedparam_t;
> +
> /**
> * Create a new cpupool.
> *
> * @parm xc_handle a handle to an open hypervisor interface
> * @parm ppoolid pointer to the new cpupool id (in/out)
> * @parm sched_id id of scheduler to use for pool
> + * @parm sched_param parameter of the scheduler of the cpupool eg. runq for credit2
I would drop "eg. runq for credit2"
> * return 0 on success, -1 on failure
> */
> int xc_cpupool_create(xc_interface *xch,
> uint32_t *ppoolid,
> - uint32_t sched_id);
> + uint32_t sched_id,
> + xc_schedparam_t * sched_param);
>
> /**
> * Destroy a cpupool. Pool must be unused and have no cpu assigned.
> diff --git a/tools/libxc/xc_cpupool.c b/tools/libxc/xc_cpupool.c
> index fbd8cc9..fb2d183 100644
> --- a/tools/libxc/xc_cpupool.c
> +++ b/tools/libxc/xc_cpupool.c
> @@ -36,7 +36,8 @@ static int do_sysctl_save(xc_interface *xch, struct xen_sysctl *sysctl)
>
> int xc_cpupool_create(xc_interface *xch,
> uint32_t *ppoolid,
> - uint32_t sched_id)
> + uint32_t sched_id,
> + xc_schedparam_t * sched_params)
Coding style (omit the space after "*").
> {
> int err;
> DECLARE_SYSCTL;
> @@ -46,6 +47,7 @@ int xc_cpupool_create(xc_interface *xch,
> sysctl.u.cpupool_op.cpupool_id = (*ppoolid == XC_CPUPOOL_POOLID_ANY) ?
> XEN_SYSCTL_CPUPOOL_PAR_ANY : *ppoolid;
> sysctl.u.cpupool_op.sched_id = sched_id;
> + sysctl.u.cpupool_op.sched_param = *sched_params;
> if ( (err = do_sysctl_save(xch, &sysctl)) != 0 )
> return err;
>
> diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
> index aa9f8e4..a83a23f 100644
> --- a/tools/python/xen/lowlevel/xc/xc.c
> +++ b/tools/python/xen/lowlevel/xc/xc.c
> @@ -1704,6 +1704,7 @@ static PyObject *pyxc_cpupool_create(XcObject *self,
> PyObject *kwds)
> {
> uint32_t cpupool = XC_CPUPOOL_POOLID_ANY, sched = XEN_SCHEDULER_CREDIT;
> + xc_schedparam_t param;
>
> static char *kwd_list[] = { "pool", "sched", NULL };
This needs to be extended for the sched_params, or you need to set sane
default values in param in case you don't want to support them in the
python bindings.
Another possibility would be to drop the cpupool python bindings
completely (which I would prefer, TBH).
Juergen
* Re: [PATCH 3/3] credit2: xen related changes to add support for runqueue per cpupool.
2017-09-12 0:45 ` [PATCH 3/3] credit2: xen " anshulmakkar
2017-09-14 6:24 ` Juergen Gross
@ 2017-09-14 10:03 ` Jan Beulich
2017-09-14 14:08 ` Dario Faggioli
2 siblings, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2017-09-14 10:03 UTC (permalink / raw)
To: anshulmakkar
Cc: Juergen Gross, tim, sstabellini, wei.liu2, George.Dunlap,
andrew.cooper3, dario.faggioli, ian.jackson, marmarek,
robert.vanvossen, xen-devel, josh.whitehead, mengxu
>>> On 12.09.17 at 02:45, <anshulmakkar@gmail.com> wrote:
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -538,7 +538,34 @@ struct xen_sysctl_numainfo {
> typedef struct xen_sysctl_numainfo xen_sysctl_numainfo_t;
> DEFINE_XEN_GUEST_HANDLE(xen_sysctl_numainfo_t);
>
> +struct xen_sysctl_credit_schedule {
> + /* Length of timeslice in milliseconds */
> +#define XEN_SYSCTL_CSCHED_TSLICE_MAX 1000
> +#define XEN_SYSCTL_CSCHED_TSLICE_MIN 1
> + unsigned tslice_ms;
> + unsigned ratelimit_us;
> +};
> +typedef struct xen_sysctl_credit_schedule xen_sysctl_credit_schedule_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_sysctl_credit_schedule_t);
No new unnecessary typedefs and handles please, now that a patch removing
those is almost ready to go in.
Jan
* Re: [PATCH 1/3] credit2: libxc related changes to add support for runqueue per cpupool.
2017-09-14 6:42 ` Juergen Gross
@ 2017-09-14 12:58 ` Dario Faggioli
2019-01-17 16:10 ` anshul
0 siblings, 1 reply; 16+ messages in thread
From: Dario Faggioli @ 2017-09-14 12:58 UTC (permalink / raw)
To: Juergen Gross, anshulmakkar, xen-devel
Cc: sstabellini, wei.liu2, George.Dunlap, andrew.cooper3, ian.jackson,
marmarek, robert.vanvossen, tim, josh.whitehead, mengxu, jbeulich
On Thu, 2017-09-14 at 08:42 +0200, Juergen Gross wrote:
> > --- a/tools/libxc/include/xenctrl.h
> > +++ b/tools/libxc/include/xenctrl.h
> > @@ -1077,17 +1077,21 @@ typedef struct xc_cpupoolinfo {
> >
> > #define XC_CPUPOOL_POOLID_ANY 0xFFFFFFFF
> >
> > +typedef xen_sysctl_sched_param_t xc_schedparam_t;
> > +
> > /**
> > * Create a new cpupool.
> > *
> > * @parm xc_handle a handle to an open hypervisor interface
> > * @parm ppoolid pointer to the new cpupool id (in/out)
> > * @parm sched_id id of scheduler to use for pool
> > + * @parm sched_param parameter of the scheduler of the cpupool eg.
> > runq for credit2
>
> I would drop "eg. runq for credit2"
>
+1
> > --- a/tools/python/xen/lowlevel/xc/xc.c
> > +++ b/tools/python/xen/lowlevel/xc/xc.c
> > @@ -1704,6 +1704,7 @@ static PyObject *pyxc_cpupool_create(XcObject
> > *self,
> > PyObject *kwds)
> > {
> > uint32_t cpupool = XC_CPUPOOL_POOLID_ANY, sched =
> > XEN_SCHEDULER_CREDIT;
> > + xc_schedparam_t param;
> >
> > static char *kwd_list[] = { "pool", "sched", NULL };
>
> [..]
> Another possibility would be to drop the cpupool python bindings
> completely (which I would prefer, TBH).
>
+1
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
* Re: [PATCH 1/3] credit2: libxc related changes to add support for runqueue per cpupool.
2017-09-12 0:45 ` [PATCH 1/3] credit2: libxc related changes to add support for " anshulmakkar
2017-09-14 6:42 ` Juergen Gross
@ 2017-09-14 13:28 ` Dario Faggioli
1 sibling, 0 replies; 16+ messages in thread
From: Dario Faggioli @ 2017-09-14 13:28 UTC (permalink / raw)
To: anshulmakkar, xen-devel; +Cc: jgross, George.Dunlap, ian.jackson, Wei Liu
[Trimming the Cc-list a bit]
On Tue, 2017-09-12 at 01:45 +0100, anshulmakkar wrote:
> libxc receives scheduler-specific configuration parameters from
> libxl.
>
> Signed-off-by: Anshul Makkar <anshulmakkar@gmail.com>
>
Apart from what Juergen said (including the fact that the series won't
compile with this patch at the front), this patch looks fine to me.
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
* Re: [PATCH 3/3] credit2: xen related changes to add support for runqueue per cpupool.
2017-09-12 0:45 ` [PATCH 3/3] credit2: xen " anshulmakkar
2017-09-14 6:24 ` Juergen Gross
2017-09-14 10:03 ` Jan Beulich
@ 2017-09-14 14:08 ` Dario Faggioli
2 siblings, 0 replies; 16+ messages in thread
From: Dario Faggioli @ 2017-09-14 14:08 UTC (permalink / raw)
To: anshulmakkar, xen-devel
Cc: jgross, sstabellini, wei.liu2, George.Dunlap, andrew.cooper3,
ian.jackson, marmarek, robert.vanvossen, tim, josh.whitehead,
mengxu, jbeulich
On Tue, 2017-09-12 at 01:45 +0100, anshulmakkar wrote:
> --- a/xen/common/cpupool.c
> +++ b/xen/common/cpupool.c
> @@ -129,12 +129,13 @@ void cpupool_put(struct cpupool *pool)
> * - unknown scheduler
> */
> static struct cpupool *cpupool_create(
> - int poolid, unsigned int sched_id, int *perr)
> + int poolid, unsigned int sched_id,
> + xen_sysctl_sched_param_t param,
> + int *perr)
>
I second Juergen's opinion that, as much as possible, these
xen_sysctl_sched_param arguments should be passed around functions as
(const?) pointers.
> {
> struct cpupool *c;
> struct cpupool **q;
> int last = 0;
> -
Spurious blank line deletion.
> *perr = -ENOMEM;
> if ( (c = alloc_cpupool_struct()) == NULL )
> return NULL;
> @@ -600,10 +601,11 @@ int cpupool_do_sysctl(struct
> xen_sysctl_cpupool_op *op)
> case XEN_SYSCTL_CPUPOOL_OP_CREATE:
> {
> int poolid;
> + xen_sysctl_sched_param_t param = op->sched_param;
>
> poolid = (op->cpupool_id == XEN_SYSCTL_CPUPOOL_PAR_ANY) ?
> CPUPOOLID_NONE: op->cpupool_id;
> - c = cpupool_create(poolid, op->sched_id, &ret);
> + c = cpupool_create(poolid, op->sched_id, param, &ret);
>
Why do you need the 'param' temporary variable?
> @@ -798,7 +800,8 @@ static int __init cpupool_presmp_init(void)
> {
> int err;
> void *cpu = (void *)(long)smp_processor_id();
> - cpupool0 = cpupool_create(0, 0, &err);
> + xen_sysctl_sched_param_t param;
> + cpupool0 = cpupool_create(0, 0, param, &err);
>
And in fact, if you use pointers, here you can pass NULL (to mean "just
use default parameters").
> BUG_ON(cpupool0 == NULL);
> cpupool_put(cpupool0);
> cpu_callback(&cpu_nfb, CPU_ONLINE, cpu);
> --- a/xen/common/sched_arinc653.c
> +++ b/xen/common/sched_arinc653.c
> @@ -343,7 +343,7 @@ arinc653_sched_get(
> * </ul>
> */
> static int
> -a653sched_init(struct scheduler *ops)
> +a653sched_init(struct scheduler *ops, xen_sysctl_sched_param_t
> sched_param)
> {
> a653sched_priv_t *prv;
>
And here, and in the other schedulers that don't take parameters, if you
use pointers you can check that things are being done properly by putting
an
ASSERT(sched_param == NULL);
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -3410,6 +3411,11 @@ csched2_init(struct scheduler *ops)
> /* initialize ratelimit */
> prv->ratelimit_us = sched_ratelimit_us;
>
> + /* not need of type checking here if sched_para.type = credit2.
> Code
> + * block is here means we have type as credit2.
> + */
> + prv->runqueue = sched_param.u.sched_credit2.runq;
> +
I don't understand what the comment is trying to say (and its style is
wrong: missing the opening 'wing').
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -555,6 +582,8 @@ struct xen_sysctl_cpupool_op {
> uint32_t cpu; /* IN: AR */
> uint32_t n_dom; /* OUT: I */
> struct xenctl_bitmap cpumap; /* OUT: IF */
> + /* IN: scheduler param relevant for cpupool */
> + xen_sysctl_sched_param_t sched_param;
> };
>
For the comment, follow the same convention used for other fields
(i.e., for now, 'IN: C').
We will certainly want to be able to also retrieve the scheduler
parameter set for a certain pool, at which point this will have to
become 'IN: C OUT: I'... but that's for another patch series, I
guess.
> typedef struct xen_sysctl_cpupool_op xen_sysctl_cpupool_op_t;
> DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpupool_op_t);
> @@ -630,22 +659,6 @@
> DEFINE_XEN_GUEST_HANDLE(xen_sysctl_arinc653_schedule_t);
> #define XEN_SYSCTL_SCHED_RATELIMIT_MAX 500000
> #define XEN_SYSCTL_SCHED_RATELIMIT_MIN 100
>
> -struct xen_sysctl_credit_schedule {
> - /* Length of timeslice in milliseconds */
> -#define XEN_SYSCTL_CSCHED_TSLICE_MAX 1000
> -#define XEN_SYSCTL_CSCHED_TSLICE_MIN 1
> - unsigned tslice_ms;
> - unsigned ratelimit_us;
> -};
> -typedef struct xen_sysctl_credit_schedule
> xen_sysctl_credit_schedule_t;
> -DEFINE_XEN_GUEST_HANDLE(xen_sysctl_credit_schedule_t);
> -
> -struct xen_sysctl_credit2_schedule {
> - unsigned ratelimit_us;
> -};
> -typedef struct xen_sysctl_credit2_schedule
> xen_sysctl_credit2_schedule_t;
> -DEFINE_XEN_GUEST_HANDLE(xen_sysctl_credit2_schedule_t);
> -
>
You're mixing moving and changing code. This is something we prefer to
avoid. Please do the moving in a pre-patch.
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
* Re: [PATCH 2/3] credit2: libxl related changes to add support for runqueue per cpupool.
2017-09-14 6:37 ` Juergen Gross
@ 2017-11-16 21:10 ` Anshul Makkar
2017-11-17 6:58 ` Juergen Gross
0 siblings, 1 reply; 16+ messages in thread
From: Anshul Makkar @ 2017-11-16 21:10 UTC (permalink / raw)
To: Juergen Gross, xen-devel
Cc: wei.liu2, konrad.wilk, George.Dunlap, andrew.cooper3,
dario.faggioli, ian.jackson, jbeulich
[Trimming the Cc-list a bit]
On 9/14/17 7:37 AM, Juergen Gross wrote:
> On 12/09/17 02:45, anshulmakkar wrote:
>> Introduces scheduler-specific parameters at the libxl level, which are
>> passed on to libxc, e.g. the runqueue option for credit2.
>>
>> Signed-off-by: Anshul Makkar <anshulmakkar@gmail.com>
>>
>> int libxl_cpupool_destroy(libxl_ctx *ctx, uint32_t poolid);
>> int libxl_cpupool_rename(libxl_ctx *ctx, const char *name, uint32_t poolid);
>> int libxl_cpupool_cpuadd(libxl_ctx *ctx, uint32_t poolid, int cpu);
>> diff --git a/tools/libxl/libxl_cpupool.c b/tools/libxl/libxl_cpupool.c
>> index 85b0688..e3ce7b3 100644
>> --- a/tools/libxl/libxl_cpupool.c
>> +++ b/tools/libxl/libxl_cpupool.c
>> @@ -130,7 +130,7 @@ int libxl_get_freecpus(libxl_ctx *ctx, libxl_bitmap *cpumap)
>> int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
>> libxl_scheduler sched,
>> libxl_bitmap cpumap, libxl_uuid *uuid,
>> - uint32_t *poolid)
>> + uint32_t *poolid, const libxl_scheduler_params *sched_params)
>> {
>> GC_INIT(ctx);
>> int rc;
>> @@ -138,6 +138,7 @@ int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
>> xs_transaction_t t;
>> char *uuid_string;
>> uint32_t xcpoolid;
>> + xc_schedparam_t xc_sched_param;
>>
>> /* Accept '0' as 'any poolid' for backwards compatibility */
>> if ( *poolid == LIBXL_CPUPOOL_POOLID_ANY
>> @@ -151,8 +152,18 @@ int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
>> GC_FREE;
>> return ERROR_NOMEM;
>> }
>> + if (sched_params)
>> + {
>> + xc_sched_param.u.sched_credit2.ratelimit_us =
>> + sched_params->u.credit2.ratelimit_us;
>> + xc_sched_param.u.sched_credit2.runq = sched_params->u.credit2.runqueue;
>> + xc_sched_param.u.sched_credit.tslice_ms = sched_params->u.credit.tslice_ms;
>> + xc_sched_param.u.sched_credit.ratelimit_us = sched_params->u.credit.ratelimit_us;
> Don't you need some input parameter validation here?
Agree. Will perform validation.
>> + }
>> + else
>> + xc_sched_param.u.sched_credit2.runq = LIBXL_CREDIT2_RUNQUEUE_DEFAULT;
> So you are passing the LIBXL defines down to the hypervisor expecting
> they match. I think this is a major layering violation.
I need to pass the DEFAULT runqueue arrangement if the user has not selected
any option, and I want to do it near the top level (libxc) so that
consistency can be maintained at the lower scheduler layer.
Can you please suggest an alternative that maintains layering consistency?
>
>
>
> Juergen
anshul
* Re: [PATCH 2/3] credit2: libxl related changes to add support for runqueue per cpupool.
2017-11-16 21:10 ` Anshul Makkar
@ 2017-11-17 6:58 ` Juergen Gross
0 siblings, 0 replies; 16+ messages in thread
From: Juergen Gross @ 2017-11-17 6:58 UTC (permalink / raw)
To: Anshul Makkar, xen-devel
Cc: wei.liu2, konrad.wilk, George.Dunlap, andrew.cooper3,
dario.faggioli, ian.jackson, jbeulich
On 16/11/17 22:10, Anshul Makkar wrote:
> [Trimming the Cc-list a bit]
>
>
> On 9/14/17 7:37 AM, Juergen Gross wrote:
>> On 12/09/17 02:45, anshulmakkar wrote:
>>> Introduces scheduler-specific parameters at the libxl level, which are
>>> passed on to libxc, e.g. the runqueue option for credit2.
>>>
>>> Signed-off-by: Anshul Makkar <anshulmakkar@gmail.com>
>>>
>>> int libxl_cpupool_destroy(libxl_ctx *ctx, uint32_t poolid);
>>> int libxl_cpupool_rename(libxl_ctx *ctx, const char *name, uint32_t poolid);
>>> int libxl_cpupool_cpuadd(libxl_ctx *ctx, uint32_t poolid, int cpu);
>>> diff --git a/tools/libxl/libxl_cpupool.c b/tools/libxl/libxl_cpupool.c
>>> index 85b0688..e3ce7b3 100644
>>> --- a/tools/libxl/libxl_cpupool.c
>>> +++ b/tools/libxl/libxl_cpupool.c
>>> @@ -130,7 +130,7 @@ int libxl_get_freecpus(libxl_ctx *ctx, libxl_bitmap *cpumap)
>>> int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
>>> libxl_scheduler sched,
>>> libxl_bitmap cpumap, libxl_uuid *uuid,
>>> - uint32_t *poolid)
>>> + uint32_t *poolid, const libxl_scheduler_params *sched_params)
>>> {
>>> GC_INIT(ctx);
>>> int rc;
>>> @@ -138,6 +138,7 @@ int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
>>> xs_transaction_t t;
>>> char *uuid_string;
>>> uint32_t xcpoolid;
>>> + xc_schedparam_t xc_sched_param;
>>> /* Accept '0' as 'any poolid' for backwards compatibility */
>>> if ( *poolid == LIBXL_CPUPOOL_POOLID_ANY
>>> @@ -151,8 +152,18 @@ int libxl_cpupool_create(libxl_ctx *ctx, const char *name,
>>> GC_FREE;
>>> return ERROR_NOMEM;
>>> }
>>> + if (sched_params)
>>> + {
>>> + xc_sched_param.u.sched_credit2.ratelimit_us =
>>> + sched_params->u.credit2.ratelimit_us;
>>> + xc_sched_param.u.sched_credit2.runq = sched_params->u.credit2.runqueue;
>>> + xc_sched_param.u.sched_credit.tslice_ms = sched_params->u.credit.tslice_ms;
>>> + xc_sched_param.u.sched_credit.ratelimit_us = sched_params->u.credit.ratelimit_us;
>> Don't you need some input parameter validation here?
> Agree. Will perform validation.
>>> + }
>>> + else
>>> + xc_sched_param.u.sched_credit2.runq = LIBXL_CREDIT2_RUNQUEUE_DEFAULT;
>> So you are passing the LIBXL defines down to the hypervisor expecting
>> they match. I think this is a major layering violation.
> I need to pass the DEFAULT runqueue arrangement if the user has not selected
> any option, and I want to do that as near to the top level (libxc) as
> possible, so that consistency can be maintained at the lower scheduler layer.
> Could you please suggest an alternative that maintains layering consistency?
So either have some glue code translating the LIBXL defines to the
hypervisor ones, or add some statements triggering a build failure in
case they don't match.
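A sketch of both options; the XEN_SYSCTL_CREDIT2_RUNQ_* names below are made
up for illustration and do not come from this series, as are the libxl enum
members beyond DEFAULT:

    /* Option 1: glue code in libxl, so the two sets of constants never
     * need to share numeric values. */
    static int runqueue_to_sysctl(int libxl_runq, uint32_t *xen_runq)
    {
        switch (libxl_runq) {
        case LIBXL_CREDIT2_RUNQUEUE_CORE:
            *xen_runq = XEN_SYSCTL_CREDIT2_RUNQ_CORE;   break;
        case LIBXL_CREDIT2_RUNQUEUE_SOCKET:
            *xen_runq = XEN_SYSCTL_CREDIT2_RUNQ_SOCKET; break;
        case LIBXL_CREDIT2_RUNQUEUE_ALL:
            *xen_runq = XEN_SYSCTL_CREDIT2_RUNQ_ALL;    break;
        default:
            return ERROR_INVAL;
        }
        return 0;
    }

    /* Option 2: keep passing the libxl values through unchanged, but make
     * any divergence a build failure (C11 shown; a BUILD_BUG_ON-style macro
     * would do the same job if one is available at this layer). */
    _Static_assert(LIBXL_CREDIT2_RUNQUEUE_CORE == XEN_SYSCTL_CREDIT2_RUNQ_CORE,
                   "libxl and sysctl credit2 runqueue encodings diverged");
    _Static_assert(LIBXL_CREDIT2_RUNQUEUE_SOCKET == XEN_SYSCTL_CREDIT2_RUNQ_SOCKET,
                   "libxl and sysctl credit2 runqueue encodings diverged");

Either way the translation or check lives in the tools, and the hypervisor
interface remains free to choose its own encoding.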
Juergen
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 1/3] credit2: libxc related changes to add support for runqueue per cpupool.
2017-09-14 12:58 ` Dario Faggioli
@ 2019-01-17 16:10 ` anshul
2019-01-17 16:17 ` Juergen Gross
0 siblings, 1 reply; 16+ messages in thread
From: anshul @ 2019-01-17 16:10 UTC (permalink / raw)
To: Dario Faggioli, Juergen Gross, xen-devel
Cc: George.Dunlap, wei.liu2, ian.jackson, jbeulich
On 14/09/2017 13:58, Dario Faggioli wrote:
> On Thu, 2017-09-14 at 08:42 +0200, Juergen Gross wrote:
>>> --- a/tools/libxc/include/xenctrl.h
>>> +++ b/tools/libxc/include/xenctrl.h
>>> @@ -1077,17 +1077,21 @@ typedef struct xc_cpupoolinfo {
>>>
>>> #define XC_CPUPOOL_POOLID_ANY 0xFFFFFFFF
>>>
>>> +typedef xen_sysctl_sched_param_t xc_schedparam_t;
>>> +
>>> /**
>>> * Create a new cpupool.
>>> *
>>> * @parm xc_handle a handle to an open hypervisor interface
>>> * @parm ppoolid pointer to the new cpupool id (in/out)
>>> * @parm sched_id id of scheduler to use for pool
>>> + * @parm sched_param parameter of the scheduler of the cpupool eg. runq for credit2
>> I would drop "eg. runq for credit2"
>>
> +1
>
>>> --- a/tools/python/xen/lowlevel/xc/xc.c
>>> +++ b/tools/python/xen/lowlevel/xc/xc.c
>>> @@ -1704,6 +1704,7 @@ static PyObject *pyxc_cpupool_create(XcObject *self,
>>> PyObject *kwds)
>>> {
>>> uint32_t cpupool = XC_CPUPOOL_POOLID_ANY, sched = XEN_SCHEDULER_CREDIT;
>>> + xc_schedparam_t param;
>>>
>>> static char *kwd_list[] = { "pool", "sched", NULL };
>> [..]
>> Another possibility would be to drop the cpupool python bindings
>> completely (which I would prefer, TBH).
>>
> +1
Juergen, please can you clarify this. Do you mean that I should remove the
complete cpupool handling from the python APIs, i.e. remove all of the
pyxc_cpupool_* APIs?
Also, it has been some time since I floated this patch. Does the requirement
to remove the cpupool python bindings still hold?
>
> Regards,
> Dario
Anshul
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 1/3] credit2: libxc related changes to add support for runqueue per cpupool.
2019-01-17 16:10 ` anshul
@ 2019-01-17 16:17 ` Juergen Gross
0 siblings, 0 replies; 16+ messages in thread
From: Juergen Gross @ 2019-01-17 16:17 UTC (permalink / raw)
To: anshul, Dario Faggioli, xen-devel
Cc: George.Dunlap, wei.liu2, ian.jackson, jbeulich
On 17/01/2019 17:10, anshul wrote:
>
> On 14/09/2017 13:58, Dario Faggioli wrote:
>> On Thu, 2017-09-14 at 08:42 +0200, Juergen Gross wrote:
>>>> --- a/tools/libxc/include/xenctrl.h
>>>> +++ b/tools/libxc/include/xenctrl.h
>>>> @@ -1077,17 +1077,21 @@ typedef struct xc_cpupoolinfo {
>>>> #define XC_CPUPOOL_POOLID_ANY 0xFFFFFFFF
>>>> +typedef xen_sysctl_sched_param_t xc_schedparam_t;
>>>> +
>>>> /**
>>>> * Create a new cpupool.
>>>> *
>>>> * @parm xc_handle a handle to an open hypervisor interface
>>>> * @parm ppoolid pointer to the new cpupool id (in/out)
>>>> * @parm sched_id id of scheduler to use for pool
>>>> + * @parm sched_param parameter of the scheduler of the cpupool eg. runq for credit2
>>> I would drop "eg. runq for credit2"
>>>
>> +1
>>
>>>> --- a/tools/python/xen/lowlevel/xc/xc.c
>>>> +++ b/tools/python/xen/lowlevel/xc/xc.c
>>>> @@ -1704,6 +1704,7 @@ static PyObject *pyxc_cpupool_create(XcObject *self,
>>>> PyObject *kwds)
>>>> {
>>>> uint32_t cpupool = XC_CPUPOOL_POOLID_ANY, sched = XEN_SCHEDULER_CREDIT;
>>>> + xc_schedparam_t param;
>>>> static char *kwd_list[] = { "pool", "sched", NULL };
>>> [..]
>>> Another possibility would be to drop the cpupool python bindings
>>> completely (which I would prefer, TBH).
>>>
>> +1
> Juergen, please can you clarify this. Do you mean that I should remove the
> complete cpupool handling from the python APIs, i.e. remove all of the
> pyxc_cpupool_* APIs?
Yes.
> Also, it has been some time since I floated this patch. Does the
> requirement to remove the cpupool python bindings still hold?
It's not a requirement, but a suggestion.
There are hardly any users of the Python bindings and as far as we know
none of them is using them for cpupool handling.
So instead of adapting them to interface modifications, removing them seems
to be the better option.
Juergen
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 16+ messages in thread
end of thread, other threads:[~2019-01-17 16:17 UTC | newest]
Thread overview: 16+ messages
2017-09-12 0:45 implement runqueue per cpupool anshulmakkar
2017-09-12 0:45 ` [PATCH 1/3] credit2: libxc related changes to add support for " anshulmakkar
2017-09-14 6:42 ` Juergen Gross
2017-09-14 12:58 ` Dario Faggioli
2019-01-17 16:10 ` anshul
2019-01-17 16:17 ` Juergen Gross
2017-09-14 13:28 ` Dario Faggioli
2017-09-12 0:45 ` [PATCH 2/3] credit2: libxl " anshulmakkar
2017-09-14 6:37 ` Juergen Gross
2017-11-16 21:10 ` Anshul Makkar
2017-11-17 6:58 ` Juergen Gross
2017-09-12 0:45 ` [PATCH 3/3] credit2: xen " anshulmakkar
2017-09-14 6:24 ` Juergen Gross
2017-09-14 10:03 ` Jan Beulich
2017-09-14 14:08 ` Dario Faggioli
2017-09-14 4:21 ` implement " Juergen Gross