From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Cc: keir@xen.org, Ian.Campbell@citrix.com,
stefano.stabellini@eu.citrix.com, andrew.cooper3@citrix.com,
Ian.Jackson@eu.citrix.com, JBeulich@suse.com,
dgdegra@tycho.nsa.gov
Subject: [PATCH v9 4/6] x86: enable CQM monitoring for each domain RMID
Date: Wed, 19 Feb 2014 14:32:42 +0800
Message-ID: <1392791564-37170-5-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>

If the CQM service is attached to a domain, the domain's RMID is
programmed into the hardware when one of its vcpus is scheduled in, so
that cache occupancy is accounted to that domain. When the vcpu is
scheduled out, RMID 0 (reserved for the system) is programmed instead.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
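
A note for reviewers on the mask computation below: CPUID leaf 0xf
(subleaf 0) reports the highest RMID supported by the processor in EBX,
and rmid_mask is derived so that it covers exactly the RMID bit field of
IA32_PQR_ASSOC. A minimal user-space sketch of the arithmetic (not part
of the patch; count_order() here is a stand-in for Xen's
get_count_order()):

  #include <stdint.h>
  #include <stdio.h>

  /* Stand-in for Xen's get_count_order(): smallest k with (1 << k) >= x. */
  static int count_order(uint64_t x)
  {
      int order = 0;

      while ( (1ull << order) < x )
          order++;
      return order;
  }

  int main(void)
  {
      uint64_t max_rmid = 63; /* example value read from CPUID.0xf[EBX] */
      uint64_t rmid_mask = ~(~0ull << count_order(max_rmid));

      /* Prints rmid_mask = 0x3f, i.e. the RMID occupies bits [5:0]. */
      printf("rmid_mask = %#llx\n", (unsigned long long)rmid_mask);
      return 0;
  }
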
 xen/arch/x86/domain.c           |  5 +++++
 xen/arch/x86/pqos.c             | 14 ++++++++++++++
 xen/include/asm-x86/msr-index.h |  1 +
 xen/include/asm-x86/pqos.h      |  1 +
 4 files changed, 21 insertions(+)
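
The cqm_assoc_rmid() helper added in pqos.c does a read-modify-write of
IA32_PQR_ASSOC so that only the RMID bits change and the remaining bits
of the MSR are preserved, and it skips the (comparatively expensive)
MSR write on the context-switch path when the RMID is already current.
A self-contained sketch of that update pattern, with a plain variable
standing in for the MSR (illustration only, not part of the patch):

  #include <stdint.h>
  #include <stdio.h>

  static uint64_t fake_pqr_assoc = 0x5ull << 32; /* pretend upper bits hold other state */
  static const uint64_t rmid_mask = 0x3f;        /* as derived from CPUID above */

  /* Mirrors cqm_assoc_rmid(): touch only the RMID field, write only on change. */
  static void assoc_rmid(unsigned int rmid)
  {
      uint64_t val = fake_pqr_assoc;                            /* rdmsrl() stand-in */
      uint64_t new_val = (val & ~rmid_mask) | (rmid & rmid_mask);

      if ( val != new_val )
          fake_pqr_assoc = new_val;                             /* wrmsrl() stand-in */
  }

  int main(void)
  {
      assoc_rmid(3); /* vcpu of a CQM-attached domain scheduled in */
      printf("PQR_ASSOC = %#llx\n", (unsigned long long)fake_pqr_assoc);
      assoc_rmid(0); /* vcpu scheduled out: back to the reserved RMID 0 */
      printf("PQR_ASSOC = %#llx\n", (unsigned long long)fake_pqr_assoc);
      return 0;
  }
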
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2656204..9eeedf0 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1372,6 +1372,8 @@ static void __context_switch(void)
     {
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
+        if ( system_supports_cqm() && cqm->used_rmid )
+            cqm_assoc_rmid(0);
         p->arch.ctxt_switch_from(p);
     }
@@ -1396,6 +1398,9 @@ static void __context_switch(void)
         }
         vcpu_restore_fpu_eager(n);
         n->arch.ctxt_switch_to(n);
+
+        if ( system_supports_cqm() && n->domain->arch.pqos_cqm_rmid > 0 )
+            cqm_assoc_rmid(n->domain->arch.pqos_cqm_rmid);
     }
 
     gdt = !is_pv_32on64_vcpu(n) ? per_cpu(gdt_table, cpu) :
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index 2cde56e..7369e10 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -62,6 +62,7 @@ static void __init parse_pqos_param(char *s)
custom_param("pqos", parse_pqos_param);
struct pqos_cqm __read_mostly *cqm = NULL;
+static uint64_t __read_mostly rmid_mask;
static void __init init_cqm(void)
{
@@ -135,6 +136,8 @@ static void __init init_qos_monitor(void)
     cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
 
+    rmid_mask = ~(~0ull << get_count_order(ebx));
+
     if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
         init_cqm();
 }
@@ -248,6 +251,17 @@ void get_cqm_info(const cpumask_t *cpu_cqmdata_map)
 }
 
+void cqm_assoc_rmid(unsigned int rmid)
+{
+    uint64_t val;
+    uint64_t new_val;
+
+    rdmsrl(MSR_IA32_PQR_ASSOC, val);
+    new_val = (val & ~rmid_mask) | (rmid & rmid_mask);
+    if ( val != new_val )
+        wrmsrl(MSR_IA32_PQR_ASSOC, new_val);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index e3ff10c..13800e6 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -492,5 +492,6 @@
 /* Platform QoS register */
 #define MSR_IA32_QOSEVTSEL             0x00000c8d
 #define MSR_IA32_QMC                   0x00000c8e
+#define MSR_IA32_PQR_ASSOC             0x00000c8f
 
 #endif /* __ASM_MSR_INDEX_H */
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index 4372af6..87820d5 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -54,5 +54,6 @@ void init_platform_qos(void);
 int alloc_cqm_rmid(struct domain *d);
 void free_cqm_rmid(struct domain *d);
 void get_cqm_info(const cpumask_t *cpu_cqmdata_map);
+void cqm_assoc_rmid(unsigned int rmid);
 
 #endif
--
1.7.9.5