xen-devel.lists.xenproject.org archive mirror
From: dongxiao.xu@intel.com
To: xen-devel@lists.xen.org
Cc: keir@xen.org, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com, andrew.cooper3@citrix.com,
	Ian.Jackson@eu.citrix.com, JBeulich@suse.com,
	dgdegra@tycho.nsa.gov
Subject: [PATCH v3 0/7] enable Cache QoS Monitoring (CQM) feature
Date: Fri, 29 Nov 2013 13:48:04 +0800	[thread overview]
Message-ID: <1385704092-89546-1-git-send-email-dongxiao.xu@intel.com> (raw)

From: Dongxiao Xu <dongxiao.xu@intel.com>

Changes from v2:
 - Address comments from Andrew Cooper, including:
   * Merging tools stack changes into one patch.
   * Reduce the IPI number to one per socket.
   * Change structures for CQM data exchange between tools and Xen.
   * Misc of format/variable/function name changes.
 - Address comments from Konrad Rzeszutek Wilk, including:
   * Simplify the error printing logic.
   * Add xsm check for the new added hypercalls.

Changes from v1:
 - Address comments from Andrew Cooper, including:
   * Change function names, e.g., alloc_cqm_rmid(), system_supports_cqm(), etc.
   * Change some structure element order to save packing cost.
   * Correct some function's return value.
   * Some programming styles change.
   * ...

Future generations of Intel Xeon processors may offer a monitoring capability in
each logical processor to measure specific quality-of-service metrics, for
example Cache QoS Monitoring to obtain L3 cache occupancy.
For detailed information, please refer to Intel SDM chapter 17.14.

Cache QoS Monitoring provides a layer of abstraction between applications and
logical processors through the use of Resource Monitoring IDs (RMIDs).
In the Xen design, each guest in the system can be assigned an RMID
independently, while RMID=0 is reserved for domains that do not enable the CQM
service. When any of the domain's vcpus is scheduled on a logical processor, the
domain's RMID is activated by programming its value into a specific MSR; when
the vcpu is scheduled out, RMID=0 is programmed into that MSR instead.
The Cache QoS hardware tracks cache utilization of memory accesses according to
the RMIDs and reports the monitored data via a counter register. With this
solution, we can learn how much L3 cache is used by a given guest.

To attach the CQM service to a guest, two approaches are provided:
1) Create the guest with "pqos_cqm=1" set in its configuration file.
2) Use "xl pqos-attach cqm domid" for a running guest.

To detach the CQM service from a guest, users can:
1) Use "xl pqos-detach cqm domid" for a running guest.
2) Destroy the guest, which also detaches the CQM service.

To get the L3 cache usage, users can run:
$ xl pqos-list cqm (domid)

The data below is just an example showing how the CQM-related data is exposed
to the end user.

[root@localhost]# xl pqos-list cqm
RMID count    56        RMID available    53
Name               ID  SocketID        L3C_Usage       SocketID        L3C_Usage
Domain-0            0         0         20127744              1         25231360
ExampleHVMDomain    1         0          3211264              1         10551296

Dongxiao Xu (7):
  x86: detect and initialize Cache QoS Monitoring feature
  x86: handle CQM resource when creating/destroying guests
  x86: dynamically attach/detach CQM service for a guest
  x86: collect CQM information from all sockets
  x86: enable CQM monitoring for each domain RMID
  xsm: add platform QoS related xsm policies
  tools: enable Cache QoS Monitoring feature for libxl/libxc

 tools/flask/policy/policy/modules/xen/xen.if |    2 +-
 tools/flask/policy/policy/modules/xen/xen.te |    5 +-
 tools/libxc/xc_domain.c                      |   48 ++++++
 tools/libxc/xenctrl.h                        |   12 ++
 tools/libxl/Makefile                         |    3 +-
 tools/libxl/libxl.h                          |    5 +
 tools/libxl/libxl_create.c                   |    3 +
 tools/libxl/libxl_pqos.c                     |  108 +++++++++++++
 tools/libxl/libxl_types.idl                  |    1 +
 tools/libxl/xl.h                             |    3 +
 tools/libxl/xl_cmdimpl.c                     |  138 ++++++++++++++++
 tools/libxl/xl_cmdtable.c                    |   15 ++
 xen/arch/x86/Makefile                        |    1 +
 xen/arch/x86/cpu/intel.c                     |    6 +
 xen/arch/x86/domain.c                        |   13 ++
 xen/arch/x86/domctl.c                        |   40 +++++
 xen/arch/x86/pqos.c                          |  219 ++++++++++++++++++++++++++
 xen/arch/x86/setup.c                         |    3 +
 xen/arch/x86/sysctl.c                        |   89 +++++++++++
 xen/common/domctl.c                          |    5 +-
 xen/include/asm-x86/cpufeature.h             |    1 +
 xen/include/asm-x86/domain.h                 |    2 +
 xen/include/asm-x86/msr-index.h              |    5 +
 xen/include/asm-x86/pqos.h                   |   46 ++++++
 xen/include/public/domctl.h                  |   26 +++
 xen/include/public/sysctl.h                  |   11 ++
 xen/include/xen/sched.h                      |    3 +
 xen/xsm/flask/hooks.c                        |    7 +
 xen/xsm/flask/policy/access_vectors          |   17 +-
 29 files changed, 830 insertions(+), 7 deletions(-)
 create mode 100644 tools/libxl/libxl_pqos.c
 create mode 100644 xen/arch/x86/pqos.c
 create mode 100644 xen/include/asm-x86/pqos.h

-- 
1.7.9.5

Thread overview: 27+ messages
2013-11-29  5:48 dongxiao.xu [this message]
2013-11-29  5:48 ` [PATCH v3 1/7] x86: detect and initialize Cache QoS Monitoring feature dongxiao.xu
2013-11-29 13:54   ` Andrew Cooper
2013-11-29 15:05     ` Jan Beulich
2013-11-30  0:41       ` Xu, Dongxiao
2013-12-02  2:17       ` Xu, Dongxiao
2013-12-02  9:22         ` Xu, Dongxiao
2013-11-29 13:59   ` Andrew Cooper
2013-11-30  0:42     ` Xu, Dongxiao
2013-11-29  5:48 ` [PATCH v3 2/7] x86: handle CQM resource when creating/destroying guests dongxiao.xu
2013-11-29 14:18   ` Andrew Cooper
2013-11-29 15:01     ` Andrew Cooper
2013-11-29 15:07     ` Jan Beulich
2013-11-29  5:48 ` [PATCH v3 3/7] x86: dynamically attach/detach CQM service for a guest dongxiao.xu
2013-11-29 14:22   ` Andrew Cooper
2013-11-29  5:48 ` [PATCH v3 4/7] x86: collect CQM information from all sockets dongxiao.xu
2013-11-29 14:53   ` Andrew Cooper
2013-11-30  1:27     ` Xu, Dongxiao
2013-11-29  5:48 ` [PATCH v3 5/7] x86: enable CQM monitoring for each domain RMID dongxiao.xu
2013-11-29 14:56   ` Andrew Cooper
2013-11-30  1:27     ` Xu, Dongxiao
2013-11-29  5:48 ` [PATCH v3 6/7] xsm: add platform QoS related xsm policies dongxiao.xu
2013-11-29 15:50   ` Daniel De Graaf
2013-11-29  5:48 ` [PATCH v3 7/7] tools: enable Cache QoS Monitoring feature for libxl/libxc dongxiao.xu
2013-11-29 15:29 ` [PATCH v3 0/7] enable Cache QoS Monitoring (CQM) feature Ian Campbell
2013-11-29 15:36   ` Jan Beulich
2013-11-29 15:41     ` Ian Campbell
