xen-devel.lists.xenproject.org archive mirror
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH v2 8/8] x86: enable CQM monitoring for each domain RMID
Date: Mon, 25 Nov 2013 16:32:41 +0000	[thread overview]
Message-ID: <52937BA9.10708@citrix.com> (raw)
In-Reply-To: <40776A41FC278F40B59438AD47D147A9118BB66A@SHSMSX104.ccr.corp.intel.com>

On 25/11/13 07:22, Xu, Dongxiao wrote:
>> -----Original Message-----
>> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
>> Sent: Thursday, November 21, 2013 10:19 PM
>> To: Xu, Dongxiao
>> Cc: xen-devel@lists.xen.org
>> Subject: Re: [Xen-devel] [PATCH v2 8/8] x86: enable CQM monitoring for each
>> domain RMID
>>
>> On 21/11/13 07:20, dongxiao.xu@intel.com wrote:
>>> From: Dongxiao Xu <dongxiao.xu@intel.com>
>>>
>>> If the CQM service is attached to a domain, its related RMID will be set
>>> to hardware for monitoring when the domain's vcpu is scheduled in. When
>>> the domain's vcpu is scheduled out, RMID 0 (system reserved) will be set
>>> for monitoring.
>>>
>>> Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
>>> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
>>> ---
>>>  xen/arch/x86/domain.c           |    5 +++++
>>>  xen/arch/x86/pqos.c             |   21 ++++++++++++++++++++-
>>>  xen/include/asm-x86/msr-index.h |    1 +
>>>  xen/include/asm-x86/pqos.h      |    1 +
>>>  4 files changed, 27 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>>> index 9725649..1eda0ab 100644
>>> --- a/xen/arch/x86/domain.c
>>> +++ b/xen/arch/x86/domain.c
>>> @@ -1372,6 +1372,8 @@ static void __context_switch(void)
>>>      {
>>>          memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
>>>          vcpu_save_fpu(p);
>>> +        if ( system_supports_cqm() )
>>> +            cqm_assoc_rmid(0);
>> So actions from the idle domain are accounted against the previously
>> scheduled vcpu?
> No. Consider the following cases:
>  - Context switch from a normal domain vcpu (p) to an idle domain vcpu (n): we associate RMID=0 with the CPU hardware in ctxt_switch_from(p), so the idle domain is accounted to RMID=0.
>  - Context switch from an idle domain vcpu (p) to a normal domain vcpu (n): we associate the domain's RMID with the CPU hardware in ctxt_switch_to(n).

Ah of course.

>
>>>          p->arch.ctxt_switch_from(p);
>>>      }
>>>
>>> @@ -1396,6 +1398,9 @@ static void __context_switch(void)
>>>          }
>>>          vcpu_restore_fpu_eager(n);
>>>          n->arch.ctxt_switch_to(n);
>>> +
>>> +        if ( system_supports_cqm() && n->domain->arch.pqos_cqm_rmid > 0 )
>>
>> n->domain->arch.pqos_cqm_rmid can only be greater than 0 if the system
>> already supports cqm()
> This will be fixed in the next version of the patch, sorry.
>
>>> +            cqm_assoc_rmid(n->domain->arch.pqos_cqm_rmid);
>> What happens to subsequent Xen accesses before returning to the guest?
>>
>> What happens for Xen accesses in interrupt handlers?
> The measurement is not that accurate for the CQM feature.
> CQM is somewhat like xentop in this respect: xentop doesn't exactly differentiate the context switch cost from the interrupt handling cost either, so we adopt similar logic for CQM.

Ok.

~Andrew


Thread overview: 32+ messages
2013-11-21  7:20 [PATCH v2 0/8] enable Cache QoS Monitoring (CQM) feature dongxiao.xu
2013-11-21  7:20 ` [PATCH v2 1/8] x86: detect and initialize Cache QoS Monitoring feature dongxiao.xu
2013-11-21 12:14   ` Andrew Cooper
2013-11-21 12:19     ` Andrew Cooper
2013-11-25  3:06     ` Xu, Dongxiao
2013-11-25 15:40       ` Andrew Cooper
2013-11-25  8:57     ` Xu, Dongxiao
2013-11-25 15:58       ` Andrew Cooper
2013-11-21  7:20 ` [PATCH v2 2/8] x86: handle CQM resource when creating/destroying guests dongxiao.xu
2013-11-21 12:33   ` Andrew Cooper
2013-11-25  3:21     ` Xu, Dongxiao
2013-11-25 16:02       ` Andrew Cooper
2013-11-21  7:20 ` [PATCH v2 3/8] tools: " dongxiao.xu
2013-11-21  7:20 ` [PATCH v2 4/8] x86: dynamically attach/detach CQM service for a guest dongxiao.xu
2013-11-21 12:50   ` Andrew Cooper
2013-11-25  3:26     ` Xu, Dongxiao
2013-11-25 16:05       ` Andrew Cooper
2013-11-25 21:06   ` Konrad Rzeszutek Wilk
2013-11-21  7:20 ` [PATCH v2 5/8] tools: " dongxiao.xu
2013-11-25 21:00   ` Konrad Rzeszutek Wilk
2013-11-25 21:01     ` Konrad Rzeszutek Wilk
2013-11-21  7:20 ` [PATCH v2 6/8] x86: get per domain CQM information dongxiao.xu
2013-11-21 14:09   ` Andrew Cooper
2013-11-25  6:20     ` Xu, Dongxiao
2013-11-25 16:28       ` Andrew Cooper
2013-11-21  7:20 ` [PATCH v2 7/8] tools: " dongxiao.xu
2013-11-21  7:20 ` [PATCH v2 8/8] x86: enable CQM monitoring for each domain RMID dongxiao.xu
2013-11-21 14:19   ` Andrew Cooper
2013-11-25  7:22     ` Xu, Dongxiao
2013-11-25 16:32       ` Andrew Cooper [this message]
2013-11-21 14:36 ` [PATCH v2 0/8] enable Cache QoS Monitoring (CQM) feature Andrew Cooper
2013-11-25  7:24   ` Xu, Dongxiao
