From: Chao Gao <chao.gao@intel.com>
To: xen-devel@lists.xen.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
Kevin Tian <kevin.tian@intel.com>,
Jan Beulich <jbeulich@suse.com>,
Jun Nakajima <jun.nakajima@intel.com>,
Chao Gao <chao.gao@intel.com>
Subject: [PATCH v2 5/5] VT-d PI: Don't add vCPU to PI blocking list for a case
Date: Thu, 11 May 2017 14:04:12 +0800
Message-ID: <1494482652-42356-6-git-send-email-chao.gao@intel.com>
In-Reply-To: <1494482652-42356-1-git-send-email-chao.gao@intel.com>
Currently, a blocked vCPU is added to the PI blocking list whenever its
domain has assigned devices. However, some blocked vCPUs will never be
woken up by the wakeup interrupt generated by VT-d hardware; they are
instead woken up by IPIs or by interrupts from emulated devices. Such
vCPUs should not be added to the PI blocking list.

If a vCPU blocks before it is bound to an IRTE, it needs to be added to
the blocking list at the point where the binding is established. In that
case, arch_vcpu_block() may be called from a vCPU other than the target,
which the current implementation cannot handle. This patch expands
arch_vcpu_block(), removing some restrictions expressed by assertions and
handling the target vCPU according to its state and its PI blocking list
lock (v->arch.hvm_vmx.pi_blocking.lock).
Signed-off-by: Chao Gao <chao.gao@intel.com>
---
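For clarity, below is a rough sketch of the vmx_vcpu_block() flow with this
patch applied. It is illustrative only: it paraphrases the hunks further
down, assumes the same field and helper names they use (pi_ops.in_use,
vmx_pi_blocking, pi_blocking.lock), and elides the trace record, the
per-pCPU counter update and the actual list insertion.

    static void vmx_vcpu_block(struct vcpu *v)   /* illustrative sketch */
    {
        unsigned long flags;
        /* dest_cpu: the pCPU chosen to handle this vCPU's wakeup. */
        unsigned int dest_cpu = v->processor;
        spinlock_t *old_lock;
        spinlock_t *pi_blocking_list_lock =
            &per_cpu(vmx_pi_blocking, dest_cpu).lock;

        /* If no IRTE refers to the vCPU's pi_desc, nothing to do. */
        if ( v->domain->arch.hvm_domain.pi_ops.in_use &&
             !v->domain->arch.hvm_domain.pi_ops.in_use(v) )
            return;

        /* When invoked from another vCPU, the target may not be blocked. */
        if ( !test_bit(_VPF_blocked, &v->pause_flags) )
            return;

        spin_lock_irqsave(pi_blocking_list_lock, flags);

        /*
         * Only the first caller installs the list lock pointer.  If another
         * path (e.g. the vCPU blocking on its own pCPU) got there first,
         * the vCPU is already on a blocking list, so simply back off.
         */
        old_lock = cmpxchg(&v->arch.hvm_vmx.pi_blocking.lock, NULL,
                           pi_blocking_list_lock);
        if ( old_lock )
        {
            spin_unlock_irqrestore(pi_blocking_list_lock, flags);
            return;
        }

        /*
         * ... add v to the chosen pCPU's blocking list, bump the per-pCPU
         * counter, emit the PI_LIST_ADD trace record, then drop the lock ...
         */
    }
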
xen/arch/x86/hvm/vmx/vmx.c | 20 +++++++++++++-------
xen/drivers/passthrough/vtd/intremap.c | 18 ++++++++++++++++++
2 files changed, 31 insertions(+), 7 deletions(-)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 45a372e..03d5fce 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -161,6 +161,14 @@ static void vmx_vcpu_block(struct vcpu *v)
struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
spinlock_t *pi_blocking_list_lock;
+ /* If no IRTE refers to 'pi_desc', nothing further needs to be done */
+ if ( v->domain->arch.hvm_domain.pi_ops.in_use &&
+ !v->domain->arch.hvm_domain.pi_ops.in_use(v) )
+ return;
+
+ if ( !test_bit(_VPF_blocked, &v->pause_flags) )
+ return;
+
/*
* After pCPU goes down, the per-cpu PI blocking list is cleared.
* To make sure the parameter vCPU is added to the chosen pCPU's
@@ -183,13 +191,11 @@ static void vmx_vcpu_block(struct vcpu *v)
old_lock = cmpxchg(&v->arch.hvm_vmx.pi_blocking.lock, NULL,
pi_blocking_list_lock);
-
- /*
- * 'v->arch.hvm_vmx.pi_blocking.lock' should be NULL before
- * being assigned to a new value, since the vCPU is currently
- * running and it cannot be on any blocking list.
- */
- ASSERT(old_lock == NULL);
+ if ( old_lock )
+ {
+ spin_unlock_irqrestore(pi_blocking_list_lock, flags);
+ return;
+ }
atomic_inc(&per_cpu(vmx_pi_blocking, dest_cpu).counter);
HVMTRACE_4D(PI_LIST_ADD, v->domain->domain_id, v->vcpu_id, dest_cpu,
diff --git a/xen/drivers/passthrough/vtd/intremap.c b/xen/drivers/passthrough/vtd/intremap.c
index 99f1cce..806e397 100644
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -622,6 +622,20 @@ static void pi_put_ref(struct pi_desc *pi_desc)
v->domain->arch.hvm_domain.pi_ops.put_ref(v);
}
+static bool pi_in_use(struct pi_desc *pi_desc)
+{
+ struct vcpu *v;
+
+ if ( !pi_desc )
+ return false;
+
+ v = pi_desc_to_vcpu(pi_desc);
+ ASSERT(is_hvm_domain(v->domain));
+ if ( v->domain->arch.hvm_domain.pi_ops.in_use )
+ return v->domain->arch.hvm_domain.pi_ops.in_use(v);
+ return false;
+}
+
static int msi_msg_to_remap_entry(
struct iommu *iommu, struct pci_dev *pdev,
struct msi_desc *msi_desc, struct msi_msg *msg)
@@ -996,6 +1010,7 @@ int pi_update_irte(struct pi_desc *pi_desc, const struct pirq *pirq,
struct msi_desc *msi_desc;
struct pi_desc *old_pi_desc;
int rc;
+ bool first_ref;
desc = pirq_spin_lock_irq_desc(pirq, NULL);
if ( !desc )
@@ -1009,7 +1024,10 @@ int pi_update_irte(struct pi_desc *pi_desc, const struct pirq *pirq,
}
old_pi_desc = msi_desc->pi_desc;
+ first_ref = !pi_in_use(pi_desc);
pi_get_ref(pi_desc);
+ if ( pi_desc && first_ref )
+ arch_vcpu_block(pi_desc_to_vcpu(pi_desc));
msi_desc->pi_desc = pi_desc;
msi_desc->gvec = gvec;
--
1.8.3.1