From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xen.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
Paul Durrant <paul.durrant@citrix.com>,
Keir Fraser <keir@xen.org>, Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/hvm: Fix deadlock in emulation of rep mov to or from VRAM.
Date: Mon, 13 Jul 2015 10:29:03 +0100
Message-ID: <1436779743-10987-1-git-send-email-paul.durrant@citrix.com>

Razvan Cojocaru reported a hypervisor deadlock with the following stack:
(XEN) [<ffff82d08012c3f1>] _spin_lock+0x31/0x54
(XEN) [<ffff82d0801d09b6>] stdvga_mem_accept+0x3b/0x125
(XEN) [<ffff82d0801cb23a>] hvm_find_io_handler+0x68/0x8a
(XEN) [<ffff82d0801cb410>] hvm_mmio_internal+0x37/0x67
(XEN) [<ffff82d0801c2403>] __hvm_copy+0xe9/0x37d
(XEN) [<ffff82d0801c3e5d>] hvm_copy_from_guest_phys+0x14/0x16
(XEN) [<ffff82d0801cb107>] hvm_process_io_intercept+0x10b/0x1d6
(XEN) [<ffff82d0801cb291>] hvm_io_intercept+0x35/0x5b
(XEN) [<ffff82d0801bb440>] hvmemul_do_io+0x1ff/0x2c1
(XEN) [<ffff82d0801bc0b9>] hvmemul_do_io_addr+0x117/0x163
(XEN) [<ffff82d0801bc129>] hvmemul_do_mmio_addr+0x24/0x26
(XEN) [<ffff82d0801bcbb5>] hvmemul_rep_movs+0x1ef/0x335
(XEN) [<ffff82d080198b49>] x86_emulate+0x56c9/0x13088
(XEN) [<ffff82d0801bbd26>] _hvm_emulate_one+0x186/0x281
(XEN) [<ffff82d0801bc1e8>] hvm_emulate_one+0x10/0x12
(XEN) [<ffff82d0801cb63e>] handle_mmio+0x54/0xd2
(XEN) [<ffff82d0801cb700>] handle_mmio_with_translation+0x44/0x46
(XEN) [<ffff82d0801c27f6>] hvm_hap_nested_page_fault+0x15f/0x589
(XEN) [<ffff82d0801e9741>] vmx_vmexit_handler+0x150e/0x188d
(XEN) [<ffff82d0801ee7d1>] vmx_asm_vmexit_handler+0x41/0xc0
The problem here is the call to hvm_mmio_internal() being made by
__hvm_copy().
When the emulated VRAM access was originally started by
hvm_io_intercept() a few frames up the stack, it called
stdvga_mem_accept(), which acquired the per-domain stdvga lock.
Unfortunately the call to hvm_mmio_internal(), made to avoid a costly
P2M walk, speculatively calls stdvga_mem_accept() again to see whether
the page handed to __hvm_copy() is actually an internally emulated
page. The stdvga lock is an ordinary, non-recursive spinlock that is
still held by the same vcpu at this point, so the second call deadlocks.
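To illustrate the locking pattern, here is a minimal userspace sketch
(not Xen code): vram_accept(), classify_page() and copy_from_guest()
are hypothetical stand-ins for the call chain above, and a plain
pthread mutex stands in for the non-recursive per-domain spinlock.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t vram_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for stdvga_mem_accept(): claim the VRAM range, take the lock. */
static int vram_accept(void)
{
    /*
     * The real code does an unconditional spin_lock(); trylock is used
     * here only so the sketch reports the self-deadlock instead of
     * hanging forever.
     */
    if ( pthread_mutex_trylock(&vram_lock) != 0 )
    {
        printf("accept re-entered with lock held -> deadlock in Xen\n");
        return 0;
    }
    return 1;
}

/* Stand-in for hvm_mmio_internal(): probe handlers to classify a page. */
static void classify_page(void)
{
    vram_accept();
}

/* Stand-in for __hvm_copy() invoked while emulating the rep mov. */
static void copy_from_guest(void)
{
    classify_page();
}

int main(void)
{
    if ( vram_accept() )          /* first accept: lock is now held */
    {
        copy_from_guest();        /* emulation re-probes the same handler */
        pthread_mutex_unlock(&vram_lock);
    }
    return 0;
}

With a non-recursive lock the second acquisition on the same vcpu can
never succeed, which is exactly what the stack trace above shows.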
The fix is to do the range check in stdvga_mem_accept() without taking
the stdvga lock. This is safe because the range is constant, and we know
the I/O will never actually be accepted by the stdvga device model,
since hvmemul_do_io_addr() makes sure that the source of the I/O is
actually RAM.
Reported-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
xen/arch/x86/hvm/stdvga.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index ebb3b42..6306fa2 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -490,11 +490,18 @@ static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
 {
     struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
 
+    /*
+     * The range check must be done without taking the lock, to avoid
+     * deadlock when hvm_mmio_internal() is called from
+     * hvm_copy_to/from_guest_phys() in hvm_process_io_intercept().
+     */
+    if ( (hvm_mmio_first_byte(p) < VGA_MEM_BASE) ||
+         (hvm_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
+        return 0;
+
     spin_lock(&s->lock);
 
-    if ( !s->stdvga ||
-         (hvm_mmio_first_byte(p) < VGA_MEM_BASE) ||
-         (hvm_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
+    if ( !s->stdvga )
         goto reject;
 
     if ( p->dir == IOREQ_WRITE && p->count > 1 )
--
1.7.10.4