From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paul Durrant
Subject: [PATCH] x86/hvm: Fix deadlock in emulation of rep mov to or from VRAM.
Date: Mon, 13 Jul 2015 10:29:03 +0100
Message-ID: <1436779743-10987-1-git-send-email-paul.durrant@citrix.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: xen-devel@lists.xen.org
Cc: Andrew Cooper, Paul Durrant, Keir Fraser, Jan Beulich
List-Id: xen-devel@lists.xenproject.org

Razvan Cojocaru reported a hypervisor deadlock with the following stack:

(XEN) [] _spin_lock+0x31/0x54
(XEN) [] stdvga_mem_accept+0x3b/0x125
(XEN) [] hvm_find_io_handler+0x68/0x8a
(XEN) [] hvm_mmio_internal+0x37/0x67
(XEN) [] __hvm_copy+0xe9/0x37d
(XEN) [] hvm_copy_from_guest_phys+0x14/0x16
(XEN) [] hvm_process_io_intercept+0x10b/0x1d6
(XEN) [] hvm_io_intercept+0x35/0x5b
(XEN) [] hvmemul_do_io+0x1ff/0x2c1
(XEN) [] hvmemul_do_io_addr+0x117/0x163
(XEN) [] hvmemul_do_mmio_addr+0x24/0x26
(XEN) [] hvmemul_rep_movs+0x1ef/0x335
(XEN) [] x86_emulate+0x56c9/0x13088
(XEN) [] _hvm_emulate_one+0x186/0x281
(XEN) [] hvm_emulate_one+0x10/0x12
(XEN) [] handle_mmio+0x54/0xd2
(XEN) [] handle_mmio_with_translation+0x44/0x46
(XEN) [] hvm_hap_nested_page_fault+0x15f/0x589
(XEN) [] vmx_vmexit_handler+0x150e/0x188d
(XEN) [] vmx_asm_vmexit_handler+0x41/0xc0

The problem here is the call to hvm_mmio_internal() made by __hvm_copy().
When the emulated VRAM access was originally started by hvm_io_intercept(),
a few frames up the stack, it called stdvga_mem_accept(), which acquired
the per-domain stdvga lock. Unfortunately the call to hvm_mmio_internal(),
made to avoid a costly P2M walk, speculatively calls stdvga_mem_accept()
again to see whether the page handed to __hvm_copy() is actually an
internally emulated page. Since the stdvga lock is already held by the
same vcpu and is not recursive, the vcpu deadlocks.
The fix is to do the range check in stdvga_mem_accept() without taking
the stdvga lock. This is safe because the range is constant and we know
the I/O will never actually be accepted by the stdvga device model,
because hvmemul_do_io_addr() makes sure that the source of the I/O is
actually RAM.

Reported-by: Razvan Cojocaru
Signed-off-by: Paul Durrant
Cc: Keir Fraser
Cc: Jan Beulich
Cc: Andrew Cooper
---
 xen/arch/x86/hvm/stdvga.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index ebb3b42..6306fa2 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -490,11 +490,18 @@ static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
 {
     struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
 
+    /*
+     * The range check must be done without taking the lock, to avoid
+     * deadlock when hvm_mmio_internal() is called from
+     * hvm_copy_to/from_guest_phys() in hvm_process_io_intercept().
+     */
+    if ( (hvm_mmio_first_byte(p) < VGA_MEM_BASE) ||
+         (hvm_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
+        return 0;
+
     spin_lock(&s->lock);
 
-    if ( !s->stdvga ||
-         (hvm_mmio_first_byte(p) < VGA_MEM_BASE) ||
-         (hvm_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
+    if ( !s->stdvga )
         goto reject;
 
     if ( p->dir == IOREQ_WRITE && p->count > 1 )
-- 
1.7.10.4