xen-devel.lists.xenproject.org archive mirror
* Re: [PATCH] libxl: Remove qxl support for the 4.3 release
@ 2013-09-18  8:07 Jan Beulich
  2013-09-18 12:29 ` Fabio Fantoni
  2013-09-19 10:09 ` George Dunlap
  0 siblings, 2 replies; 13+ messages in thread
From: Jan Beulich @ 2013-09-18  8:07 UTC (permalink / raw)
  To: Fabio Fantoni; +Cc: George Dunlap, Andrew Cooper, Keir Fraser, xen-devel

[-- Attachment #1: Type: text/plain, Size: 2264 bytes --]

>>> On 16.09.13 at 16:10, Fabio Fantoni <fabio.fantoni@m2r.biz> wrote:
> On 05/07/2013 18:59, George Dunlap wrote:
>> On Wed, May 29, 2013 at 11:25 PM, Andrew Cooper
>> <andrew.cooper3@citrix.com> wrote:
>>> On 29/05/2013 08:43, Ian Campbell wrote:
>>>> On Tue, 2013-05-28 at 19:09 +0100, Keir Fraser wrote:
>>>>> On 28/05/2013 17:51, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:
>>>>>
>>>>>> George Dunlap writes ("[PATCH] libxl: Remove qxl support for the 4.3
>>>>>> release"):
>>>>>>> The qxl drivers for Windows and Linux end up executing instructions
>>>>>>> that cannot be used for MMIO at the moment.  Just for the 4.3 release,
>>>>>>> remove qxl support.
>>>>>>>
>>>>>>> This patch should be reverted as soon as the 4.4 development window opens.
>>>>>>>
>>>>>>> The issue in question:
>>>>>>>
>>>>>>> (XEN) emulate.c:88:d18 bad mmio size 16
>>>>>>> (XEN) io.c:201:d18 MMIO emulation failed @ 0033:7fd2de390430: f3 0f 6f 19 41 83 e8 403
>>>>>>>
>>>>>>> The instruction in question is "movdqu (%rcx),%xmm3".  Xen knows how
>>>>>>> to emulate it, but unfortunately %xmm3 is 16 bytes long, and the interface
>>>>>>> between Xen and qemu at the moment would appear to only allow MMIO accesses
>>>>>>> of 8 bytes.
>>>>>>>
>>>>>>> It's too late in the release cycle to find a fix or a workaround.
>>>>>> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
>>>>> It could be plumbed through hvmemul_do_io's multi-cycle read/write logic,
>>>>> and done as two 8-byte cycles to qemu. This would avoid bloating the ioreq
>>>>> structure used to communicate with qemu.
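A minimal standalone sketch of the two-cycle splitting described above, assuming a hypothetical do_io_cycle() in place of the real hvmemul_do_io() round trip to qemu (the function names and the byte-array "device" are illustrative only, not Xen interfaces):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for one ioreq round trip to qemu.  The real
 * path (hvmemul_do_mmio() -> hvmemul_do_io()) is far more involved;
 * here the "device" is just a byte array. */
static void do_io_cycle(const uint8_t *mmio, size_t off, size_t size,
                        uint8_t *dst)
{
    memcpy(dst, mmio + off, size);
}

/* Split a wide read -- e.g. the 16-byte movdqu operand -- into cycles
 * no larger than a machine word, as suggested above. */
static void mmio_read_split(const uint8_t *mmio, size_t off, size_t bytes,
                            uint8_t *dst)
{
    while ( bytes )
    {
        size_t chunk = bytes < sizeof(long) ? bytes : sizeof(long);

        do_io_cycle(mmio, off, chunk, dst);
        off += chunk;
        dst += chunk;
        bytes -= chunk;
    }
}
```

On an LP64 build a 16-byte operand goes out as exactly two 8-byte cycles, so the ioreq structure itself never needs to grow.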
>>>> Are you proposing we do this for 4.3? I'm not sure how big that change
>>>> would be in terms of impact (just that one instruction, any 16 byte
>>>> operand?).
>>>>
>>>> Of course even if we did this for 4.3 we don't know what the next issue
>>>> will be with QXL.
>>>>
>>>> Ian.
>>> Furthermore, AVX instruction emulation would require support for 32-byte
>>> operands.  I don't see the multi-cycle logic scaling sensibly.
>> Andrew, Keir, Jan, does any one of you fancy taking this on for 4.4?
> 
> Is there anyone who can add full SSE support for HVM domUs?
> Thanks for any reply.

Mind giving the attached patch a try?

Jan


[-- Attachment #2: x86-HVM-emul-split-large.patch --]
[-- Type: text/plain, Size: 6729 bytes --]

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -438,6 +438,7 @@ static int __hvmemul_read(
 {
     struct vcpu *curr = current;
     unsigned long addr, reps = 1;
+    unsigned int off, chunk = min_t(unsigned int, bytes, sizeof(long));
     uint32_t pfec = PFEC_page_present;
     struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
     paddr_t gpa;
@@ -447,16 +448,38 @@ static int __hvmemul_read(
         seg, offset, bytes, &reps, access_type, hvmemul_ctxt, &addr);
     if ( rc != X86EMUL_OKAY )
         return rc;
+    off = addr & (PAGE_SIZE - 1);
+    /*
+     * We only need to handle sizes actual instruction operands can have. All
+     * such sizes are either powers of 2 or the sum of two powers of 2. Thus
+     * picking as initial chunk size the largest power of 2 not greater than
+     * the total size will always result in only power-of-2 size requests
+     * issued to hvmemul_do_mmio() (hvmemul_do_io() rejects non-powers-of-2).
+     */
+    while ( chunk & (chunk - 1) )
+        chunk &= chunk - 1;
+    if ( off + bytes > PAGE_SIZE )
+        while ( off & (chunk - 1) )
+            chunk >>= 1;
 
     if ( unlikely(vio->mmio_gva == (addr & PAGE_MASK)) && vio->mmio_gva )
     {
-        unsigned int off = addr & (PAGE_SIZE - 1);
         if ( access_type == hvm_access_insn_fetch )
             return X86EMUL_UNHANDLEABLE;
         gpa = (((paddr_t)vio->mmio_gpfn << PAGE_SHIFT) | off);
-        if ( (off + bytes) <= PAGE_SIZE )
-            return hvmemul_do_mmio(gpa, &reps, bytes, 0,
-                                   IOREQ_READ, 0, p_data);
+        while ( (off + chunk) <= PAGE_SIZE )
+        {
+            rc = hvmemul_do_mmio(gpa, &reps, chunk, 0, IOREQ_READ, 0, p_data);
+            if ( rc != X86EMUL_OKAY || bytes == chunk )
+                return rc;
+            addr += chunk;
+            off += chunk;
+            gpa += chunk;
+            p_data += chunk;
+            bytes -= chunk;
+            if ( bytes < chunk )
+                chunk = bytes;
+        }
     }
 
     if ( (seg != x86_seg_none) &&
@@ -473,14 +496,32 @@ static int __hvmemul_read(
         return X86EMUL_EXCEPTION;
     case HVMCOPY_unhandleable:
         return X86EMUL_UNHANDLEABLE;
-    case  HVMCOPY_bad_gfn_to_mfn:
+    case HVMCOPY_bad_gfn_to_mfn:
         if ( access_type == hvm_access_insn_fetch )
             return X86EMUL_UNHANDLEABLE;
-        rc = hvmemul_linear_to_phys(
-            addr, &gpa, bytes, &reps, pfec, hvmemul_ctxt);
-        if ( rc != X86EMUL_OKAY )
-            return rc;
-        return hvmemul_do_mmio(gpa, &reps, bytes, 0, IOREQ_READ, 0, p_data);
+        rc = hvmemul_linear_to_phys(addr, &gpa, chunk, &reps, pfec,
+                                    hvmemul_ctxt);
+        while ( rc == X86EMUL_OKAY )
+        {
+            rc = hvmemul_do_mmio(gpa, &reps, chunk, 0, IOREQ_READ, 0, p_data);
+            if ( rc != X86EMUL_OKAY || bytes == chunk )
+                break;
+            addr += chunk;
+            off += chunk;
+            p_data += chunk;
+            bytes -= chunk;
+            if ( off < PAGE_SIZE )
+                gpa += chunk;
+            if ( bytes < chunk )
+                chunk = bytes;
+            if ( off == PAGE_SIZE )
+            {
+                rc = hvmemul_linear_to_phys(addr, &gpa, chunk, &reps, pfec,
+                                            hvmemul_ctxt);
+                off = 0;
+            }
+        }
+        return rc;
     case HVMCOPY_gfn_paged_out:
         return X86EMUL_RETRY;
     case HVMCOPY_gfn_shared:
@@ -537,6 +578,7 @@ static int hvmemul_write(
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
     struct vcpu *curr = current;
     unsigned long addr, reps = 1;
+    unsigned int off, chunk = min_t(unsigned int, bytes, sizeof(long));
     uint32_t pfec = PFEC_page_present | PFEC_write_access;
     struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
     paddr_t gpa;
@@ -546,14 +588,30 @@ static int hvmemul_write(
         seg, offset, bytes, &reps, hvm_access_write, hvmemul_ctxt, &addr);
     if ( rc != X86EMUL_OKAY )
         return rc;
+    off = addr & (PAGE_SIZE - 1);
+    /* See the respective comment in __hvmemul_read(). */
+    while ( chunk & (chunk - 1) )
+        chunk &= chunk - 1;
+    if ( off + bytes > PAGE_SIZE )
+        while ( off & (chunk - 1) )
+            chunk >>= 1;
 
     if ( unlikely(vio->mmio_gva == (addr & PAGE_MASK)) && vio->mmio_gva )
     {
-        unsigned int off = addr & (PAGE_SIZE - 1);
         gpa = (((paddr_t)vio->mmio_gpfn << PAGE_SHIFT) | off);
-        if ( (off + bytes) <= PAGE_SIZE )
-            return hvmemul_do_mmio(gpa, &reps, bytes, 0,
-                                   IOREQ_WRITE, 0, p_data);
+        while ( (off + chunk) <= PAGE_SIZE )
+        {
+            rc = hvmemul_do_mmio(gpa, &reps, chunk, 0, IOREQ_WRITE, 0, p_data);
+            if ( rc != X86EMUL_OKAY || bytes == chunk )
+                return rc;
+            addr += chunk;
+            off += chunk;
+            gpa += chunk;
+            p_data += chunk;
+            bytes -= chunk;
+            if ( bytes < chunk )
+                chunk = bytes;
+        }
     }
 
     if ( (seg != x86_seg_none) &&
@@ -569,12 +627,29 @@ static int hvmemul_write(
     case HVMCOPY_unhandleable:
         return X86EMUL_UNHANDLEABLE;
     case HVMCOPY_bad_gfn_to_mfn:
-        rc = hvmemul_linear_to_phys(
-            addr, &gpa, bytes, &reps, pfec, hvmemul_ctxt);
-        if ( rc != X86EMUL_OKAY )
-            return rc;
-        return hvmemul_do_mmio(gpa, &reps, bytes, 0,
-                               IOREQ_WRITE, 0, p_data);
+        rc = hvmemul_linear_to_phys(addr, &gpa, chunk, &reps, pfec,
+                                    hvmemul_ctxt);
+        while ( rc == X86EMUL_OKAY )
+        {
+            rc = hvmemul_do_mmio(gpa, &reps, chunk, 0, IOREQ_WRITE, 0, p_data);
+            if ( rc != X86EMUL_OKAY || bytes == chunk )
+                break;
+            addr += chunk;
+            off += chunk;
+            p_data += chunk;
+            bytes -= chunk;
+            if ( off < PAGE_SIZE )
+                gpa += chunk;
+            if ( bytes < chunk )
+                chunk = bytes;
+            if ( off == PAGE_SIZE )
+            {
+                rc = hvmemul_linear_to_phys(addr, &gpa, chunk, &reps, pfec,
+                                            hvmemul_ctxt);
+                off = 0;
+            }
+        }
+        return rc;
     case HVMCOPY_gfn_paged_out:
         return X86EMUL_RETRY;
     case HVMCOPY_gfn_shared:

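The chunk-size selection near the top of the patch can be exercised on its own. Below is a standalone sketch of that logic, under the assumptions that PAGE_SIZE is 4096 (x86) and the build is LP64 so sizeof(long) == 8, with Xen's min_t() replaced by a plain conditional:

```c
/* Standalone sketch of the initial chunk-size computation from the
 * patch above.  PAGE_SIZE and the LP64 sizeof(long) == 8 are
 * assumptions of this sketch, not taken from Xen headers. */
#define PAGE_SIZE 4096u

static unsigned int initial_chunk(unsigned int off, unsigned int bytes)
{
    unsigned int chunk = bytes < sizeof(long) ? bytes
                                              : (unsigned int)sizeof(long);

    /* Reduce to the largest power of 2 not greater than the total size
     * by repeatedly clearing the lowest set bit. */
    while ( chunk & (chunk - 1) )
        chunk &= chunk - 1;

    /* If the access crosses a page boundary, shrink further until the
     * offset is chunk-aligned, so no single request straddles a page. */
    if ( off + bytes > PAGE_SIZE )
        while ( off & (chunk - 1) )
            chunk >>= 1;

    return chunk;
}
```

For a 16-byte movdqu at an aligned offset this yields 8-byte requests, while an access straddling a page boundary at an odd offset starts with a 1-byte chunk and works its way back up.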


Thread overview: 13+ messages
2013-09-18  8:07 [PATCH] libxl: Remove qxl support for the 4.3 release Jan Beulich
2013-09-18 12:29 ` Fabio Fantoni
2013-09-18 12:42   ` Jan Beulich
2013-09-18 14:12     ` Fabio Fantoni
2013-09-18 14:30       ` Jan Beulich
2013-09-18 15:26         ` Fabio Fantoni
2013-09-18 15:35           ` Jan Beulich
2013-09-19  9:22             ` Fabio Fantoni
2013-09-19 10:01               ` Jan Beulich
2013-09-19 10:04                 ` George Dunlap
2013-09-19 11:08                   ` Fabio Fantoni
2013-09-19 10:09 ` George Dunlap
2013-09-19 10:15   ` Processed: " xen
