From: Chao Gao <chao.gao@intel.com>
To: Paul Durrant <Paul.Durrant@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"Tim (Xen.org)" <tim@xen.org>,
	George Dunlap <George.Dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages
Date: Wed, 13 Dec 2017 07:39:20 +0800	[thread overview]
Message-ID: <20171212233918.GA36363@op-computing> (raw)
In-Reply-To: <29f1803cb91f44fc86d6832ea44a4f27@AMSPEX02CL03.citrite.net>

On Tue, Dec 12, 2017 at 09:07:46AM +0000, Paul Durrant wrote:
>> -----Original Message-----
>[snip]
>> 
>> Hi, Paul.
>> 
>> I merged the two QEMU patches and the privcmd patch [1], and ran some
>> tests. I hit a small issue and am reporting it so you can watch for it
>> in your own testing. The symptom is that using the new interface to map
>> the grant table in xc_dom_gnttab_seed() always fails. After adding some
>> printk()s in privcmd, I found that it is
>> xen_remap_domain_gfn_array() that fails, with error code -16. Mapping
>> the ioreq server does not have this issue.
>> 
>> [1]
>> http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=ce5
>> 9a05e6712
>> 
>
>Chao,
>
>  That privcmd patch is out of date. I've just pushed a new one:
>
>http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=9f00199f5f12cef401c6370c94a1140de9b318fc
>
>  Give that a try. I've been using it for a few weeks now.

With the new privcmd patch the symptom is reversed: mapping the ioreq server
always fails, while mapping the grant table succeeds.

QEMU fails with the following log:
xenforeignmemory: error: ioctl failed: Device or resource busy
qemu-system-i386: failed to map ioreq server resources: error 16
handle=0x5614a6df5e00
qemu-system-i386: xen hardware virtual machine initialisation failed

Xen logs the following errors:
(XEN) [13118.909787] mm.c:1003:d0v109 pg_owner d2 l1e_owner d0, but real_pg_owner d0
(XEN) [13118.918122] mm.c:1079:d0v109 Error getting mfn 5da5841 (pfn ffffffffffffffff) from L1 entry 8000005da5841227 for l1e_owner d0, pg_owner d2

I fixed only the obvious issues, with this incremental patch on top of your
privcmd patch:
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -181,7 +181,7 @@ int xen_remap_domain_gfn_range(struct vm_area_struct *vma,
        if (xen_feature(XENFEAT_auto_translated_physmap))
                return -EOPNOTSUPP;
 
-       return do_remap_gfn(vma, addr, &gfn, nr, NULL, prot, domid, pages);
+       return do_remap_pfn(vma, addr, &gfn, nr, NULL, prot, domid, false, pages);
 }
 EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_range);
 
@@ -200,8 +200,8 @@ int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
         * cause of "wrong memory was mapped in".
         */
        BUG_ON(err_ptr == NULL);
-        do_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
-                    false, pages);
+       return do_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
+                       false, pages);
 }
 EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_array);

Thanks
Chao

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Thread overview: 56+ messages
2017-12-06  7:50 [RFC Patch v4 0/8] Extend resources to support more vcpus in single VM Chao Gao
2017-12-06  7:50 ` [RFC Patch v4 1/8] ioreq: remove most 'buf' parameter from static functions Chao Gao
2017-12-06 14:44   ` Paul Durrant
2017-12-06  8:37     ` Chao Gao
2017-12-06  7:50 ` [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages Chao Gao
2017-12-06 15:04   ` Paul Durrant
2017-12-06  9:02     ` Chao Gao
2017-12-06 16:10       ` Paul Durrant
2017-12-07  8:41         ` Paul Durrant
2017-12-07  6:56           ` Chao Gao
2017-12-08 11:06             ` Paul Durrant
2017-12-12  1:03               ` Chao Gao
2017-12-12  9:07                 ` Paul Durrant
2017-12-12 23:39                   ` Chao Gao [this message]
2017-12-13 10:49                     ` Paul Durrant
2017-12-13 17:50                       ` Paul Durrant
2017-12-14 14:50                         ` Paul Durrant
2017-12-15  0:35                           ` Chao Gao
2017-12-15  9:40                             ` Paul Durrant
2018-04-18  8:19   ` Jan Beulich
2017-12-06  7:50 ` [RFC Patch v4 3/8] xl/acpi: unify the computation of lapic_id Chao Gao
2018-02-22 18:05   ` Wei Liu
2017-12-06  7:50 ` [RFC Patch v4 4/8] hvmloader: boot cpu through broadcast Chao Gao
2018-02-22 18:44   ` Wei Liu
2018-02-23  8:41     ` Jan Beulich
2018-02-23 16:42   ` Roger Pau Monné
2018-02-24  5:49     ` Chao Gao
2018-02-26  8:28       ` Jan Beulich
2018-02-26 12:33         ` Chao Gao
2018-02-26 14:19           ` Roger Pau Monné
2018-04-18  8:38   ` Jan Beulich
2018-04-18 11:20     ` Chao Gao
2018-04-18 11:50       ` Jan Beulich
2017-12-06  7:50 ` [RFC Patch v4 5/8] Tool/ACPI: DSDT extension to support more vcpus Chao Gao
2017-12-06  7:50 ` [RFC Patch v4 6/8] hvmload: Add x2apic entry support in the MADT and SRAT build Chao Gao
2018-04-18  8:48   ` Jan Beulich
2017-12-06  7:50 ` [RFC Patch v4 7/8] x86/hvm: bump the number of pages of shadow memory Chao Gao
2018-02-27 14:17   ` George Dunlap
2018-04-18  8:53   ` Jan Beulich
2018-04-18 11:39     ` Chao Gao
2018-04-18 11:50       ` Andrew Cooper
2018-04-18 11:59       ` Jan Beulich
2017-12-06  7:50 ` [RFC Patch v4 8/8] x86/hvm: bump the maximum number of vcpus to 512 Chao Gao
2018-02-22 18:46   ` Wei Liu
2018-02-23  8:50     ` Jan Beulich
2018-02-23 17:18       ` Wei Liu
2018-02-23 18:11   ` Roger Pau Monné
2018-02-24  6:26     ` Chao Gao
2018-02-26  8:26     ` Jan Beulich
2018-02-26 13:11       ` Chao Gao
2018-02-26 16:10         ` Jan Beulich
2018-03-01  5:21           ` Chao Gao
2018-03-01  7:17             ` Juergen Gross
2018-03-01  7:37             ` Jan Beulich
2018-03-01  7:11               ` Chao Gao
2018-02-27 14:59         ` George Dunlap
