From: Paul Durrant <Paul.Durrant@citrix.com>
To: Paul Durrant <Paul.Durrant@citrix.com>, 'Chao Gao' <chao.gao@intel.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"Tim (Xen.org)" <tim@xen.org>,
	George Dunlap <George.Dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages
Date: Wed, 13 Dec 2017 17:50:04 +0000	[thread overview]
Message-ID: <b4c030d86cf54a53b5cad54d1041d988@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <b5e1ec23f3b7412d984f0c9aa5ec888e@AMSPEX02CL03.citrite.net>

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> Of Paul Durrant
> Sent: 13 December 2017 10:49
> To: 'Chao Gao' <chao.gao@intel.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu
> <wei.liu2@citrix.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Tim
> (Xen.org) <tim@xen.org>; George Dunlap <George.Dunlap@citrix.com>;
> xen-devel@lists.xen.org; Jan Beulich <jbeulich@suse.com>; Ian Jackson
> <Ian.Jackson@citrix.com>
> Subject: Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of
> IOREQ page to 4 pages
> 
> > -----Original Message-----
> > From: Chao Gao [mailto:chao.gao@intel.com]
> > Sent: 12 December 2017 23:39
> > To: Paul Durrant <Paul.Durrant@citrix.com>
> > Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu
> > <wei.liu2@citrix.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>;
> Tim
> > (Xen.org) <tim@xen.org>; George Dunlap <George.Dunlap@citrix.com>;
> > xen-devel@lists.xen.org; Jan Beulich <jbeulich@suse.com>; Ian Jackson
> > <Ian.Jackson@citrix.com>
> > Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
> > pages
> >
> > On Tue, Dec 12, 2017 at 09:07:46AM +0000, Paul Durrant wrote:
> > >> -----Original Message-----
> > >[snip]
> > >>
> > >> Hi, Paul.
> > >>
> > >> I merged the two QEMU patches and the privcmd patch [1], and did some
> > >> tests. I encountered a small issue and am reporting it to you, so you
> > >> can pay extra attention to it when doing your own tests. The symptom
> > >> is that using the new interface to map the grant table in
> > >> xc_dom_gnttab_seed() always fails. After adding some printk()s in
> > >> privcmd, I found it is xen_remap_domain_gfn_array() that fails, with
> > >> error code -16 (EBUSY). Mapping the ioreq server doesn't have such an
> > >> issue.
> > >>
> > >> [1]
> > >> http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=ce59a05e6712
> > >>
> > >
> > >Chao,
> > >
> > >  That privcmd patch is out of date. I've just pushed a new one:
> > >
> > > http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=9f00199f5f12cef401c6370c94a1140de9b318fc
> > >
> > >  Give that a try. I've been using it for a few weeks now.
> >
> > Mapping the ioreq server always fails, while mapping the grant table succeeds.
> >
> > QEMU fails with the following log:
> > xenforeignmemory: error: ioctl failed: Device or resource busy
> > qemu-system-i386: failed to map ioreq server resources: error 16 handle=0x5614a6df5e00
> > qemu-system-i386: xen hardware virtual machine initialisation failed
> >
> > Xen encountered the following error:
> > (XEN) [13118.909787] mm.c:1003:d0v109 pg_owner d2 l1e_owner d0, but real_pg_owner d0
> > (XEN) [13118.918122] mm.c:1079:d0v109 Error getting mfn 5da5841 (pfn ffffffffffffffff) from L1 entry 8000005da5841227 for l1e_owner d0, pg_owner d2
> 
> Hmm. That looks like it is because the ioreq server pages are not owned by
> the correct domain. The Xen patch series underwent some changes later in
> review and I did not re-test my QEMU patch after that so I wonder if
> mapping IOREQ pages has simply become broken. I'll investigate.
> 

I have reproduced the problem locally now. Will try to figure out the bug tomorrow.
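
For anyone following along, the "failed to map ioreq server resources" message above presumably comes from the new resource-mapping call in libxenforeignmemory (the same underlying privcmd path that xc_dom_gnttab_seed() exercises via XENMEM_resource_grant_table). Below is a minimal sketch of that call, for illustration only: the domain id, ioreq server id and frame count are placeholders rather than values taken from this thread, and "error 16" is simply EBUSY reported back from the ioctl.

/* Minimal userspace sketch, for illustration only. Assumptions: a
 * Xen 4.11-era libxenforeignmemory that provides the resource-mapping
 * API; domid, ioservid and nr_frames are placeholders. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#include <xenforeignmemory.h>
#include <xen/xen.h>            /* domid_t */
#include <xen/memory.h>         /* XENMEM_resource_ioreq_server */

static void *map_ioreq_pages(domid_t domid, unsigned int ioservid,
                             unsigned long nr_frames)
{
    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
    xenforeignmemory_resource_handle *fres;
    void *addr = NULL;

    if (!fmem)
        return NULL;

    /* Frame 0 of an ioreq server resource is the buffered ioreq page;
     * the synchronous ioreq pages follow it. */
    fres = xenforeignmemory_map_resource(fmem, domid,
                                         XENMEM_resource_ioreq_server,
                                         ioservid, 0 /* frame */,
                                         nr_frames, &addr,
                                         PROT_READ | PROT_WRITE, 0);
    if (!fres) {
        /* errno 16 (EBUSY) is what shows up as "error 16" above. */
        fprintf(stderr, "map_resource failed: %s (errno %d)\n",
                strerror(errno), errno);
        xenforeignmemory_close(fmem);
        return NULL;
    }

    /* NB: fres/fmem are intentionally kept open while addr is in use. */
    return addr;
}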

  Paul

>   Paul
> 
> >
> > I only fixed the obvious issues in your privcmd patch, with this incremental patch:
> > --- a/arch/x86/xen/mmu.c
> > +++ b/arch/x86/xen/mmu.c
> > @@ -181,7 +181,7 @@ int xen_remap_domain_gfn_range(struct vm_area_struct *vma,
> >         if (xen_feature(XENFEAT_auto_translated_physmap))
> >                 return -EOPNOTSUPP;
> >
> > -       return do_remap_gfn(vma, addr, &gfn, nr, NULL, prot, domid, pages);
> > +       return do_remap_pfn(vma, addr, &gfn, nr, NULL, prot, domid, false, pages);
> >  }
> >  EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_range);
> >
> > @@ -200,8 +200,8 @@ int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
> >          * cause of "wrong memory was mapped in".
> >          */
> >         BUG_ON(err_ptr == NULL);
> > -        do_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
> > -                    false, pages);
> > +       return do_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid,
> > +                       false, pages);
> >  }
> >  EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_array);
> >
> > Thanks
> > Chao
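
Just to spell out why the second hunk matters: without the "return", xen_remap_domain_gfn_array() falls off the end of the function, so its caller gets an indeterminate value back even when the mapping succeeded. A hypothetical caller is sketched below, for illustration only; this is not the real privcmd code and the function and variable names are made up.

#include <linux/mm.h>
#include <xen/xen-ops.h>        /* xen_remap_domain_gfn_array() */

/* Hypothetical caller (not the real privcmd code; names are made up).
 * Per-frame errors land in errs[]; the overall return value is what the
 * missing "return" above would have thrown away. */
static int sketch_map_gfns(struct vm_area_struct *vma, unsigned int domid,
                           xen_pfn_t *gfns, int nr, int *errs,
                           struct page **pages)
{
        int i, rc;

        rc = xen_remap_domain_gfn_array(vma, vma->vm_start, gfns, nr,
                                        errs, vma->vm_page_prot, domid,
                                        pages);
        if (rc < 0)
                return rc;              /* whole request failed */

        for (i = 0; i < nr; i++) {
                if (errs[i])            /* e.g. -EBUSY (-16), as seen earlier */
                        return errs[i];
        }

        return 0;
}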
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Thread overview: 56+ messages
2017-12-06  7:50 [RFC Patch v4 0/8] Extend resources to support more vcpus in single VM Chao Gao
2017-12-06  7:50 ` [RFC Patch v4 1/8] ioreq: remove most 'buf' parameter from static functions Chao Gao
2017-12-06 14:44   ` Paul Durrant
2017-12-06  8:37     ` Chao Gao
2017-12-06  7:50 ` [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages Chao Gao
2017-12-06 15:04   ` Paul Durrant
2017-12-06  9:02     ` Chao Gao
2017-12-06 16:10       ` Paul Durrant
2017-12-07  8:41         ` Paul Durrant
2017-12-07  6:56           ` Chao Gao
2017-12-08 11:06             ` Paul Durrant
2017-12-12  1:03               ` Chao Gao
2017-12-12  9:07                 ` Paul Durrant
2017-12-12 23:39                   ` Chao Gao
2017-12-13 10:49                     ` Paul Durrant
2017-12-13 17:50                       ` Paul Durrant [this message]
2017-12-14 14:50                         ` Paul Durrant
2017-12-15  0:35                           ` Chao Gao
2017-12-15  9:40                             ` Paul Durrant
2018-04-18  8:19   ` Jan Beulich
2017-12-06  7:50 ` [RFC Patch v4 3/8] xl/acpi: unify the computation of lapic_id Chao Gao
2018-02-22 18:05   ` Wei Liu
2017-12-06  7:50 ` [RFC Patch v4 4/8] hvmloader: boot cpu through broadcast Chao Gao
2018-02-22 18:44   ` Wei Liu
2018-02-23  8:41     ` Jan Beulich
2018-02-23 16:42   ` Roger Pau Monné
2018-02-24  5:49     ` Chao Gao
2018-02-26  8:28       ` Jan Beulich
2018-02-26 12:33         ` Chao Gao
2018-02-26 14:19           ` Roger Pau Monné
2018-04-18  8:38   ` Jan Beulich
2018-04-18 11:20     ` Chao Gao
2018-04-18 11:50       ` Jan Beulich
2017-12-06  7:50 ` [RFC Patch v4 5/8] Tool/ACPI: DSDT extension to support more vcpus Chao Gao
2017-12-06  7:50 ` [RFC Patch v4 6/8] hvmload: Add x2apic entry support in the MADT and SRAT build Chao Gao
2018-04-18  8:48   ` Jan Beulich
2017-12-06  7:50 ` [RFC Patch v4 7/8] x86/hvm: bump the number of pages of shadow memory Chao Gao
2018-02-27 14:17   ` George Dunlap
2018-04-18  8:53   ` Jan Beulich
2018-04-18 11:39     ` Chao Gao
2018-04-18 11:50       ` Andrew Cooper
2018-04-18 11:59       ` Jan Beulich
2017-12-06  7:50 ` [RFC Patch v4 8/8] x86/hvm: bump the maximum number of vcpus to 512 Chao Gao
2018-02-22 18:46   ` Wei Liu
2018-02-23  8:50     ` Jan Beulich
2018-02-23 17:18       ` Wei Liu
2018-02-23 18:11   ` Roger Pau Monné
2018-02-24  6:26     ` Chao Gao
2018-02-26  8:26     ` Jan Beulich
2018-02-26 13:11       ` Chao Gao
2018-02-26 16:10         ` Jan Beulich
2018-03-01  5:21           ` Chao Gao
2018-03-01  7:17             ` Juergen Gross
2018-03-01  7:37             ` Jan Beulich
2018-03-01  7:11               ` Chao Gao
2018-02-27 14:59         ` George Dunlap
