From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Paul Durrant <Paul.Durrant@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	George Dunlap <George.Dunlap@citrix.com>,
	"Lv, Zhiyuan" <zhiyuan.lv@intel.com>
Subject: Re: [PATCH v6 3/4] x86/ioreq server: Handle read-modify-write cases for p2m_ioreq_server pages.
Date: Fri, 9 Sep 2016 14:21:29 +0800	[thread overview]
Message-ID: <57D254E9.9000807@linux.intel.com> (raw)
In-Reply-To: <57D247F6.9010503@linux.intel.com>



On 9/9/2016 1:26 PM, Yu Zhang wrote:
>
> >>> On 02.09.16 at 12:47, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -95,6 +95,41 @@ static const struct hvm_io_handler null_handler = {
> >      .ops = &null_ops
> >  };
> >
> > +static int mem_read(const struct hvm_io_handler *io_handler,
> > +                    uint64_t addr,
> > +                    uint32_t size,
> > +                    uint64_t *data)
> > +{
> > +    struct domain *currd = current->domain;
> > +    unsigned long gmfn = paddr_to_pfn(addr);
> > +    unsigned long offset = addr & ~PAGE_MASK;
> > +    struct page_info *page = get_page_from_gfn(currd, gmfn, NULL,
> > +                                               P2M_UNSHARE);
> > +    uint8_t *p;
>
> const (and preferably also void)
>

Thanks. I think this variable will no longer be needed once we switch to
hvm_copy_from_guest_phys().

> > +    ASSERT(offset + size < PAGE_SIZE);
>
> Surely <= ?
>

Yes. Thanks.

> > +    if ( !page )
> > +        return X86EMUL_UNHANDLEABLE;
> > +
> > +    p = __map_domain_page(page);
> > +    p += offset;
> > +    memcpy(data, p, size);
> > +
> > +    unmap_domain_page(p);
> > +    put_page(page);
>
> But anyway - I think rather than all this open coding you would
> better call hvm_copy_from_guest_phys().
>

Agree.

> > +static const struct hvm_io_ops mem_ops = {
> > +    .read = mem_read,
> > +    .write = null_write
> > +};
> > +
> > +static const struct hvm_io_handler mem_handler = {
> > +    .ops = &mem_ops
> > +};
>
> I think the mem_ prefix for both objects is a bad one, considering
> that this isn't suitable for general memory handling.

How about ioreq_server_read/ioreq_server_ops? They are only used for this
special p2m type.

>
> > @@ -204,7 +239,15 @@ static int hvmemul_do_io(
> >          /* If there is no suitable backing DM, just ignore accesses */
> >          if ( !s )
> >          {
> > -            rc = hvm_process_io_intercept(&null_handler, &p);
> > +            /*
> > +             * For p2m_ioreq_server pages accessed with 
> read-modify-write
> > +             * instructions, we provide a read handler to copy the 
> data to
> > +             * the buffer.
> > +             */
> > +            if ( p2mt == p2m_ioreq_server )
>
> Please add unlikely() here, or aid the compiler in avoiding any
> branch by ...
>
> > +                rc = hvm_process_io_intercept(&mem_handler, &p);
> > +            else
> > +                rc = hvm_process_io_intercept(&null_handler, &p);
>
> ... using a conditional expression for the first function argument.
>
OK. I prefer to add the unlikely().

> And the comment ahead of the if() now also needs adjustment
> (perhaps you want to merge the one you add into that one).
>

OK. And IIUC, you mean merging it into the original comment above the "if ( !s )"?
Like this:
         /*
          * For p2m_ioreq_server pages accessed with read-modify-write
          * instructions, we provide a read handler to copy the data to
          * the buffer. For other cases, if there is no suitable backing
          * DM, we just ignore accesses.
          */
         if ( !s )

> Jan
>

Thanks
Yu

