From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Paul Durrant <paul.durrant@citrix.com>, xen-devel@lists.xenproject.org
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v5 02/16] x86/hvm: remove multiple open coded 'chunking' loops
Date: Thu, 2 Jul 2015 16:37:45 +0100	[thread overview]
Message-ID: <55955AC9.4080406@citrix.com> (raw)
In-Reply-To: <1435669558-5421-3-git-send-email-paul.durrant@citrix.com>

On 30/06/15 14:05, Paul Durrant wrote:
> ...in hvmemul_read/write()
>
> Add hvmemul_phys_mmio_access() and hvmemul_linear_mmio_access() functions
> to reduce code duplication.
>
> NOTE: This patch also introduces a change in 'chunking' around a page
>       boundary. Previously (for example) an 8 byte access at the last
>       byte of a page would get carried out as 8 single-byte accesses.
>       It will now be carried out as a single-byte access, followed by
>       a 4-byte access, a 2-byte access and then another single-byte
>       access.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Cc: Keir Fraser <keir@xen.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
>  xen/arch/x86/hvm/emulate.c |  223 +++++++++++++++++++++++---------------------
>  1 file changed, 116 insertions(+), 107 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 8b60843..b67f5db 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -539,6 +539,117 @@ static int hvmemul_virtual_to_linear(
>      return X86EMUL_EXCEPTION;
>  }
>  
> +static int hvmemul_phys_mmio_access(
> +    paddr_t gpa, unsigned int size, uint8_t dir, uint8_t *buffer)
> +{
> +    unsigned long one_rep = 1;
> +    unsigned int chunk;
> +    int rc;
> +
> +    /* Accesses must fall within a page */

Full stop.

> +    BUG_ON((gpa & (PAGE_SIZE - 1)) + size > PAGE_SIZE);

~PAGE_MASK as opposed to (PAGE_SIZE - 1).
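
Something along the lines of (just a sketch):

    BUG_ON((gpa & ~PAGE_MASK) + size > PAGE_SIZE);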

> +
> +    /*
> +     * hvmemul_do_io() cannot handle non-power-of-2 accesses or
> +     * accesses larger than sizeof(long), so choose the highest power
> +     * of 2 not exceeding sizeof(long) as the 'chunk' size.
> +     */
> +    chunk = 1 << (fls(size) - 1);

Depending on size, chunk can become undefined (shifting by 31 or -1) or
zero (shifting by 32).

How about

if ( size > sizeof(long) )
    chunk = sizeof(long);
else
    chunk = 1U << (fls(size) - 1);

?

> +    if ( chunk > sizeof (long) )
> +        chunk = sizeof (long);
> +
> +    for ( ;; )
> +    {
> +        rc = hvmemul_do_mmio_buffer(gpa, &one_rep, chunk, dir, 0,
> +                                    buffer);
> +        if ( rc != X86EMUL_OKAY )
> +            break;
> +
> +        /* Advance to the next chunk */

Full stop.

> +        gpa += chunk;
> +        buffer += chunk;
> +        size -= chunk;
> +
> +        if ( size == 0 )
> +            break;
> +
> +        /*
> +         * If the chunk now exceeds the remaining size, choose the next
> +         * lowest power of 2 that will fit.
> +         */
> +        while ( chunk > size )
> +            chunk >>= 1;
> +    }
> +
> +    return rc;
> +}
> +
> +static int hvmemul_linear_mmio_access(
> +    unsigned long gla, unsigned int size, uint8_t dir, uint8_t *buffer,
> +    uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool_t known_gpfn)
> +{
> +    struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
> +    unsigned long page_off = gla & (PAGE_SIZE - 1);

Can be int as opposed to long, and "offset" appears to be the prevailing
name.  Also, ~PAGE_MASK.
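
I.e. something like this (just a sketch):

    unsigned int offset = gla & ~PAGE_MASK;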

> +    unsigned int chunk;
> +    paddr_t gpa;
> +    unsigned long one_rep = 1;
> +    int rc;
> +
> +    chunk = min_t(unsigned int, size, PAGE_SIZE - page_off);
> +
> +    if ( known_gpfn )
> +        gpa = pfn_to_paddr(vio->mmio_gpfn) | page_off;
> +    else
> +    {
> +        rc = hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec,
> +                                    hvmemul_ctxt);
> +        if ( rc != X86EMUL_OKAY )
> +            return rc;
> +    }
> +
> +    for ( ;; )
> +    {
> +        rc = hvmemul_phys_mmio_access(gpa, chunk, dir, buffer);
> +        if ( rc != X86EMUL_OKAY )
> +            break;
> +
> +        gla += chunk;
> +        buffer += chunk;
> +        size -= chunk;
> +
> +        if ( size == 0 )
> +            break;
> +
> +        ASSERT((gla & (PAGE_SIZE - 1)) == 0);

~PAGE_MASK.
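
I.e. presumably:

    ASSERT((gla & ~PAGE_MASK) == 0);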

> +        ASSERT(size < PAGE_SIZE);

Nothing I can see here prevents size being greater than PAGE_SIZE.
chunk strictly stays within a page, but size -= chunk can still leave
size greater than a page.
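
Perhaps the assignment below wants to be a clamp, e.g. (just a sketch;
I haven't checked whether accesses that large can actually reach this
path):

    chunk = min_t(unsigned int, size, PAGE_SIZE);

with the ASSERT adjusted accordingly.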

~Andrew

> +        chunk = size;
> +        rc = hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec,
> +                                    hvmemul_ctxt);
> +        if ( rc != X86EMUL_OKAY )
> +            return rc;
> +    }
> +
> +    return rc;
> +}
> +

