From: Julien Grall <julien.grall@arm.com>
To: Shanker Donthineni <shankerd@codeaurora.org>,
xen-devel <xen-devel@lists.xensource.com>,
Stefano Stabellini <sstabellini@kernel.org>
Cc: Philip Elcan <pelcan@codeaurora.org>,
Vikram Sethi <vikrams@codeaurora.org>
Subject: Re: [PATCH V4 10/11] xen/arm: io: Use binary search for mmio handler lookup
Date: Thu, 14 Jul 2016 17:46:02 +0100
Message-ID: <5787C1CA.6050003@arm.com>
In-Reply-To: <1468513081-12002-2-git-send-email-shankerd@codeaurora.org>
Hi Shanker,
On 14/07/16 17:18, Shanker Donthineni wrote:
> As the number of I/O handlers increases, the overhead of a linear
> lookup also increases. The system might have a maximum of 144
> (assuming CONFIG_NR_CPUS=128) mmio handlers, so in the worst case a
> lookup would take 144 iterations to find a matching handler. It is
> time to switch from a linear search (complexity O(n)) to a binary
> search (complexity O(log n)) to reduce the mmio handler lookup
> overhead.
>
> Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
> ---
> Changes since v3:
> Moved the function bsearch() to common file xen/common/bsearch.c.
>
> Changes since v2:
> Converted mmio lookup code to a critical section.
> Copied the function bsearch() from the Linux kernel.
>
> xen/arch/arm/io.c | 42 +++++++++++++++++++++++++-----------------
> 1 file changed, 25 insertions(+), 17 deletions(-)
>
> diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
> index 40330f0..0471ba8 100644
> --- a/xen/arch/arm/io.c
> +++ b/xen/arch/arm/io.c
> @@ -20,6 +20,8 @@
> #include <xen/lib.h>
> #include <xen/spinlock.h>
> #include <xen/sched.h>
> +#include <xen/sort.h>
> +#include <xen/bsearch.h>
> #include <asm/current.h>
> #include <asm/mmio.h>
>
> @@ -70,27 +72,29 @@ static int handle_write(const struct mmio_handler *handler, struct vcpu *v,
> handler->priv);
> }
>
> -static const struct mmio_handler *find_mmio_handler(struct domain *d,
> - paddr_t gpa)
> +static int cmp_mmio_handler(const void *key, const void *elem)
Would it be worth mentioning in a comment that we don't support overlapping regions?
> {
> - const struct mmio_handler *handler;
> - unsigned int i;
> - struct vmmio *vmmio = &d->arch.vmmio;
> + const struct mmio_handler *handler0 = key;
> + const struct mmio_handler *handler1 = elem;
>
> - read_lock(&vmmio->lock);
> + if ( handler0->addr < handler1->addr )
> + return -1;
>
> - for ( i = 0; i < vmmio->num_entries; i++ )
> - {
> - handler = &vmmio->handlers[i];
> + if ( handler0->addr > (handler1->addr + handler1->size) )
> + return 1;
>
> - if ( (gpa >= handler->addr) &&
> - (gpa < (handler->addr + handler->size)) )
> - break;
> - }
> + return 0;
> +}
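For illustration only (this is not the patch's code), here is a minimal, self-contained sketch of such a point-lookup comparator used with the C library's bsearch(); the struct and function names are made up for the example. Note that it uses an exclusive upper bound (>=), matching the half-open [addr, addr + size) convention of the old linear search, whereas the hunk above compares with '>', which would also treat gpa == addr + size as a hit:

```c
/* Sketch: point lookup over sorted, non-overlapping [addr, addr + size)
 * ranges using the C library bsearch().  Names are illustrative only. */
#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>

struct range { uint64_t addr; uint64_t size; };

static int cmp_range(const void *key, const void *elem)
{
    const struct range *k = key;
    const struct range *r = elem;

    if ( k->addr < r->addr )
        return -1;
    if ( k->addr >= r->addr + r->size )   /* exclusive upper bound */
        return 1;
    return 0;                             /* k->addr falls inside r */
}

static const struct range *find_range(uint64_t addr,
                                      const struct range *tbl, size_t n)
{
    struct range key = { .addr = addr };

    return bsearch(&key, tbl, n, sizeof(*tbl), cmp_range);
}
```

With '>' instead of '>=', find_range(addr + size) would wrongly match the preceding range.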
>
> - if ( i == vmmio->num_entries )
> - handler = NULL;
> +static const struct mmio_handler *find_mmio_handler(struct vcpu *v, paddr_t gpa)
Why have you changed the prototype of find_mmio_handler?
> +{
> + struct vmmio *vmmio = &v->domain->arch.vmmio;
> + struct mmio_handler key = {.addr = gpa};
I know it is not currently the case, but should we not take the size of
the access into account?
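For what it's worth, a hypothetical size-aware variant could carry the access size in the key and only match when the whole access [gpa, gpa + size) lies inside a handler's range; a straddling access then finds no handler. This is only a sketch with made-up names, not the patch's code:

```c
/* Hypothetical sketch: a comparator that also checks the access size,
 * matching only when the whole access [gpa, gpa + size) lies inside an
 * entry's [addr, addr + size) range.  Names are illustrative only. */
#include <stdlib.h>
#include <stdint.h>

struct mmio_range { uint64_t addr; uint64_t size; };

static int cmp_access(const void *key, const void *elem)
{
    const struct mmio_range *access = key;   /* addr = gpa, size = width */
    const struct mmio_range *range = elem;

    if ( access->addr < range->addr )
        return -1;
    if ( access->addr + access->size > range->addr + range->size )
        return 1;
    return 0;   /* access fully contained in the range */
}
```

An access that straddles a range boundary compares as "greater" and ends up matching nothing, so bsearch() returns NULL for it.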
> + const struct mmio_handler *handler;
>
> + read_lock(&vmmio->lock);
> + handler = bsearch(&key, vmmio->handlers, vmmio->num_entries,
> + sizeof(*handler), cmp_mmio_handler);
> read_unlock(&vmmio->lock);
>
> return handler;
> @@ -99,9 +103,9 @@ static const struct mmio_handler *find_mmio_handler(struct domain *d,
> int handle_mmio(mmio_info_t *info)
> {
> struct vcpu *v = current;
> - const struct mmio_handler *handler = NULL;
> + const struct mmio_handler *handler;
Why this change?
>
> - handler = find_mmio_handler(v->domain, info->gpa);
> + handler = find_mmio_handler(v, info->gpa);
ditto.
> if ( !handler )
> return 0;
>
> @@ -131,6 +135,10 @@ void register_mmio_handler(struct domain *d,
>
> vmmio->num_entries++;
>
> + /* Sort mmio handlers in ascending order based on base address */
> + sort(vmmio->handlers, vmmio->num_entries, sizeof(struct mmio_handler),
> + cmp_mmio_handler, NULL);
The indentation looks wrong here.
> +
> write_unlock(&vmmio->lock);
> }
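As an aside, since register_mmio_handler() appends one entry at a time to an array that is already sorted, shifting the new entry into place would be enough (O(n) moves) instead of re-sorting the whole array (O(n log n)). A rough sketch, with made-up names:

```c
/* Rough sketch (not the patch's code): place a newly registered entry
 * into an already-sorted array by shifting it left past larger
 * neighbours, instead of calling sort() on the whole array. */
#include <stdint.h>

struct handler { uint64_t addr; uint64_t size; };

static void insert_sorted(struct handler *tbl, unsigned int n,
                          struct handler new)
{
    unsigned int i = n;   /* new entry logically appended at index n */

    while ( i > 0 && tbl[i - 1].addr > new.addr )
    {
        tbl[i] = tbl[i - 1];
        i--;
    }
    tbl[i] = new;
}
```

Registration is rare and the array is small, so this is a micro-optimisation at best; re-sorting is arguably simpler.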
>
>
--
Julien Grall