From: Shanker Donthineni <shankerd@codeaurora.org>
To: Julien Grall <julien.grall@arm.com>,
xen-devel <xen-devel@lists.xensource.com>,
Stefano Stabellini <sstabellini@kernel.org>
Cc: Philip Elcan <pelcan@codeaurora.org>,
Vikram Sethi <vikrams@codeaurora.org>
Subject: Re: [PATCH V2 09/10] xen/arm: io: Use binary search for mmio handler lookup
Date: Mon, 27 Jun 2016 09:50:26 -0500
Message-ID: <57713D32.4050806@codeaurora.org>
In-Reply-To: <57712AC9.8050302@arm.com>
On 06/27/2016 08:31 AM, Julien Grall wrote:
> Hi Shanker,
>
> On 26/06/16 18:48, Shanker Donthineni wrote:
>> As the number of I/O handlers increases, the overhead associated with
>> linear lookup also increases. The system might have a maximum of 144
>> (assuming CONFIG_NR_CPUS=128) mmio handlers. In the worst case, it
>> would take 144 iterations to find a matching handler. Switch from a
>> linear search (complexity O(n)) to a binary search (complexity
>> O(log n)) to reduce the mmio handler lookup overhead.
>>
>> Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
>> ---
>>  xen/arch/arm/io.c | 50 +++++++++++++++++++++++++++++++++++++++-----------
>> 1 file changed, 39 insertions(+), 11 deletions(-)
>>
>> diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
>> index a5b2c2d..abf49fb 100644
>> --- a/xen/arch/arm/io.c
>> +++ b/xen/arch/arm/io.c
>> @@ -20,6 +20,7 @@
>> #include <xen/lib.h>
>> #include <xen/spinlock.h>
>> #include <xen/sched.h>
>> +#include <xen/sort.h>
>> #include <asm/current.h>
>> #include <asm/mmio.h>
>>
>> @@ -70,23 +71,38 @@ static int handle_write(const struct mmio_handler *handler, struct vcpu *v,
>> handler->priv);
>> }
>>
>> -int handle_mmio(mmio_info_t *info)
>> +const struct mmio_handler *find_mmio_handler(struct vcpu *v, paddr_t addr)
>> {
>> - struct vcpu *v = current;
>> - int i;
>> - const struct mmio_handler *handler = NULL;
>> const struct vmmio *vmmio = &v->domain->arch.vmmio;
>> + const struct mmio_handler *handler = vmmio->handlers;
>> + unsigned int eidx = vmmio->num_entries;
>> + unsigned int midx = eidx / 2;
>> + unsigned int sidx = 0;
>>
>> - for ( i = 0; i < vmmio->num_entries; i++ )
>> + /* Do binary search for matching mmio handler */
>> + while ( sidx != midx )
>> {
>> - handler = &vmmio->handlers[i];
>> -
>> - if ( (info->gpa >= handler->addr) &&
>> - (info->gpa < (handler->addr + handler->size)) )
>> - break;
>> + if ( addr < handler[midx].addr )
>> + eidx = midx;
>> + else
>> + sidx = midx;
>> + midx = sidx + (eidx - sidx) / 2;
>
> This binary search can be simplified. For instance, why do you want to
> compute midx at the end rather than at the beginning? That would avoid
> having "unsigned int midx = eidx / 2" at the top.
>
Let me try a do/while() loop to simplify the logic above, along the lines of the sketch below.
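A minimal sketch, assuming the existing struct vmmio fields (handlers,
num_entries) and the mmio_handler fields (addr, size); untested and for
illustration only:

const struct mmio_handler *find_mmio_handler(struct vcpu *v, paddr_t addr)
{
    const struct vmmio *vmmio = &v->domain->arch.vmmio;
    const struct mmio_handler *handler = vmmio->handlers;
    unsigned int sidx = 0, eidx = vmmio->num_entries;
    unsigned int midx;

    if ( !eidx )
        return NULL;

    /* Narrow [sidx, eidx) down to a single candidate entry. */
    do
    {
        midx = sidx + (eidx - sidx) / 2;
        if ( addr < handler[midx].addr )
            eidx = midx;
        else
            sidx = midx;
    } while ( (eidx - sidx) > 1 );

    /* Check whether the candidate entry actually covers the address. */
    if ( (addr >= handler[sidx].addr) &&
         (addr < (handler[sidx].addr + handler[sidx].size)) )
        return &handler[sidx];

    return NULL;
}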
>> }
>>
>> - if ( i == vmmio->num_entries )
>> + if ( (addr >= handler[sidx].addr) &&
>> + (addr < (handler[sidx].addr + handler[sidx].size)) )
>> + return handler + sidx;
>
> Please use a temporary variable for handler[sidx], so the code is
> easier to read.
>
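For illustration, with a temporary variable (hypothetical name h) the tail
of the lookup could read as follows; sketch only:

    const struct mmio_handler *h = &handler[sidx];

    if ( (addr >= h->addr) && (addr < (h->addr + h->size)) )
        return h;

    return NULL;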
>> +
>> + return NULL;
>> +}
>> +
>> +int handle_mmio(mmio_info_t *info)
>> +{
>> + const struct mmio_handler *handler;
>> + struct vcpu *v = current;
>> +
>> + handler = find_mmio_handler(v, info->gpa);
>
> I would have expected some locking here. Could you explain why it is
> safe to look up the handler without it in your solution?
>
> For what it's worth, there was no locking before because
> register_mmio_handler was always adding the handler at the end of the
> array. That is no longer true now that you are sorting the array.
>
The function register_mmio_handler() is called only in the dom0/domU
domain build code path. We don't need locking here unless some code
starts inserting mmio handlers at runtime.
>> + if ( !handler )
>> return 0;
>>
>> if ( info->dabt.write )
>> @@ -95,6 +111,14 @@ int handle_mmio(mmio_info_t *info)
>> return handle_read(handler, v, info);
>> }
>>
>> +static int cmp_mmio_handler(const void *key, const void *elem)
>> +{
>> + const struct mmio_handler *handler0 = key;
>> + const struct mmio_handler *handler1 = elem;
>> +
>> + return (handler0->addr < handler1->addr) ? -1 : 0;
>> +}
>> +
>> void register_mmio_handler(struct domain *d,
>> const struct mmio_handler_ops *ops,
>> paddr_t addr, paddr_t size, void *priv)
>> @@ -122,6 +146,10 @@ void register_mmio_handler(struct domain *d,
>>
>> vmmio->num_entries++;
>>
>> + /* Sort mmio handlers in ascending order based on base address */
>> + sort(vmmio->handlers, vmmio->num_entries, sizeof(struct mmio_handler),
>> + cmp_mmio_handler, NULL);
>> +
>> spin_unlock(&vmmio->lock);
>> }
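For reference, a conventional three-way comparator over the base address
would look like the sketch below (hypothetical name cmp_by_addr, shown
purely as a generic illustration of a sort() callback, not as a change to
the patch):

static int cmp_by_addr(const void *key, const void *elem)
{
    const struct mmio_handler *handler0 = key;
    const struct mmio_handler *handler1 = elem;

    if ( handler0->addr < handler1->addr )
        return -1;

    if ( handler0->addr > handler1->addr )
        return 1;

    return 0;
}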
>>
>>
>
> Regards,
>
--
Shanker Donthineni
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project