From: Julien Grall <julien.grall@linaro.org>
To: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
Stefano Stabellini <sstabellini@kernel.org>,
stuart.yoder@arm.com
Subject: Re: [RFC] WIP: optee: add OP-TEE mediator
Date: Mon, 4 Dec 2017 16:29:10 +0000
Message-ID: <6233ff5e-a745-c755-8cc5-01677b276385@linaro.org>
In-Reply-To: <20171204162437.GB23683@EPUAKYIW2556.kyiv.epam.com>
On 04/12/17 16:24, Volodymyr Babchuk wrote:
> On Mon, Dec 04, 2017 at 02:30:32PM +0000, Julien Grall wrote:
>> On 01/12/17 22:58, Stefano Stabellini wrote:
>>> On Mon, 27 Nov 2017, Volodymyr Babchuk wrote:
>>> = Page pinning =
>>>
>>> Guest pages passed to OP-TEE need to be pinned (otherwise Xen doesn't
>>> guarantee they'll be available). I think the right function to call for
>>> that is get_page_from_gfn or get_page.
>>
>> No direct use of get_page please. We already have plenty of helpers to deal
>> with the GFN -> page translation, or even with copying data. I don't want to
>> see more open-coded versions, because they make it difficult to interact with
>> other features such as memaccess and PoD.
> Okay. Could you please suggest what API should be used in my case?
Please read my previous e-mail until the end. I provided suggestions on how
to handle pinning...
>
>>>> ---
>>>>
>>>> Add the OP-TEE mediator as an example of how the TEE mediator framework
>>>> works.
>>>>
>>>> The OP-TEE mediator supports address translation for DomUs.
>>>> It tracks execution of STD calls, correctly handles memory-related RPC
>>>> requests, and tracks buffers allocated for RPCs.
>>>>
>>>> With this patch OP-TEE successfully passes its own tests while the client
>>>> is running in a DomU. Currently it lacks some code for exceptional cases,
>>>> because this patch was used mostly to debug virtualization in OP-TEE.
>>>> Nevertheless, it provides all the features needed for OP-TEE mediation.
>>>>
>>>> WARNING: This is a development patch, it does not cover all corner
>>>> cases, so, please don't use it in production.
>>>>
>>>> It was tested on RCAR Salvator-M3 board.
>>>>
>>>> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>>>> ---
>>>> xen/arch/arm/tee/Kconfig | 4 +
>>>> xen/arch/arm/tee/Makefile | 1 +
>>>> xen/arch/arm/tee/optee.c | 765 +++++++++++++++++++++++++++++++++++++++++++
>>>> xen/arch/arm/tee/optee_smc.h | 50 +++
>>>> 4 files changed, 820 insertions(+)
>>>> create mode 100644 xen/arch/arm/tee/optee.c
>>>>
>>>> diff --git a/xen/arch/arm/tee/Kconfig b/xen/arch/arm/tee/Kconfig
>>>> index e69de29..7c6b5c6 100644
>>>> --- a/xen/arch/arm/tee/Kconfig
>>>> +++ b/xen/arch/arm/tee/Kconfig
>>>> @@ -0,0 +1,4 @@
>>>> +config ARM_OPTEE
>>>> + bool "Enable OP-TEE mediator"
>>>> + default n
>>>> + depends on ARM_TEE
>>>> diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
>>>> index c54d479..9d93b42 100644
>>>> --- a/xen/arch/arm/tee/Makefile
>>>> +++ b/xen/arch/arm/tee/Makefile
>>>> @@ -1 +1,2 @@
>>>> obj-y += tee.o
>>>> +obj-$(CONFIG_ARM_OPTEE) += optee.o
>>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>>> new file mode 100644
>>>> index 0000000..59c3600
>>>> --- /dev/null
>>>> +++ b/xen/arch/arm/tee/optee.c
>>>> @@ -0,0 +1,765 @@
>>>> +/*
>>>> + * xen/arch/arm/tee/optee.c
>>>> + *
>>>> + * OP-TEE mediator
>>>> + *
>>>> + * Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>>>> + * Copyright (c) 2017 EPAM Systems.
>>>> + *
>>>> + * This program is free software; you can redistribute it and/or modify
>>>> + * it under the terms of the GNU General Public License version 2 as
>>>> + * published by the Free Software Foundation.
>>>> + *
>>>> + * This program is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>>> + * GNU General Public License for more details.
>>>> + */
>>>> +
>>>> +#include <xen/domain_page.h>
>>>> +#include <xen/types.h>
>>>> +#include <xen/sched.h>
>>>> +
>>>> +#include <asm/p2m.h>
>>>> +#include <asm/tee.h>
>>>> +
>>>> +#include "optee_msg.h"
>>>> +#include "optee_smc.h"
>>>> +
>>>> +/*
>>>> + * Global TODO:
>>>> + * 1. Create per-domain context, where call and shm will be stored
>>>> + * 2. Pin pages shared between OP-TEE and guest
>>>> + */
>>>> +/*
>>>> + * OP-TEE violates the SMCCC when it defines its own UID, so we need
>>>> + * to place the bytes in the correct order.
>>>> + */
>>>> +#define OPTEE_UID (xen_uuid_t){{ \
>>>> + (uint8_t)(OPTEE_MSG_UID_0 >> 0), (uint8_t)(OPTEE_MSG_UID_0 >> 8), \
>>>> + (uint8_t)(OPTEE_MSG_UID_0 >> 16), (uint8_t)(OPTEE_MSG_UID_0 >> 24), \
>>>> + (uint8_t)(OPTEE_MSG_UID_1 >> 0), (uint8_t)(OPTEE_MSG_UID_1 >> 8), \
>>>> + (uint8_t)(OPTEE_MSG_UID_1 >> 16), (uint8_t)(OPTEE_MSG_UID_1 >> 24), \
>>>> + (uint8_t)(OPTEE_MSG_UID_2 >> 0), (uint8_t)(OPTEE_MSG_UID_2 >> 8), \
>>>> + (uint8_t)(OPTEE_MSG_UID_2 >> 16), (uint8_t)(OPTEE_MSG_UID_2 >> 24), \
>>>> + (uint8_t)(OPTEE_MSG_UID_3 >> 0), (uint8_t)(OPTEE_MSG_UID_3 >> 8), \
>>>> + (uint8_t)(OPTEE_MSG_UID_3 >> 16), (uint8_t)(OPTEE_MSG_UID_3 >> 24), \
>>>> + }}
>>>> +
>>>> +#define MAX_NONCONTIG_ENTRIES 8
>>>> +
>>>> +/*
>>>> + * Call context. OP-TEE can issue multiple RPC returns during one call.
>>>> + * We need to preserve the context across them.
>>>> + */
>>>> +struct std_call_ctx {
>>>> + struct list_head list;
>>>> + struct optee_msg_arg *guest_arg;
>>>> + struct optee_msg_arg *xen_arg;
>>>> + void *non_contig[MAX_NONCONTIG_ENTRIES];
>>>> + int non_contig_order[MAX_NONCONTIG_ENTRIES];
>>>> + int optee_thread_id;
>>>> + int rpc_op;
>>>> + domid_t domid;
>>>> +};
>>>> +static LIST_HEAD(call_ctx_list);
>>>> +static DEFINE_SPINLOCK(call_ctx_list_lock);
>>>> +
>>>> +/*
>>>> + * Command buffer shared between OP-TEE and the guest.
>>>> + * Warning! In a proper implementation this SHM buffer *probably* should
>>>> + * be shadowed by Xen.
>>>> + * TODO: Reconsider this.
>>>> + */
>>>> +struct shm {
>>>> + struct list_head list;
>>>> + struct optee_msg_arg *guest_arg;
>>>> + struct page *guest_page;
>>>> + mfn_t guest_mfn;
>>>> + uint64_t cookie;
>>>> + domid_t domid;
>>>> +};
>>>> +
>>>> +static LIST_HEAD(shm_list);
>>>> +static DEFINE_SPINLOCK(shm_list_lock);
>>>> +
>>>> +static int optee_init(void)
>>>> +{
>>>> + printk("OP-TEE mediator init done\n");
>>>> + return 0;
>>>> +}
>>>> +
>>>> +static void optee_domain_create(struct domain *d)
>>>> +{
>>>> + register_t resp[4];
>>>> + call_smccc_smc(OPTEE_SMC_VM_CREATED,
>>>> + d->domain_id + 1, 0, 0, 0, 0, 0, 0, resp);
>>>> + if ( resp[0] != OPTEE_SMC_RETURN_OK )
>>>> + gprintk(XENLOG_WARNING, "OP-TEE doesn't want to support domain: %d\n",
>>>> + (uint32_t)resp[0]);
>>>> + /* TODO: Change function declaration to be able to return an error */
>>>> +}
>>>> +
>>>> +static void optee_domain_destroy(struct domain *d)
>>>> +{
>>>> + register_t resp[4];
>>>> + call_smccc_smc(OPTEE_SMC_VM_DESTROYED,
>>>> + d->domain_id + 1, 0, 0, 0, 0, 0, 0, resp);
>>>> + /* TODO: Clean call contexts and SHMs associated with domain */
>>>> +}
>>>> +
>>>> +static bool forward_call(struct cpu_user_regs *regs)
>>>> +{
>>>> + register_t resp[4];
>>>> +
>>>> + /* TODO: Use a separate register set to prevent leakage to the guest */
>>>> + call_smccc_smc(get_user_reg(regs, 0),
>>>> + get_user_reg(regs, 1),
>>>> + get_user_reg(regs, 2),
>>>> + get_user_reg(regs, 3),
>>>> + get_user_reg(regs, 4),
>>>> + get_user_reg(regs, 5),
>>>> + get_user_reg(regs, 6),
>>>> + /* VM id 0 is reserved for hypervisor itself */
>>>> + current->domain->domain_id + 1,
>>>
>>> This doesn't look like it would wrap around safely.
>>
>> Well, an ordinary domain will always have a domain ID < DOMID_FIRST_RESERVED
>> (0x7FF0), so there is no issue here. Although, we may want a BUILD_BUG_ON()
>> to catch a change of DOMID_FIRST_RESERVED.
>>
>>>
>>>
>>>> + resp);
>>>> +
>>>> + set_user_reg(regs, 0, resp[0]);
>>>> + set_user_reg(regs, 1, resp[1]);
>>>> + set_user_reg(regs, 2, resp[2]);
>>>> + set_user_reg(regs, 3, resp[3]);
>>>> +
>>>> + return true;
>>>> +}
>>>> +
>>>> +static struct std_call_ctx *allocate_std_call_ctx(void)
>>>> +{
>>>> + struct std_call_ctx *ret;
>>>> +
>>>> + ret = xzalloc(struct std_call_ctx);
>>>> + if ( !ret )
>>>> + return NULL;
>>>> +
>>>> + ret->optee_thread_id = -1;
>>
>> You set it to -1, but no-one is checking that value. So what is the purpose
>> of setting it to -1 and not 0?
>>
>>>> + ret->domid = -1;
>>
>> Please use DOMID_INVALID rather than -1. You don't know whether the latter
>> will be used in the future for a domain.
>>
>>>> +
>>>> + spin_lock(&call_ctx_list_lock);
>>>> + list_add_tail(&ret->list, &call_ctx_list);
>>>> + spin_unlock(&call_ctx_list_lock);
>>>> +
>>>> + return ret;
>>>> +}
>>>> +
>>>> +static void free_std_call_ctx(struct std_call_ctx *ctx)
>>>> +{
>>>> + spin_lock(&call_ctx_list_lock);
>>>> + list_del(&ctx->list);
>>>> + spin_unlock(&call_ctx_list_lock);
>>>> +
>>>> + if (ctx->xen_arg)
>>>> + free_xenheap_page(ctx->xen_arg);
>>>> +
>>>> + if (ctx->guest_arg)
>>>> + unmap_domain_page(ctx->guest_arg);
>>>> +
>>>> + for (int i = 0; i < MAX_NONCONTIG_ENTRIES; i++) {
>>>> + if (ctx->non_contig[i])
>>>> + free_xenheap_pages(ctx->non_contig[i], ctx->non_contig_order[i]);
>>>> + }
>>>> +
>>>> + xfree(ctx);
>>>> +}
>>>> +
>>>> +static struct std_call_ctx *find_ctx(int thread_id, domid_t domid)
>>>> +{
>>>> + struct std_call_ctx *ctx;
>>>> +
>>>> + spin_lock(&call_ctx_list_lock);
>>>> + list_for_each_entry( ctx, &call_ctx_list, list )
>>>> + {
>>>> + if (ctx->domid == domid && ctx->optee_thread_id == thread_id )
>>>> + {
>>>> + spin_unlock(&call_ctx_list_lock);
>>>> + return ctx;
>>>> + }
>>>> + }
>>>> + spin_unlock(&call_ctx_list_lock);
>>>> +
>>>> + return NULL;
>>>> +}
>>>> +
>>>> +#define PAGELIST_ENTRIES_PER_PAGE \
>>>> + ((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(u64)) - 1)
>>>> +
>>>> +static size_t get_pages_list_size(size_t num_entries)
>>>> +{
>>>> + int pages = DIV_ROUND_UP(num_entries, PAGELIST_ENTRIES_PER_PAGE);
>>>> +
>>>> + return pages * OPTEE_MSG_NONCONTIG_PAGE_SIZE;
>>>> +}
>>>> +
>>>> +static mfn_t lookup_guest_ram_addr(paddr_t gaddr)
>>>> +{
>>>> + mfn_t mfn;
>>>> + gfn_t gfn;
>>>> + p2m_type_t t;
>>>> + gfn = gaddr_to_gfn(gaddr);
>>>> + mfn = p2m_lookup(current->domain, gfn, &t);
>>>> + if ( t != p2m_ram_rw || mfn_eq(mfn, INVALID_MFN) ) {
>>>> + gprintk(XENLOG_INFO, "Domain tries to use invalid gfn\n");
>>>> + return INVALID_MFN;
>>>> + }
>>>> + return mfn;
>>>> +}
>>>> +
>>>> +static struct shm *allocate_and_map_shm(paddr_t gaddr, uint64_t cookie)
>>>> +{
>>>> + struct shm *ret;
>>>> +
>>>> + ret = xzalloc(struct shm);
>>>> + if ( !ret )
>>>> + return NULL;
>>>> +
>>>> + ret->guest_mfn = lookup_guest_ram_addr(gaddr);
>>>> +
>>>> + if ( mfn_eq(ret->guest_mfn, INVALID_MFN) )
>>>> + {
>>>> + xfree(ret);
>>>> + return NULL;
>>>> + }
>>>> +
>>>> + ret->guest_arg = map_domain_page(ret->guest_mfn);
>>
>> map_domain_page() can never fail, but you use it the wrong way. The purpose
>> of this function is to map the memory for a very short lifetime, and only on
>> the current pCPU (when all of the RAM is not always mapped). Here, you seem
>> to use it across SMC calls (e.g. for RPC).
>>
>> Looking at the usage in the code, you only map it in order to copy the
>> arguments to/from the guest.
>>
>> map_domain_page() will not take a reference on the page or prevent the page
>> from disappearing from the guest. So this bit is unsafe.
>>
>> For the arguments, the best approach is to use the guest copy helpers (see
>> access_guest_memory_by_ipa). You might want to look at [1], as it improves
>> the use of access_guest_memory_by_ipa.
>>
>>>> + if ( !ret->guest_arg )
>>>> + {
>>>> + gprintk(XENLOG_INFO, "Could not map domain page\n");
>>>> + xfree(ret);
>>>> + return NULL;
>>>> + }
>>>> + ret->cookie = cookie;
>>>> + ret->domid = current->domain->domain_id;
>>>> +
>>>> + spin_lock(&shm_list_lock);
>>>> + list_add_tail(&ret->list, &shm_list);
>>>> + spin_unlock(&shm_list_lock);
>>>> + return ret;
>>>> +}
>>>> +
>>>> +static void free_shm(uint64_t cookie, domid_t domid)
>>>> +{
>>>> + struct shm *shm, *found = NULL;
>>>> + spin_lock(&shm_list_lock);
>>>> +
>>>> + list_for_each_entry( shm, &shm_list, list )
>>>> + {
>>>> + if (shm->domid == domid && shm->cookie == cookie )
>>>> + {
>>>> + found = shm;
>>>> + list_del(&found->list);
>>>> + break;
>>>> + }
>>>> + }
>>>> + spin_unlock(&shm_list_lock);
>>>> +
>>>> + if ( !found ) {
>>>> + return;
>>>> + }
>>>> +
>>>> + if ( found->guest_arg )
>>>> + unmap_domain_page(found->guest_arg);
>>>> +
>>>> + xfree(found);
>>>> +}
>>>> +
>>>> +static struct shm *find_shm(uint64_t cookie, domid_t domid)
>>>> +{
>>>> + struct shm *shm;
>>>> +
>>>> + spin_lock(&shm_list_lock);
>>>> + list_for_each_entry( shm, &shm_list, list )
>>>> + {
>>>> + if ( shm->domid == domid && shm->cookie == cookie )
>>>> + {
>>>> + spin_unlock(&shm_list_lock);
>>>> + return shm;
>>>> + }
>>>> + }
>>>> + spin_unlock(&shm_list_lock);
>>>> +
>>>> + return NULL;
>>>> +}
>>>> +
>>>> +static bool translate_noncontig(struct std_call_ctx *ctx,
>>>> + struct optee_msg_param *param,
>>>> + int idx)
>>>> +{
>>>> + /*
>>>> + * Refer to OPTEE_MSG_ATTR_NONCONTIG description in optee_msg.h for details.
>>>> + *
>>>> + * WARNING: This is test code. It works only with a Xen page size of 4096.
>>
>> That's a call for a BUILD_BUG_ON().
>>
>>>> + */
>>>> + uint64_t size;
>>>> + int page_offset;
>>>> + int num_pages;
>>>> + int order;
>>>> + int entries_on_page = 0;
>>>> + paddr_t gaddr;
>>>> + mfn_t guest_mfn;
>>>> + struct {
>>>> + uint64_t pages_list[PAGELIST_ENTRIES_PER_PAGE];
>>>> + uint64_t next_page_data;
>>>> + } *pages_data_guest, *pages_data_xen, *pages_data_xen_start;
>>>> +
>>>> + page_offset = param->u.tmem.buf_ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1);
>>>> +
>>>> + size = ROUNDUP(param->u.tmem.size + page_offset,
>>>> + OPTEE_MSG_NONCONTIG_PAGE_SIZE);
>>>> +
>>>> + num_pages = DIV_ROUND_UP(size, OPTEE_MSG_NONCONTIG_PAGE_SIZE);
>>
>> What is the limit for num_pages? I can't see anything in the code that
>> prevents an arbitrarily high number, which might exhaust Xen memory.
>>
>>>> +
>>>> + order = get_order_from_bytes(get_pages_list_size(num_pages));
>>>> +
>>>> + pages_data_xen_start = alloc_xenheap_pages(order, 0);
>>>> + if (!pages_data_xen_start)
>>>> + return false;
>>>> +
>>>> + gaddr = param->u.tmem.buf_ptr & ~(OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1);
>>>> + guest_mfn = lookup_guest_ram_addr(gaddr);
>>>> + if ( mfn_eq(guest_mfn, INVALID_MFN) )
>>>> + goto err_free;
>>>> +
>>>> + pages_data_guest = map_domain_page(guest_mfn);
>>
>> Similarly here, you may want to use access_guest_memory_by_ipa(). This will
>> do all the safety checks when copying from guest memory.
>>
>> Furthermore, I think this is going to simplify a lot this code.
>>
>>
>>>> + if (!pages_data_guest)
>>>> + goto err_free;
>>>> +
>>>> + pages_data_xen = pages_data_xen_start;
>>>> + while ( num_pages ) {
>>>> + mfn_t entry_mfn = lookup_guest_ram_addr(
>>>> + pages_data_guest->pages_list[entries_on_page]);
>>>> +
>>>> + if ( mfn_eq(entry_mfn, INVALID_MFN) )
>>>> + goto err_unmap;
>>
>> You would need to get a reference on each page, and release it in err_unmap
>> or when the command is done. get_page_from_gfn could do it for you.
>>
>>>> +
>>>> + pages_data_xen->pages_list[entries_on_page] = mfn_to_maddr(entry_mfn);
>>>> + entries_on_page++;
>>>> +
>>>> + if ( entries_on_page == PAGELIST_ENTRIES_PER_PAGE ) {
>>>> + pages_data_xen->next_page_data = virt_to_maddr(pages_data_xen + 1);
>>>> + pages_data_xen++;
>>>> + gaddr = pages_data_guest->next_page_data;
>>>> + unmap_domain_page(pages_data_guest);
>>>> + guest_mfn = lookup_guest_ram_addr(gaddr);
>>>> + if ( mfn_eq(guest_mfn, INVALID_MFN) )
>>>> + goto err_free;
>>>> +
>>>> + pages_data_guest = map_domain_page(guest_mfn);
>>>> + if ( !pages_data_guest )
>>>> + goto err_free;
>>>> + /* Roll over to the next page */
>>>> + entries_on_page = 0;
>>>> + }
>>>> + num_pages--;
>>>> + }
>>>> +
>>>> + unmap_domain_page(pages_data_guest);
>>>> +
>>>> + param->u.tmem.buf_ptr = virt_to_maddr(pages_data_xen_start) | page_offset;
>>
>> I am not sure I understand why you are applying the offset of the guest
>> buffer to the Xen buffer.
>>
>>>> +
>>>> + ctx->non_contig[idx] = pages_data_xen_start;
>>>> + ctx->non_contig_order[idx] = order;
>>>> +
>>>> + unmap_domain_page(pages_data_guest);
>>>> + return true;
>>>> +
>>>> +err_unmap:
>>>> + unmap_domain_page(pages_data_guest);
>>>> +err_free:
>>>> + free_xenheap_pages(pages_data_xen_start, order);
>>>> + return false;
>>>> +}
>>>> +
>>>> +static bool translate_params(struct std_call_ctx *ctx)
>>>> +{
>>>> + unsigned int i;
>>>> + uint32_t attr;
>>>> +
>>>> + for ( i = 0; i < ctx->xen_arg->num_params; i++ ) {
>>>> + attr = ctx->xen_arg->params[i].attr;
>>>> +
>>>> + switch ( attr & OPTEE_MSG_ATTR_TYPE_MASK ) {
>>>> + case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT:
>>>> + case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT:
>>>> + case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT:
>>>> + if ( attr & OPTEE_MSG_ATTR_NONCONTIG ) {
>>>> + if ( !translate_noncontig(ctx, ctx->xen_arg->params + i, i) )
>>>> + return false;
>>>> + }
>>>> + else {
>>>> + gprintk(XENLOG_WARNING, "Guest tries to use old tmem arg\n");
>>>> + return false;
>>>> + }
>>>> + break;
>>>> + case OPTEE_MSG_ATTR_TYPE_NONE:
>>>> + case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT:
>>>> + case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT:
>>>> + case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
>>>> + case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
>>>> + case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
>>>> + case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT:
>>>> + continue;
>>>> + }
>>>> + }
>>>> + return true;
>>>> +}
>>>> +
>>>> +/*
>>>> + * Copy the command buffer into Xen memory to:
>>>> + * 1) Hide the translated addresses from the guest
>>>> + * 2) Make sure the guest can't change data in the command buffer during the call
>>>> + */
>>>> +static bool copy_std_request(struct cpu_user_regs *regs,
>>>> + struct std_call_ctx *ctx)
>>>> +{
>>>> + paddr_t cmd_gaddr, xen_addr;
>>>> + mfn_t cmd_mfn;
>>>> +
>>>> + cmd_gaddr = (paddr_t)get_user_reg(regs, 1) << 32 |
>>>> + get_user_reg(regs, 2);
>>>> +
>>>> + /*
>>>> + * The command buffer should start at a page boundary.
>>>> + * This is an OP-TEE ABI requirement.
>>>> + */
>>>> + if ( cmd_gaddr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>>>> + return false;
>>>> +
>>>> + cmd_mfn = lookup_guest_ram_addr(cmd_gaddr);
>>>> + if ( mfn_eq(cmd_mfn, INVALID_MFN) )
>>>> + return false;
>>>> +
>>>> + ctx->guest_arg = map_domain_page(cmd_mfn);
>>>> + if ( !ctx->guest_arg )
>>>> + return false;
>>>> +
>>>> + ctx->xen_arg = alloc_xenheap_page();
>>>> + if ( !ctx->xen_arg )
>>>> + return false;
>>>> +
>>>> + memcpy(ctx->xen_arg, ctx->guest_arg, OPTEE_MSG_NONCONTIG_PAGE_SIZE);
>>
>> Have a look at the guest copy helpers.
>>
>> Cheers,
>>
>> [1]
>> https://lists.xenproject.org/archives/html/xen-devel/2017-11/msg01481.html
>>
>> --
>> Julien Grall
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel