From: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
To: Julien Grall <julien.grall@arm.com>,
	xen-devel@lists.xenproject.org, xen-devel@lists.xen.org
Cc: tee-dev@lists.linaro.org, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2 08/13] optee: add support for RPC SHM buffers
Date: Tue, 11 Sep 2018 22:30:15 +0300
Message-ID: <1e5a9f68-58a5-3e3c-c579-a30a42f165df@epam.com>
In-Reply-To: <fafaa75f-6ac9-87b7-db72-d4b366ed10c6@arm.com>

Hi Julien,

On 11.09.18 14:53, Julien Grall wrote:
> 
> 
> On 10/09/18 18:44, Volodymyr Babchuk wrote:
>> Hi Julien,
>>
>> On 10.09.18 16:01, Julien Grall wrote:
>>> Hi Volodymyr,
>>>
>>> On 03/09/18 17:54, Volodymyr Babchuk wrote:
>>>> OP-TEE usually uses the same idea with command buffers (see the
>>>> previous commit) to issue RPC requests. The problem is that initially
>>>> it has no buffer where it can write a request. So the first RPC
>>>> request it makes is special: it asks NW to allocate a shared
>>>> buffer for the other RPC requests. Usually this buffer is allocated
>>>> only once for every OP-TEE thread and remains allocated until
>>>> shutdown.
>>>>
>>>> The mediator needs to pin these buffers to make sure that the domain
>>>> can't transfer them to someone else. Also, they should be mapped into
>>>> Xen address space, because the mediator needs to check responses from
>>>> guests.
>>>
>>> Can you explain why you always need to keep the shared buffer mapped 
>>> in Xen? Why not use access_guest_memory_by_ipa every time you want 
>>> to get information from the guest?
>> Sorry, I just didn't know about this mechanism. But for performance
>> reasons, I'd like to keep these buffers always mapped. You see, RPC
>> returns are very frequent (for every IRQ, actually), so I think it
>> would be costly to map/unmap this buffer every time.
> 
> This is a bit misleading... This copy will *only* happen for an IRQ during 
> an RPC. What are the chances of that? Fairly limited. If this is 
> happening too often, then the map/unmap here will be your least concern.
Now, this copy will happen for every IRQ that fires while the CPU is in 
S-EL1/S-EL0 mode. The chances are quite high, I must say.
Look: OP-TEE (or a TA) is doing something, like encrypting some buffer, 
for example. An IRQ fires, and OP-TEE immediately executes an RPC return 
(right from the interrupt handler) so NW can handle the interrupt. Then 
NW returns control back to OP-TEE, if it wants to.

This is how a long-running job in OP-TEE can be preempted by the Linux 
kernel, for example. The timer IRQ ensures that control is returned to 
Linux, the scheduler schedules some other task, and OP-TEE patiently 
waits until its caller is scheduled back so it can resume the work.

> 
> However, I would like to see a performance comparison here to weigh 
> against the memory impact in Xen (Arm32 has a limited amount of VA available).
With the current configuration, this is at most 16 pages per guest.
As for a performance comparison... It is doable, but will take some time.
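
For reference, the copy-per-access alternative you mention would look
roughly like this (an untested sketch; guest_arg_ipa is a hypothetical
field holding the guest physical address of the buffer):

    struct optee_msg_arg arg;
    int rc;

    /* Copy the argument structure header out of guest memory on every
     * access, instead of keeping the page permanently mapped in Xen. */
    rc = access_guest_memory_by_ipa(current->domain, shm_rpc->guest_arg_ipa,
                                    &arg, sizeof(arg), false);
    if ( rc )
        return rc;

This saves VA space on Arm32, but costs a copy on every RPC return.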

[...]
>>>> +static void free_shm_rpc(struct domain_ctx *ctx, uint64_t cookie)
>>>> +{
>>>> +    struct shm_rpc *shm_rpc;
>>>> +    bool found = false;
>>>> +
>>>> +    spin_lock(&ctx->lock);
>>>> +
>>>> +    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
>>>> +    {
>>>> +        if ( shm_rpc->cookie == cookie )
>>>
>>> What guarantees you that the cookie will be unique?
>> The Normal World guarantees this. It is part of the protocol.
> 
> By NW, do you mean the guest? You should know by now we should not trust 
> what the guest is doing. If you think it is still fine, then I would 
> like some write-up explaining the impact of a guest passing the same 
> cookie ID twice.
Ah, I see your point. Yes, I'll add a check to ensure that a cookie is 
not reused.
Thank you for pointing this out.
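
Roughly like this in the allocation path (an untested sketch, reusing
the same list and lock as in free_shm_rpc() above):

    spin_lock(&ctx->lock);

    /* Reject the call if the guest passes a cookie that is already
     * used by another RPC buffer. */
    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
    {
        if ( shm_rpc->cookie == cookie )
        {
            spin_unlock(&ctx->lock);
            return ERR_PTR(-EEXIST);
        }
    }

    /* ... continue with the allocation ... */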

> 
>>> It feels quite suspicious to free the memory in Xen before calling 
>>> OP-TEE. I think this needs to be done afterwards.
>>>
>> No, it is OP-TEE that asked to free the buffer. This function is called 
>> when NW returns from the RPC, so at this moment NW has already freed 
>> the buffer.
> 
> But you forward that call to OP-TEE afterwards. So what would OP-TEE do 
> with that?
It will happily resume the interrupted work. Here is how RPC works (a 
rough sketch of the NW call loop follows the list):

1. The NW client issues a STD call (or a yielding call in terms of SMCCC).
2. OP-TEE starts its work, but it needs to be interrupted for some
    reason: an IRQ arrived, it wants to block on a mutex, or it asks NW
    to do some work (like allocating memory or loading a TA). This is
    called an "RPC return".
3. OP-TEE suspends the thread and returns from the SMC call with code
    OPTEE_SMC_RPC_VAL(SOME_CMD) in a0 and some optional parameters in
    other registers.
4. NW sees that this is an RPC, not a completed STD call, so it does
    SOME_CMD and issues another SMC with code
    OPTEE_SMC_CALL_RETURN_FROM_RPC in a0.
5. OP-TEE wakes up the suspended thread and continues execution.
6. Steps 2-5 are repeated until OP-TEE finishes the work.
7. OP-TEE returns from the last SMC call with code OPTEE_SMC_RETURN_SUCCESS/
    OPTEE_SMC_RETURN_some_error in a0.
8. The optee driver sees that the call from step 1 has finished at last and
    returns control back to the client.
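
In simplified code, the NW side of this loop looks approximately like
this (a sketch that glosses over parameter passing; handle_rpc() stands
in for the driver's RPC dispatcher):

    struct arm_smccc_res res;
    unsigned long a0 = OPTEE_SMC_CALL_WITH_ARG;

    while ( true )
    {
        /* a1/a2 would carry the address of the argument structure */
        arm_smccc_smc(a0, a1, a2, a3, 0, 0, 0, 0, &res);

        if ( OPTEE_SMC_RETURN_IS_RPC(res.a0) )
        {
            handle_rpc(&res);                     /* do SOME_CMD (step 4) */
            a0 = OPTEE_SMC_CALL_RETURN_FROM_RPC;  /* resume OP-TEE (step 5) */
            continue;
        }

        break;  /* OPTEE_SMC_RETURN_SUCCESS or an error (step 7) */
    }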


> Looking at that code, I just noticed a potential race condition 
> here. Nothing prevents a guest from calling twice with the same optee_thread_id.
OP-TEE has an internal check against this.

> So it would be possible for two vCPUs to concurrently call the same 
> command and free it.
Maybe you noticed that the mediator uses a shadow buffer to read the 
cookie ID, so it will free exactly the buffer mentioned by OP-TEE.
Basically, this is what happens:

1. OP-TEE asks "free the buffer with cookie X" in an RPC return
2. The guest says "I freed that buffer" in an SMC call
3. The mediator frees the buffer with cookie X on its side

In this particular order.
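
In mediator terms (a sketch; shadow_arg stands for the Xen-side shadow
copy of the command buffer, and the cookie location assumes an
OPTEE_MSG_RPC_CMD_SHM_FREE-style layout):

    /* The guest claims the buffer is freed; take the cookie from the
     * shadow copy, so the guest can't change it between the check and
     * the actual free. */
    uint64_t cookie = shadow_arg->params[0].u.value.b;

    free_shm_rpc(ctx, cookie);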

-- 
Volodymyr Babchuk
