From: Andrii Anisov <andrii_anisov@epam.com>
To: Julien Grall <julien.grall@arm.com>,
Andrii Anisov <andrii.anisov@gmail.com>,
xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [RFC] scf: SCF device tree and configuration documentation
Date: Thu, 4 May 2017 19:13:54 +0300 [thread overview]
Message-ID: <1d2ccb3b-66d2-e02e-1a8e-00452bbd031f@epam.com> (raw)
In-Reply-To: <b8a38234-59ec-bac4-8ad4-b8e751ec16c5@arm.com>
Julien,
On 04.05.17 15:46, Julien Grall wrote:
>
>> I understand these concerns, but I am not sure whether we should be
>> scared of an attack from a domain privileged enough to run domains?
>
> Whilst the domain is privileged enough to run domains, the
> configuration can be provided by a user (for instance in cloud
> environment). So you cannot trust what the user provided and any
> missing invalidation would lead to a security issue (see XSA-95 [1]
> for instance).
>
> That's why we specifically said only trusted device tree should be
> used with the option "device_tree".
I see. But I could say the same here.
>> It seems to me that an attack on the hypervisor through libfdt is the
>> least valuable benefit of a compromised dom0.
>
> It is much more valuable: dom0 may still have limited access to
> functionality, whilst the hypervisor has access to everything.
Well, from dom0 you can start/stop any domain you want and grant access
to any hardware, but only from the hypervisor can you map another
domain's memory to access its runtime data. Is my understanding correct?
> Also, I do believe that the domain creation should be limited to
> create the domain and not configuring the devices other than the
> strict necessary. For anything else (UART, co-processor),
But the vGIC is configured at the earliest stage of domain creation, so
we have to know at that moment which IRQs will be injected into the
domain. And that is my current problem.
> this should be done later on.
What is the proper moment to spawn virtual coprocessors for guest
domains from your point of view?
--
*Andrii Anisov*
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel