From: Sebastian Ene <sebastianene@google.com>
To: Marc Zyngier <maz@kernel.org>
Cc: catalin.marinas@arm.com, oupton@kernel.org, will@kernel.org,
	joey.gouly@arm.com, korneld@google.com, kvmarm@lists.linux.dev,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, android-kvm@google.com,
	mrigendra.chaubey@gmail.com, perlarsen@google.com,
	suzuki.poulose@arm.com, vdonnefort@google.com,
	yuzenghui@huawei.com, Sudeep Holla <sudeep.holla@kernel.org>
Subject: Re: [PATCH] KVM: arm64: Forward FFA_NOTIFICATION* calls to TrustZone
Date: Fri, 8 May 2026 13:04:27 +0000	[thread overview]
Message-ID: <af3fW468-f1KXCrC@google.com> (raw)
In-Reply-To: <86qznnxptx.wl-maz@kernel.org>

On Thu, May 07, 2026 at 03:21:46PM +0100, Marc Zyngier wrote:
> On Thu, 07 May 2026 15:13:06 +0100,
> Sebastian Ene <sebastianene@google.com> wrote:
> > 
> > On Thu, May 07, 2026 at 02:36:46PM +0100, Marc Zyngier wrote:
> > > On Thu, 07 May 2026 11:48:46 +0100,
> > > Sebastian Ene <sebastianene@google.com> wrote:
> > > > 
> > > > On Wed, May 06, 2026 at 05:29:22PM +0100, Marc Zyngier wrote:
> > > > 
> > > > Hello Marc,
> > > > 
> > > > > [+ Sudeep]
> > > > > 
> > > > > On Fri, 01 May 2026 12:44:48 +0100,
> > > > > Sebastian Ene <sebastianene@google.com> wrote:
> > > > > > 
> > > > > > Remove the FFA_NOTIFICATION* calls from the blocklist used by the pKVM
> > > > > > FF-A proxy. This restriction was preventing the use of asynchronous
> > > > > > signaling mechanisms defined by the Arm FF-A specification to
> > > > > > communicate with the secure services.
> > > > > > While these calls are marked as optional, there is no reason why the
> > > > > > hypervisor proxy would block them because:
> > > > > > 
> > > > > > 1. Host is the Sole Non-Secure Endpoint: The Host operates as the
> > > > > >    only Non-Secure VM ID (VM ID 0) recognized by the Secure World.
> > > > > 
> > > > > Where is this enforced?
> > > > > 
> > > > 
> > > > There is no enforcement in place in the hypervisor since we don't proxy
> > > > FF-A from guest VMs, there is only one non-secure user of this which is the host.
> > > 
> > > And again: what makes that VM ID 0? Why can't the host pick VM ID 32
> > > and use that?
> > > 
> > 
> > The host discovers its ID through FFA_ID_GET and TZ returns 0 in
> 
> Does it? How do you verify this?
> 
 
It is written in the spec under 13.10 FFA_ID_GET ("ID value 0 must be
returned at the Non-secure physical FF-A instance"). If this contract is
broken and TZ is not spec compliant, I am afraid there is not much
we can do.

> > this case. However, if it wants to use VM ID 32 in any other call it
> > absolutely can, but what would the attack be here, what is your
> > concern?
> 
> Let's be clear: I don't give a damn about a potential attack vector.
> The moment you add Secure to the mix, security is gone (funny, isn't
> it?). I care about being strict about the spec, and not letting
> through things that will eventually break.
>

Understood. 

> > 
> > > > > >    Because all forwarded notifications are inherently attributed to
> > > > > >    the Host by the SPMC, there is no risk of VM ID spoofing
> > > > > >    originating from the Normal World.
> > > > > 
> > > > > I don't understand: either the host is always using VM ID 0, and we
> > > > > have ways to check and enforce this (how?), or the simple fact that
> > > > > the request comes from NS is a guarantee that the SPMC will treat the
> > > > > VM ID as 0.
> > > > > 
> > > > > Which one is it?
> > > > 
> > > > My understanding is that when the hypervisor doesn't handle the allocation of
> > > > the non-secure IDs (through FFA_ID_GET), everything that comes from non-secure
> > > > is treated as having the VM ID 0 by the SPMC.
> > > 
> > > This looks terribly fragile. I'd rather you *enforce* these things
> > > rather than allowing any random stuff from the host and relying on
> > > the EL3 firmware to get it right (odds are that it won't).
> > > 
> > 
> > I can verify the vmid is 0 for the notification calls that I enable.
> 
> Yes, please.
>

Ack.

> > 
> > > This also ties into this:
> > > 
> > > > > > diff --git a/arch/arm64/kvm/hyp/nvhe/ffa.c b/arch/arm64/kvm/hyp/nvhe/ffa.c
> > > > > > index 1af722771178..a82d0cd22a17 100644
> > > > > > --- a/arch/arm64/kvm/hyp/nvhe/ffa.c
> > > > > > +++ b/arch/arm64/kvm/hyp/nvhe/ffa.c
> > > > > > @@ -675,14 +675,6 @@ static bool ffa_call_supported(u64 func_id)
> > > > > >  	case FFA_RXTX_MAP:
> > > > > >  	case FFA_MEM_DONATE:
> > > > > >  	case FFA_MEM_RETRIEVE_REQ:
> > > > > > -       /* Optional notification interfaces added in FF-A 1.1 */
> > > > > > -	case FFA_NOTIFICATION_BITMAP_CREATE:
> > > > > > -	case FFA_NOTIFICATION_BITMAP_DESTROY:
> > > > > > -	case FFA_NOTIFICATION_BIND:
> > > > > > -	case FFA_NOTIFICATION_UNBIND:
> > > > > > -	case FFA_NOTIFICATION_SET:
> > > > > > -	case FFA_NOTIFICATION_GET:
> > > > > > -	case FFA_NOTIFICATION_INFO_GET:
> > > > > >  	/* Optional interfaces added in FF-A 1.2 */
> > > > > >  	case FFA_MSG_SEND_DIRECT_REQ2:		/* Optional per 7.5.1 */
> > > > > >  	case FFA_MSG_SEND_DIRECT_RESP2:		/* Optional per 7.5.1 */
> > > > > 
> > > > > Shouldn't these be sanitised in a way? A bunch of registers are SBZ in
> > > > > the spec, and I'd expect this to be enforced.
> > > 
> > > which still remains unanswered.
> > 
> > Missed this, sorry. We can reject them in the hyp proxy if the caller
> > uses non-zero values in those registers.
> 
> I think we need that indeed.


While at it I discovered that none of the FF-A calls in the proxy
currently check these SBZ registers. Would you be ok with a diff that
fixes this before the notifications patch?


Refactor the handling logic in the pKVM FF-A proxy to support checking
for SBZ/MBZ values. While at it, drop the do_ffa_mem_xfer macro and
replace it with two functions that make it clear that we rewrite the
function ID with its 64-bit variant, keeping the same behaviour as
before. Keep each handler in an array of structures together with a mask
that marks the SBZ registers the spec expects.


diff --git a/arch/arm64/kvm/hyp/nvhe/ffa.c b/arch/arm64/kvm/hyp/nvhe/ffa.c
index a82d0cd22a17..35443a894172 100644
--- a/arch/arm64/kvm/hyp/nvhe/ffa.c
+++ b/arch/arm64/kvm/hyp/nvhe/ffa.c
@@ -561,13 +561,6 @@ static void __do_ffa_mem_xfer(const u64 func_id,
 	goto out_unlock;
 }
 
-#define do_ffa_mem_xfer(fid, res, ctxt)				\
-	do {							\
-		BUILD_BUG_ON((fid) != FFA_FN64_MEM_SHARE &&	\
-			     (fid) != FFA_FN64_MEM_LEND);	\
-		__do_ffa_mem_xfer((fid), (res), (ctxt));	\
-	} while (0);
-
 static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
 			       struct kvm_cpu_context *ctxt)
 {
@@ -854,9 +847,60 @@ static void do_ffa_part_get(struct arm_smccc_1_2_regs *res,
 	hyp_spin_unlock(&host_buffers.lock);
 }
 
+static void do_ffa_mem_share(struct arm_smccc_1_2_regs *res, struct kvm_cpu_context *ctxt)
+{
+	__do_ffa_mem_xfer(FFA_FN64_MEM_SHARE, res, ctxt);
+}
+
+static void do_ffa_mem_lend(struct arm_smccc_1_2_regs *res, struct kvm_cpu_context *ctxt)
+{
+	__do_ffa_mem_xfer(FFA_FN64_MEM_LEND, res, ctxt);
+}
+
+struct ffa_handler {
+	u32 func_id;
+	void (* do_ffa_handle)(struct arm_smccc_1_2_regs *res, struct kvm_cpu_context *ctxt);
+	u32 sbz_mask;
+};
+
+#define REG_RANGE_SBZ	GENMASK
+#define FFA_HANDLER(fid, cb, sbz) {	\
+	.func_id = (fid),	\
+	.do_ffa_handle = (cb),	\
+	.sbz_mask = (sbz),	\
+}
+
+static const struct ffa_handler host_handlers[] = {
+	FFA_HANDLER(FFA_FN64_RXTX_MAP,	do_ffa_rxtx_map,	REG_RANGE_SBZ(17, 4)),
+	FFA_HANDLER(FFA_RXTX_UNMAP,	do_ffa_rxtx_unmap,	REG_RANGE_SBZ(17, 2)),
+	FFA_HANDLER(FFA_MEM_SHARE,	do_ffa_mem_share,	REG_RANGE_SBZ(17, 5)),
+	FFA_HANDLER(FFA_FN64_MEM_SHARE, do_ffa_mem_share,	REG_RANGE_SBZ(17, 5)),
+	FFA_HANDLER(FFA_MEM_RECLAIM,	do_ffa_mem_reclaim,	REG_RANGE_SBZ(17, 4)),
+	FFA_HANDLER(FFA_MEM_LEND,	do_ffa_mem_lend,	REG_RANGE_SBZ(17, 5)),
+	FFA_HANDLER(FFA_FN64_MEM_LEND,	do_ffa_mem_lend,	REG_RANGE_SBZ(17, 5)),
+	FFA_HANDLER(FFA_MEM_FRAG_TX,	do_ffa_mem_frag_tx,	REG_RANGE_SBZ(17, 5)),
+	FFA_HANDLER(FFA_VERSION,	do_ffa_version,		REG_RANGE_SBZ(17, 2)),
+	FFA_HANDLER(FFA_PARTITION_INFO_GET,	do_ffa_part_get,	REG_RANGE_SBZ(17, 6)),
+};
+
+static bool is_sbz_error(const struct ffa_handler *cb, struct kvm_cpu_context *ctxt)
+{
+	int reg_idx, reg_end = fls(cb->sbz_mask);
+
+	if (!ARM_SMCCC_IS_64(cb->func_id) && reg_end > 7)
+		reg_end = 7;
+
+	for (reg_idx = 0; reg_idx <= reg_end; reg_idx++)
+		if (((BIT(reg_idx) & cb->sbz_mask)) && cpu_reg(ctxt, reg_idx))
+			return true;
+
+	return false;
+}
+
 bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt, u32 func_id)
 {
 	struct arm_smccc_1_2_regs res;
+	const struct ffa_handler *cb;
 
 	/*
 	 * There's no way we can tell what a non-standard SMC call might
@@ -880,37 +924,22 @@ bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt, u32 func_id)
 		goto out_handled;
 	}
 
-	switch (func_id) {
-	case FFA_FEATURES:
+	if (func_id == FFA_FEATURES) {
 		if (!do_ffa_features(&res, host_ctxt))
 			return false;
 		goto out_handled;
-	/* Memory management */
-	case FFA_FN64_RXTX_MAP:
-		do_ffa_rxtx_map(&res, host_ctxt);
-		goto out_handled;
-	case FFA_RXTX_UNMAP:
-		do_ffa_rxtx_unmap(&res, host_ctxt);
-		goto out_handled;
-	case FFA_MEM_SHARE:
-	case FFA_FN64_MEM_SHARE:
-		do_ffa_mem_xfer(FFA_FN64_MEM_SHARE, &res, host_ctxt);
-		goto out_handled;
-	case FFA_MEM_RECLAIM:
-		do_ffa_mem_reclaim(&res, host_ctxt);
-		goto out_handled;
-	case FFA_MEM_LEND:
-	case FFA_FN64_MEM_LEND:
-		do_ffa_mem_xfer(FFA_FN64_MEM_LEND, &res, host_ctxt);
-		goto out_handled;
-	case FFA_MEM_FRAG_TX:
-		do_ffa_mem_frag_tx(&res, host_ctxt);
-		goto out_handled;
-	case FFA_VERSION:
-		do_ffa_version(&res, host_ctxt);
-		goto out_handled;
-	case FFA_PARTITION_INFO_GET:
-		do_ffa_part_get(&res, host_ctxt);
+	}
+
+	for (cb = host_handlers; cb < host_handlers + ARRAY_SIZE(host_handlers); cb++) {
+		if (cb->func_id != func_id)
+			continue;
+
+		if (is_sbz_error(cb, host_ctxt)) {
+			ffa_to_smccc_error(&res, FFA_RET_INVALID_PARAMETERS);
+			goto out_handled;
+		}
+
+		cb->do_ffa_handle(&res, host_ctxt);
 		goto out_handled;
 	}
 
-- 
2.54.0.563.g4f69b47b94-goog

> 
> 	M.
> 
> -- 
> Without deviation from the norm, progress is not possible.

Thanks,
Sebastian

Thread overview:
2026-05-01 11:44 [PATCH] KVM: arm64: Forward FFA_NOTIFICATION* calls to TrustZone Sebastian Ene
2026-05-06 16:29 ` Marc Zyngier
2026-05-07 10:48   ` Sebastian Ene
2026-05-07 13:36     ` Marc Zyngier
2026-05-07 14:13       ` Sebastian Ene
2026-05-07 14:21         ` Marc Zyngier
2026-05-08 13:04           ` Sebastian Ene [this message]
2026-05-08 16:57             ` Sudeep Holla
