From: "Huang, Kai" <kai.huang@intel.com>
To: "corbet@lwn.net" <corbet@lwn.net>,
"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
"bp@alien8.de" <bp@alien8.de>,
"shuah@kernel.org" <shuah@kernel.org>,
"sathyanarayanan.kuppuswamy@linux.intel.com"
<sathyanarayanan.kuppuswamy@linux.intel.com>,
"tglx@linutronix.de" <tglx@linutronix.de>,
"x86@kernel.org" <x86@kernel.org>,
"mingo@redhat.com" <mingo@redhat.com>
Cc: "Yu, Guorui" <guorui.yu@linux.alibaba.com>,
"linux-kselftest@vger.kernel.org"
<linux-kselftest@vger.kernel.org>,
"wander@redhat.com" <wander@redhat.com>,
"hpa@zytor.com" <hpa@zytor.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"Aktas, Erdem" <erdemaktas@google.com>,
"kirill.shutemov@linux.intel.com"
<kirill.shutemov@linux.intel.com>,
"Luck, Tony" <tony.luck@intel.com>, "Du, Fan" <fan.du@intel.com>,
"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>
Subject: Re: [PATCH v1 1/3] x86/tdx: Add TDX Guest event notify interrupt support
Date: Tue, 28 Mar 2023 02:38:19 +0000 [thread overview]
Message-ID: <3c88945515eba868056906f4a269e6ffcf49e1ec.camel@intel.com> (raw)
In-Reply-To: <20230326062039.341479-2-sathyanarayanan.kuppuswamy@linux.intel.com>
On Sat, 2023-03-25 at 23:20 -0700, Kuppuswamy Sathyanarayanan wrote:
> Host-guest event notification via configured interrupt vector is useful
> in cases where a guest makes an asynchronous request and needs a
> callback from the host to indicate the completion or to let the host
> notify the guest about events like device removal. One usage example is,
> callback requirement of GetQuote asynchronous hypercall.
>
> In TDX guest, SetupEventNotifyInterrupt hypercall can be used by the
> guest to specify which interrupt vector to use as an event-notify
> vector to the VMM.
>
"to the VMM" -> "from the VMM"?
> Details about the SetupEventNotifyInterrupt
> hypercall can be found in TDX Guest-Host Communication Interface
> (GHCI) Specification, sec 3.5 "VP.VMCALL<SetupEventNotifyInterrupt>".
It seems we shouldn't mention the exact section number, since it may change
between spec revisions.
>
> As per design, VMM will post the event completion IRQ using the same
> CPU in which SetupEventNotifyInterrupt hypercall request is received.
"in which" -> "on which"
> So allocate an IRQ vector from "x86_vector_domain", and set the CPU
> affinity of the IRQ vector to the current CPU.
IMHO "current CPU" is a little bit vague.  Perhaps just "to the CPU on which
the SetupEventNotifyInterrupt hypercall is made".
Also, perhaps it's better to mention using IRQF_NOBALANCING to prevent the IRQ
from being migrated to another CPU.
>
> Add tdx_register_event_irq_cb()/tdx_unregister_event_irq_cb()
> interfaces to allow drivers register/unregister event noficiation
> handlers.
>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Reviewed-by: Andi Kleen <ak@linux.intel.com>
> Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Acked-by: Wander Lairson Costa <wander@redhat.com>
> Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
> ---
> arch/x86/coco/tdx/tdx.c | 163 +++++++++++++++++++++++++++++++++++++
> arch/x86/include/asm/tdx.h | 6 ++
> 2 files changed, 169 insertions(+)
>
> diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> index 055300e08fb3..d03985952d45 100644
> --- a/arch/x86/coco/tdx/tdx.c
> +++ b/arch/x86/coco/tdx/tdx.c
> @@ -7,12 +7,18 @@
> #include <linux/cpufeature.h>
> #include <linux/export.h>
> #include <linux/io.h>
> +#include <linux/string.h>
> +#include <linux/uaccess.h>
Do you need the above two headers?
Also, perhaps you should explicitly include <.../list.h> and <.../spinlock.h>.
> +#include <linux/interrupt.h>
> +#include <linux/irq.h>
> +#include <linux/numa.h>
> #include <asm/coco.h>
> #include <asm/tdx.h>
> #include <asm/vmx.h>
> #include <asm/insn.h>
> #include <asm/insn-eval.h>
> #include <asm/pgtable.h>
> +#include <asm/irqdomain.h>
>
> /* TDX module Call Leaf IDs */
> #define TDX_GET_INFO 1
> @@ -27,6 +33,7 @@
> /* TDX hypercall Leaf IDs */
> #define TDVMCALL_MAP_GPA 0x10001
> #define TDVMCALL_REPORT_FATAL_ERROR 0x10003
> +#define TDVMCALL_SETUP_NOTIFY_INTR 0x10004
>
> /* MMIO direction */
> #define EPT_READ 0
> @@ -51,6 +58,16 @@
>
> #define TDREPORT_SUBTYPE_0 0
>
> +struct event_irq_entry {
> + tdx_event_irq_cb_t handler;
> + void *data;
> + struct list_head head;
> +};
> +
> +static int tdx_event_irq;
__ro_after_init?
> +static LIST_HEAD(event_irq_cb_list);
> +static DEFINE_SPINLOCK(event_irq_cb_lock);
> +
> /*
> * Wrapper for standard use of __tdx_hypercall with no output aside from
> * return code.
> @@ -873,3 +890,149 @@ void __init tdx_early_init(void)
>
> pr_info("Guest detected\n");
> }
> +
> +static irqreturn_t tdx_event_irq_handler(int irq, void *dev_id)
> +{
> + struct event_irq_entry *entry;
> +
> + spin_lock(&event_irq_cb_lock);
> + list_for_each_entry(entry, &event_irq_cb_list, head) {
> + if (entry->handler)
> + entry->handler(entry->data);
> + }
> + spin_unlock(&event_irq_cb_lock);
> +
> + return IRQ_HANDLED;
> +}
> +
> +/* Reserve an IRQ from x86_vector_domain for TD event notification */
> +static int __init tdx_event_irq_init(void)
> +{
> + struct irq_alloc_info info;
> + cpumask_t saved_cpumask;
> + struct irq_cfg *cfg;
> + int cpu, irq;
> +
> + if (!cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
> + return 0;
> +
> + init_irq_alloc_info(&info, NULL);
> +
> + /*
> + * Event notification vector will be delivered to the CPU
> + * in which TDVMCALL_SETUP_NOTIFY_INTR hypercall is requested.
> + * So set the IRQ affinity to the current CPU.
> + */
> + cpu = get_cpu();
> + cpumask_copy(&saved_cpumask, current->cpus_ptr);
> + info.mask = cpumask_of(cpu);
> + put_cpu();
The 'saved_cpumask'-related code is ugly.  If you move put_cpu() to the end of
this function, I think you can remove all of it:

	cpu = get_cpu();

	/*
	 * Set @info->mask to the local CPU to make sure a valid vector is
	 * pre-allocated when the TDX event notification IRQ is allocated
	 * from x86_vector_domain.
	 */
	init_irq_alloc_info(&info, cpumask_of(cpu));

	/* rest of the setup: request_irq(), hypercall ... */

	put_cpu();
> +
> + irq = irq_domain_alloc_irqs(x86_vector_domain, 1, NUMA_NO_NODE, &info);
Should you use cpu_to_node(cpu) instead of NUMA_NO_NODE?
> + if (irq <= 0) {
> + pr_err("Event notification IRQ allocation failed %d\n", irq);
> + return -EIO;
> + }
> +
> + irq_set_handler(irq, handle_edge_irq);
> +
> + cfg = irq_cfg(irq);
> + if (!cfg) {
> + pr_err("Event notification IRQ config not found\n");
> + goto err_free_irqs;
> + }
You are depending on irq_domain_alloc_irqs() to have already allocated a vector
on the local CPU.  If !cfg can actually happen here, then the code above that
calls irq_domain_alloc_irqs() to allocate the vector is broken.

So, perhaps you should just WARN() if the vector hasn't been allocated, to
catch the bug:

	WARN(!(irq_cfg(irq)->vector));
> +
> + if (request_irq(irq, tdx_event_irq_handler, IRQF_NOBALANCING,
It's better to add a comment explaining why IRQF_NOBALANCING is used.
> + "tdx_event_irq", NULL)) {
> + pr_err("Event notification IRQ request failed\n");
> + goto err_free_irqs;
> + }
> +
> + set_cpus_allowed_ptr(current, cpumask_of(cpu));
> +
> + /*
> + * Register callback vector address with VMM. More details
> + * about the ABI can be found in TDX Guest-Host-Communication
> + * Interface (GHCI), sec titled
> + * "TDG.VP.VMCALL<SetupEventNotifyInterrupt>".
> + */
> + if (_tdx_hypercall(TDVMCALL_SETUP_NOTIFY_INTR, cfg->vector, 0, 0, 0)) {
> + pr_err("Event notification hypercall failed\n");
> + goto err_restore_cpus;
> + }
> +
> + set_cpus_allowed_ptr(current, &saved_cpumask);
> +
> + tdx_event_irq = irq;
> +
> + return 0;
> +
> +err_restore_cpus:
> + set_cpus_allowed_ptr(current, &saved_cpumask);
> + free_irq(irq, NULL);
> +err_free_irqs:
> + irq_domain_free_irqs(irq, 1);
> +
> + return -EIO;
> +}
> +arch_initcall(tdx_event_irq_init)
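Putting the suggestions above together, the allocation path could look roughly
like the sketch below.  This is untested and only meant to illustrate the
structure; in particular, request_irq() can sleep, so keeping preemption
disabled via get_cpu() across the whole sequence would need more thought in
real code:

	static int __init tdx_event_irq_init(void)
	{
		struct irq_alloc_info info;
		struct irq_cfg *cfg;
		int cpu, irq;

		if (!cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
			return 0;

		cpu = get_cpu();

		/*
		 * Set @info->mask to the local CPU so a vector is
		 * pre-allocated on it when the IRQ is allocated from
		 * x86_vector_domain.
		 */
		init_irq_alloc_info(&info, cpumask_of(cpu));

		irq = irq_domain_alloc_irqs(x86_vector_domain, 1,
					    cpu_to_node(cpu), &info);
		if (irq <= 0) {
			pr_err("Event notification IRQ allocation failed %d\n", irq);
			put_cpu();
			return -EIO;
		}

		irq_set_handler(irq, handle_edge_irq);

		/* irq_domain_alloc_irqs() must have allocated the vector */
		cfg = irq_cfg(irq);
		WARN(!cfg || !cfg->vector);

		/*
		 * IRQF_NOBALANCING prevents the IRQ from being migrated
		 * away from the CPU on which the hypercall below is made.
		 */
		if (request_irq(irq, tdx_event_irq_handler, IRQF_NOBALANCING,
				"tdx_event_irq", NULL)) {
			pr_err("Event notification IRQ request failed\n");
			goto err_free_irqs;
		}

		if (_tdx_hypercall(TDVMCALL_SETUP_NOTIFY_INTR, cfg->vector,
				   0, 0, 0)) {
			pr_err("Event notification hypercall failed\n");
			goto err_free_irq;
		}

		tdx_event_irq = irq;
		put_cpu();

		return 0;

	err_free_irq:
		free_irq(irq, NULL);
	err_free_irqs:
		irq_domain_free_irqs(irq, 1);
		put_cpu();

		return -EIO;
	}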
> +
Thread overview:
2023-03-26 6:20 [PATCH v1 0/3] TDX Guest Quote generation support Kuppuswamy Sathyanarayanan
2023-03-26 6:20 ` [PATCH v1 1/3] x86/tdx: Add TDX Guest event notify interrupt support Kuppuswamy Sathyanarayanan
2023-03-28 2:38 ` Huang, Kai [this message]
2023-03-28 2:50 ` Sathyanarayanan Kuppuswamy
2023-03-28 4:02 ` Huang, Kai
2023-04-05 1:02 ` Sathyanarayanan Kuppuswamy
2023-04-05 2:45 ` Huang, Kai
2023-03-26 6:20 ` [PATCH v1 2/3] virt: tdx-guest: Add Quote generation support Kuppuswamy Sathyanarayanan
2023-03-26 6:34 ` Greg KH
2023-03-26 19:06 ` Sathyanarayanan Kuppuswamy
2023-03-27 6:30 ` Greg KH
2023-03-26 6:20 ` [PATCH v1 3/3] selftests/tdx: Test GetQuote TDX attestation feature Kuppuswamy Sathyanarayanan
2023-03-28 20:24 ` Shuah Khan
2023-03-27 17:36 ` [PATCH v1 0/3] TDX Guest Quote generation support Erdem Aktas
2023-03-28 19:59 ` Dionna Amalie Glaze
2023-03-28 20:43 ` Chong Cai