From: Bjorn Helgaas <helgaas@kernel.org>
To: Michael Kelley <mikelley@microsoft.com>
Cc: hpa@zytor.com, kys@microsoft.com, haiyangz@microsoft.com,
sthemmin@microsoft.com, wei.liu@kernel.org, decui@microsoft.com,
luto@kernel.org, peterz@infradead.org, davem@davemloft.net,
edumazet@google.com, kuba@kernel.org, pabeni@redhat.com,
lpieralisi@kernel.org, robh@kernel.org, kw@linux.com,
bhelgaas@google.com, arnd@arndb.de, hch@infradead.org,
m.szyprowski@samsung.com, robin.murphy@arm.com,
thomas.lendacky@amd.com, brijesh.singh@amd.com,
tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
dave.hansen@linux.intel.com, Tianyu.Lan@microsoft.com,
kirill.shutemov@linux.intel.com,
sathyanarayanan.kuppuswamy@linux.intel.com, ak@linux.intel.com,
isaku.yamahata@intel.com, dan.j.williams@intel.com,
jane.chu@oracle.com, seanjc@google.com, tony.luck@intel.com,
x86@kernel.org, linux-kernel@vger.kernel.org,
linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
linux-pci@vger.kernel.org, linux-arch@vger.kernel.org,
iommu@lists.linux.dev
Subject: Re: [PATCH 11/12] PCI: hv: Add hypercalls to read/write MMIO space
Date: Thu, 20 Oct 2022 14:04:25 -0500 [thread overview]
Message-ID: <20221020190425.GA139674@bhelgaas> (raw)
In-Reply-To: <1666288635-72591-12-git-send-email-mikelley@microsoft.com>
On Thu, Oct 20, 2022 at 10:57:14AM -0700, Michael Kelley wrote:
> To support PCI pass-thru devices in Confidential VMs, Hyper-V
> has added hypercalls to read and write MMIO space. Add the
> appropriate definitions to hyperv-tlfs.h and implement
> functions to make the hypercalls. These functions are used
> in a subsequent patch.
>
> Co-developed-by: Dexuan Cui <decui@microsoft.com>
> Signed-off-by: Dexuan Cui <decui@microsoft.com>
> Signed-off-by: Michael Kelley <mikelley@microsoft.com>
> ---
> arch/x86/include/asm/hyperv-tlfs.h | 3 ++
> drivers/pci/controller/pci-hyperv.c | 62 +++++++++++++++++++++++++++++++++++++
> include/asm-generic/hyperv-tlfs.h | 22 +++++++++++++
> 3 files changed, 87 insertions(+)
>
> diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
> index 3089ec3..f769b9d 100644
> --- a/arch/x86/include/asm/hyperv-tlfs.h
> +++ b/arch/x86/include/asm/hyperv-tlfs.h
> @@ -117,6 +117,9 @@
> /* Recommend using enlightened VMCS */
> #define HV_X64_ENLIGHTENED_VMCS_RECOMMENDED BIT(14)
>
> +/* Use hypercalls for MMIO config space access */
> +#define HV_X64_USE_MMIO_HYPERCALLS BIT(21)
> +
> /*
> * CPU management features identification.
> * These are HYPERV_CPUID_CPU_MANAGEMENT_FEATURES.EAX bits.
> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index e7c6f66..02ebf3e 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -1054,6 +1054,68 @@ static int wslot_to_devfn(u32 wslot)
> return PCI_DEVFN(slot_no.bits.dev, slot_no.bits.func);
> }
>
> +static void hv_pci_read_mmio(phys_addr_t gpa, int size, u32 *val)
> +{
> + struct hv_mmio_read_input *in;
> + struct hv_mmio_read_output *out;
> + u64 ret;
> +
> + /*
> + * Must be called with interrupts disabled so it is safe
> + * to use the per-cpu input argument page. Use it for
> + * both input and output.
> + */
> + in = *this_cpu_ptr(hyperv_pcpu_input_arg);
> + out = *this_cpu_ptr(hyperv_pcpu_input_arg) + sizeof(*in);
> + in->gpa = gpa;
> + in->size = size;
> +
> + ret = hv_do_hypercall(HVCALL_MMIO_READ, in, out);
> + if (hv_result_success(ret)) {
> + switch (size) {
> + case 1:
> + *val = *(u8 *)(out->data);
> + break;
> + case 2:
> + *val = *(u16 *)(out->data);
> + break;
> + default:
> + *val = *(u32 *)(out->data);
> + break;
> + }
> + } else
> + pr_err("MMIO read hypercall failed with status %llx\n", ret);
Too bad there's not more information to give the user/administrator
here. Seeing "MMIO read hypercall failed with status -5" in the log
doesn't give many clues about where to look or who to notify. I don't
know what's even feasible, but driver name, device, address (gpa),
size would all be possibilities.
Bjorn