public inbox for linux-scsi@vger.kernel.org
* [RFC PATCH 00/12] x86/Hyper-V: Add Hyper-V Isolation VM support
@ 2021-02-28 15:03 Tianyu Lan
  2021-02-28 15:03 ` [RFC PATCH 9/12] x86/Hyper-V: Add new parameter for vmbus_sendpacket_pagebuffer()/mpb_desc() Tianyu Lan
  2021-02-28 15:03 ` [RFC PATCH 12/12] HV/Storvsc: Add bounce buffer support for Storvsc Tianyu Lan
  0 siblings, 2 replies; 7+ messages in thread
From: Tianyu Lan @ 2021-02-28 15:03 UTC (permalink / raw)
  To: kys, haiyangz, sthemmin, wei.liu, tglx, mingo, bp, x86, hpa,
	davem, kuba, gregkh, arnd, akpm, jejb, martin.petersen
  Cc: Tianyu Lan, linux-arch, linux-hyperv, linux-kernel, linux-mm,
	linux-scsi, netdev, vkuznets, thomas.lendacky, brijesh.singh,
	sunilmut

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Hyper-V provides two kinds of Isolation VMs: VBS (Virtualization-Based
Security) and AMD SEV-SNP unenlightened Isolation VMs. This patchset
adds support for these Isolation VMs in Linux.

The memory of these VMs is encrypted, and the host can't access guest
memory directly. Hyper-V provides a new host-visibility hvcall, which
the guest must issue to mark memory as visible to the host before
sharing that memory with it. For security, network/storage stack memory
should not be shared with the host wholesale, so bounce buffers are
required.

The VMbus channel ring buffer already plays the bounce buffer role,
because all data to/from the host is copied between the ring buffer and
IO stack memory. So it is enough to mark the ring buffer visible to the
host.

There are two exceptions: packets sent by vmbus_sendpacket_pagebuffer()
and vmbus_sendpacket_mpb_desc(). These packets contain IO stack memory
addresses that the host will access directly, so bounce buffer
allocation support is added in VMbus for these packets.

For an SEV-SNP Isolation VM, the guest needs to access shared memory
via an extra address space specified by the Hyper-V CPUID leaf
HYPERV_CPUID_ISOLATION_CONFIG. The physical address used to access the
shared memory is the bounce buffer's GPA plus the shared_gpa_boundary
reported by that CPUID leaf.

Tianyu Lan (12):
  x86/Hyper-V: Add visibility parameter for vmbus_establish_gpadl()
  x86/Hyper-V: Add new hvcall guest address host visibility support
  x86/HV: Initialize GHCB page and shared memory boundary
  HV: Add Write/Read MSR registers via ghcb
  HV: Add ghcb hvcall support for SNP VM
  HV/Vmbus: Add SNP support for VMbus channel initiate message
  hv/vmbus: Initialize VMbus ring buffer for Isolation VM
  x86/Hyper-V: Initialize bounce buffer page cache and list
  x86/Hyper-V: Add new parameter for
    vmbus_sendpacket_pagebuffer()/mpb_desc()
  HV: Add bounce buffer support for Isolation VM
  HV/Netvsc: Add Isolation VM support for netvsc driver
  HV/Storvsc: Add bounce buffer support for Storvsc

 arch/x86/hyperv/Makefile           |   2 +-
 arch/x86/hyperv/hv_init.c          |  70 +++-
 arch/x86/hyperv/ivm.c              | 257 ++++++++++++
 arch/x86/include/asm/hyperv-tlfs.h |  22 +
 arch/x86/include/asm/mshyperv.h    |  26 +-
 arch/x86/kernel/cpu/mshyperv.c     |   2 +
 drivers/hv/Makefile                |   2 +-
 drivers/hv/channel.c               | 103 ++++-
 drivers/hv/channel_mgmt.c          |  30 +-
 drivers/hv/connection.c            |  68 +++-
 drivers/hv/hv.c                    | 196 ++++++---
 drivers/hv/hv_bounce.c             | 619 +++++++++++++++++++++++++++++
 drivers/hv/hyperv_vmbus.h          |  42 ++
 drivers/hv/ring_buffer.c           |  83 +++-
 drivers/net/hyperv/hyperv_net.h    |   5 +
 drivers/net/hyperv/netvsc.c        | 111 +++++-
 drivers/scsi/storvsc_drv.c         |  46 ++-
 drivers/uio/uio_hv_generic.c       |  13 +-
 include/asm-generic/hyperv-tlfs.h  |   1 +
 include/asm-generic/mshyperv.h     |  24 +-
 include/linux/hyperv.h             |  46 ++-
 mm/ioremap.c                       |   1 +
 mm/vmalloc.c                       |   1 +
 23 files changed, 1614 insertions(+), 156 deletions(-)
 create mode 100644 arch/x86/hyperv/ivm.c
 create mode 100644 drivers/hv/hv_bounce.c

-- 
2.25.1


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [RFC PATCH 9/12] x86/Hyper-V: Add new parameter for vmbus_sendpacket_pagebuffer()/mpb_desc()
  2021-02-28 15:03 [RFC PATCH 00/12] x86/Hyper-V: Add Hyper-V Isolation VM support Tianyu Lan
@ 2021-02-28 15:03 ` Tianyu Lan
  2021-02-28 15:03 ` [RFC PATCH 12/12] HV/Storvsc: Add bounce buffer support for Storvsc Tianyu Lan
  1 sibling, 0 replies; 7+ messages in thread
From: Tianyu Lan @ 2021-02-28 15:03 UTC (permalink / raw)
  To: kys, haiyangz, sthemmin, wei.liu, davem, kuba, jejb,
	martin.petersen
  Cc: Tianyu Lan, linux-hyperv, linux-kernel, netdev, linux-scsi,
	vkuznets, thomas.lendacky, brijesh.singh, sunilmut

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Add a new parameter io_type and a struct hv_bounce_pkt pointer to
vmbus_sendpacket_pagebuffer() and vmbus_sendpacket_mpb_desc() in order
to add bounce buffer support later.

Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Co-Developed-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 drivers/hv/channel.c            |  7 +++++--
 drivers/hv/hyperv_vmbus.h       | 12 ++++++++++++
 drivers/net/hyperv/hyperv_net.h |  1 +
 drivers/net/hyperv/netvsc.c     |  5 ++++-
 drivers/scsi/storvsc_drv.c      | 23 +++++++++++++++++------
 include/linux/hyperv.h          | 16 ++++++++++++++--
 6 files changed, 53 insertions(+), 11 deletions(-)

diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index 4c05b1488649..976ef99dda28 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -1044,7 +1044,8 @@ EXPORT_SYMBOL(vmbus_sendpacket);
 int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
 				struct hv_page_buffer pagebuffers[],
 				u32 pagecount, void *buffer, u32 bufferlen,
-				u64 requestid)
+				u64 requestid, u8 io_type,
+				struct hv_bounce_pkt **bounce_pkt)
 {
 	int i;
 	struct vmbus_channel_packet_page_buffer desc;
@@ -1101,7 +1102,9 @@ EXPORT_SYMBOL_GPL(vmbus_sendpacket_pagebuffer);
 int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
 			      struct vmbus_packet_mpb_array *desc,
 			      u32 desc_size,
-			      void *buffer, u32 bufferlen, u64 requestid)
+			      void *buffer, u32 bufferlen, u64 requestid,
+			      u32 pfn_count, u8 io_type,
+			      struct hv_bounce_pkt **bounce_pkt)
 {
 	u32 packetlen;
 	u32 packetlen_aligned;
diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
index 7edf2be60d2c..7677f083d33a 100644
--- a/drivers/hv/hyperv_vmbus.h
+++ b/drivers/hv/hyperv_vmbus.h
@@ -57,6 +57,18 @@ union hv_monitor_trigger_state {
 	};
 };
 
+/*
+ * Hyper-V bounce packet. Each in-use bounce packet is mapped to a vmbus
+ * transaction and contains a list of bounce pages for that transaction.
+ */
+struct hv_bounce_pkt {
+	/* Link to the next bounce packet, when it is in the free list */
+	struct list_head link;
+	struct list_head bounce_page_head;
+	u32 flags;
+};
+
+
 /*
  * All vmbus channels initially start with zero bounce pages and are required
  * to set any non-zero size, if needed.
diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
index b3a43c4ec8ab..11266b92bcf0 100644
--- a/drivers/net/hyperv/hyperv_net.h
+++ b/drivers/net/hyperv/hyperv_net.h
@@ -130,6 +130,7 @@ struct hv_netvsc_packet {
 	u32 total_bytes;
 	u32 send_buf_index;
 	u32 total_data_buflen;
+	struct hv_bounce_pkt *bounce_pkt;
 };
 
 #define NETVSC_HASH_KEYLEN 40
diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index 08d73401bb28..77657c5acc65 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -926,14 +926,17 @@ static inline int netvsc_send_pkt(
 
 	trace_nvsp_send_pkt(ndev, out_channel, rpkt);
 
+	packet->bounce_pkt = NULL;
 	if (packet->page_buf_cnt) {
 		if (packet->cp_partial)
 			pb += packet->rmsg_pgcnt;
 
+		/* The I/O type is always 'write' for netvsc */
 		ret = vmbus_sendpacket_pagebuffer(out_channel,
 						  pb, packet->page_buf_cnt,
 						  &nvmsg, sizeof(nvmsg),
-						  req_id);
+						  req_id, IO_TYPE_WRITE,
+						  &packet->bounce_pkt);
 	} else {
 		ret = vmbus_sendpacket(out_channel,
 				       &nvmsg, sizeof(nvmsg),
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index 2e4fa77445fd..c5b4974eb41f 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -31,6 +31,7 @@
 #include <scsi/scsi_dbg.h>
 #include <scsi/scsi_transport_fc.h>
 #include <scsi/scsi_transport.h>
+#include <asm/mshyperv.h>
 
 /*
  * All wire protocol details (storage protocol between the guest and the host)
@@ -427,6 +428,7 @@ struct storvsc_cmd_request {
 	u32 payload_sz;
 
 	struct vstor_packet vstor_packet;
+	struct hv_bounce_pkt *bounce_pkt;
 };
 
 
@@ -1390,7 +1392,8 @@ static struct vmbus_channel *get_og_chn(struct storvsc_device *stor_device,
 
 
 static int storvsc_do_io(struct hv_device *device,
-			 struct storvsc_cmd_request *request, u16 q_num)
+			 struct storvsc_cmd_request *request, u16 q_num,
+			 u32 pfn_count)
 {
 	struct storvsc_device *stor_device;
 	struct vstor_packet *vstor_packet;
@@ -1493,14 +1496,18 @@ static int storvsc_do_io(struct hv_device *device,
 
 	vstor_packet->operation = VSTOR_OPERATION_EXECUTE_SRB;
 
+	request->bounce_pkt = NULL;
 	if (request->payload->range.len) {
+		struct vmscsi_request *vm_srb = &request->vstor_packet.vm_srb;
 
 		ret = vmbus_sendpacket_mpb_desc(outgoing_channel,
 				request->payload, request->payload_sz,
 				vstor_packet,
 				(sizeof(struct vstor_packet) -
 				vmscsi_size_delta),
-				(unsigned long)request);
+				(unsigned long)request,
+				pfn_count,
+				vm_srb->data_in, &request->bounce_pkt);
 	} else {
 		ret = vmbus_sendpacket(outgoing_channel, vstor_packet,
 			       (sizeof(struct vstor_packet) -
@@ -1510,8 +1517,10 @@ static int storvsc_do_io(struct hv_device *device,
 			       VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
 	}
 
-	if (ret != 0)
+	if (ret != 0) {
+		request->bounce_pkt = NULL;
 		return ret;
+	}
 
 	atomic_inc(&stor_device->num_outstanding_req);
 
@@ -1825,14 +1834,16 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 	cmd_request->payload_sz = payload_sz;
 
 	/* Invokes the vsc to start an IO */
-	ret = storvsc_do_io(dev, cmd_request, get_cpu());
+	ret = storvsc_do_io(dev, cmd_request, get_cpu(), sg_count);
 	put_cpu();
 
-	if (ret == -EAGAIN) {
+	if (ret) {
 		if (payload_sz > sizeof(cmd_request->mpb))
 			kfree(payload);
 		/* no more space */
-		return SCSI_MLQUEUE_DEVICE_BUSY;
+		if (ret == -EAGAIN || ret == -ENOSPC)
+			return SCSI_MLQUEUE_DEVICE_BUSY;
+		return ret;
 	}
 
 	return 0;
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index d518aba17565..d1a936091665 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1184,19 +1184,31 @@ extern int vmbus_sendpacket(struct vmbus_channel *channel,
 				  enum vmbus_packet_type type,
 				  u32 flags);
 
+#define IO_TYPE_WRITE	0
+#define IO_TYPE_READ	1
+#define IO_TYPE_UNKNOWN 2
+
+struct hv_bounce_pkt;
+
 extern int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
 					    struct hv_page_buffer pagebuffers[],
 					    u32 pagecount,
 					    void *buffer,
 					    u32 bufferlen,
-					    u64 requestid);
+					    u64 requestid,
+					    u8 io_type,
+					    struct hv_bounce_pkt **bounce_pkt);
 
 extern int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
 				     struct vmbus_packet_mpb_array *mpb,
 				     u32 desc_size,
 				     void *buffer,
 				     u32 bufferlen,
-				     u64 requestid);
+				     u64 requestid,
+				     u32 pfn_count,
+				     u8 io_type,
+				     struct hv_bounce_pkt **bounce_pkt);
+
 
 extern int vmbus_establish_gpadl(struct vmbus_channel *channel,
 				      void *kbuffer,
-- 
2.25.1



* [RFC PATCH 12/12] HV/Storvsc: Add bounce buffer support for Storvsc
  2021-02-28 15:03 [RFC PATCH 00/12] x86/Hyper-V: Add Hyper-V Isolation VM support Tianyu Lan
  2021-02-28 15:03 ` [RFC PATCH 9/12] x86/Hyper-V: Add new parameter for vmbus_sendpacket_pagebuffer()/mpb_desc() Tianyu Lan
@ 2021-02-28 15:03 ` Tianyu Lan
  2021-03-01  6:54   ` Christoph Hellwig
  1 sibling, 1 reply; 7+ messages in thread
From: Tianyu Lan @ 2021-02-28 15:03 UTC (permalink / raw)
  To: kys, haiyangz, sthemmin, wei.liu, jejb, martin.petersen
  Cc: Tianyu Lan, linux-hyperv, linux-scsi, linux-kernel, vkuznets,
	thomas.lendacky, brijesh.singh, sunilmut

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

The storvsc driver needs to reserve additional bounce buffers to
receive multi-page-buffer packets, and to copy data from the bounce
buffer when it gets a response message from the host.

Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Co-Developed-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 drivers/scsi/storvsc_drv.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index c5b4974eb41f..4ae8e2a427e4 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -33,6 +33,8 @@
 #include <scsi/scsi_transport.h>
 #include <asm/mshyperv.h>
 
+#include "../hv/hyperv_vmbus.h"
+
 /*
  * All wire protocol details (storage protocol between the guest and the host)
  * are consolidated here.
@@ -725,6 +727,10 @@ static void handle_sc_creation(struct vmbus_channel *new_sc)
 	/* Add the sub-channel to the array of available channels. */
 	stor_device->stor_chns[new_sc->target_cpu] = new_sc;
 	cpumask_set_cpu(new_sc->target_cpu, &stor_device->alloced_cpus);
+
+	if (hv_bounce_resources_reserve(device->channel,
+			stor_device->max_transfer_bytes))
+		pr_warn("Fail to reserve bounce buffer\n");
 }
 
 static void  handle_multichannel_storage(struct hv_device *device, int max_chns)
@@ -964,6 +970,18 @@ static int storvsc_channel_init(struct hv_device *device, bool is_fc)
 	stor_device->max_transfer_bytes =
 		vstor_packet->storage_channel_properties.max_transfer_bytes;
 
+	/*
+	 * Reserve enough bounce resources to be able to support paging
+	 * operations under low memory conditions, that cannot rely on
+	 * additional resources to be allocated.
+	 */
+	ret =  hv_bounce_resources_reserve(device->channel,
+			stor_device->max_transfer_bytes);
+	if (ret < 0) {
+		pr_warn("Fail to reserve bounce buffer\n");
+		goto done;
+	}
+
 	if (!is_fc)
 		goto done;
 
@@ -1263,6 +1281,11 @@ static void storvsc_on_channel_callback(void *context)
 
 		request = (struct storvsc_cmd_request *)(unsigned long)cmd_rqst;
 
+		if (desc->type == VM_PKT_COMP && request->bounce_pkt) {
+			hv_pkt_bounce(channel, request->bounce_pkt);
+			request->bounce_pkt = NULL;
+		}
+
 		if (request == &stor_device->init_request ||
 		    request == &stor_device->reset_request) {
 			memcpy(&request->vstor_packet, packet,
-- 
2.25.1



* Re: [RFC PATCH 12/12] HV/Storvsc: Add bounce buffer support for Storvsc
  2021-02-28 15:03 ` [RFC PATCH 12/12] HV/Storvsc: Add bounce buffer support for Storvsc Tianyu Lan
@ 2021-03-01  6:54   ` Christoph Hellwig
  2021-03-01 13:43     ` Tianyu Lan
  0 siblings, 1 reply; 7+ messages in thread
From: Christoph Hellwig @ 2021-03-01  6:54 UTC (permalink / raw)
  To: Tianyu Lan
  Cc: kys, haiyangz, sthemmin, wei.liu, jejb, martin.petersen,
	Tianyu Lan, linux-hyperv, linux-scsi, linux-kernel, vkuznets,
	thomas.lendacky, brijesh.singh, sunilmut

This should be handled by the DMA mapping layer, just like for native
SEV support.


* Re: [RFC PATCH 12/12] HV/Storvsc: Add bounce buffer support for Storvsc
  2021-03-01  6:54   ` Christoph Hellwig
@ 2021-03-01 13:43     ` Tianyu Lan
  2021-03-01 19:45       ` [EXTERNAL] " Sunil Muthuswamy
  0 siblings, 1 reply; 7+ messages in thread
From: Tianyu Lan @ 2021-03-01 13:43 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kys, haiyangz, sthemmin, wei.liu, jejb, martin.petersen,
	Tianyu Lan, linux-hyperv, linux-scsi, linux-kernel, vkuznets,
	thomas.lendacky, brijesh.singh, sunilmut

Hi Christoph:
      Thanks a lot for your review. There are a few reasons for the
current approach.
      1) VMbus drivers don't use the DMA API today.
      2) The Hyper-V VMbus channel ring buffer already plays the bounce
buffer role for most VMbus drivers. Only two kinds of packets, from
netvsc/storvsc, are not covered.
      3) In an AMD SEV-SNP based Hyper-V guest, the physical address
used to access shared memory should be the bounce buffer's physical
address plus a shared memory boundary (e.g. 48 bits) reported by a
Hyper-V CPUID leaf. The boundary is called the virtual top of memory
(vTOM) in the AMD spec and works as a watermark. So the guest needs to
ioremap()/memremap() the associated physical address above the shared
memory boundary before accessing it. swiotlb_bounce() uses the low-end
physical address to access the bounce buffer, and that doesn't work in
this scenario. If anything here is wrong, please correct me.

Thanks.


On 3/1/2021 2:54 PM, Christoph Hellwig wrote:
> This should be handled by the DMA mapping layer, just like for native
> SEV support.
> 


* RE: [EXTERNAL] Re: [RFC PATCH 12/12] HV/Storvsc: Add bounce buffer support for Storvsc
  2021-03-01 13:43     ` Tianyu Lan
@ 2021-03-01 19:45       ` Sunil Muthuswamy
  2021-03-02 16:03         ` Tianyu Lan
  0 siblings, 1 reply; 7+ messages in thread
From: Sunil Muthuswamy @ 2021-03-01 19:45 UTC (permalink / raw)
  To: Tianyu Lan, Christoph Hellwig
  Cc: KY Srinivasan, Haiyang Zhang, Stephen Hemminger,
	wei.liu@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, Tianyu Lan,
	linux-hyperv@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-kernel@vger.kernel.org, vkuznets, thomas.lendacky@amd.com,
	brijesh.singh@amd.com

> Hi Christoph:
>       Thanks a lot for your review. There are some reasons.
>       1) Vmbus drivers don't use DMA API now.
What is blocking us from making the Hyper-V drivers use the DMA APIs?
They will generally be a no-op when no bounce buffer support is needed.

>       2) Hyper-V Vmbus channel ring buffer already play bounce buffer
> role for most vmbus drivers. Just two kinds of packets from
> netvsc/storvsc are uncovered.
How does this make a difference here?

>       3) In AMD SEV-SNP based Hyper-V guest, the access physical address
> of shared memory should be bounce buffer memory physical address plus
> with a shared memory boundary(e.g, 48bit) reported Hyper-V CPUID. It's
> called virtual top of memory(vTom) in AMD spec and works as a watermark.
> So it needs to ioremap/memremap the associated physical address above
> the share memory boundary before accessing them. swiotlb_bounce() uses
> low end physical address to access bounce buffer and this doesn't work
> in this senario. If something wrong, please help me correct me.
> 
There are alternative implementations of swiotlb on top of the core
swiotlb APIs. One option is to have Hyper-V-specific swiotlb wrapper
DMA APIs with the custom logic above.

> Thanks.
> 
> 
> On 3/1/2021 2:54 PM, Christoph Hellwig wrote:
> > This should be handled by the DMA mapping layer, just like for native
> > SEV support.
I agree with Christoph's comment that, in principle, this should be
handled using the DMA APIs.


* Re: [EXTERNAL] Re: [RFC PATCH 12/12] HV/Storvsc: Add bounce buffer support for Storvsc
  2021-03-01 19:45       ` [EXTERNAL] " Sunil Muthuswamy
@ 2021-03-02 16:03         ` Tianyu Lan
  0 siblings, 0 replies; 7+ messages in thread
From: Tianyu Lan @ 2021-03-02 16:03 UTC (permalink / raw)
  To: Sunil Muthuswamy, Christoph Hellwig
  Cc: KY Srinivasan, Haiyang Zhang, Stephen Hemminger,
	wei.liu@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, Tianyu Lan,
	linux-hyperv@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-kernel@vger.kernel.org, vkuznets, thomas.lendacky@amd.com,
	brijesh.singh@amd.com

Hi Sunil:
      Thanks for your review.

On 3/2/2021 3:45 AM, Sunil Muthuswamy wrote:
>> Hi Christoph:
>>        Thanks a lot for your review. There are some reasons.
>>        1) Vmbus drivers don't use DMA API now.
> What is blocking us from making the Hyper-V drivers use the DMA API's? They
> will be a null-op generally, when there is no bounce buffer support needed.
> 
>>        2) Hyper-V Vmbus channel ring buffer already play bounce buffer
>> role for most vmbus drivers. Just two kinds of packets from
>> netvsc/storvsc are uncovered.
> How does this make a difference here?
> 
>>        3) In AMD SEV-SNP based Hyper-V guest, the access physical address
>> of shared memory should be bounce buffer memory physical address plus
>> with a shared memory boundary(e.g, 48bit) reported Hyper-V CPUID. It's
>> called virtual top of memory(vTom) in AMD spec and works as a watermark.
>> So it needs to ioremap/memremap the associated physical address above
>> the share memory boundary before accessing them. swiotlb_bounce() uses
>> low end physical address to access bounce buffer and this doesn't work
>> in this senario. If something wrong, please help me correct me.
>>
> There are alternative implementations of swiotlb on top of the core swiotlb
> API's. One option is to have Hyper-V specific swiotlb wrapper DMA API's with
> the custom logic above.

Agreed. Hyper-V should have its own DMA ops and put the Hyper-V bounce
buffer code in the DMA API callbacks. The VMbus channel ring buffer
doesn't need an additional bounce buffer, so there are two options:
1) don't call the DMA API around it, or 2) pass a flag through the DMA
API so the Hyper-V DMA callback doesn't allocate a bounce buffer for it.

> 
>> Thanks.
>>
>>
>> On 3/1/2021 2:54 PM, Christoph Hellwig wrote:
>>> This should be handled by the DMA mapping layer, just like for native
>>> SEV support.
> I agree with Christoph's comment that in principle, this should be handled using
> the DMA API's
> 



Thread overview: 7+ messages
2021-02-28 15:03 [RFC PATCH 00/12] x86/Hyper-V: Add Hyper-V Isolation VM support Tianyu Lan
2021-02-28 15:03 ` [RFC PATCH 9/12] x86/Hyper-V: Add new parameter for vmbus_sendpacket_pagebuffer()/mpb_desc() Tianyu Lan
2021-02-28 15:03 ` [RFC PATCH 12/12] HV/Storvsc: Add bounce buffer support for Storvsc Tianyu Lan
2021-03-01  6:54   ` Christoph Hellwig
2021-03-01 13:43     ` Tianyu Lan
2021-03-01 19:45       ` [EXTERNAL] " Sunil Muthuswamy
2021-03-02 16:03         ` Tianyu Lan
